KUKA Sunrise.OS 1.14
KUKA Sunrise.Workbench
Issued: 14.08.2017
© Copyright 2017
KUKA Roboter GmbH
Zugspitzstraße 140
D-86165 Augsburg
Germany
This documentation or excerpts therefrom may not be reproduced or disclosed to third parties without
the express permission of KUKA Roboter GmbH.
Other functions not described in this documentation may be operable in the controller. The user has
no claims to these functions, however, in the case of a replacement or service work.
We have checked the content of this documentation for conformity with the hardware and software
described. Nevertheless, discrepancies cannot be precluded, for which reason we are not able to
guarantee total conformity. The information in this documentation is checked on a regular basis,
however, and necessary corrections will be incorporated in the subsequent edition.
Subject to technical alterations without an effect on the function.
Translation of the original documentation
KIM-PS5-DOC
Contents
1 Introduction .................................................................................................. 19
1.1 Target group .............................................................................................................. 19
1.2 Industrial robot documentation ................................................................................... 19
1.3 Representation of warnings and notes ...................................................................... 19
1.4 Trademarks ................................................................................................................ 20
1.5 Terms used ................................................................................................................ 20
1.6 Licenses ..................................................................................................................... 21
3 Safety ............................................................................................................ 27
3.1 Legal framework ........................................................................................................ 27
3.1.1 Liability .................................................................................................................. 27
3.1.2 Intended use of the industrial robot ...................................................................... 27
3.1.3 EC declaration of conformity and declaration of incorporation ............................. 28
3.2 Safety functions ......................................................................................................... 28
3.2.1 Terms used ........................................................................................................... 29
3.2.2 Personnel .............................................................................................................. 31
3.2.3 Workspace, safety zone and danger zone ........................................................... 32
3.2.4 Safety-oriented functions ...................................................................................... 33
3.2.4.1 EMERGENCY STOP device ........................................................................... 33
3.2.4.2 Enabling device ............................................................................................... 34
3.2.4.3 “Operator safety” signal ................................................................................... 34
3.2.4.4 External EMERGENCY STOP device ............................................................. 35
3.2.4.5 External safety stop 1 (path-maintaining) ....................................................... 35
3.2.4.6 External enabling device .................................................................................. 35
3.2.4.7 External safe operational stop ......................................................................... 35
3.2.5 Triggers for safety-oriented stop reactions ........................................................... 36
3.2.6 Non-safety-oriented functions ............................................................................... 37
3.2.6.1 Mode selection ................................................................................................. 37
3.2.6.2 Software limit switches .................................................................................... 38
3.3 Additional protective equipment ................................................................................. 38
3.3.1 Jog mode .............................................................................................................. 38
3.3.2 Labeling on the industrial robot ............................................................................. 38
3.3.3 External safeguards .............................................................................................. 39
3.4 Safety measures ........................................................................................................ 39
3.4.1 General safety measures ...................................................................................... 39
3.4.2 Transportation ....................................................................................................... 41
3.4.3 Start-up and recommissioning .............................................................................. 41
3.4.4 Manual mode ........................................................................................................ 43
3.4.5 Automatic mode .................................................................................................... 44
3.4.6 Maintenance and repair ........................................................................................ 44
3.4.7 Decommissioning, storage and disposal .............................................................. 45
15.10.5 Transferring workpiece load data to the safety controller ..................................... 376
15.11 Using inputs/outputs in the program .......................................................................... 378
15.11.1 Integrating an I/O group ........................................................................................ 380
15.11.2 Reading inputs/outputs ......................................................................................... 380
15.11.3 Setting outputs ...................................................................................................... 381
15.12 Requesting axis torques ............................................................................................ 382
15.13 Reading Cartesian forces and torques ...................................................................... 383
15.13.1 Requesting external Cartesian forces and torques ............................................... 383
15.13.2 Requesting forces and torques individually .......................................................... 384
15.13.3 Checking the reliability of the calculated values ................................................... 385
15.14 Requesting the robot position .................................................................................... 386
15.14.1 Requesting the axis-specific actual or setpoint position ....................................... 387
15.14.2 Requesting the Cartesian actual or setpoint position ........................................... 388
15.14.3 Requesting the Cartesian setpoint/actual value difference ................................... 389
15.15 HOME position ........................................................................................................... 390
15.15.1 Changing the HOME position ............................................................................... 390
15.16 Requesting system states .......................................................................................... 391
15.16.1 Requesting the HOME position ............................................................................ 391
15.16.2 Requesting the mastering state ............................................................................ 392
15.16.3 Checking “ready for motion” ................................................................................. 392
15.16.3.1 Reacting to changes in the “ready for motion” signal ...................................... 393
15.16.4 Checking the robot activity .................................................................................... 394
15.16.5 Requesting the state of safety signals .................................................................. 394
15.16.5.1 Requesting the referencing state ..................................................................... 396
15.16.5.2 Reacting to a change in state of safety signals ............................................... 397
15.17 Changing and requesting the program run mode ...................................................... 398
15.18 Changing and requesting the override ....................................................................... 399
15.18.1 Reacting to an override change ............................................................................ 400
15.19 Overview of conditions ............................................................................................... 401
15.19.1 Complex conditions .............................................................................................. 403
15.19.2 Axis torque condition ............................................................................................ 404
15.19.3 Force condition ..................................................................................................... 405
15.19.3.1 Condition for Cartesian force from all directions .............................................. 406
15.19.3.2 Condition for normal force ............................................................................... 408
15.19.3.3 Condition for shear force ................................................................................. 409
15.19.4 Force component condition .................................................................................. 411
15.19.5 Condition for Cartesian torque .............................................................................. 413
15.19.5.1 Condition for Cartesian torque from all directions ............................................ 414
15.19.5.2 Condition for torque ......................................................................................... 415
15.19.5.3 Condition for tilting torque ................................................................................ 416
15.19.6 Torque component condition ................................................................................ 417
15.19.7 Path-related condition ........................................................................................... 418
15.19.8 Distance condition ................................................................................................ 421
15.19.8.1 Distance component condition ......................................................................... 421
15.19.9 Condition for Boolean signals ............................................................................... 422
15.19.10 Condition for the range of values of a signal ........................................................ 423
15.20 Break conditions for motion commands ..................................................................... 424
15.20.1 Defining break conditions ..................................................................................... 424
15.20.2 Evaluating the break conditions ............................................................................ 425
1 Introduction
1.1 Target group
This documentation is aimed at users with the following knowledge and skills:
Advanced knowledge of the robot controller system
Advanced Java programming skills
Notices These notices serve to make your work easier or contain references to further
information.
1.4 Trademarks
1.5 Terms used
Term Description
AMF Atomic Monitoring Function
Smallest unit of a monitoring function
API Application Programming Interface
Interface for programming applications.
ESM Event-Driven Safety Monitoring
Safety monitoring functions which are activated using defined events
EtherCAT An Ethernet-based field bus suitable for real-time requirements.
EVC Enhanced Velocity Controller
Installable option for limitation of the Cartesian robot velocity
EVC automatically adapts the robot velocity so that safety-oriented and
application-specific Cartesian velocity limits are adhered to.
Exception Exception or exceptional situation
An exception describes a procedure for forwarding information about
certain program statuses, mainly error states, to other program levels for
further processing. (A minimal Java sketch of this mechanism follows this table.)
Frame A frame is a 3-dimensional coordinate system that is described by its
position and orientation relative to a reference system.
Points in space can be easily defined using frames. Frames are often
arranged hierarchically in a tree structure.
FSoE Fail Safe over EtherCAT
FSoE is a protocol for transferring safety-relevant data via EtherCAT. An
FSoE master and an FSoE slave are used for this.
GMS (German: Gelenkmomentensensor) Joint torque sensor
The KUKA LBR iiwa has a joint torque sensor in each axis. The torques
on the output side in each axis are measured using these joint torque
sensors.
HRC Human-robot collaboration
Javadoc Javadoc is documentation generated from specific Java comments.
JRE Java Runtime Environment
Runtime environment of the Java programming language
KLI KUKA Line Interface
Ethernet interface of the robot controller (not real-time-capable) for
external communication.
KMP KUKA Mobile Platform
Designation for mobile platforms from KUKA
KUKA RoboticsAPI Java programming interface for KUKA robots
KUKA RoboticsAPI is an object-oriented Java interface for controlling
robots and peripheral devices.
KUKA smartHMI see “smartHMI”
KUKA smartPAD see “smartPAD”
KUKA Sunrise Cabinet Control hardware for operating industrial robots
KUKA Sunrise.OS KUKA Sunrise.Operating System
System software for industrial robots which are operated with the robot
controller KUKA Sunrise Cabinet
PROFINET PROFINET is an Ethernet-based field bus.
PROFIsafe PROFIsafe is a PROFINET-based safety interface for connecting a
safety PLC to the robot controller. (PLC = master, robot controller =
slave)
PSM Permanent Safety Monitoring
Safety monitoring functions which are permanently active
smartHMI Smart human-machine interface
The smartHMI is the user interface of the robot controller.
smartPAD The smartPAD is the hand-held control panel for the robot cell (station).
It has all the operator control and display functions required for operation
of the station.
PLC Programmable logic controller
Sunrise project A Sunrise project is a specialization of a Java project. All Sunrise-specific
functions are only possible in Sunrise projects and not in conventional
Java projects. These functions include, for example:
Creation of a safety configuration
Creation of an I/O configuration
Creation of robot and background applications
Configuration of the Automatic External interface
TCP Tool center point
The TCP is the working point of a tool. Multiple working points can be
defined for a tool.
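Example The "Exception" entry above describes how error states are forwarded between program levels. The following minimal Java sketch illustrates this mechanism; GripperException, GripperDriver and openGripper() are purely illustrative names and are not part of KUKA Sunrise.OS or the KUKA RoboticsAPI.

// Minimal Java sketch of the exception mechanism described in the "Exception" entry.
// All class and method names are illustrative only.
class GripperException extends Exception {
    GripperException(String message) {
        super(message);
    }
}

class GripperDriver {
    // Lower program level: detects an error state and forwards it upwards.
    void openGripper(boolean hardwareOk) throws GripperException {
        if (!hardwareOk) {
            throw new GripperException("Gripper did not reach the open position");
        }
    }
}

public class ExceptionDemo {
    public static void main(String[] args) {
        GripperDriver driver = new GripperDriver();
        try {
            driver.openGripper(false);
        } catch (GripperException e) {
            // Higher program level: decides how the error state is processed further.
            System.err.println("Handling error: " + e.getMessage());
        }
    }
}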
1.6 Licenses
KUKA Sunrise.OS uses open-source software. The license terms are stored
in the licenses folder in the installation directory of KUKA Sunrise.Workbench.
2 Product description
2.1 Overview of the robot system
A robot system (>>> Fig. 2-1) comprises all the assemblies of an industrial robot.
Description KUKA Sunrise.OS is a system software package for industrial robots in which
programming and operator control tasks are strictly separated from one
another.
Robot applications are programmed with KUKA Sunrise.Workbench.
A robot cell (station) is operated using the KUKA smartPAD control panel.
A station consists of a robot controller, a manipulator and further devices.
A station may carry out multiple applications (tasks).
Division of tasks KUKA Sunrise.Workbench is the tool for the start-up of a station and the
development of robot applications. WorkVisual is used for bus configuration and
bus mapping.
The smartPAD is only required in the start-up phase for tasks which for
practical or safety reasons cannot be carried out using KUKA Sunrise.Workbench.
The smartPAD is used e.g. for mastering axes, calibrating tools and teaching
points.
After start-up and application development, the operator can carry out simple
servicing work and operating tasks using the smartPAD. The operator cannot
change the station and safety configuration or the programming.
Overview
Task WorkVisual Workbench smartPAD
Station configuration
Software installation
Bus configuration/diagnosis
Bus mapping
Programming
Remote debugging
Creating frames
Teaching frames
Jogging
Mastering
Calibration
Setting/polling outputs
Polling inputs
Use The system software is intended exclusively for the operation of KUKA axes
in an industrial setting in conjunction with KUKA Sunrise Cabinet. KUKA axes
include, for example, industrial robots and mobile platforms.
Each version of the system software may be operated exclusively in
accordance with the specified system requirements.
Misuse Any use or application deviating from the intended use is deemed to be misuse
and is not allowed. KUKA Roboter GmbH is not liable for any damage resulting
from such misuse. The risk lies entirely with the user.
Examples of such misuse include:
Operating axes that are not KUKA axes
Operation of the system software not in accordance with the specified
system requirements
Use of any debugger other than that provided by Sunrise.Workbench
Use for non-industrial applications for which specific product
requirements/standards exist (e.g. medical applications)
3 Safety
3.1 Legal framework
3.1.1 Liability
Safety information Safety information cannot be held against KUKA Roboter GmbH. Even if all
safety instructions are followed, this is not a guarantee that the industrial robot
will not cause personal injuries or material damage.
No modifications may be carried out to the industrial robot without the
authorization of KUKA Roboter GmbH. Additional components (tools, software,
etc.), not supplied by KUKA Roboter GmbH, may be integrated into the
industrial robot. The user is liable for any damage these components may cause to
the industrial robot or to other material property.
In addition to the Safety chapter, this document contains further safety
instructions. These must also be observed.
The industrial robot is intended exclusively for the use designated in the
“Purpose” chapter of the operating instructions or assembly instructions.
Any use or application deviating from the intended use is deemed to be misuse
and is not allowed. The manufacturer is not liable for any damage resulting
from such misuse. The risk lies entirely with the user.
Operation of the industrial robot in accordance with its intended use also
requires compliance with the operating and assembly instructions for the
individual components, with particular reference to the maintenance specifications.
The user is responsible for carrying out a risk assessment. This indicates the
additional safety equipment that is required, the installation of which is also the
responsibility of the user.
Misuse Any use or application deviating from the intended use is deemed to be misuse
and is not allowed. This includes e.g.:
EC declaration of conformity The system integrator must issue an EC declaration of conformity for the
complete system in accordance with the Machinery Directive. The EC declaration
of conformity forms the basis for the CE mark for the system. The industrial
robot must always be operated in accordance with the applicable national
laws, regulations and standards.
The robot controller has a CE mark in accordance with the EMC Directive and
the Low Voltage Directive.
Term Description
Axis range Range within which the axis may move. The axis range must be defined
for each axis.
Stopping distance Stopping distance = reaction distance + braking distance
The stopping distance is part of the danger zone.
Workspace The manipulator is allowed to move within its workspace. The work-
space is derived from the individual axis ranges.
Automatic (AUT) Operating mode for program execution. The manipulator moves at the
programmed velocity.
Operator (User) The user of the industrial robot can be the management, employer or
delegated person responsible for use of the industrial robot.
Danger zone The danger zone consists of the workspace and the stopping distances.
Service life The service life of a safety-relevant component begins at the time of
delivery of the component to the customer.
The service life is not affected by whether or not the component is used in a
robot controller or elsewhere, as safety-relevant components are
also subject to aging during storage.
CRR Controlled Robot Retraction
CRR is an operating mode which can be selected when the industrial
robot is stopped by the safety controller for one of the following reasons:
Industrial robot violates an axis-specific or Cartesian monitoring
space.
Orientation of a safety-oriented tool is outside the monitored range.
Industrial robot violates a force or torque monitoring function.
A position sensor is not mastered or referenced.
A joint torque sensor is not referenced.
After changing to CRR mode, the industrial robot may once again be
moved.
KUKA smartPAD See “smartPAD”
Manipulator The robot arm and the associated electrical installations
Safety zone The manipulator is not allowed to move within the safety zone. The
safety zone is the area outside the danger zone.
Safety stop The safety stop is triggered by the safety controller, interrupts the work
procedure and causes all robot motions to come to a standstill. The
program data are retained in the case of a safety stop and the program can
be resumed from the point of interruption.
The safety stop can be executed as a Stop category 0, Stop category 1
or Stop category 1 (path-maintaining).
Note: In this document, a safety stop of Stop category 0 is referred to as
safety stop 0, a safety stop of Stop category 1 as safety stop 1 and a
safety stop of Stop category 1 (path-maintaining) as safety stop 1 (path-
maintaining).
smartPAD The smartPAD is the hand-held control panel for the robot cell (station).
It has all the operator control and display functions required for operation
of the station.
Stop category 0 The drives are deactivated immediately and the brakes are applied.
Stop category 1 The manipulator is braked and does not stay on the programmed path.
The manipulator is brought to a standstill with the drives. As soon as an
axis is at a standstill, the drive is switched off and the brake is applied.
The internal electronic drive system of the robot performs safety-oriented
monitoring of the braking process. Stop category 0 is executed in
the event of a fault.
Note: Stop category 1 is currently only supported by the LBR iiwa. For
other manipulators, Stop category 0 is executed.
Stop category 1 (path-maintaining) The manipulator is braked and stays on the programmed path. At
standstill, the drives are deactivated and the brakes are applied.
If Stop category 1 (path-maintaining) is triggered by the safety controller,
the safety controller monitors the braking process. The brakes are
applied and the drives are switched off after 1 s at the latest. Stop
category 1 is executed in the event of a fault.
System integrator (plant integrator) System integrators are people who safely integrate the industrial robot
into a complete system and commission it.
T1 Test mode, Manual Reduced Velocity (<= 250 mm/s)
Note: With manual guidance in T1, the velocity is not reduced, but
rather limited through safety-oriented velocity monitoring in accordance
with the safety configuration.
Note: The maximum velocity of 250 mm/s does not apply to a mobile
platform.
T2 Test mode, Manual High Velocity (> 250 mm/s permissible)
3.2.2 Personnel
The following persons or groups of persons are defined for the industrial robot:
User
Personnel
All persons working with the industrial robot must have read and
understood the industrial robot documentation, including the safety
chapter.
User The user must observe the labor laws and regulations. This includes e.g.:
The user must comply with his monitoring obligations.
The user must carry out briefing at defined intervals.
Personnel Personnel must be instructed, before any work is commenced, in the type of
work involved and what exactly it entails as well as any hazards which may
exist. Instruction must be carried out regularly. Instruction is also required after
particular incidents or technical modifications.
Personnel includes:
System integrator
Operators, subdivided into:
Start-up, maintenance and service personnel
Operating personnel
Cleaning personnel
System integrator The industrial robot is safely integrated into a complete system by the system
integrator.
The system integrator is responsible for the following tasks:
Installing the industrial robot
Connecting the industrial robot
Performing risk assessment
Implementing the required safety functions and safeguards
Issuing the EC declaration of conformity
Attaching the CE mark
Creating the operating instructions for the system
Work on the system must only be carried out by qualified personnel. These
are people who, due to their specialist training, knowledge and experience,
and their familiarization with the relevant standards, are able to
assess the work to be carried out and detect any potential hazards.
The danger zone consists of the workspace and the stopping distances of the
manipulator. In the event of a stop, the manipulator is braked and comes to a
stop within the danger zone. The safety zone is the area outside the danger
zone.
The danger zone must be protected by means of physical safeguards, e.g. by
light barriers, light curtains or safety fences. If there are no physical
safeguards present, the requirements for collaborative operation in accordance
with EN ISO 10218 must be met. There must be no shearing or crushing
hazards at the loading and transfer areas.
The EMERGENCY STOP device for the industrial robot is the EMERGENCY
STOP device on the smartPAD. The device must be pressed in the event of a
hazardous situation or emergency.
Reaction of the industrial robot if the EMERGENCY STOP device is pressed:
The manipulator stops with a safety stop 1 (path-maintaining).
Before operation can be resumed, the EMERGENCY STOP device must be
turned to release it.
If a holder is used for the smartPAD and conceals the EMERGENCY STOP
device on the smartPAD, an external EMERGENCY STOP device must be
installed that is accessible at all times.
(>>> 3.2.4.4 "External EMERGENCY STOP device" Page 35)
The enabling devices of the industrial robot are the enabling switches on the
smartPAD.
There are 3 enabling switches installed on the smartPAD. The enabling
switches have 3 positions:
Not pressed
Center position
Fully pressed (panic position)
In the test modes and in CRR, the manipulator can only be moved if one of the
enabling switches is held in the central position.
Releasing the enabling switch triggers a safety stop 1 (path-maintaining).
Fully pressing the enabling switch triggers a safety stop 1 (path-maintaining).
It is possible to hold 2 enabling switches in the center position simultaneously
for several seconds. This makes it possible to adjust grip from one
enabling switch to another one. If 2 enabling switches are held simultaneously
in the center position for longer than 15 seconds, this triggers a
safety stop 1 (path-maintaining).
If an enabling switch malfunctions (e.g. jams in the central position), the
industrial robot can be stopped using the following methods:
Press the enabling switch down fully.
Actuate the EMERGENCY STOP device.
Release the Start key.
The “operator safety” signal is used for monitoring physical safeguards, e.g.
safety gates. In the default configuration, T2 and automatic operation are not
possible without this signal. Alternatively, the requirements for collaborative
operation in accordance with EN ISO 10218 must be met.
Reaction of the industrial robot in the event of a loss of signal during T2 or
automatic operation (default configuration):
The manipulator stops with a safety stop 1 (path-maintaining).
By default, operator safety is not active in the modes T1 (Manual Reduced
Velocity) and CRR, i.e. the signal is not evaluated.
Every operator station that can initiate a robot motion or other potentially
hazardous situation must be equipped with an EMERGENCY STOP device. The
system integrator is responsible for ensuring this.
Reaction of the industrial robot if the external EMERGENCY STOP device is
pressed (default configuration):
The manipulator stops with a safety stop 1 (path-maintaining).
External EMERGENCY STOP devices are connected via the safety interface
of the robot controller. External EMERGENCY STOP devices are not included
in the scope of supply of the industrial robot.
External enabling devices are required if it is necessary for more than one
person to be in the danger zone of the industrial robot.
Multiple external enabling devices can be connected via the safety interface of
the robot controller. External enabling devices are not included in the scope of
supply of the industrial robot.
An external enabling device can be used for manual guidance of the robot.
When enabling is active, the robot may only be moved at reduced velocity.
For manual guidance, safety-oriented velocity monitoring with a maximum
permissible velocity of 250 mm/s is preconfigured. The maximum permissible
velocity can be adapted.
The value for the maximum permissible velocity must be determined as part
of a risk assessment.
The safe operational stop is a standstill monitoring function. It does not stop
the robot motion, but monitors whether the robot axes are stationary.
The safe operational stop can be triggered via an input on the safety interface.
The state is maintained as long as the external signal is FALSE. If the external
signal is TRUE, the manipulator can be moved again. No acknowledgement is
required.
Permanently defined triggers The following triggers for stop reactions are permanently defined:
Operating mode changed during operation: T1, T2, CRR and AUT: Safety stop 1 (path-maintaining)
Enabling switch released: T1, T2, CRR: Safety stop 1 (path-maintaining); AUT: -
Enabling switch pressed fully down (panic position): T1, T2, CRR: Safety stop 1 (path-maintaining); AUT: -
Local E-STOP pressed: T1, T2, CRR and AUT: Safety stop 1 (path-maintaining)
Error in safety controller: T1, T2, CRR and AUT: Safety stop 1
User-specific triggers When creating a new Sunrise project, the system automatically generates a
project-specific safety configuration. This contains the following user-specific
stop reaction triggers preconfigured by KUKA (in addition to the permanently
defined triggers):
This default safety configuration is valid for the system software without
additionally installed option packages or catalog elements. If additional
option packages or catalog elements have been installed, the
default safety configuration may be modified.
Triggers for manual guidance If an enabling device is configured for manual guidance, the following
additional triggers for stop reactions are permanently defined:
3.2.6.1 Mode selection
T1
Use: Programming, teaching and testing of programs.
Velocities:
Program verification: Reduced programmed velocity, maximum 250 mm/s
Manual mode: Jog velocity, maximum 250 mm/s
Manual guidance: No limitation of the velocity, but safety-oriented velocity monitoring in accordance with the safety configuration
Note: The maximum velocity of 250 mm/s does not apply to a mobile platform.
T2
Use: Testing of programs
Velocities:
Program verification: Programmed velocity
Manual mode: Not possible
AUT
Use: Automatic execution of programs. For industrial robots with and without higher-level controllers.
Velocities:
Program mode: Programmed velocity
Manual mode: Not possible
CRR
Use: CRR is an operating mode which can be selected when the industrial robot is stopped by the safety controller for one of the following reasons:
Industrial robot violates an axis-specific or Cartesian monitoring space.
Orientation of a safety-oriented tool is outside the monitored range.
Industrial robot violates a force or torque monitoring function.
A position sensor is not mastered or referenced.
A joint torque sensor is not referenced.
After changing to CRR mode, the industrial robot may once again be moved.
Velocities:
Program verification: Reduced programmed velocity, maximum 250 mm/s
Manual mode: Jog velocity, maximum 250 mm/s
Manual guidance: No limitation of the velocity, but safety-oriented velocity monitoring in accordance with the safety configuration
The axis ranges of all manipulator axes are limited by means of non-safety-
oriented software limit switches. These software limit switches only serve as
machine protection and are preset in such a way that the manipulator is
stopped under servo control if the axis limit is exceeded, thereby preventing
damage to the mechanical equipment.
All plates, labels, symbols and marks constitute safety-relevant parts of the
industrial robot. They must not be modified or removed.
Labeling on the industrial robot consists of:
Identification plates
Warning signs
Safety symbols
Designation labels
Cable markings
Rating plates
The access of persons to the danger zone of the industrial robot must be
prevented by means of safeguards. Alternatively, the requirements for
collaborative operation in accordance with EN ISO 10218 must be met. It is the
responsibility of the system integrator to ensure this.
Physical safeguards must meet the following requirements:
They meet the requirements of EN ISO 14120.
They prevent access of persons to the danger zone and cannot be easily
circumvented.
They are sufficiently fastened and can withstand all forces that are likely
to occur in the course of operation, whether from inside or outside the
enclosure.
They do not, themselves, represent a hazard or potential hazard.
The prescribed minimum clearance from the danger zone is maintained.
Safety gates (maintenance gates) must meet the following requirements:
They are reduced to an absolute minimum.
The interlocks (e.g. safety gate switches) are linked to the configured
operator safety inputs of the robot controller.
Switching devices, switches and the type of switching conform to the
requirements of Performance Level d and category 3 according to EN ISO
13849-1.
Depending on the risk situation: the safety gate is additionally safeguarded
by means of a locking mechanism that only allows the gate to be opened
if the manipulator is safely at a standstill.
The device for setting the signal for operator safety, e.g. the button for
acknowledging the safety gate, is located outside the space limited by the
safeguards.
Other safety equipment Other safety equipment must be integrated into the system in accordance with
the corresponding standards and regulations.
The industrial robot may only be used in perfect technical condition in
accordance with its intended use and only by safety-conscious persons. Operator
errors can result in personal injury and damage to property.
smartPAD The user must ensure that the industrial robot is only operated with the
smartPAD by authorized persons.
If more than one smartPAD is used in the overall system, it must be ensured
that each smartPAD is unambiguously assigned to the corresponding
industrial robot. It must be ensured that 2 smartPADs are not interchanged.
The smartPAD can be configured as unpluggable.
Modifications After modifications to the industrial robot, checks must be carried out to ensure
the required safety level. The valid national or regional work safety regulations
must be observed for this check. The correct functioning of all safety functions
must also be tested.
New or modified programs must always be tested first in Manual Reduced
Velocity mode (T1).
After modifications to the industrial robot, existing programs must always be
tested first in Manual Reduced Velocity mode (T1). This applies to all
components of the industrial robot and includes modifications to the software and
configuration settings.
The robot may not be connected or disconnected while the robot controller
is running.
Faults The following tasks must be carried out in the case of faults in the industrial
robot:
Switch off the robot controller and secure it (e.g. with a padlock) to prevent
unauthorized persons from switching it on again.
3.4.2 Transportation
Manipulator The prescribed transport position of the manipulator must be observed.
Transportation must be carried out in accordance with the operating instructions or
assembly instructions of the robot.
Avoid vibrations and impacts during transportation in order to prevent damage
to the manipulator.
Robot controller The prescribed transport position of the robot controller must be observed.
Transportation must be carried out in accordance with the operating
instructions or assembly instructions of the robot controller.
Avoid vibrations and impacts during transportation in order to prevent damage
to the robot controller.
Before starting up systems and devices for the first time, a check must be
carried out to ensure that the systems and devices are complete and operational,
that they can be operated safely and that any damage is detected.
The valid national or regional work safety regulations must be observed for this
check. The correct functioning of all safety functions must also be tested.
Prior to start-up, the passwords for the user groups must be modified
by the administrator, transferred to the robot controller in an installation
procedure and activated. The passwords must only be communicated
to authorized personnel. (>>> 9.4.2 "Changing and activating the
password" Page 171)
If additional components (e.g. cables), which are not part of the scope
of supply of KUKA Roboter GmbH, are integrated into the industrial
robot, the user is responsible for ensuring that these components do
not adversely affect or disable safety functions.
Function test The following tests must be carried out before start-up and recommissioning:
General test:
The brake test ensures that any impairment of the braking function is detected,
e.g. due to wear, overheating, fouling or damage, thereby eliminating
avoidable risks.
The brake test must be performed regularly, unless an application-specific risk
assessment has established that a malfunction of the mechanical brakes will
not result in inadmissibly high risks. Determination of the interval at which the
brake test is to be performed also constitutes part of the risk assessment.
In the absence of a corresponding risk assessment, the following applies:
The brake test must be carried out for each axis during start-up and
recommissioning of the industrial robot.
The brake test must be performed daily during operation.
General Manual mode is the mode for setup work. Setup work is all the tasks that have
to be carried out on the industrial robot to enable automatic operation. Setup
work includes:
Jog mode
Teaching
Program verification
The following must be taken into consideration in manual mode:
New or modified programs must always be tested first in Manual Reduced
Velocity mode (T1).
The manipulator and its tooling must never touch or project beyond the
safety fence.
Workpieces, tooling and other objects must not become jammed as a
result of the industrial robot motion, nor must they lead to short-circuits or be
liable to fall off.
All setup work must be carried out, where possible, from outside the
safeguarded area.
Setup work in T1 If it is necessary to carry out setup work from inside the safeguarded area, the
following must be taken into consideration in the operating mode Manual
Reduced Velocity (T1):
If it can be avoided, there must be no other persons inside the
safeguarded area.
If it is necessary for there to be several persons inside the safeguarded
area, the following must be observed:
Each person must have an enabling device.
All persons must have an unimpeded view of the industrial robot.
Eye-contact between all persons must be possible at all times.
The operator must be so positioned that he can see into the danger area
and get out of harm’s way.
Unexpected motions of the manipulator cannot be ruled out, e.g. in the
event of a fault. For this reason, an appropriate clearance must be
maintained between persons and the manipulator (including tool). Guide value:
50 cm.
The minimum clearance may vary depending on local circumstances, the
motion program and other factors. The minimum clearance that is to apply
for the specific application must be decided by the user on the basis of a
risk assessment.
Setup work in T2 If it is necessary to carry out setup work from inside the safeguarded area, the
following must be taken into consideration in the operating mode Manual High
Velocity (T2):
This mode may only be used if the application requires a test at a velocity
higher than that possible in T1 mode.
Teaching is not permissible in this operating mode.
Before commencing the test, the operator must ensure that the enabling
devices are operational.
The operator must be positioned outside the danger zone.
There must be no-one present inside the safeguarded area. It is the
responsibility of the operator to ensure this.
After maintenance and repair work, checks must be carried out to ensure the
required safety level. The valid national or regional work safety regulations
must be observed for this check. The correct functioning of all safety functions
must also be tested.
The purpose of maintenance and repair work is to ensure that the system is
kept operational or, in the event of a fault, to return the system to an
operational state. Repair work includes troubleshooting in addition to the actual repair
itself.
The following safety measures must be carried out when working on the
industrial robot:
Carry out work outside the danger zone. If work inside the danger zone is
necessary, the user must define additional safety measures to ensure the
safe protection of personnel.
Switch off the industrial robot and secure it (e.g. with a padlock) to prevent
it from being switched on again. If it is necessary to carry out work with the
robot controller switched on, the user must define additional safety
measures to ensure the safe protection of personnel.
If it is necessary to carry out work with the robot controller switched on, this
may only be done in operating mode T1.
Label the system with a sign indicating that work is in progress. This sign
must remain in place, even during temporary interruptions to the work.
The EMERGENCY STOP devices must remain active. If safety functions
or safeguards are deactivated during maintenance or repair work, they
must be reactivated immediately after the work is completed.
Faulty components must be replaced using new components with the same
article numbers or equivalent components approved by KUKA Roboter GmbH
for this purpose.
Cleaning and preventive maintenance work is to be carried out in accordance
with the operating instructions.
Robot controller Even when the robot controller is switched off, parts connected to peripheral
devices may still carry voltage. The external power sources must therefore be
switched off if work is to be carried out on the robot controller.
The ESD regulations must be adhered to when working on components in the
robot controller.
Voltages in excess of 60 V can be present in various components for several
minutes after the robot controller has been switched off! To prevent
life-threatening injuries, no work may be carried out on the industrial robot in this time.
Water and dust must be prevented from entering the robot controller.
Overview If certain components in the industrial robot are operated, safety measures
must be taken to ensure complete implementation of the principle of “single
point of control” (SPOC).
Components:
Tools for configuration of bus systems with online functionality
Since only the system integrator knows the safe states of actuators in the
periphery of the robot controller, it is his task to set these actuators to a safe
state.
T1, T2, CRR In modes T1, T2 and CRR, a robot motion can only be initiated if an enabling
switch is held down.
Tools for configuration of bus systems If these components have an online functionality, they can be used with write
access to modify programs, outputs or other parameters of the robot
controller, without this being noticed by any persons located inside the system.
Such tools include:
KUKA Sunrise.Workbench
WorkVisual from KUKA
Name/Edition Definition
Software Windows 7
Both the 32-bit version and the 64-bit version can be used.
The following software is required for bus configuration:
WorkVisual 5.0
Description Uninstallation removes all program files from the computer. User-specific files
are retained, e.g. the workspace with the Sunrise projects.
Procedure 1. Call the list of installed programs in the Windows Control Panel.
2. In the list, select the program Sunrise Workbench and uninstall it.
Alternative procedure In the Windows Start menu, open the installation directory of
Sunrise.Workbench and click on Uninstall.
Procedure
1. Double-click on the Sunrise.Workbench icon on the desktop.
Alternative:
In the Windows Start menu, open the installation directory and double-
click on Sunrise Workbench.
The Workspace Launcher window opens.
2. In the Workspace box, specify the directory for the workspace in which
projects are to be saved.
A default directory is suggested. The directory can be changed by
clicking on the Browse… button.
If the workspace should not be queried the next time Sunrise.Workbench
is started, activate the option Use this as the default value[…]
(set check mark).
Confirm the settings with OK.
3. A welcome screen opens the first time Sunrise.Workbench is started.
There are different options here.
Click on Workbench to open the user interface of Sunrise.Workbench.
(>>> 5.2 "Overview of the user interface of Sunrise.Workbench"
Page 51)
Click on New Sunrise project to create a new Sunrise project directly.
The project creation wizard opens.
(>>> 5.3 "Creating a Sunrise project with a template" Page 55)
Item Description
1 Menu bar
2 Toolbars
(>>> 5.2.4 "Toolbar of the Programming perspective" Page 54)
3 Editor area
Opened files, e.g. robot applications, can be displayed and edited
in the editor area.
4 Application data view
This view displays the frames created for a project in a tree
structure.
5 Object templates view
This view displays the geometrical objects, tools and workpieces
created for a project in a tree structure.
6 Perspective selection
It is possible to switch between different perspectives that are
already in use by clicking on the name of the desired perspective or
selecting it via the Open perspective icon.
(>>> 5.2.3 "Displaying perspectives" Page 53)
7 Package Explorer view
This view contains the projects created and their corresponding
files.
8 Tasks view
The tasks that a user has created are displayed in this view.
Javadoc view
The Javadoc comments about the selected elements of a Java
application are displayed in this view.
9 Properties view
The properties of the object, e.g. project, frame or tool, selected in
a different view, are displayed in this view.
Procedure 1. Grip the view by the title bar while holding down the left mouse button and
move it to the desired position on the user interface.
The possible positions for the view are indicated here by a gray frame.
2. Release the mouse button when the desired position for the view is
selected.
Procedure Click on the “X” at the top right of the corresponding tab.
Description The user interface can be displayed in different perspectives. These can be
selected via the menu sequence Window > Open Perspective or by clicking
on the Open Perspective icon.
The perspectives are tailored to different types of work:
The buttons available as standard on the toolbar depend on the active
perspective. The buttons of the Programming perspective are described here.
Procedure 1. Select the menu sequence File > New > Sunrise project. The wizard for
creating a new Sunrise project is opened.
2. Enter the IP address of the robot controller to be created for the Sunrise
project in the IP address of controller: box.
It is possible to change the address again during subsequent project
configuration.
The following IP address ranges are used by default by the robot
controller for internal purposes. IP addresses from these ranges cannot
therefore be assigned.
169.254.0.0 … 169.254.255.255
172.16.0.0 … 172.16.255.255
172.17.0.0 … 172.17.255.255
192.168.0.0 … 192.168.0.255
The weight and height of the selected media flange are automatically
taken into consideration by the system software.
Description The figure shows the structure of a newly created Sunrise project, for which
no Sunrise applications have yet been created or other changes made. The
robot configured for the Sunrise project has a media flange.
Element Description
src Source folder of the project
The created Sunrise applications and Java classes are stored in
the source folder.
The Java package com.kuka.generated.ioAccess contains
the Java class MediaFlangeIOGroup.java. The class already
contains the methods required for programming in order to
access the inputs/outputs of the media flange.
(>>> 15.11 "Using inputs/outputs in the program" Page 378)
The source folder also contains various XML files in which, in
addition to the configuration data, the runtime data are saved,
e.g. the frames and tools created by the user.
The XML files can be displayed but not edited.
JRE System Library System library for Java Runtime Environment
The system library contains the Java class libraries which can
be used for standard Java programming.
Referenced libraries Referenced libraries
The referenced libraries can be used in the project. As standard,
the robot-specific Java class libraries are automatically
added when a Sunrise project is created. The user has the
option of adding further libraries.
generatedFiles Folder with subfolder IODescriptions
The data for the inputs/outputs configured for the media flange
are saved in an XML file.
The XML file can be displayed but not edited.
KUKAJavaLib Folder with special libraries required for robot programming
IOConfiguration.wvs I/O configuration for the media flange
The I/O configuration contains the complete bus structure of the
media flange, including the I/O mapping.
The I/O configuration can be opened, edited and re-exported
into the Sunrise project in WorkVisual.
Note: The I/O configuration is only carried out automatically for
the inputs/outputs on the media flange. Further EtherCAT
devices connected to the media flange must be configured with
WorkVisual.
(>>> 11 "Bus configuration" Page 189)
SafetyConfiguration.sconf Safety configuration
The file contains the safety functions preconfigured by KUKA.
The configuration can be displayed and edited.
(>>> 13 "Safety configuration" Page 215)
StationSetup.cat Station configuration
The file contains the station configuration for the station
(controller) selected when the project was created. The
configuration can be displayed and edited.
The system software can be installed on the robot controller via
the station configuration.
(>>> 10 "Station configuration and installation" Page 175)
Sunrise applications are Java programs. They define tasks that are to be
executed in a station. They are transferred to the robot controller with the Sunrise
project and can be selected and executed using the smartPAD.
There are 2 kinds of Sunrise applications:
Robot applications
Only one robot application can be executed on the robot controller at any
given time.
Background applications
Multiple background applications can run simultaneously and
independently of the running robot application.
Sunrise applications are grouped into Java packages. This makes
programming more transparent and makes it easier to use a Java package later in
other projects.
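Example A minimal robot application typically has the following shape. This sketch is based on the usual application template; the class name, the use of @Inject for the LBR device and the exact import paths are assumptions that may differ depending on the installed RoboticsAPI version, so the template generated by the wizard should be taken as the authoritative starting point.

package application; // illustrative package name

import static com.kuka.roboticsAPI.motionModel.BasicMotions.ptpHome;

import javax.inject.Inject;

import com.kuka.roboticsAPI.applicationModel.RoboticsAPIApplication;
import com.kuka.roboticsAPI.deviceModel.LBR;

// Sketch of a minimal robot application: it is selected and started on the
// smartPAD and moves the robot to its HOME position.
public class MyFirstApplication extends RoboticsAPIApplication {

    @Inject
    private LBR robot; // the LBR configured for the station

    @Override
    public void initialize() {
        // Initialization code, e.g. reading application data, goes here.
    }

    @Override
    public void run() {
        // ptpHome() creates a point-to-point motion to the HOME position.
        robot.move(ptpHome());
    }
}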
Description A robot application can be created together with the Java package into which
the application is to be inserted.
Description If the Java package into which a robot application is to be inserted already
exists, the application can be created for the existing package.
Procedure 1. In the Package Explorer view, select the desired package in the project.
2. Select the menu sequence File > New > Sunrise application. The wizard
for creating a new Sunrise application is opened.
3. Select the template RoboticsAPI Application and click on Finish. The
wizard for creating a new robot application is opened.
4. Enter a name for the application in the Name: box.
5. Click on Finish. The application is created and inserted into the package.
The Name.java application is opened in the editor area.
Background applications are Java programs that are executed on the robot
controller parallel to the robot application. For example, they can perform
control tasks for peripheral devices.
The use and programming of background applications are described here:
(>>> 16 "Background tasks" Page 473)
The following properties are defined when the application is created:
Start type:
Automatic
The background application is automatically started after the robot
controller has booted (default).
Manual
The background application must be started manually via the
smartPAD. (This function is not yet supported.)
Execution type:
Task template Cyclic background task
Template for background applications that are to be executed
cyclically (default)
Task template Non-cyclic background task
Template for background applications that are to be executed once
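Example A cyclic background application created from the Cyclic background task template typically looks like the following sketch. It assumes that the template is based on RoboticsAPICyclicBackgroundTask with initializeCyclic() and runCyclic(); the package paths, the cycle time of 500 ms and the CycleBehavior value are illustrative and should be checked against the generated template.

package backgroundTask; // illustrative package name

import java.util.concurrent.TimeUnit;

import com.kuka.roboticsAPI.applicationModel.tasks.CycleBehavior;
import com.kuka.roboticsAPI.applicationModel.tasks.RoboticsAPICyclicBackgroundTask;

// Sketch of a cyclic background application: runCyclic() is called
// periodically, in parallel with the running robot application.
public class MonitorTask extends RoboticsAPICyclicBackgroundTask {

    @Override
    public void initialize() {
        // Cycle every 500 ms, starting without delay; BestEffort tolerates
        // occasional cycle-time overruns (values are illustrative).
        initializeCyclic(0, 500, TimeUnit.MILLISECONDS, CycleBehavior.BestEffort);
    }

    @Override
    public void runCyclic() {
        // Periodic work, e.g. polling inputs or setting outputs of
        // peripheral devices, goes here.
    }
}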
Description A background application can be created together with the Java package into
which the application is to be inserted.
8. Click on Finish. The application and package are created and inserted
into the source folder of the project. The Name.java application is opened
in the editor area.
Description If the Java package into which a background application is to be inserted
already exists, the application can be created for the existing package.
Procedure 1. In the Package Explorer view, select the desired package in the project.
2. Select the menu sequence File > New > Sunrise application. The wizard
for creating a new Sunrise application is opened.
3. Select the template Background task and click on Finish. The wizard for
creating a new background application is opened.
4. Enter a name for the application in the Name: box.
5. Click on Next > and select the desired start type.
6. Click on Next > and select the desired execution type (task template).
7. Click on Finish. The application is created and inserted into the package.
The Name.java application is opened in the editor area.
Description A default application can be defined for every Sunrise project; it is
automatically selected after a reboot of the robot controller or synchronization of the
project.
In the case of an externally controlled project, it is essential to define a default
application. This is automatically selected when the operating mode is
switched to Automatic.
Procedure Right-click on the desired robot application in the Package Explorer view
and select Sunrise > Set as default application from the context menu.
The robot application is indicated as the default application in the Package
Explorer view and automatically set as the default application in the
project settings.
5.5 Workspace
The directory in which the created projects and user-defined settings for
Sunrise.Workbench are saved is called the workspace. The directory for the
workspace must be defined by the user when Sunrise.Workbench is started for the
first time. It is possible to create additional workspaces in Sunrise.Workbench
and to switch between them.
Procedure 1. Select the menu sequence File > Switch Workspace > Other.... The
Workspace Launcher window opens.
2. In the Workspace box, manually enter the path to the new project
directory.
Alternative:
Click on Browse... to navigate to the directory where the new work-
space should be created.
Create the new project directory by clicking on Create new folder.
Confirm with OK.
The path to the new project directory is inserted in the Workspace
box.
3. Click on OK to confirm the new workspace. Sunrise.Workbench restarts
and the welcome screen opens.
Procedure 1. Select the menu sequence File > Switch Workspace > Other.... The
Workspace Launcher window opens.
2. Navigate to the desired workspace using Browse… and select it.
3. Confirm with OK. The path to the new project directory is applied in the
Workspace Launcher window.
4. Confirm the selected workspace with OK. Sunrise.Workbench restarts
and opens the selected workspace.
Procedure 1. Select the menu sequence File > Switch Workspace. The most recently
used workspaces are displayed in a list (max. 4).
2. Select the desired workspace from the list. Sunrise.Workbench restarts
and opens the selected workspace.
Procedure 1. Select the menu sequence File > Export.... The file export wizard opens.
2. In the General folder, select the Archive File option and click on Next >.
3. All the projects in the workspace are displayed in a list in the top left-hand
area of the screen. Select the projects to be archived (set check mark).
4. Click on Browse… to navigate to the desired file location, enter the file
name for the archive and click on Save.
5. Click on Finish. The archive file is created.
Precondition An archive file (e.g. a ZIP file) with the projects to be loaded is available.
The workspace does not contain any project with the name of the project
to be loaded.
Procedure 1. Select the menu sequence File > Import…. The file import wizard opens.
2. In the General folder, select the Existing Projects into Workspace op-
tion and click on Next >.
3. Activate the Select archive file radio button, click on Browse… to navi-
gate to the desired archive file and select it.
4. Click on Open. All the projects in the archive are displayed in a list under
Projects.
5. Select projects to be loaded to the workspace (check mark must be set).
6. Click on Finish. The selected projects are loaded.
Procedure 1. Select the menu sequence File > Import…. The file import wizard opens.
2. In the General folder, select the Existing Projects into Workspace op-
tion and click on Next >.
3. Activate the Select root directory radio button, click on Browse… to nav-
igate to the desired directory and select it.
4. Click on OK. All the projects in the selected directory are displayed in a list
under Projects.
5. Select projects to be loaded to the workspace (check mark must be set).
6. Click on Finish. The selected projects are loaded.
One or more Java projects can be referenced within a Sunrise project. The ref-
erencing of Java projects allows them to be used in any number of Sunrise
projects and thus on different robot controllers.
The referenced Java projects can in turn reference further Java projects. Only
one Sunrise project may exist among all the cross-referenced projects.
Procedure 1. Select the menu sequence File > New > Project.... The project creation
wizard opens.
2. In the Java folder, select the Java Project option and click on Next >.
3. Enter the name of the Java project in the Project name box.
4. In the JRE area, select the JRE version that corresponds to the JRE ver-
sion of the Sunrise project. This is generally JavaSE-1.6.
5. Click on Next > and then on Finish.
6. The first time a Java project is created in the workspace – or if the user’s
preference has not yet been specified in previous Java projects – a query
is displayed asking whether the Java perspective should be opened.
Select Yes or No as appropriate.
If the query should not be displayed when the next Java project is cre-
ated in the workspace, activate the Remember my decision option
(set check mark).
Description If a Java project is used for robot programming, the specific KUKA libraries re-
quired for this purpose must be inserted into the project. As standard, these
libraries are not contained in a Java project.
The KUKA libraries must be copied from a compatible Sunrise project. Ideally,
this should be a Sunrise project in which the Java project is referenced or will
be referenced. The precondition for compatibility of referenced projects is that
the RoboticsAPI versions match.
Precondition The referenced classes are saved in a defined Java package (not in the
default package).
For Java projects which use referenced KUKA libraries: In the referenced
projects, the RoboticsAPI versions must match.
Description References to inadvertently added projects or projects that are not required
(any longer) can be removed.
Procedure 1. In the Package Explorer, right-click on the project from which referenced
projects should be removed.
2. Select Properties from the context menu. The Properties for Project
window opens.
3. Select the Projects tab in the Java Build Path.
4. Select the projects that are not required and click on Remove.
5. Close the window by clicking on OK.
Procedure 1. Right-click on the desired project or Java package. Select Refactoring >
Rename in the context menu. The Rename Java Project or Rename
Java Package window opens.
2. In the New name box, enter the desired name. Confirm with OK.
Procedure 1. Right-click on the desired Java file. Select Refactoring > Rename in the
context menu. The Rename Compilation Unit window opens.
2. In the New name box, enter the desired name. Click on Finish.
3. Possible conflicts are indicated before the renaming is completed. After
acknowledging and checking these, click on Finish once more.
In the Package Explorer view, inserted elements can be removed again, e.g.
entire projects or individual Java packages and Java files of a project.
Description Elements created for a project can be deleted again. The elements are perma-
nently deleted from the workspace and cannot be restored.
It is also possible to remove some – but not all – of the default elements of a
project.
Description With this procedure, a project is only removed from the Package Explorer and
is retained in the directory for the workspace on the data storage medium.
If required, the project can be reloaded from the directory into the workspace.
The project is then available again in the Package Explorer.
(>>> 5.5.6 "Loading projects from the directory to the workspace" Page 62)
Procedure 1. Right-click on the desired project. Select Delete in the context menu. A re-
quest for confirmation is displayed, asking if the project is really to be de-
leted.
2. The check box next to Delete project content on disk (cannot be un-
done) is deactivated by default. Leave it deactivated.
3. Confirm the request for confirmation with OK.
Description With this procedure, a project is removed from the Package Explorer and per-
manently deleted from the directory for the workspace on the data storage me-
dium. The project cannot be restored.
Procedure 1. Right-click on the desired project. Select Delete in the context menu. A re-
quest for confirmation is displayed, asking if the project is really to be de-
leted.
2. Activate the check box next to Delete project content on disk (cannot
be undone).
3. Confirm the request for confirmation with OK.
Procedure 1. Select the menu sequence Window > User definitions. The User defini-
tions window is opened.
2. Select General > Workspace in the directory tree in the left-hand area of the window.
3. Activate the Update via native hooks or polling check box to enable automatic change detection.
Description The release notes contain information about the versions of the system soft-
ware, e.g. new functions or system requirements. They can be displayed in the
editor.
Procedure Select the menu sequence Help > Sunrise.OS Release Notes.
Function The smartPAD is the hand-held control panel for the industrial robot. The
smartPAD has all the operator control and display functions required for oper-
ation.
The smartPAD has a touch screen: the smartHMI can be operated with a fin-
ger or stylus. An external mouse or external keyboard is not necessary.
Overview
Item Description
1 Button for disconnecting the smartPAD
(>>> 6.2 "Disconnecting and connecting the smartPAD" Page 70)
2 Keyswitch
The connection manager is called by means of the keyswitch. The
switch can only be turned if the key is inserted.
The connection manager is used to change the operating mode.
(>>> 6.8 "Changing the operating mode" Page 83)
3 EMERGENCY STOP device
The robot can be stopped in hazardous situations using the
EMERGENCY STOP device. The EMERGENCY STOP device
locks itself in place when it is pressed.
4 Space Mouse
No function
5 Jog keys
The jog keys are used to move the robot manually.
(>>> 6.14 "Jogging the robot" Page 89)
6 Key for setting the override
7 Main menu key
The main menu key shows and hides the main menu on the
smartHMI.
(>>> 6.4 "Calling the main menu" Page 79)
8 User keys
The function of the user keys is freely programmable. Uses of the
user keys include controlling peripheral devices or triggering
application-specific actions.
9 Start key
The Start key is used to start a program. The Start key is also
used to manually address frames and to move the robot back onto
the path.
10 Start backwards key
No function
11 STOP key
The STOP key is used to stop a program that is running.
12 Keyboard key
No function
The following applies to the jog keys, the user keys and the Start,
Start backwards and STOP keys:
The current function is displayed next to the key on the smartHMI.
If there is no display, the key is currently without function.
Overview
Description
Element Description
Identification plate
Identification plate
Start key
The Start key is used to start a program. The Start key is also used to manually address frames and to move the robot back onto the path.
Enabling switch
The enabling switch has 3 positions:
Not pressed
Center position
Fully pressed (panic position)
The enabling switch must be held in the center position in operating modes T1, T2 and CRR in order to be able to jog the manipulator.
As standard, the enabling switch has no function in Automatic mode.
USB connection
The USB connection is used for archiving data, for example.
Only for FAT32-formatted USB sticks.
Description A smartPAD can be connected at any time. The connected smartPAD as-
sumes the current operating mode of the robot controller. The smartHMI is au-
tomatically displayed again.
The user connecting a smartPAD to the robot controller must subsequently
check whether the smartPAD is operational once again. The smartPAD is not
operational in the following cases:
smartHMI is not displayed again.
It may take more than 30 seconds before the smartHMI is displayed again.
An error message is displayed in the Safety tile, indicating that there is a
connection error to the smartPAD.
Item Description
1 Navigation bar: Main menu and status display
(>>> 6.3.1 "Navigation bar" Page 73)
2 Display area
Display of the level selected in the navigation bar, here the Station
level
3 Jogging options button
Displays the current coordinate system for jogging with the jog
keys. Touching the button opens the Jogging options window, in
which the reference coordinate system and further parameters for
jogging can be set.
(>>> 6.14.1 "“Jogging options” window" Page 89)
4 Jog keys display
If axis-specific jogging is selected, the axis numbers are displayed
here (A1, A2, etc.). If Cartesian jogging is selected, the coordinate
system axes are displayed here (X, Y, Z, A, B, C). In the case of
an LBR iiwa, the elbow angle (R) for executing a null space
motion is additionally displayed.
(>>> 6.14 "Jogging the robot" Page 89)
5 Override button
Indicates the current override. Touching the button opens the
Override window, in which the override can be set.
(>>> 6.12 "“Override” window" Page 87)
6 Life sign display
A steadily flashing life sign indicates that the smartHMI is active.
7 Language selection button
Indicates the currently set language. Touching the button opens
the Language selection menu, in which the language of the user
interface can be changed.
8 User group button
Indicates the currently logged-on user group. Touching the button
opens the Login window, in which the user group can be
changed.
(>>> 6.6.1 "Changing user group" Page 82)
9 User key selection button
Touching the button opens the User key selection window, in
which the currently available user key bars can be selected.
(>>> 6.9 "Activating the user keys" Page 84)
10 Clock button
The clock displays the system time. Touching the button displays
the system time in digital format, together with the current date.
11 Jogging type button
Displays the currently set mode of the Start key. Touching the but-
ton opens the Jogging type window, in which the mode can be
changed.
(>>> 6.13 "“Jogging type” window" Page 87)
12 Back button
Return to the previous view by touching this button.
The navigation bar is the main menu of the user interface and is divided into 4
levels. It is used for navigating between the different levels.
Some of the levels are divided into two parts:
Lower selection list: Opens a list for selecting an application, a robot or an
I/O group, depending on the level.
Upper button: If a selection has been made in the list, this button shows
the selected application, robot or I/O group.
Alternatively, the main menu can be called using the main menu key on the
smartPAD. The main menu contains further menus which cannot be accessed
from the navigation bar.
(>>> 6.4 "Calling the main menu" Page 79)
Overview
Item Description
1 Station level
Displays the controller name and the selected operating mode
(>>> 6.3.4 "Station level" Page 75)
2 Applications level
Displays the selected robot application
(>>> 6.17.1 "Selecting a robot application" Page 100)
All robot and background applications are listed under Applica-
tions.
3 Robot level
Displays the selected robot
(>>> 6.3.5 "Robot level" Page 77)
4 I/O groups level
Displays the selected I/O group
(>>> 6.18.5 "Displaying an I/O group and changing the value of
an output" Page 108)
The status of the system components is indicated by colored circles on the sm-
artHMI.
The “collective status” is displayed in the lower part of the navigation bar. The
status of each of the selected components is displayed in the upper part. For
example, it is possible for one application to be executed while another appli-
cation is in the error state.
Status Description
Serious error
The system component cannot be used. The reason for this
may be an operator error or an error in the system component.
Warning
There is a warning for the system component. The operability of
the component may be restricted. It is therefore advisable to
remedy the problem.
For applications, the yellow status indicator means that the
application is paused.
Status OK
There are no warnings or faults for the system component.
Status unknown
The status of the system component cannot be determined.
6.3.3 Keypad
There is a keypad on the smartHMI for entering letters and numbers. The sm-
artHMI detects when the entry of letters or numbers is required and automati-
cally displays the appropriate keypad.
The Station level provides access to information and functionalities which af-
fect the entire station.
Item Description
1 Process data tile
Opens the Process data view. The configuration of process data
is not yet possible.
2 Safety tile
Indicates the safety status of the station and opens the Safety
sublevel. The sublevel contains the following tiles:
Activation
Opens the Activation view for activating and deactivating the
safety configuration. A precondition for activation/deactivation
is the user group “Safety maintenance”.
State
Opens the State view and displays error messages relating to
the safety controller.
3 Frames tile
Opens the Frames view. The view contains the frames created for
the station.
(>>> 6.16.1 "“Frames” view" Page 94)
4 KUKA_Sunrise_Cabinet_1 tile
Indicates the status of the robot controller and opens a sublevel.
The sublevel contains the following tiles:
Boot state
Indicates the boot status of the robot controller.
Field buses
Indicates the status of the field buses. The tile is only displayed
if I/O groups have been created and corresponding signals
have been mapped with WorkVisual.
Backup Manager
Opens the Backup Manager view. The tile is only displayed if
the Backup Manager has been installed.
(>>> 6.19 "Backup Manager" Page 111)
Virus scanner
Opens the Virus scanner view. The tile is only displayed if the
virus scanner has been installed.
(>>> 20.4 "Displaying messages of the virus scanner"
Page 533)
5 HMI state tile
Displays the connection status between the smartHMI and the
robot controller.
6 Information tile
Opens the Information view and displays system information,
e.g. the IP address of the robot controller.
(>>> 6.18.6 "Displaying information about the robot and robot
controller" Page 110)
7 Log tile
Opens the Log view and displays the logged events and changes
in state of the system. The display can be filtered based on vari-
ous criteria.
(>>> 20.2 "Displaying a log" Page 527)
The Robot level gives access to information and functionalities which affect
the selected robot.
Item Description
1 Axis position tile
Opens the Axis position view. The axis-specific actual position of
the robot is displayed.
(>>> 6.18.2 "Displaying the axis-specific actual position"
Page 106)
2 Cartesian position tile
Opens the Cartesian position view. The Cartesian actual posi-
tion of the robot is displayed.
(>>> 6.18.3 "Displaying the Cartesian actual position" Page 107)
3 Axis torques tile
Opens the Axis torques view. The axis torques of the robot are
displayed.
(>>> 6.18.4 "Displaying axis-specific torques" Page 108)
4 Mastering tile
Opens the Mastering view. The mastering status of the robot
axes is displayed. The axes can be mastered or unmastered indi-
vidually.
(>>> 7.4 "Position mastering" Page 118)
5 Load data tile
Opens the Load data view for automatic load data determination.
(>>> 7.6 "Determining tool load data" Page 127)
6 Motion enable tile
Displays whether the robot has received the motion enable.
7 Log tile
Opens the Log view and displays the logged events and changes
in state of the system. The display can be filtered based on vari-
ous criteria. As standard, the Source(s) filter is already set on the
robot in question.
(>>> 20.2 "Displaying a log" Page 527)
8 Device state tile
The status of the robot drive system is displayed.
9 Calibration tile
Opens the Calibration sublevel which contains the Base calibra-
tion and Tool calibration tiles.
(>>> 7.5 "Calibration" Page 119)
Procedure Press the main menu key on the smartPAD. The Main menu view opens.
Item Description
1 Back button
Touch this button to return to the view which was visible before the
main menu was opened.
2 Home button
Closes all opened areas.
3 Button for closing the level
Closes the lowest opened level.
4 The views most recently opened from the main menu are dis-
played here (maximum 3).
By touching the view in question, it is possible to switch to these
views again without having to navigate the main menu.
Procedure 1. Touch the Language selection button on the side panel of the smartHMI
(bottom left). The Language selection menu is opened.
2. Select the desired language.
Description The user interface on the smartHMI is available in the following languages:
Description Different functions can be executed on the robot controller, depending on the
user group.
The following user groups are available as standard:
Operator
The user group “Operator” is the default user group.
Safety maintenance technician
The user group “Safety maintenance” is responsible for starting up the safety
equipment of the industrial robot. Only this user group can modify the safety
configuration on the robot controller.
The user group is protected by means of a password.
If the option Sunrise.RolesRights is used, there is a further user group:
Expert
The user group “Expert” can perform protected functions that can no lon-
ger be performed by the “Operator”.
The user group is protected by means of a password.
User privileges If the user group “Expert” is installed, the user rights of the operator are restricted. The user rights are then assigned as follows:
Function | Operator | Expert | Safety maintenance
Selecting/deselecting an application
Pausing an application
Teaching frames
Creating a new frame
Robot mastering/unmastering
Description When the robot controller is rebooted, the default user group is selected. The
User group button can be used to switch to a different user group. The button
is labeled with the name of the active user group.
If no actions are carried out on the user interface within 5 minutes, the robot
controller switches to the default user group for safety reasons.
Changing the user group:
1. Touch the User group button. The Login window opens.
2. Select the desired user group.
3. Enter the password and confirm with Login. The Login window closes
and the selected user group is active.
Logging off a user group:
1. Touch the User group button. The Login window opens.
2. Touch the Log off button. The Login window closes and the default user
group is active again.
Description CRR is an operating mode to which the system can be switched when the ro-
bot is stopped by the safety controller for one of the following reasons:
Robot violates an axis-specific or Cartesian monitoring space.
Orientation of a safety-oriented tool is outside the monitored range.
Robot violates a force or torque monitoring function.
A position sensor is not mastered or referenced.
A joint torque sensor is not referenced.
Once the operating mode has been switched to CRR, the robot can be moved
again.
Use CRR mode can be used, for example, to retract the robot in the case of a
space or force monitoring violation or to master the robot with a Cartesian ve-
locity monitoring function active.
If the cause of the stop is no longer present and if no further stop is requested
for 4 seconds by one of the specified causes, the operating mode automatical-
ly changes to T1.
Motion velocity The motion velocity of the set working point in CRR mode corresponds to the
jog velocity in T1 mode:
Program mode: Reduced programmed velocity, maximum 250 mm/s
Jog mode: Jog velocity, maximum 250 mm/s
Manual guidance: No limitation of the velocity, but safety-oriented velocity
monitoring functions in accordance with the safety configuration
Description The operating mode can be set with the smartPAD using the connection man-
ager.
Precondition The key is inserted in the switch for calling the connection manager.
Procedure 1. On the smartPAD, turn the switch for the connection manager to the right.
The connection manager is displayed.
2. Select the operating mode.
3. Turn the switch for the connection manager to the left.
The selected operating mode is now active and is displayed in the naviga-
tion bar of the smartHMI.
Operating mode | Use | Velocities
T1
Use: Programming, teaching and testing of programs.
Velocities:
Program verification: Reduced programmed velocity, maximum 250 mm/s
Manual mode: Jog velocity, maximum 250 mm/s
Manual guidance: No limitation of the velocity, but safety-oriented velocity monitoring in accordance with the safety configuration
Note: The maximum velocity of 250 mm/s does not apply to a mobile platform.
T2
Use: Testing of programs
Velocities:
Program verification: Programmed velocity
Manual mode: Not possible
AUT
Use: Automatic execution of programs. For industrial robots with and without higher-level controllers.
Velocities:
Program mode: Programmed velocity
Manual mode: Not possible
CRR
Use: CRR is an operating mode which can be selected when the industrial robot is stopped by the safety controller for one of the following reasons:
Industrial robot violates an axis-specific or Cartesian monitoring space.
Orientation of a safety-oriented tool is outside the monitored range.
Industrial robot violates a force or torque monitoring function.
A position sensor is not mastered or referenced.
A joint torque sensor is not referenced.
After changing to CRR mode, the industrial robot may once again be moved.
Velocities:
Program verification: Reduced programmed velocity, maximum 250 mm/s
Manual mode: Jog velocity, maximum 250 mm/s
Manual guidance: No limitation of the velocity, but safety-oriented velocity monitoring in accordance with the safety configuration
Description The user keys on the smartPAD can be assigned functions. All the user key
functions of a running application are available to the operator. In order to be
able to use the desired functions, the operator must activate the corresponding
user key bar.
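For illustration, the following sketch shows how a function can be assigned to a user key from within a robot application. It assumes the user key classes of the RoboticsAPI UI model (IUserKeyBar, IUserKey, IUserKeyListener); the bar name, key label and the logged action are purely hypothetical.

import com.kuka.roboticsAPI.applicationModel.RoboticsAPIApplication;
import com.kuka.roboticsAPI.uiModel.userKeys.IUserKey;
import com.kuka.roboticsAPI.uiModel.userKeys.IUserKeyBar;
import com.kuka.roboticsAPI.uiModel.userKeys.IUserKeyListener;
import com.kuka.roboticsAPI.uiModel.userKeys.UserKeyAlignment;
import com.kuka.roboticsAPI.uiModel.userKeys.UserKeyEvent;

public class UserKeyExample extends RoboticsAPIApplication {

    @Override
    public void initialize() {
        // Create a user key bar named "Gripper" (hypothetical name).
        IUserKeyBar gripperBar = getApplicationUI().createUserKeyBar("Gripper");

        IUserKeyListener listener = new IUserKeyListener() {
            @Override
            public void onKeyEvent(IUserKey key, UserKeyEvent event) {
                if (event == UserKeyEvent.KeyDown) {
                    // Application-specific action, e.g. switching an output.
                    getLogger().info("Gripper key pressed");
                }
            }
        };

        // Assign the listener to key slot 0 and label the key.
        IUserKey openKey = gripperBar.addUserKey(0, listener, true);
        openKey.setText(UserKeyAlignment.TopMiddle, "Open");

        // The key bar can only be activated on the smartPAD after publishing.
        gripperBar.publish();
    }

    @Override
    public void run() {
        // Robot motions of the application would be programmed here.
    }
}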
Description If there are connection or periphery errors, the safety controller is paused (af-
ter one or more occurrences depending on the error). Pausing the safety con-
troller causes the robot to stop and all safe outputs to be switched off. The
application can resume once the error has been eliminated.
Procedure 1. Select Safety > State at the Station level. The State view opens.
The cause of the error is displayed in the view. The Resume safety con-
troller button is not active.
2. Eliminate the error. The Resume safety controller button is now activat-
ed.
3. Press Resume safety controller. The safety controller is resumed.
Overview The following coordinate systems are relevant for the robot controller:
World
Robot base
Base
Flange
Tool
Translation
Coordinate Description
Distance X Translation along the X axis of the reference system
Distance Y Translation along the Y axis of the reference system
Distance Z Translation along the Z axis of the reference system
Rotation
Coordinate Description
Angle A Rotation about the Z axis of the reference system
Angle B Rotation about the Y axis of the reference system
Angle C Rotation about the X axis of the reference system
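In application code, the same translation and rotation coordinates define a frame relative to its reference frame. The following sketch assumes the Frame and Transformation classes of the RoboticsAPI geometric model; the exact constructor signatures should be checked against the API documentation.

import com.kuka.roboticsAPI.geometricModel.Frame;
import com.kuka.roboticsAPI.geometricModel.math.Transformation;

public class FrameOffsetExample {

    public static void main(String[] args) {
        // Reference frame, here simply an unparented frame at the origin.
        Frame reference = new Frame();

        // Translation: X = 100 mm, Y = 50 mm, Z = 0 mm.
        // Rotation: A = 90 deg about Z, B = 0 deg about Y, C = 0 deg about X.
        Transformation offset = Transformation.ofDeg(100.0, 50.0, 0.0, 90.0, 0.0, 0.0);

        // Frame whose coordinates refer to the reference frame.
        Frame target = new Frame(reference, offset);

        System.out.println(target);
    }
}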
Description
Item Description
1 Override button
The display on the button depends on the selected option.
2 Set the jog override.
(>>> 6.14.2 "Setting the jog override" Page 91)
3 Display of application override
If an application override set by the application is programmed,
this is displayed during program execution.
4 Set the manual override.
(>>> 6.17.3 "Setting the manual override" Page 103)
If no application override is active, the manual override that can
be set here corresponds to the effective program override.
5 Display of effective program override
Description The functionality of the Start key can be configured in the Jogging type win-
dow.
Item Description
1 Jogging type button
The display on the button depends on the selected jogging type.
2 Application mode jogging type
In this jogging mode an application can be started by means of
the Start key.
Note: When switching to T2 or Automatic mode, Application
mode is set automatically.
3 Changing program run mode
(>>> 6.17.2 "Setting the program run mode" Page 102)
4 Frame name display
The name of the frame is displayed if a frame has been selected
in the Frames view.
5 Move PTP jogging type
A taught frame can be addressed with a PTP motion by means of
the Start key.
(>>> 6.16.5 "Manually addressing frames" Page 99)
The button for selecting the jogging type is only active if a frame
has been selected in the Frames view.
Note: In the Move PTP jogging type, the Status of the end frame
is taken into consideration. This can cause the axes to move,
even if the end point has already been reached in Cartesian form.
6 Move LIN jogging type
A taught frame can be addressed with a LIN motion by means of
the Start key.
(>>> 6.16.5 "Manually addressing frames" Page 99)
The button for selecting the jogging type is only active if a frame
has been selected in the Frames view.
Note: In the Move LIN jogging type, the Status of the end frame is
not taken into consideration.
7 Open frames view button
Press the button to switch to the Frames view.
Icons The following icons are displayed on the Jogging type button depending on
the jogging type set:
Icon Description
Jogging type Application mode
Description All parameters for jogging the robot can be set in the Jogging Options win-
dow.
Item Description
1 Jogging options button
The icon displayed depends on the programmed jogging type.
2 Select the jogging type.
Axis-specific jogging or Cartesian jogging of the robot in different
coordinate systems is possible. The selected jogging type is indi-
cated in green and displayed on the Jogging options button.
Axes: The robot is moved by axis-specific jogging.
World: The selected TCP is moved in the world coordinate
system by means of Cartesian jogging.
Base: The selected TCP is moved in the selected base coordi-
nate system by means of Cartesian jogging.
Tool: The selected TCP is moved in its own tool coordinate
system by means of Cartesian jogging.
3 Select the robot flange or mounted tool. Not possible while an
application is being executed.
The frames of the selected tool can be selected as the TCP for
Cartesian jogging. The set load data of the tool are taken into con-
sideration.
If a robot application is paused, the tool currently being used in
the application is available under the name Application tool.
(>>> "Application tool" Page 91)
4 Select the TCP.
All the frames of the selected tool are available as the TCP. The
TCP set here is retained. This is also the case if a different TCP is
active in a paused application.
Exception: If a robot application is paused and the application
tool is set, the manually set TCP is not retained when the applica-
tion is resumed. The TCP changes according to the TCP currently
used in the application.
(>>> "Application tool" Page 91)
5 Base selection. Only possible when the jogging type Base is
selected.
All frames which were designated in Sunrise.Workbench as a
base are available as a base.
Application tool The application tool consists of all the frames located below the robot flange
during the runtime. These can be the frames of a tool or workpiece, for exam-
ple, that are connected to the robot flange with the attachTo command. They
may also include frames generated in the application and linked directly or in-
directly to the flange during the runtime.
The application tool is then only available in the jogging options when a robot
application is paused, and if a motion command was sent to the robot control-
ler prior to pausing.
If the application tool is set in the jogging options, all frames located hier-
archically under the flange coordinate system during the runtime can be
selected as the TCP for jogging. The origin frame of the application tool on
the robot flange is available under the name ApplicationTool(Root) for
selection as the TCP for jogging.
If the application tool is set in the jogging options and the application re-
sumed, the following occurs: the frame with which the current motion com-
mand is executed in the application is automatically set as the TCP.
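The following sketch illustrates how a tool becomes part of the application tool at runtime by attaching it to the robot flange with attachTo. It assumes a tool template named "Gripper" in the project's object templates; the template name is a hypothetical example.

import javax.inject.Inject;

import com.kuka.roboticsAPI.applicationModel.RoboticsAPIApplication;
import com.kuka.roboticsAPI.deviceModel.LBR;
import com.kuka.roboticsAPI.geometricModel.Tool;

public class AttachToolExample extends RoboticsAPIApplication {

    @Inject
    private LBR robot;

    private Tool gripper;

    @Override
    public void initialize() {
        // Create the tool from the object template "Gripper" (hypothetical name).
        gripper = (Tool) getApplicationData().createFromTemplate("Gripper");
    }

    @Override
    public void run() {
        // Attaching the tool to the flange links its frames to the flange and
        // applies the tool load data; the tool frames are now part of the
        // application tool during runtime.
        gripper.attachTo(robot.getFlange());

        // Motions commanded after this point can use a tool frame as the TCP.
    }
}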
Description The jog override determines the velocity of the robot during jogging. The ve-
locity actually achieved by the robot with a jog override setting of 100% de-
pends on various factors, including the robot type. However, the velocity of the
set working point cannot exceed 250 mm/s.
Option Description
Set jog override option activated
3. Set the desired jog override. It can be set using either the plus/minus keys
or by means of the slider.
Plus/minus keys: The override can be set in steps to the following val-
ues: 100%, 75%, 50%, 30%, 10%, 5%, 3%, 1%, 0%.
Slider: The override can be adjusted in 1% steps.
4. Touch the Override button or an area outside the window to close the win-
dow.
Alternative procedure Alternatively, the override can be set using the plus/minus key on the right of
the smartPAD.
The value can be set in the following steps: 100%, 75%, 50%, 30%, 10%, 5%,
3%, 1%.
Procedure 1. Select the jogging type Axes from the jogging options.
Axes A1 to A7 are displayed next to the jog keys.
2. Set the jog override.
3. Hold down the enabling switch.
When motion is enabled, the display elements next to the jog keys are
highlighted in white.
4. Press the plus or minus jog key to move an axis in the positive or negative
direction.
Description
The positive direction of rotation of the robot axes can be determined using the
right-hand rule. Imagine the cable bundle which runs inside the robot from the
base to the flange. Mentally close the fingers of your right hand around the ca-
ble bundle at the axis in question. Keep your thumb extended while doing so.
Your thumb is now positioned on the cable bundle so that it points in the same
direction as the cable bundle runs inside the axis on its way to the flange. The
other fingers of your right hand point in the positive direction of rotation of the
robot axis.
Procedure 1. Select the desired coordinate system from the jogging options as the jog-
ging type. World, Base and Tool are available.
The following designations are displayed next to the jog keys:
X, Y, Z: for the linear motions along the axes of the selected coordinate
system
A, B, C: for the rotational motions about the axes of the selected coor-
dinate system
R: for the null space motion
2. Select the desired tool and TCP.
3. If the Base coordinate system is selected as the jogging type, select the
desired base.
Description The lightweight robot has 7 axes, making it kinematically redundant. This
means that theoretically, it can move to every point in the work envelope with
an infinite number of axis configurations.
Due to the kinematic redundancy, a so-called null space motion can be carried
out during Cartesian jogging. In the null space motion, the axes are rotated in
such a way that the position and orientation of the set TCP are retained during
the motion.
Properties The null space motion is carried out via the “elbow” of the robot arm.
The position of the elbow is defined by the elbow angle (R).
The position of the elbow angle (R) can be modified using the jog keys dur-
ing Cartesian jogging.
Areas of application The optimal axis configuration can be set for a given position and orientation
of the TCP. This is especially useful in a limited working space.
When a software limit switch is reached, you can attempt to move the robot
out of the range of the limit switches by changing the elbow angle.
Procedure 1. Press and hold down the enabling switch on the hand guiding device.
2. Guide the TCP to the desired position.
3. Once the position has been reached, release the enabling switch.
Description The view contains the frames created for the station. Additional frames can be
created and the frames taught here. The position and orientation of a frame in
space and the associated redundancy information are recorded during teach-
ing.
Taught frames can be addressed manually.
Taught frames can be used as end points of motions. If an application is
run and the end frame of a motion is addressed, this is selected in the
Frames view.
(>>> 6.18.1 "Displaying the end frame of the motion currently being exe-
cuted" Page 106)
Item Description
1 Frame path
Path to the frames of the currently displayed hierarchy level: Goes
from World to the direct parent frame (here Box)
2 Frames of the current hierarchy level
A frame can be selected by touching it. The frame selected here is
marked with a hand icon. The hand icon means that this frame
can be used as the base for jogging and can be calibrated.
3 Properties of the selected frame
Name of the frame
Comment
Tool used while teaching the frame
Date and time of the last modification
4 Create frame button
Creates a frame at the currently displayed hierarchy level.
5 Create child frame button
The button can be used to create a child frame for a selected
frame. If no frame is selected, the button is disabled.
6 Set base for jogging button
The button sets the selected frame as the base for jogging in the
jogging options.
(>>> 6.14.1 "“Jogging options” window" Page 89)
The button is only active if the Base jogging type is selected from
the jogging options and the selected frame is marked as the base
in Sunrise.Workbench.
7 Touchup button
A selected frame can be taught. If no frame is selected, the button
is disabled.
8 Display child frames button
The button displays the direct child elements of a frame.
The button is only active if a frame has child elements.
9 Frame coordinates with reference to the parent frame
10 Magnifying glass button
The magnifying glass button is only active if an application is run-
ning and the end frame of a motion is being addressed. Use the
button to switch to this end frame if it is not yet displayed.
Description If the desired TCP is moved to the position of a new frame, the frame is taught
directly on creation. In other words, when a frame is created, the position and
orientation of the TCP that is currently selected in the jogging options are au-
tomatically applied as frame coordinates.
Precondition The tool with the desired TCP is set in the jogging options.
(>>> 6.14.1 "“Jogging options” window" Page 89)
The application tool is only available in the jogging options if the robot
application is paused. For this reason, use of the application tool for
teaching frames is not recommended.
The tool corresponding to the current application tool (object template of the
tool) is also available for selection in the jogging options. Teaching can be
carried out with this tool instead of the application tool.
Operating mode T1
Description The coordinates of a frame can be modified on the smartHMI. This is done by
moving to the new position of the frame with the desired TCP and teaching the
frame. In the process, the new position and orientation are applied.
Precondition The tool with the desired TCP is set in the jogging options.
(>>> 6.14.1 "“Jogging options” window" Page 89)
The application tool is only available in the jogging options if the robot
application is paused. For this reason, use of the application tool for
teaching frames is not recommended.
The tool corresponding to the current application tool (object template of the
tool) is also available for selection in the jogging options. Teaching can be
carried out with this tool instead of the application tool.
Operating mode T1
Item Description
1 Values saved up to now
2 New values
3 Changes between the values saved until now and new values
4 Base for jogging
All coordinate values of the frame which are displayed in the dia-
log refer to the jogging base set in the jogging options. These val-
ues generally differ from the coordinate values of the frame with
respect to its parent frame.
(>>> 6.14.1 "“Jogging options” window" Page 89)
5 Information on the robot and tool used during teaching
These frame properties are adopted by Sunrise.Workbench when
the project is synchronized.
6 Redundancy information on the taught point
These frame properties are adopted by Sunrise.Workbench when
the project is synchronized.
7 Cartesian distance between the current and new position of the
frame
Description Frames can be taught using a hand guiding device. Here, the TCP is moved
by hand to the desired position.
Manual guidance is supported as standard in all operating modes except CRR
mode. In the station configuration, it is possible to configure manual guidance
as not allowed in Test mode and/or Automatic mode.
Procedure 1. Press and hold down the enabling switch on the hand guiding device.
2. Guide the TCP to the desired position.
3. Once the position has been reached, release the enabling switch.
4. In the Frames view, select the frame whose position is to be taught.
5. Press Touchup to apply the current TCP coordinates to the selected
frame.
The coordinates and redundancy information of the taught point are dis-
played in the Apply touchup data dialog.
6. Press Apply to save the new values.
Description Taught frames can be manually addressed with a PTP or LIN motion. In a PTP
motion, the frame is approached by the quickest route, whereas in a LIN mo-
tion it is approached on a predictable path.
When a frame is being addressed, a warning message is displayed in the fol-
lowing cases:
The selected tool does not correspond to the tool with which the frame was
taught.
The selected TCP does not correspond to the TCP with which the frame
was taught.
The transformation of the TCP frame has been modified.
If the frame can still be reached, it is possible to move to it.
Procedure Select the desired robot application in the navigation bar under Applica-
tions.
The Applications view opens and the robot application goes into the Se-
lected state.
Description
Item Description
1 Current status of the robot application
The status is displayed as text and as an icon.
(>>> "Status display" Page 101)
2 Display of robot application
The name of the selected robot application is displayed, here
Motions.
3 Message window
Error messages and user messages programmed in the robot
application are displayed here.
Status display The robot application can have the following states:
Start key An icon on the side panel of the smartHMI indicates the function that can be
executed using the Start key.
Icon Description
Start application.
A selected application can be started or a paused applica-
tion can be continued.
Reposition robot.
If the robot has left the path, it must be repositioned in
order to continue the application.
STOP key An icon on the side panel of the smartHMI indicates the function that can be
executed using the STOP key.
Icon Description
Pause application.
A running application can be paused in Automatic mode.
If a robot application is paused, the robot can be jogged. The tool and
TCP currently used in the paused application are not automatically
set as the tool and TCP for Cartesian jogging.
(>>> 6.14.1 "“Jogging options” window" Page 89)
Precondition No robot application is selected or the robot application has one of the fol-
lowing states:
Selected
Motion paused
Error
T1 or T2 mode
Button Description
Standard mode
The program is executed through to the end without stop-
ping.
Step mode
The program is executed with a stop after each motion
command. The Start key must be pressed again for each
motion command.
The end point of an approximated motion is not approx-
imated but rather addressed with exact positioning.
Exception: Approximated motions which were sent to
the robot controller asynchronously before Step mode
was activated and which are waiting there to be execut-
ed will stop at the approximate positioning point. For
these motions, the approximate positioning arc will be
executed when the program is resumed.
In a spline motion, the entire spline block is executed as
one motion and then stopped.
In a MotionBatch, the entire batch is not executed but
rather exact positioning is carried out after each individ-
ual motion of the batch.
The program run mode can also be set and requested in the source
code of the application. (>>> 15.17 "Changing and requesting the
program run mode" Page 398)
Description The manual override determines the velocity of the robot during program exe-
cution.
The manual override is specified as a percentage of the programmed velocity.
In T1 mode, the maximum velocity is 250 mm/s, irrespective of the override
that is set.
If no application override set by the application is active, the manual override
corresponds to the effective program override with which the robot actually
moves.
If an application override set by the application is active, the effective program
override is calculated as follows:
Effective program override = manual override · application override
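As an illustrative example: with a manual override of 50% and an application override of 40% set by the application, the effective program override is 0.5 · 0.4 = 0.2, i.e. the robot moves at 20% of the programmed velocity (in T1 mode still limited to a maximum of 250 mm/s).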
Option Description
Set manual override option activated
3. Set the desired manual override. It can be set using either the plus/minus
keys or by means of the slider.
Plus/minus keys: The override can be set in steps to the following val-
ues: 100%, 75%, 50%, 30%, 10%, 5%, 3%, 1%, 0%.
Slider: The override can be adjusted in 1% steps.
4. Touch the Override button or an area outside the window to close the win-
dow.
Alternative procedure Alternatively, the override can be set using the plus/minus key on the right of
the smartPAD.
The value can be set in the following steps: 100%, 75%, 50%, 30%, 10%, 5%,
3%, 1%.
Description In order to restart a paused robot application from the beginning, it must be re-
set. On resetting, the robot application is reset to the start of the program and
goes into the Selected state.
The button for resetting the application is available under Applications in the
navigation bar:
Button Description
Reset button
The button is only active when the robot application is
paused.
Alternative procedure Select the Reset button in the navigation bar under Applications.
Description The following events can cause the robot to leave its planned path:
Triggering of a non-path-maintaining stop
Jogging during a paused application
The robot can be repositioned using the Start key. Repositioning means that
the robot is returned to the Cartesian position at which it left the path. The ap-
plication can then be resumed from there.
Characteristics of the motion which is used to return to the path:
A PTP motion is executed.
The path used to return to the path is different from the one taken when leaving
the path.
The robot is moved at 20% of the maximum possible axis velocity and the
effective program override (effective program override = manual override ·
application override).
The robot is moved with the load data which were set when the application
was interrupted.
The robot is moved with the controller mode which was set when the ap-
plication was interrupted.
Additional forces or force oscillations overlaid by an impedance controller
are withdrawn during repositioning.
Procedure In the navigation bar under Applications touch the button with the back-
ground application to be stopped.
Buttons The button of a stoppable background application shows the Stop icon. The
status indicator is green.
Procedure In the navigation bar under Applications touch the button with the back-
ground application to be started.
Buttons The button of a startable background application shows the Start icon. The
status indicator can be gray or red.
6.18.1 Displaying the end frame of the motion currently being executed
Description If a frame from the frame tree is addressed in an application, this is indicated
in the Frames view. If the end frame of the motion currently being executed is
located at the displayed hierarchy level, the frame name is marked with an ar-
row icon (3 arrowheads):
Fig. 6-20: The arrow icon marks the current end frame
If the end frame is located hierarchically below a displayed frame, the Display
child frames button is marked with an additional arrow icon (3 arrowheads):
You can switch directly to the current end frame using the magnifying glass
button in the upper right-hand area of the Frames view:
Fig. 6-22: The magnifying glass button switches directly to the current
end frame
Procedure 1. Select Frames at the Station level. The Frames view opens.
2. Switch to the end frame using the Display child frames button or the
magnifying glass button.
Description The current position of axes A1 to A7 is displayed. In addition, the range within
which each axis can be moved (limitation by end stops) is indicated by a white
bar.
The actual position can also be displayed while the robot is moving.
Description The Cartesian actual position of the selected TCP is displayed. The values re-
fer to the base set in the jogging options.
The display contains the following data:
Current position (X, Y, Z)
Current orientation (A, B, C)
Current redundancy information: Status, Turn, redundancy angle (E1)
Current tool, TCP and base
The actual position can also be displayed while the robot is moving.
Description The current torque values for axes A1 to A7 are displayed. In addition, the sen-
sor measuring range for each axis is displayed (white bar).
If the maximum permissible torque on a joint is exceeded, the dark gray area
of the bar for the axis in question turns orange. Only the violated area is indi-
cated in color (either the negative or positive part).
The external torques are only displayed correctly if the correct tool
has been specified.
Current tool
The axis-specific torques can also be displayed while the robot is moving.
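The same actual values can also be read in an application. The following sketch uses query methods of the LBR device class (getCurrentJointPosition, getCurrentCartesianPosition, getExternalTorque); it is an illustrative example and the exact signatures should be checked against the RoboticsAPI reference.

import javax.inject.Inject;

import com.kuka.roboticsAPI.applicationModel.RoboticsAPIApplication;
import com.kuka.roboticsAPI.deviceModel.JointPosition;
import com.kuka.roboticsAPI.deviceModel.LBR;
import com.kuka.roboticsAPI.geometricModel.Frame;
import com.kuka.roboticsAPI.sensorModel.TorqueSensorData;

public class ReadActualValues extends RoboticsAPIApplication {

    @Inject
    private LBR robot;

    @Override
    public void run() {
        // Axis-specific actual position of A1 to A7 (radians).
        JointPosition axes = robot.getCurrentJointPosition();

        // Cartesian actual position of the flange.
        Frame flangePosition = robot.getCurrentCartesianPosition(robot.getFlange());

        // Externally applied torques determined from the joint torque sensors.
        TorqueSensorData externalTorques = robot.getExternalTorque();
        double[] torqueValues = externalTorques.getTorqueValues();

        getLogger().info("Axis position: " + axes);
        getLogger().info("Flange position: " + flangePosition);
        getLogger().info("External torque A1 [Nm]: " + torqueValues[0]);
    }
}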
Procedure 1. In the navigation bar, select the desired I/O group from I/O groups. The
inputs/outputs of the selected group are displayed.
2. Select the output to be changed.
3. An input box is displayed for numeric outputs. Enter the desired value.
4. Press and hold down the enabling switch. Change the value of the output
with the appropriate button.
Description
Item Description
1 Name of the input/output
2 Type of input/output
3 Value of the input/output
The value is displayed as a decimal number.
4 Buttons for changing outputs
If an output is selected, its value can be changed. Precondition:
The enabling switch is pressed.
The buttons available depend on the output type.
5 Signal properties
The properties and the current value of the selected input or out-
put are displayed.
6 Signal direction
The icons indicate whether the signal is an input or an output.
The following buttons are available depending on the type of the selected out-
put:
Button Description
True / False
Buttons for changing Boolean outputs. Sets the selected Boolean outputs to
the value True (1) or False (0).
Set Button for changing numeric outputs
Sets the selected numeric output to the entered value.
Icon Description
Icon for an output
Icon Description
Icon for an analog signal
Description The information is required, for example, when requesting help from KUKA
Customer Support.
The following information is displayed under the individual nodes:
Node Description
Station Station information
Software version: Version of the installed
System Software
Station server IP: IP address of the robot
controller
Serial number of controller: Serial number
of the robot controller
User interface Information about the smartHMI
Connection IP
Connection state
<Robot name>/Type plate
Robot information
Serial number: Serial number of the con-
nected robot
Connected robot: Type of the connected ro-
bot
Installed robot: Robot type specified in the
station configuration of Sunrise.Workbench
Operating time [h]
The operating hours meter is running as long
as the drives are switched on.
Overview Once the Backup Manager has been installed on the robot controller, a tile for
the Backup Manager is available on the smartHMI.
The Backup Manager makes it possible to back up and restore robot controller
data manually. Automatic backup of data at a predefined interval can also be
preconfigured in the station configuration.
The following data are backed up and restored:
Project data
Catalogs of the installed software
User-specific files (directory: C:\KRC\UserData)
The target directory for backups and the source directory for restorations is
preconfigured:
Directory D:\ProjectBackup on the robot controller
OR: Shared network directory
User privileges As standard, no special authorization is required for backing up and restoring
data. If the user group “Expert” is installed, the default user may no longer ex-
ecute these functions. The user must be logged on as “Expert” or higher.
Function | Operator | Expert | Safety maintenance
Backing up data manually
Description
Item Description
1 Status indicator of the backup
Deactivated: Automatic backup is not configured.
Ready: Automatic backup is activated.
Running: A backup is in progress (started manually or auto-
matically).
2 Information about the next automatic backup (if activated)
Date and time
Target directory
3 “Manual backup/restoration” area
When the view is opened for the first time, only this area and the
status indicator are displayed. This is the default view.
The area contains the following buttons:
Backup now
(>>> 6.19.2 "Backing up data manually" Page 114)
Restore
The button cannot be activated until the backup copy that is to
be restored has been selected using the magnifying glass but-
ton.
(>>> 6.19.3 "Restoring data manually" Page 114)
Configure source path
Displays the “Configure source path” area. After this the button
is inactive.
Cancel
Hides the “Configure source path” area again. The button is in-
active in the default view.
4 Information about the most recent successful backup
Date and time
Target directory
5 “Configure source path” area
The source directory from which restoration is to be carried out
can be defined here. As standard, the source directory defined in
the station configuration is preset.
The following source directories are available for selection:
Local from D:\ProjectBackup: The source directory is the di-
rectory D:\ProjectBackup on the robot controller.
Network: The source directory is located on a network drive.
The network path to the source directory can be configured.
(>>> 6.19.4 "Configuring the network path for restoration"
Page 114)
6 Information about the backup copy selected for restoration
Project name
Date and time of the backup
7 Magnifying glass button
Opens a dialog in which the backup copy to be restored can be
selected. The dialog displays all backup copies contained in the
configured source directory.
8 Load data set button
Opens a dialog which can be used to select and apply ready-
made restoration configurations.
The button is only active if the file for restoration configurations is
configured in the station configuration and the file is saved under
the configured path on the robot controller.
Description The backup copies are saved in the target directory in the following folder
structure:
IP address_Project name\BACKUP_No.
Element Description
IP address IP address of the robot controller
Project name Name of the project installed on the robot controller
No. Number of the backup copy
The BACKUP folder with the highest number always con-
tains the most recent backup copy.
Procedure Press Backup now in the Backup Manager view. The backup is carried
out.
Description The network parameters can be entered manually or loaded from a preconfig-
ured data set:
Parameter Description
Network path Network path to source directory, e.g. \\192.168.40.171\Backup\Restore
Server user name User name for the network path
The parameter is only relevant if authentication is required for network
access.
Server password Password for the network path
The parameter is only relevant if authentication is required for network
access.
IP address IP address of the robot controller to be restored
Subnet Mask Subnet mask in which the IP address of the robot controller is located
The IP addresses of the robot controller and the server must be locat-
ed in the same range. IP address and subnet mask of the robot con-
troller to be restored must be selected accordingly.
The robot controller is supplied with an operational version of the System Soft-
ware. Therefore, no installation is required during initial start-up.
Installation becomes necessary, for example, if the station configuration
changes.
(>>> 10 "Station configuration and installation" Page 175)
Description When the robot controller is switched on, the system software starts automat-
ically.
The robot controller is ready for operation when the status indicator for the
boot state of the robot controller lights up green:
Boot state tile at the Station level under the KUKA_Sunrise_Cabinet_1
tile.
Procedure Turn the main switch on the robot controller to the “I” position.
Procedure Turn the main switch on the robot controller to the “0” position.
When the robot controller is rebooted or the smartPAD is plugged into a run-
ning robot controller, the version of the smartPAD software is automatically
checked. If there are conflicts between the smartPAD software and the system
software on the robot controller, the smartPAD software must be updated.
Characteristics of the smartPAD software update:
The update is carried out automatically in T1, T2 and CRR modes.
No update is possible in Automatic mode.
If the smartPAD is connected in Automatic mode and a version conflict is
recognized, no user input may be entered on the smartPAD. The operating
mode must be switched to T1 or T2 to start the update automatically.
No user input may be entered during the smartPAD update.
Description If the robot controller is rebooted or the drive bus connection restored, the sys-
tem checks for every connected PDS whether the current PDS firmware ver-
sion matches the firmware version on the robot controller. If the firmware
version of at least one of the PDSs is older than the version on the robot con-
troller, a PDS firmware update must be performed.
The following error message is displayed under the Device state tile:
Firmware update is required. Select "Diagnosis" > "PDS firmware update" in
the main menu in order to update the firmware.
Procedure In the main menu, select Diagnosis > PDS firmware update.
The update is started and a blocking dialog is displayed. No user input may be
entered during the update.
Once the update has been successfully completed, the dialog is closed.
Description An LBR has a Hall effect mastering sensor in every axis. The mastering posi-
tion of the axis (zero position) is located in the center of a defined series of
magnets. It is automatically detected by the mastering sensor when it passes
over the series of magnets during a rotation of the axis.
Before the actual mastering takes place, an automatic search run is performed
in order to find a defined premastering position.
If the search run is successful, the axis is moved into the premastering posi-
tion. The axis is then moved in such a way that the mastering sensor passes
over the series of magnets. The motor position at the moment when the mas-
tering position of the axis is detected is saved as the zero position of the motor.
Procedure 1. Select the Mastering tile at the Robot level. The Mastering view opens.
2. Press and hold down the enabling switch.
3. Press the Master button for the unmastered axis.
First of all, the premastering position is located by means of a search run.
The mastering run is then performed. Once mastering has been carried
out successfully, the axis moves to the calculated mastering position (zero
position).
If the search run or the mastering fails, the process is aborted and the
robot stops.
Description The saved mastering position of an axis can be deleted. This unmasters the
axis. No motion is executed during unmastering.
Procedure 1. Select the Mastering tile at the Robot level. The Mastering view opens.
2. Press the Unmaster button for the mastered axis. The axis is unmastered.
7.5 Calibration
Description During tool calibration, the user assigns a Cartesian coordinate system (tool
coordinate system) to a tool mounted on the mounting flange.
The tool coordinate system has its origin at a point defined by the user. This is
called the TCP (Tool Center Point). The TCP is generally situated at the work-
ing point of the tool. A tool can have multiple TCPs.
Advantages of tool calibration:
The tool can be moved in a straight line in the tool direction.
The tool can be rotated about the TCP without changing the position of the
TCP.
Step Description
1 Define the origin of the tool coordinate system
The following methods are available:
XYZ 4-point
(>>> 7.5.1.1 "TCP calibration: XYZ 4-point method"
Page 120)
2 Define the orientation of the tool coordinate system
The following methods are available:
ABC 2-point
(>>> 7.5.1.2 "Defining the orientation: ABC 2-point meth-
od" Page 122)
This method is not available for safety-oriented tools.
ABC world
(>>> 7.5.1.3 "Defining the orientation: ABC world method"
Page 124)
Description The TCP of the tool to be calibrated is moved to a reference point from 4 dif-
ferent directions. The reference point can be freely selected. The robot con-
troller calculates the TCP from the different flange positions.
The 4 flange positions with which the reference point is addressed must main-
tain a certain minimum distance between one another. If the points are too
close to one another, the position data cannot be saved. A corresponding error
message is generated.
The quality of the calibration can be assessed by means of the translational
calculation error which is determined during calibration. If this error exceeds a
defined limit value, it is advisable to calibrate the TCP once more.
The minimum distance and the maximum calculation error can be modified in
Sunrise.Workbench. (>>> 10.4.4 "Configuration parameters for calibration"
Page 179)
Procedure 1. Select Calibration > Tool calibration at the Robot level. The Tool cali-
bration view opens.
2. Select the tool to be calibrated and the corresponding TCP.
3. Select the TCP calibration (XYZ 4-point) method. The measuring points
of the method are displayed as buttons:
Measurement point 1 ... Measurement point 4
In order to be able to record a measuring point, it must be selected (button
is orange).
4. Move the TCP to any reference point. Press Record calibration point.
The position data are applied and displayed for the selected measuring
point.
5. Move the TCP to the reference point from a different direction. Press Re-
cord calibration point. The position data are applied and displayed for
the selected measuring point.
6. Repeat step 5 two more times.
7. Press Determine tool data. The calibration data and the calculation error
are displayed in the Apply tool data dialog.
8. If the calculation error exceeds the maximum permissible value, a warning
is displayed. Press Cancel and recalibrate the TCP.
9. If the calculation error is below the configured limit, press Apply to save
the calibration data.
10. Either close the Calibration view or define the orientation of the tool coor-
dinate system with the ABC 2-point or ABC World method.
(>>> 7.5.1.2 "Defining the orientation: ABC 2-point method" Page 122)
(>>> 7.5.1.3 "Defining the orientation: ABC world method" Page 124)
11. Synchronize the project in order to save the calibration data in Sun-
rise.Workbench.
Description The robot controller is notified of the axes of the tool coordinate system by ad-
dressing a point on the X axis and a point in the XY plane.
The points must maintain a defined minimum distance from one another. If the
points are too close to one another, the position data cannot be saved. A cor-
responding error message is generated.
The minimum distance can be modified in Sunrise.Workbench.
(>>> 10.4.4 "Configuration parameters for calibration" Page 179)
This method is used for tools with edges and corners which can be used for
orientation purposes. Furthermore, it is used if it is necessary to define the axis
directions with particular precision.
This method is not available for safety-oriented tools.
Procedure 1. Only if the Calibration view was closed following TCP calibration:
Select Calibration > Tool calibration at the Robot level. The Tool cali-
bration view opens.
2. Only if the Calibration view was closed following TCP calibration:
Select the mounted tool and the corresponding TCP of the tool.
3. Select the Defining the orientation (ABC 2-point) method. The measur-
ing points of the method are displayed as buttons:
TCP
Negative X axis
Positive Y value on XY plane
In order to be able to record a measuring point, it must be selected (button
is orange).
4. Move the TCP to any reference point. Press Record calibration point.
The position data are applied and displayed for the selected measuring
point.
5. Move the tool so that the reference point on the X axis has a negative X
value (i.e. move against the tool direction). Press Record calibration
point. The position data are applied and displayed for the selected mea-
suring point.
6. Move the tool so that the reference point in the XY plane has a positive Y
value. Press Record calibration point. The position data are applied and
displayed for the selected measuring point.
7. Press Determine tool data. The calibration data are displayed in the Ap-
ply tool data dialog.
8. Press Apply to save the calibration data.
9. Synchronize the project in order to save the calibration data in Sun-
rise.Workbench.
Description The user aligns the axes of the tool coordinate system parallel to the axes of
the world coordinate system. This communicates the orientation of the tool co-
ordinate system to the robot controller.
There are 2 variants of this method:
5D: The user communicates the tool direction to the robot controller. The
default tool direction is the X axis. The orientation of the other axes is de-
fined by the system and cannot be influenced by the user.
The system always defines the orientation of the other axes in the same
way. If the tool subsequently has to be calibrated again, e.g. after a crash,
it is therefore sufficient to define the tool direction again. Rotation about
the tool direction need not be taken into consideration.
6D: The user communicates the direction of all 3 axes to the robot control-
ler.
This method is used for tools that do not have corners which the user can em-
ploy for orientation, e.g. rounded tools such as adhesive or welding nozzles.
Procedure 1. Only if the Calibration view was closed following TCP calibration:
Select Calibration > Tool calibration at the Robot level. The Tool cali-
bration view opens.
2. Only if the Calibration view was closed following TCP calibration:
Select the mounted tool and the corresponding TCP of the tool.
3. Select the Defining the orientation (ABC world) method.
4. Select the ABC World 5D or ABC world 6D option.
5. If ABC World 5D is selected:
Align +XTOOL parallel to -ZWORLD. (+XTOOL = tool direction)
If ABC world 6D is selected:
Align the axes of the tool coordinate system as follows.
+XTOOL parallel to -ZWORLD. (+XTOOL = tool direction)
+YTOOL parallel to +YWORLD
+ZTOOL parallel to +XWORLD
6. Press Determine tool data. The calibration data are displayed in the Ap-
ply tool data dialog.
7. Press Apply to save the calibration data.
Description During base calibration, the user assigns a Cartesian coordinate system (base
coordinate system) to a frame selected as the base. The base coordinate sys-
tem has its origin at a user-defined point.
Advantages of base calibration:
The TCP can be jogged along the edges of the work surface or workpiece.
Points can be taught relative to the base. If it is necessary to offset the
base, e.g. because the work surface has been offset, the points move with
it and do not need to be retaught.
The origin and 2 further points of a base are addressed with the 3-point meth-
od. These 3 points define the base.
The points must maintain a defined minimum distance from the origin and min-
imum angles between the straight lines (origin – X axis and origin – XY plane).
If the points are too close to one another or if the angle between the straight
lines is too small, the position data cannot be saved. A corresponding error
message is generated.
The minimum distance and angles can be modified in Sunrise.Workbench.
(>>> 10.4.4 "Configuration parameters for calibration" Page 179)
Procedure 1. Select Calibration > Base calibration at the Robot level. The Base cali-
bration view opens.
2. Select the base to be calibrated.
3. Select the mounted tool and the TCP of the tool with which the measuring
points of the base are addressed.
The measuring points of the 3-point method are displayed as buttons:
Origin
Positive X axis
Positive Y value on XY plane
In order to be able to record a measuring point, it must be selected (button
is orange).
4. Move the TCP to the origin of the base. Press Record calibration point.
The position data are applied and displayed for the selected measuring
point.
5. Move the TCP to a point on the positive X axis of the base. Press Record
calibration point. The position data are applied and displayed for the se-
lected measuring point.
6. Move the TCP to a point in the XY plane with a positive Y value. Press Re-
cord calibration point. The position data are applied and displayed for
the selected measuring point.
7. Press Determine base data. The calibration data are displayed in the Ap-
ply base data dialog.
8. Press Apply to save the calibration data.
9. Synchronize the project in order to save the calibration data in Sun-
rise.Workbench.
Description During load data determination, the robot performs multiple measurement
runs with different orientations of wrist axes A5, A6 and A7. The load data are
calculated from the data recorded during the measurement runs.
The mass and the position of the center of mass of the tool mounted on the
robot flange can currently be determined. It is also possible to specify the
mass and to determine the position of the center of mass on the basis of the
mass that is already known.
At the start of load data determination, axis A7 is moved to the zero position
and axis A5 is positioned in such a way that axis A6 is aligned perpendicular
to the weight. During the measurement runs, axis A6 has to be able to move
between -95° and +95°, while axis A7 has to be able to move from 0° to -90°.
The remaining robot axes (A1 to A4) are not moved during load data determi-
nation. They remain in the starting position during measurement.
Quality The quality of the load data determination may be influenced by the following
constraints:
Mass of the tool
Load data determination becomes more reliable as the mass of the tool in-
creases. This is because measurement uncertainties have a greater influ-
ence on a smaller mass.
Supplementary loads
Supplementary loads mounted on the robot, e.g. dress packages, lead to
incorrect load data.
Start position from which load data determination is started
A suitable start position should be determined first and meet the following
criteria:
Axes A1 to A5 are as far away as possible from singularity positions.
The criterion is relevant if the mass is to be determined during load
data determination. If load data determination is only possible in poses
for which axes A1 to A5 are close to singularity positions, the mass can
be specified. If only the center of mass is to be determined on the basis
of the specified mass, the criterion of axis position is irrelevant.
The suitability of the start position for automatic load data determination
must be checked before the load that is to be determined is mounted on
the robot.
Safety-oriented tools For the load data determination for safety-oriented tools, it must be noted that
the modified load data are not automatically transferred to the safety configura-
tion located on the robot controller. (>>> 9.3.9 "Safety-oriented tools"
Page 162)
The load data determined for a safety-oriented tool must first be updated in
Sunrise.Workbench by means of project synchronization. This changes the
safety configuration of the project in Sunrise.Workbench; the project must then
be re-transferred to the robot controller by synchronizing the project again.
Preparation Determine the start position from which load data determination is to be
started.
Procedure 1. Select the Load data tile at the Robot level. The Load data view opens.
2. Select the mounted tool from the selection list.
3. If T1 or T2 mode is set, press and hold down the enabling switch until load
data determination has been completed.
4. Press Determining the load data.
5. If the tool already has a mass, the operator will be asked if the mass is to
be redetermined.
Select Use existing mass if the currently saved mass is to be re-
tained.
Select Redetermine mass if the mass is to be determined again.
6. The robot starts the measurement runs and the load data are determined.
A progress bar is displayed.
Once load data determination has been completed, the determined load
data are displayed in the Apply load data dialog.
Press Apply to save the determined load data.
7. Synchronize the project so that the load data are saved in Sunrise.Work-
bench.
8. When the load data for a safety-oriented tool have been determined, the
safety configuration changes as a result of the project synchronization.
Synchronize the project in order to transfer the changed safety configura-
tion to the robot controller.
Item Description
1 Tool selection list
The tools created in the object templates are available for selec-
tion here.
2 Load data display
Displays the current load data of the selected tool.
3 Display of axes used
Displays the axes that are moved for load data determination.
4 Determining the load data button
Starts load data determination. The button is only active if a tool
has been selected and the motion enable signal has been issued.
8 Brake test
8.1 Overview of the brake test
Description Each robot axis has a holding brake integrated into the drive train. The brakes
have 2 functions:
Stopping the robot when the servo control is deactivated or the robot is de-
energized.
Switching the robot to the safe state “Standstill” in the event of a fault.
The brake test checks whether the brake holding torque applied by each brake
is high enough, i.e. whether it exceeds a specific reference torque. This refer-
ence torque can be specified by the programmer or read from the motor data.
Execution A precondition for execution of the brake test is that the robot is at operating
temperature.
The brake test is manually executed by means of an application. A prepared
brake test application for the LBR iiwa is available from Sunrise.Workbench.
If the prepared brake test application is used, the robot is moved prior to the
actual brake test and the resulting maximum absolute torque is determined for
each axis. In the brake test application, the torque determined is communicat-
ed to the brake test as the reference holding torque.
The determination of the maximum absolute torques is referred to in the fol-
lowing as torque value determination.
Procedure When a brake is tested, the following steps are carried out as standard:
1. The axis moves at constant velocity over a small axis angle of max. 5° (on
the output side). The gravitation and friction are determined during this
motion.
2. When the axis has returned to its starting position and the axis drive is sta-
tionary, the brake is closed.
3. One of the following values is used as the holding torque to be tested: the
reference holding torque determined, the minimum brake holding torque
or the motor holding torque.
The holding torque to be tested is defined internally by the system accord-
ing to the following rules:
a. If the reference holding torque is greater than the lowest value of the
minimum brake and motor holding torques, then the lowest value of
the minimum brake and motor holding torques is used as the holding
torque to be tested.
b. If the reference holding torque is lower than 20% of the lowest value of
the minimum brake and motor holding torques, then 20% of the lowest
value of the minimum brake and motor holding torques is used as the
holding torque to be tested.
c. In all other cases, the reference holding torque is used.
At the start of the brake test, with the brake closed, the setpoint torque of
the drive is set to 80% of the holding torque to be tested.
The minimum and maximum brake holding torques are saved in the
motor data. The motor holding torque is derived from the motor data.
The brake test does not depend on the loads mounted on the robot,
as gravitation and friction are taken into consideration when the test
is carried out.
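The selection rules in step 3 can be summarized as the following sketch. This merely restates the rules for illustration and is not part of the API; the variable names are illustrative:
// lowest of the minimum brake holding torque and the motor holding torque
double lowest = Math.min(minBrakeHoldingTorque, motorHoldingTorque);
double torqueToBeTested;
if (referenceHoldingTorque > lowest) {
   torqueToBeTested = lowest;                  // rule a
} else if (referenceHoldingTorque < 0.2 * lowest) {
   torqueToBeTested = 0.2 * lowest;            // rule b
} else {
   torqueToBeTested = referenceHoldingTorque;  // rule c
}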
Overview The following describes the steps for executing the brake test with the tem-
plate available in Sunrise.Workbench.
The brake test application can be adapted and expanded. The comments con-
tained in the template must be observed.
Step Description
1 Create the brake test application from a template.
(>>> 8.2 "Creating the brake test application from the tem-
plate" Page 133)
2 In the brake test application, remove or adapt determination
of the application-specific maximum absolute torques.
At the start of the brake test application, 2 predefined axis
positions are addressed as standard. These positions are
addressed in order to determine the maximum absolute
torque for each axis and transfer it to the brake test as the ref-
erence holding torque.
It is advisable to test the brakes against the minimum brake
holding torque, which is stored in the motor data. To do so,
the prepared brake test application must be adapted.
(>>> 8.2.1 "Adapting the brake test application for testing
against the minimum holding torque" Page 136)
If the brake test requires the maximum absolute torques
which occur when a user-specific robot application is exe-
cuted, the user-specific robot application can be added to the
brake test application. Since the brakes are not tested against
the minimum brake holding torque in this case, a risk assess-
ment must be carried out before the test.
(>>> 8.2.2 "Changing the motion sequence for torque value
determination" Page 137)
3 Change the starting position for the brake test.
The default starting position is the vertical stretch position. If
required, a different starting position can be selected.
(>>> 8.2.3 "Changing the starting position for the brake test"
Page 137)
4 If necessary, make further user-specific adaptations in the
brake test application.
Examples:
Setting the output for a failed brake test.
Saving the test results in a file.
(>>> 8.3 "Programming interface for the brake test"
Page 137)
5 Synchronize the project in order to transfer the brake test
application to the robot controller.
6 Execute the brake test application.
(>>> 8.4 "Performing a brake test" Page 147)
Procedure 1. In the Package Explorer view, select the desired project or package in
which the application is to be created.
2. Select the menu sequence File > New > Sunrise application. The wizard
for creating a new Sunrise application is opened.
3. In the folder Application examples > LBR iiwa, select the application Ap-
plication for the brake test of LBR iiwa and click on Finish.
The BrakeTestApplication.java application is created in the source fold-
er of the project and opened in the editor area of Sunrise.Workbench.
46 allAxesOk = false;
47 }
48 }
49
50 if (allAxesOk){
51 getLogger().info("Brake test was successful for all axes.");
52 }
53 else{
54 getLogger().error("Brake test failed for at least one
axis.");
55 }
56 }
Line Description
3 Address the starting position from which the robot is moved to
determine the maximum absolute torque for each axis.
The default starting position is the vertical stretch position.
5 Prepare the data evaluation.
In order to perform an axis-specific evaluation of the torques
determined during a motion sequence, an instance of the
TorqueEvaluator class must be created.
7 Select the torques to be used for the data evaluation.
The measured torques are not used, but instead the torques
that are calculated using the robot model during the motion se-
quence. Measurements are susceptible to malfunctions. The
calculation of the torque values ensures that no interference
torques resulting from dynamic effects (e.g. robot accelera-
tion) are incorporated into the data evaluation.
9 Start the data evaluation.
The data evaluation is started with the startEvaluation() com-
mand of the TorqueEvaluator class.
11 … 16 Carry out the motion sequence to determine the maximum ab-
solute torques
2 predefined axis positions are each addressed with a PTP
motion.
18 End the data evaluation and request the data.
The stopEvaluation() command of the TorqueEvaluator class
ends the data evaluation and returns the result as a value of
type TorqueStatistic. The result is saved in the variable max-
TorqueData.
20 Variable for the evaluation of the brake test
The result of the brake test is saved for later evaluation via the
variable allAxesOk. It is set to the value “false” if the brake test
of an axis fails or is aborted due to an error. Otherwise it re-
tains the value “true”.
22 … 54 Execute the brake test
The brakes are tested one after the other, starting with the
brake of axis A1.
Lines 24, 25: An object of type BrakeTest is created. In
the process, the corresponding axis and the previously de-
termined maximum absolute torque are transferred as the
reference holding torque.
Line 26: The brake test is executed as a motion command.
Lines 27 ... 54: The result of the brake test is evaluated and
displayed on the smartHMI.
8.2.1 Adapting the brake test application for testing against the minimum holding torque
Description The brake test checks whether the brakes apply the minimum brake holding
torque. It is therefore advisable to adapt the prepared brake test application in
accordance with the following description.
If the brake test is to be executed without reference holding torques being de-
termined and made available to the brake test, all the command lines relevant
for torque value determination must be removed from the brake test applica-
tion. The brake test application then starts with the motion to the starting posi-
tion.
In addition, when creating the BrakeTest instance, the parameter with which
the reference torque is transferred must be removed.
Procedure 1. Remove from the brake test application all command lines that are relevant
for torque value determination.
2. When creating the BrakeTest instance, remove the parameter with which the
reference torque is transferred.
Fig. 8-1: Transferring the reference torque for the brake test
3. Save changes.
8.2.2 Changing the motion sequence for torque value determination
Description The brake test application created from the template contains a prepared mo-
tion sequence for determining the maximum absolute torques generated in
each axis.
As standard, the robot is moved from the vertical stretch position. A different
starting position can be selected.
2 predefined axis positions are each addressed from the starting position with
a PTP motion. In order to determine the maximum absolute torques that arise
in a specific robot application, and to use these as reference holding torques
for the brake test, application-specific motion sequences must be inserted into
the brake test application.
8.2.3 Changing the starting position for the brake test
Description As standard, the brake test application created from the template executes the
brake test to the end position of the motion sequence in order to determine the
maximum absolute torque. If this position is not suitable for the brake test, a
motion to the desired starting position must be programmed before the brake
test is executed.
To determine the gravitation and friction, the axes of an LBR iiwa are
moved towards the mechanical zero position. The maximum travel is
5° on the output side.
8.3 Programming interface for the brake test
With the BrakeTest class, the RoboticsAPI offers a programming interface for
the execution of the brake test. The brake test is executed as a motion com-
mand.
8.3.1 Evaluating the torques generated and determining the maximum absolute value
Explanation of the syntax
Element Description
evaluator Type: TorqueEvaluator
Variable to which the created TorqueEvaluator instance is
assigned. The evaluation of the torques during a motion
sequence is started and ended via the variable.
isTorqueMeasured Type: Boolean
Input parameter of the setTorqueMeasured(…) method:
Defines whether the measured torque values or the values
calculated using the static robot model are to be used for
the evaluation.
true: measured torques are used
false: static torques (model-based) are used
Note: When using the static (model-based) torques,
dynamic effects, which can for example be generated by
robot acceleration, have no influence on the determined
values.
lbr_iiwa Type: LBR
LBR instance of the application. Represents the robot for
which the maximum absolute torque values are to be
determined.
maxTorqueData Type: TorqueStatistic
Variable for the return value of stopEvaluation(). The return
value contains the determined maximum absolute torque
values and further information for the evaluation.
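Taken together, these elements are typically used in the sequence sketched below. This is a minimal sketch; the LBR field lbr_iiwa and the surrounding Sunrise application class are assumed, and imports are omitted:
TorqueEvaluator evaluator = new TorqueEvaluator(lbr_iiwa);
// use the static (model-based) torques so that dynamic effects are ignored
evaluator.setTorqueMeasured(false);
evaluator.startEvaluation();
// ... motion sequence whose torques are to be evaluated ...
TorqueStatistic maxTorqueData = evaluator.stopEvaluation();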
When the evaluation of the maximum absolute torque values has ended, the
results of the evaluation can be requested.
Method Description
getMaxAbsTorqueValues() Return value type: double[]; unit: Nm
Returns a double array containing the determined maximum absolute
torque values (output side) for all axes.
getSingleMaxAbsTorqueValue(...) Return value type: double; unit: Nm
Returns the maximum absolute torque value (output side) for the axis
which is transferred as the parameter (type: JointEnum).
areDataValid() Return value type: Boolean
The system checks whether the determined data are valid (= true).
The data are valid if no errors occur during command processing.
getStartTimestamp() Return value type: java.util.Date
Returns the time at which the evaluation was started.
getStopTimestamp() Return value type: java.util.Date
Returns the time at which the evaluation was ended.
isTorqueMeasured() Return value type: Boolean
Checks whether the measured torques or the torques calculated using
the static robot model were used for evaluating the maximum absolute
torque.
true: measured torques are used
false: static torques (model-based) are used
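For example, the determined values might be checked and read as follows. This is a sketch; maxTorqueData is the TorqueStatistic returned by stopEvaluation():
if (maxTorqueData.areDataValid()) {
   // maximum absolute torques (output side) for axes A1 ... A7
   double[] maxTorques = maxTorqueData.getMaxAbsTorqueValues();
   getLogger().info("Maximum absolute torque A1: " + maxTorques[0] + " Nm");
}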
Example The maximum torques which occur during a joining task are to be used as ref-
erence torques in a brake test. For this purpose, the torques which are mea-
sured during the execution of the joining task are evaluated, and the maximum
absolute torque for each axis is determined.
Once the evaluation has been started, the motion commands of the joining
process are executed. When the joining process is completed, the evaluation
is ended and the results of the evaluation for axes A2 and A4 are saved in the
process data. If the determined data are invalid, an output is set.
private LBR testLBR;
private BrakeTestIOGroup brakeTestIOs;
private Tool testGripper;
private Workpiece testWorkpiece;
...
public void run() {
   testGripper.attachTo(testLBR.getFlange());
   testWorkpiece.attachTo(testGripper.getFrame("/GripPoint"));
   // create TorqueEvaluator
   TorqueEvaluator testEvaluator = new TorqueEvaluator(testLBR);
   // evaluate the measured torques
   testEvaluator.setTorqueMeasured(true);
   // start evaluation
   testEvaluator.startEvaluation();
   // joining task
   testLBR.move(ptp(getFrame("/StartAssembly")));
   ForceCondition testForceCondition =
      ForceCondition.createNormalForceCondition(
         testWorkpiece.getDefaultMotionFrame(), CoordinateAxis.Z, 15.0);
   testWorkpiece.move(linRel(0.0, 0.0, 100.0)
      .breakWhen(testForceCondition));
   CartesianSineImpedanceControlMode testAssemblyMode =
      CartesianSineImpedanceControlMode.createLissajousPattern(
         CartPlane.XY, 5.0, 10.0, 500.0);
   testWorkpiece.move(positionHold(
      testAssemblyMode, 3.0, TimeUnit.SECONDS));
   openGripper();
   testWorkpiece.detach();
   // end evaluation and request the determined maximum absolute torques
   TorqueStatistic maxTorqueData = testEvaluator.stopEvaluation();
   if (!maxTorqueData.areDataValid()) {
      // set an output to indicate invalid data (output depends on the I/O group)
      ...
   }
   double[] maxTorques = maxTorqueData.getMaxAbsTorqueValues();
   double maxTrqA2 = maxTorques[1]; // axis A2 (index 0 = A1)
   double maxTrqA4 = maxTorques[3]; // axis A4
   // save result
   getApplicationData().getProcessData("maxTrqA2").setValue(maxTrqA2);
   // save result
   getApplicationData().getProcessData("maxTrqA4").setValue(maxTrqA4);
}
Description In order to be able to execute the brake test, an object of the BrakeTest class
must first be created. The index of the axis for which the brake test is to be
executed is transferred to the constructor of the BrakeTest class.
Optionally, the torque parameter can be used to transfer a reference holding
torque, e.g. the maximum absolute axis torque which occurs in a specific ap-
plication.
As a general rule, the brake test must check whether the brakes apply the min-
imum brake holding torque. It is therefore advisable not to specify the torque
parameter.
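A sketch of the two constructor variants described here; the axis index 0 corresponds to A1, and the reference torque value is illustrative:
// test the brake of axis A1 against the minimum brake or motor holding torque
BrakeTest brakeTestA1 = new BrakeTest(0);
// alternatively, transfer a previously determined reference holding torque (Nm)
double referenceTorque = 25.0;   // illustrative value, e.g. from a TorqueEvaluator
BrakeTest brakeTestA1WithRef = new BrakeTest(0, referenceTorque);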
Explanation of the syntax
Element Description
brakeTest Type: BrakeTest
Variable to which the created BrakeTest instance is
assigned. The execution of the brake test is commanded
via the variable as a motion command.
axis Type: int
Index of the axis whose brake is to be tested.
0 … 6: Axes A1 … A7
torque Type: double; unit: Nm
Reference holding torque (output side) specified by the
user, e.g. the maximum absolute torque that has been
determined beforehand for an axis-specific motion
sequence.
If no reference holding torque is specified, the brake test
uses the lowest of the following values as the holding
torque: minimum brake holding torque or motor holding
torque.
If a reference holding torque is specified, one of the follow-
ing values is used as the holding torque to be tested: the
specified reference holding torque (torque), the minimum
brake holding torque or the motor holding torque.
The holding torque to be tested is defined internally by the
system according to the following rules:
1. If the reference holding torque is greater than the lowest
value of the minimum brake and motor holding torques,
then the lowest value of the minimum brake and motor
holding torques is used as the holding torque to be test-
ed.
2. If the reference holding torque is lower than 20% of the
lowest value of the minimum brake and motor holding
torques, then 20% of the lowest value of the minimum
brake and motor holding torques is used as the holding
torque to be tested.
3. In all other cases, the reference holding torque is used.
Note: The minimum and maximum brake holding torques
are saved in the motor data. The motor holding torque is
derived from the motor data.
Description The brake test is executed by a motion command which is made available via
the BrakeTest class. In order to execute the brake test, the method move(…)
or moveAsync(…) is called with the robot instance used in the application, and
the object created for the brake test is transferred.
In order to evaluate the result of the brake test, the return value of the motion
command must be saved in a variable of type IMotionContainer.
If an error is detected while the brake test is being executed, the brake test is
aborted. In order to be able to react to errors in the program, it is advisable to
command the execution and evaluation of the brake test within a try block and
to deal with the CommandInvalidException arising from the error.
Syntax try{
   BrakeTest brakeTest = ...;
   IMotionContainer brakeTestMotionContainer =
      robot.move|moveAsync(brakeTest);
   ...
} catch(CommandInvalidException ex){
   ...
}
Explanation of the syntax
Element Description
brakeTest Type: BrakeTest
Variable to which the created BrakeTest instance is
assigned. The instance defines the axis for which the brake
test is to be executed and can optionally define a reference
holding torque specified by the programmer.
brakeTestMotionContainer Type: IMotionContainer
Variable for the return value of the move(…) or move-
Async(…) motion command used to carry out the brake
test. When the brake test has ended, the result can be
evaluated using the variable.
robot Type: Robot
Instance of the robot used in the application. The brake test
is to be performed on the axes of this robot.
ex Type: CommandInvalidException
Exception which occurs when the brake test is aborted due
to an error. It is advisable to treat the exception within the
catch block in such a way that an aborted brake test for a
single brake does not cancel the entire brake test applica-
tion.
Description When the brake test has ended, the result can be evaluated. For this purpose,
the return value of the motion command used to carry out the brake test must
be assigned to a variable of type IMotionContainer.
In order to evaluate the brake test, the IMotionContainer instance of the corre-
sponding motion command is transferred to the static method evaluateRe-
sult(…). The method belongs to the BrakeTest class and returns an object of
type BrakeTestResult. Various information concerning the executed brake test
can be requested from this object.
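A minimal sketch of this evaluation step; brakeTestMotionContainer is the return value of the motion command used to carry out the brake test:
BrakeTestResult result = BrakeTest.evaluateResult(brakeTestMotionContainer);
getLogger().info("Measured holding torque: "
      + result.getMeasuredBrakeHoldingTorque() + " Nm");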
Explanation of the syntax
Element Description
brakeTestMotionContainer Type: IMotionContainer
Variable for the return value of the move(…) or move-
Async(…) motion command used to carry out the brake
test.
result Type: BrakeTestResult
Variable for the return value of evaluateResult(…). The
return value contains the results of the brake test and fur-
ther information concerning the brake test which can be
requested via the variable.
Overview The following methods of the BrakeTestResult class are available for evaluat-
ing the brake test:
Method Description
getAxis() Return value type: int
Returns the index of the axis whose brake has been tested. The index
starts with 0 (= axis A1).
getBrakeIndex() Return value type: int
Returns the index of the tested brake of the motor (starting with 0). In a
brake test for the LBR iiwa, the value 0 is always returned.
getFriction() Return value type: double; unit: Nm
Returns the frictional torque (output side) determined during the test
motion.
getGravity() Return value type: double; unit: Nm
Returns the gravitational torque (output side) determined during the test
motion.
getMaxBrakeHoldingTorque() Return value type: double; unit: Nm
Returns the torque (output side) determined from the motor data which
the brake must not exceed. (= maximum brake holding torque)
getMeasuredBrakeHoldingTorque() Return value type: double; unit: Nm
Returns the holding torque (output side) measured during the brake test.
This value is compared with the holding torque to be tested.
getMinBrakeHoldingTorque() Return value type: double; unit: Nm
Returns the minimum brake torque (output side) that can be reached, as
determined from the motor data. (= minimum brake holding torque)
getMotorHoldingTorque() Return value type: double; unit: Nm
Returns the motor holding torque (output side) determined from the
motor data.
getMotorIndex() Return value type: int
Returns the index of the tested motor of the drive (starting with 0). In a
brake test for the LBR iiwa, the value 0 is always returned.
getMotorMaximalTorque() Return value type: double; unit: Nm
Returns the maximum motor torque (output side) determined from the
motor data.
getState() Return value type: Enum of type BrakeState
Returns the results of the brake test.
(>>> 8.3.5.1 "Requesting the results of the brake test" Page 144)
getTestedTorque() Return value type: double; unit: Nm
Returns the test holding torque with which the holding torque (output
side) applied and measured during the brake test is compared.
getTimestamp() Return value type: java.util.Date
Returns the time at which the brake test was started.
8.3.5.1 Requesting the results of the brake test
Description The test results are requested via the BrakeTestResult method getState(). An
enum of type BrakeState is returned; its values describe the possible test re-
sults.
The possible test results are assigned to specific log levels. The log level cor-
responding to the test result can be requested with getLogLevel().
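A sketch of requesting the result and its log level; result is the BrakeTestResult obtained from evaluateResult(…), and it is assumed here that getLogLevel() is provided by the BrakeState value returned by getState():
BrakeState state = result.getState();
LogLevel logLevel = state.getLogLevel();   // assumption: provided by the BrakeState enum
if (logLevel == LogLevel.Error) {
   getLogger().error("Brake test failed or could not be executed: " + state);
}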
Explanation of the syntax
Element Description
result Type: BrakeTestResult
Variable for the return value of the static method evalua-
teResult(...) which provides the BrakeTest class for evalua-
tion of the brake test. The return value contains the results
of the brake test and further information concerning the
brake test which can be requested via the variable.
state Type: Enum of type BrakeState
Variable for the return value of getState(). The return value
contains the test results.
(>>> "BrakeState" Page 145)
logLevel Type: Enum of type LogLevel
Variable for the return value of getLogLevel(). The return
value contains the log level of the test results.
LogLevel.Error: The brake test could not be executed
or has failed.
LogLevel.Info: The brake test has been executed suc-
cessfully.
LogLevel.Warning: The holding torque to be tested
has been reached, but problems occurred while the
brake test was being carried out.
BrakeState The enum of type BrakeState has the following values (with specification of the
corresponding log level):
Value Description
BrakeUntested The brake test could not be executed or was aborted
during execution due to faults.
Log level: LogLevel.Error
BrakeUnknown The brake test could not be executed because not
enough torque could be generated (e.g. due to exces-
sive friction).
Log level: LogLevel.Error
BrakeError The brake test has failed. The measured holding
torque falls below the holding torque to be tested. The
brake is defective.
Log level: LogLevel.Error
BrakeWarning The measured holding torque is less than 5% above
the holding torque to be tested. The brake has reached
the wear limit and will soon be identified as defective.
Log level: LogLevel.Warning
BrakeMaxUnknown The holding torque to be tested has been reached, but
the brake could not be tested against the maximum
brake holding torque.
Log level: LogLevel.Warning
BrakeExcessive The measured holding torque is greater than the maxi-
mum brake holding torque. Stopping using the brake
can cause damage to the machine.
Log level: LogLevel.Warning
BrakeReady The measured holding torque exceeds the holding
torque to be tested by more than 5 %. The brake is fully
operational.
Log level: LogLevel.Info
Example A brake test is executed for axis A2. If the brake test is aborted, this is indicat-
ed by a corresponding output signal. If the brake test is fully executed, a mes-
sage containing the measured holding torque is generated and the test results
are requested. Depending on whether the measured holding torque is too low,
within the tolerance range or in the ideal range, a corresponding output is also
set in each case.
private LBR exampleLBR_iiwa;
private BrakeTestIOGroup brakeTestIOs;
...
public void run() {
   ...
   try {
      int indexA2 = 1;
      BrakeTest exampleBrakeTest = new BrakeTest(indexA2);
      IMotionContainer exampleBrakeTestMotionContainer =
         exampleLBR_iiwa.move(exampleBrakeTest);
      // evaluate the completed brake test via the returned motion container
      BrakeTestResult resultA2 =
         BrakeTest.evaluateResult(exampleBrakeTestMotionContainer);
      double measuredTorque =
         resultA2.getMeasuredBrakeHoldingTorque();
      ...
   } catch (CommandInvalidException ex) {
      // brake test aborted: set a corresponding output signal
      ...
   }
}
Description If the brake test application is paused while a brake is being tested, e.g. by
releasing the Start key on the smartPAD or due to a stop request, the brake
test of the axis is aborted.
If the brake test application is resumed, the aborted brake test will be repeated
for the axis in question. If the axis is no longer in the position in which the abort-
ed brake test was started, it must be repositioned by pressing the Start key.
Only then can the application be resumed.
Precondition The brake test application is configured and available on the robot control-
ler.
There are no persons or objects in the range of motion of the robot.
Program run mode Continuous (default mode)
The robot is at operating temperature.
Item Description
1 Validity
Indicates whether the determined data are valid. The data are
valid if no errors occur during command processing.
2 Time indications
Start time, end time and overall duration of the evaluation.
3 Determined data
The maximum absolute torque determined from the evaluation is
displayed for each axis.
Item Description
1 Log level
Depending on the results of the brake test, the message is gener-
ated with a specific log level.
Info: The brake test has been executed successfully.
Warning: The holding torque to be tested has been reached,
but problems occurred while the brake test was being carried
out (see item 6 for descriptions of the possible test results).
Error: The brake test could not be executed or has failed.
2 Tested axis
3 Time stamp
Time stamp at which the brake test was started for the axis.
4 Holding torque to be tested
5 Measured holding torque
6 Result of the brake test
Untested: The brake test could not be executed or was abort-
ed during execution due to faults.
Unknown: The brake test could not be executed because not
enough torque could be generated (e.g. due to excessive fric-
tion).
Failed: The brake test has failed. The measured holding
torque falls below the holding torque to be tested. The brake is
defective.
Warning: The measured holding torque is less than 5% above
the holding torque to be tested. The brake has reached the
wear limit and will soon be identified as defective.
Maximum unknown: The holding torque to be tested has
been reached, but the brake could not be tested against the
maximum brake holding torque.
Excessive: The measured holding torque is greater than the
maximum brake holding torque. Stopping using the brake can
cause damage to the machine.
Successful: The measured holding torque exceeds the hold-
ing torque to be tested by more than 5 %. The brake is fully op-
erational.
9 Project management
9.1 Overview of Sunrise project
A Sunrise project contains all the data which are required for the operation of
a station. A Sunrise project comprises:
Station configuration
The station configuration describes the static properties of the station. Ex-
amples include hardware and software components.
Sunrise applications
Sunrise applications contain the source code for executing a task for the
station. They are programmed in Java with KUKA Sunrise.Workbench and
are executed on the robot controller. A Sunrise project can have any num-
ber of Sunrise applications.
Runtime data
Runtime data are all the data which are used by the Sunrise applications
during the runtime. These include, for example, end points for motions,
tool data and process parameters.
Safety configuration
The safety configuration contains the configured safety functions.
I/O configuration (optional)
The I/O configuration contains the inputs/outputs of the used field buses
mapped in WorkVisual. The inputs/outputs can be used in the Sunrise ap-
plications.
Sunrise projects are created and managed with KUKA Sunrise.Workbench.
(>>> 5.3 "Creating a Sunrise project with a template" Page 55)
There may only be 1 Sunrise project on the robot controller at any given time.
This is transferred from Sunrise.Workbench to the robot controller by means
of project synchronization.
(>>> 9.5 "Project synchronization" Page 171)
Overview Frames are coordinate transformations which describe the position of points
in space or objects in a station. The coordinate transformations are arranged
hierarchically in a tree structure. In this hierarchy, each frame has a higher-lev-
el parent frame with which it is linked through the transformation.
The root element or origin of the transformation is the world coordinate system
which is located as standard in the robot base. This means that all frames are
directly or indirectly related to the world coordinate system.
A transformation describes the relative position of 2 coordinate systems to
each other, i.e. how a frame is offset and oriented relative to its parent frame.
The position of a frame relative to its parent frame is defined by the following
transformation data:
X, Y, Z: Offset of the origin along the axes of the parent frame
A, B, C: Rotational offset of the frame relative to its parent frame
Rotational angle of the frames:
Angle A: Rotation about the Z axis
Angle B: Rotation about the Y axis
Angle C: Rotation about the X axis
Example
Frame1, 2 and 3 are child elements of World and are located on the same hi-
erarchical level. P1 and P2 are child elements of Frame1 and are located one
level below it.
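In the source code of an application, frames are addressed via their path in this tree. A minimal sketch based on the example above; the robot instance robot and the return type ObjectFrame are assumptions:
// P1 and P2 are child elements of Frame1, so their paths start with /Frame1
ObjectFrame p1 = getApplicationData().getFrame("/Frame1/P1");
ObjectFrame p2 = getApplicationData().getFrame("/Frame1/P2");
robot.move(ptp(p2));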
Procedure Right-click on the desired frame and select Base from the context menu.
Alternative:
Select the frame and click on the Base hand icon.
The frame is marked with a hand icon.
Description A frame can be moved in the Application data view and assigned to a new
parent frame. The following points must be taken into consideration:
The subordinate frames are automatically moved at the same time.
The absolute position of the moved frames in space is retained. The rela-
tive transformation of the frames to the new parent frame is adapted.
Frames cannot be inserted under one of their child elements.
The names of the direct child elements of a frame must be unique.
If a frame is moved, its path changes. Since frames are used via this
path in the source code of applications, the path specification must be
corrected accordingly in the applications.
Procedure 1. Click on the desired frame and hold down the left mouse button.
2. Drag the frame to the new parent frame with the mouse.
3. When the desired new parent frame is selected, release the mouse button.
Description Frames can be removed from the frame tree in the Application data view. If
a frame has child elements, the following options are available:
Move children to parent: Only the selected frame is deleted. The subor-
dinate frames are retained, are moved up a level and assigned to a new
parent frame.
The absolute position of the moved frames in space is retained. The rela-
tive transformation of the frames to the new parent frame is adapted.
If a frame is moved, its path changes. Since frames are used via this
path in the source code of applications, the path specification must be
corrected accordingly in the applications.
Delete parent and child frames: Deletes the selected frame and all sub-
ordinate frames.
Procedure 1. Right-click on the frame to be deleted and select Delete from the context
menu. A frame without child elements is deleted immediately.
2. If the frame has child elements, the system asks whether these should
also be deleted. Select the desired option.
3. Only with the Move children to parent option: if a name conflict occurs
when moving the child elements, a notification message appears and the
delete operation is canceled.
Remedy: Rename one of the frames in question and repeat the delete op-
eration.
Procedure 1. Select the frame in the Application data view. The properties of the frame
are displayed in the Properties view, distributed over various tabs. Some
of the properties can be edited, others are for display only.
2. Select the desired tab and enter the new value.
For physical variables, the value can be entered with the unit. If this
is compatible with the preset unit, the value is converted accordingly,
e.g. cm into mm or ° into rad. If no unit is entered, the preset unit is
used.
Parameter Description
Name Name of the frame
Comment A comment on the frame can be entered here
(optional).
Project Project in which the frame was created (display
only)
Last modification Date and time of the last modification (display
only)
Parameter Description
X, Y, Z Translational offset of the frame relative to its
parent frame
A, B, C Rotational offset of the frame relative to its par-
ent frame
Parameter Description
E1 Value of the redundancy angle
(>>> 9.2.6.3 "“Redundancy” tab" Page 155)
Status (>>> 14.10.2 "Status" Page 330)
Turn (>>> 14.10.3 "Turn" Page 331)
The Teach information tab contains information about a taught frame (dis-
play only).
Parameter Description
Device Robot that was used for teaching
Tool Tool that was used for teaching
TCP Frame path for the TCP that was used for teach-
ing
X, Y, Z Translational offset of the TCP relative to the ori-
gin frame of the tool
A, B, C Rotational offset of the TCP relative to the origin
frame of the tool
The Measurement tab contains information about base calibration (for frames
marked as a base; display only).
Parameter Description
Measurement Method used
method
Last modification Date and time of the last modification
Description A frame created in the application data can be inserted as the end point in a
motion instruction.
Example robot.move(ptp(getApplicationData().getFrame("/P2/Target")));
Tools Properties:
Tools are mounted on the robot flange.
Tools can be used as movable objects in the robot application.
The tool load data affect the robot motions.
Tools can have any number of working points (TCPs) which are defined
as frames.
Workpieces Characteristics:
Workpieces can be a wide range of objects which are used, processed or
moved in the course of a robot application.
Workpieces can be coupled to tools or other workpieces.
Workpieces can be used as movable objects in the robot application.
The workpiece load data affect the robot motions, e.g. when a gripper
grips the workpiece.
Workpieces can have any number of frames which mark relevant points,
e.g. points on which a gripper grips a workpiece.
Every tool has an origin frame (root). As standard, the origin of the tool is de-
fined to match the flange center point in position and orientation when the tool
is mounted on the robot flange. The origin frame is always present and does
not have to be created separately.
A tool can have any number of working points (TCPs), which are defined rel-
ative to the origin frame of the tool (root) or to one of its child elements.
The transformation of the frames is static. For active tools, e.g. grip-
pers, this means that the TCP does not adapt to the current position
of jaws or fingers.
Every workpiece has an origin frame (root). The origin frame is always present
and does not have to be created separately.
A workpiece can have any number of frames, which are defined relative to the
origin frame of the workpiece (root) or to one of its child elements.
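A sketch of how such frames are typically used to couple objects in an application; the field names and the frame path are illustrative:
// mount the gripper on the robot flange and couple the gripped workpiece
// to the gripper frame "/GripPoint"; the load data are then taken into account
gripper.attachTo(lbr.getFlange());
workpiece.attachTo(gripper.getFrame("/GripPoint"));
// release the workpiece again
workpiece.detach();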
Description Tools and workpieces created as object templates for a project can be used in
every robot application of the project.
The tools can be selected on the smartHMI for jogging after the project has
been synchronized.
(>>> 6.14.1 "“Jogging options” window" Page 89)
Description Each frame created for a tool or workpiece can be programmed in the robot
application as the reference point for motions.
After the project is synchronized, the frames of a tool can be selected as the
TCP for Cartesian jogging on the smartHMI.
(>>> 6.14.1 "“Jogging options” window" Page 89)
The frames of a tool (TCPs) can be calibrated with the robot relative to the
flange coordinate system.
(>>> 7.5.1 "Tool calibration" Page 119)
If the data of a calibrated tool are saved in Sunrise.Workbench by means of
synchronization, the transformation data of the frame change in accordance
with the calibration.
The tool data of the TCP used to execute a Cartesian motion influ-
ence the robot velocity. Incorrectly entered tool data can cause unex-
pectedly high Cartesian velocities at the installed tool. The velocity of
250 mm/s may be exceeded in T1 mode.
Procedure 1. Select the frame in the Object templates view. The properties of the
frame are displayed in the Properties view, distributed over various tabs.
2. Select the desired tab and enter the new value.
For physical variables, the value can be entered with the unit. If this
is compatible with the preset unit, the value is converted accordingly,
e.g. cm into mm or ° into rad. If no unit is entered, the preset unit is
used.
Parameter Description
Name Name of the frame
Comment A comment on the frame can be entered here
(optional).
Parameter Description
X, Y, Z Translational offset of the frame relative to its
parent frame
-10,000 mm … +10,000 mm
A, B, C Rotational offset of the frame relative to its par-
ent frame
Any
Safety-oriented tool frames can be configured on the Safety tab. The tab is not
available for frames of workpieces.
Parameter Description
Radius Radius of the sphere on the safety-oriented
frame
25 … 10000 mm
Safety-oriented Check box active: Frame is safety-oriented
frame
Check box not active: Frame is not a safety-
oriented frame
The check box can only be edited under the fol-
lowing conditions:
The frame belongs to a safety-oriented tool.
A permissible value has been entered for the
radius.
(>>> 9.3.9 "Safety-oriented tools" Page 162)
The Measurement tab contains information about tool calibration (display on-
ly).
Parameter Description
Measurement Method used
method
Calculation error Translational or rotational calculation error which
specifies the quality of the calibration (unit: mm
or °)
Last modification Date and time of the last modification
Description If a tool or workpiece has a frame with which a large part of the motions must
be executed, this frame can be defined as the default frame for motions.
Defining an appropriate default frame for a tool or workpiece simplifies the mo-
tion programming.
Example
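A minimal sketch of the effect, assuming a gripper tool whose frame "/GripPoint" has been defined as the default frame for motions; the names are illustrative and the return type ObjectFrame is an assumption:
// motions commanded on the tool are executed with its default frame for motions
gripper.move(ptp(getApplicationData().getFrame("/Target")));
// the default motion frame can also be queried explicitly
ObjectFrame motionFrame = gripper.getDefaultMotionFrame();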
Description Load data are all loads mounted on or connected to the robot flange. They
form an additional mass mounted on the robot which must also be moved to-
gether with the robot.
The load data of tools and workpieces can be specified when the correspond-
ing object templates are created. If several tools and workpieces are connect-
ed to the robot, the resulting total load is automatically calculated from the
individual load data.
The load data are integrated into the calculation of the paths and accelera-
tions. Correct load data are an important precondition for the optimal function-
ing of the servo control and help to optimize the cycle times.
The load data can be determined in the following ways:
Manual calculation
CAD programs
Automatic determination: the load data of tools can be determined automatically.
(>>> 7.6 "Determining tool load data" Page 127)
Safety-oriented frames Just like any other tool, a safety-oriented tool can have any number of frames.
In order to configure the monitoring spheres, suitable frames must be defined
as safety-oriented frames. The center of the sphere is situated, by definition,
at the origin of the safety-oriented frame. The radius of the sphere is defined
in the frame properties.
If workpieces are used that are to be taken into consideration for safety-orient-
ed Cartesian space or velocity monitoring, e.g. due to the dimensions of the
workpieces, the spheres of the safety-oriented tool must be configured ac-
cordingly.
Safety-oriented frames are also relevant for the following configurable, tool-
specific safety monitoring functions:
Monitoring of the tool orientation (only available for robots)
One of the safety-oriented frames can be defined as the tool orientation
frame. Safety-oriented monitoring of the orientation of this frame can be
carried out.
(>>> 13.12.10 "Monitoring the tool orientation" Page 267)
Direction-specific monitoring of the Cartesian velocity (available for robots
and mobile platforms)
One of the safety-oriented frames can be defined as the monitoring point
for the tool-specific velocity monitoring. A second frame can additionally
be defined as the orientation for the monitoring. This orientation frame de-
fines the orientation of the coordinate system in which the velocity of the
monitoring point is described. In tool-specific velocity monitoring, a com-
ponent of this velocity can be monitored.
(>>> 13.12.8.3 "Direction-specific monitoring of Cartesian velocity"
Page 254)
Tools with load data outside the specified range of values cannot be used
as safety-oriented tools.
(>>> 9.3.9.2 "Tool properties – Load data tab" Page 164)
Further information on the load data can be found here: (>>> 9.3.8 "Load
data" Page 161)
5. In the Object templates view, select the tool frame that is to be safety-ori-
ented.
6. In the Properties view, select the Safety tab.
7. Enter the radius of the monitoring sphere on the safety-oriented frame.
8. Set the check mark at Safety-oriented.
The frame icon in the Object templates view is highlighted in yellow.
9. Select the Transformation tab and enter any missing transformation data of the frame with respect to its parent frame.
Frames with transformation data outside the specified range of values
cannot be used as safety-oriented frames.
(>>> 9.3.6.2 "“Transformation” tab" Page 159)
10. Repeat steps 5 to 9 to define further safety-oriented tool frames.
11. If required, set the safety-oriented frames that are necessary for tool-spe-
cific safety monitoring functions:
a. Select the safety-oriented tool in the Object templates view.
b. In the Properties view, select the Safety tab.
c. Under Safety properties assign the desired safety-oriented frames to
the tool-specific safety monitoring functions.
(>>> 9.3.9.3 "Tool properties – Safety tab" Page 165)
The icons of the assigned frames are marked with a sphere symbol in
the Object templates view.
The Load data tab contains the load data of the tool.
The value ranges apply to safety-oriented tools. Tools with load data outside
these ranges of values cannot be used as safety-oriented tools.
Parameter Description
Mass: Mass of the tool (≤ 2,000 kg)
MS X, MS Y, MS Z: Position of the center of mass relative to the origin frame of the tool (-10,000 … +10,000 mm)
MS A, MS B, MS C: Orientation of the principal inertia axes relative to the origin frame of the tool (any)
jX, jY, jZ: Mass moments of inertia of the tool (0 … 1,000 kg·m²)
Parameter Description
Safety-oriented:
Check box active: The tool is a safety-oriented tool
Check box not active: The tool is not a safety-oriented tool
Tool orientation frame: Safety-oriented frame, the orientation of which can be monitored using the AMF Tool orientation.
If no tool orientation frame is defined, the pickup frame of the tool is used as the tool orientation frame.
(>>> "Pickup frame" Page 165)
Point for tool-related velocity: Safety-oriented frame defining a point on the tool at which the Cartesian velocity in a specific direction can be monitored using the AMF Tool-related velocity component.
If no point is defined for the tool-related velocity, the pickup frame of the tool is used. The velocity is monitored at the origin of the pickup frame.
(>>> "Pickup frame" Page 165)
Orientation for tool-related velocity: Safety-oriented frame, the orientation of which determines the directions in which the Cartesian velocity can be monitored using the AMF Tool-related velocity component.
If no orientation is defined for the tool-related velocity, the pickup frame of the tool is used. The orientation of the pickup frame determines the monitoring direction.
(>>> "Pickup frame" Page 165)
Pickup frame The pickup frame of a tool is dependent on the kinematic system on which it
is mounted and on the tool configuration:
The tool is mounted on the robot flange: the pickup frame is the flange co-
ordinate system of the robot.
The tool is mounted on a mobile platform: the pickup frame is the coordi-
nate system at the center point of the platform.
The tool is mounted on a fixed tool: the pickup frame is the standard frame
for motions of the fixed tool.
Description Loads picked up by the robot, e.g. a gripped workpiece, exert an additional
force on the robot and influence the torques measured by the joint torque sen-
sors.
The following AMFs require the workpiece load data for calculation of the ex-
ternal forces and torques:
TCP force monitoring
(>>> 13.12.13.3 "TCP force monitoring" Page 273)
Base-related TCP force component
(>>> 13.12.13.4 "Direction-specific monitoring of the external force on the
TCP" Page 275)
Collision detection
(>>> 13.12.13.2 "Collision detection" Page 272)
Programming If one of these workpiece load-specific AMFs is active and workpieces are
picked up at the same time, the current workpiece must be transferred to the
safety controller. For this, the KUKA RoboticsAPI offers the method setSafe-
tyWorkpiece().
(>>> 15.10.5 "Transferring workpiece load data to the safety controller"
Page 376)
Once the workpiece has been transferred, the workpiece load data are taken
into consideration by the safety controller. A load change, e.g. if a workpiece
is set down again, must also be communicated with setSafetyWorkpiece(…).
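The following sketch outlines how a workpiece could be transferred to the safety controller from a robot application. It is an illustration only: the template names "Gripper" and "Workpiece1", the assumption that setSafetyWorkpiece() is called on the robot object (and that passing null communicates that no workpiece is carried), and the attach/detach calls are examples and must be adapted to the actual project; the authoritative description of the method can be found in the section referenced above.

package application;

import javax.inject.Inject;

import com.kuka.roboticsAPI.applicationModel.RoboticsAPIApplication;
import com.kuka.roboticsAPI.deviceModel.LBR;
import com.kuka.roboticsAPI.geometricModel.Tool;
import com.kuka.roboticsAPI.geometricModel.Workpiece;

public class TransferWorkpieceExample extends RoboticsAPIApplication {

    @Inject
    private LBR robot;

    private Tool gripper;         // object template "Gripper" (example name)
    private Workpiece workpiece;  // object template "Workpiece1" (example name)

    @Override
    public void initialize() {
        gripper = (Tool) getApplicationData().createFromTemplate("Gripper");
        gripper.attachTo(robot.getFlange());
        workpiece = (Workpiece) getApplicationData().createFromTemplate("Workpiece1");
    }

    @Override
    public void run() {
        // ... move to the pick position and grip the workpiece ...

        // Attach the workpiece to the standard frame for motions of the tool and
        // transfer it to the safety controller (call on the robot object assumed).
        workpiece.attachTo(gripper.getDefaultMotionFrame());
        robot.setSafetyWorkpiece(workpiece);

        // ... motions with the workpiece ...

        // Communicate the load change after the workpiece has been set down
        // (null is assumed here to mean that no workpiece is carried).
        workpiece.detach();
        robot.setSafetyWorkpiece(null);
    }
}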
Requirements The workpiece transferred to the safety controller must meet the following re-
quirements:
The workpiece load data must be within the specified limits. If this is not
the case, the load data are invalid and cannot be transferred to the safety
controller.
(>>> 9.3.10.2 "Workpiece properties – Load data tab" Page 167)
In order to be able to use workpieces as safety-oriented workpieces, the
mass of the heaviest workpiece that could possibly be picked up by the ro-
bot must additionally be configured in the safety-oriented project settings.
(>>> 9.3.10.3 "Configuring the mass of the heaviest workpiece"
Page 168)
The mass from the workpiece load data must not exceed the configured
mass of the heaviest workpiece. Otherwise, workpiece load-specific AMFs
are violated.
Workpiece pick-up
The way in which the load data of a workpiece influence the workpiece load-dependent AMFs depends on how the workpiece is picked up. For a workpiece, the safety controller requires the origin frame of the workpiece to be identical to the standard frame for motions of the safety-oriented tool.
Item Description
1 Safety-oriented tool
2 Standard frame for motions of the safety-oriented tool:
Frame of the safety-oriented tool on which the workpiece must be
picked up. It is not necessary for this frame to be a safety-oriented
frame.
3 Origin frame of the workpiece
Frame of the workpiece on which the safety-oriented tool must
pick up the workpiece.
4 Workpiece
5 Status after transfer of the workpiece to the safety controller
The origin frame of the workpiece is identical to the standard frame
for motions of the safety-oriented tool.
The workpiece load data can be entered on the Load data tab.
The value ranges apply for workpieces that are used as safety-oriented work-
pieces. Workpieces with load data outside these ranges of values cannot be
used as safety-oriented workpieces.
Parameter Description
Mass: Mass of the workpiece (0.001 … 2,000 kg)
MS X, MS Y, MS Z: Position of the center of mass relative to the origin frame of the workpiece (-10,000 … +10,000 mm)
MS A, MS B, MS C: Orientation of the principal inertia axes relative to the origin frame of the workpiece (0° … 359°)
jX, jY, jZ: Mass moments of inertia of the workpiece (0 … 1,000 kg·m²)
Description If the following workpiece load-dependent AMFs are used, the mass of the
heaviest workpiece that could possibly be picked up by the robot must be con-
figured in the safety-oriented project settings:
TCP force monitoring
Base-related TCP force component
Collision detection
Each of the workpiece load-dependent AMFs checks whether the workpiece
mass transferred to the safety controller with the workpiece load data exceeds
the configured mass of the heaviest workpiece. If the mass of the heaviest
workpiece is not configured, it is initialized with the default value (= 0.0 kg) and
the AMF is violated if the workpiece load data from the safety controller are
used.
Procedure 1. Right-click on the desired project in the Package Explorer view and select
Sunrise > Change project settings from the context menu.
The Properties for [Sunrise Project] window opens.
2. Select Sunrise > Safety in the directory in the left area of the window.
3. Enter the mass of the heaviest workpiece under Heaviest workpiece in
the right-hand area of the window:
0.0 … 2000.0 kg
4. Click on OK to apply the settings and close the window.
Description When an object template is copied, a copy of the object template including all frames is created. The properties of the object and its frames, with the exception of the safety properties, are included in the copy. The Safety-oriented property is not set in a copy.
Procedure Right-click on the object template and select Create copy from the context
menu.
Prior to start-up, the passwords for the user groups must be modified
by the administrator, transferred to the robot controller in an installa-
tion procedure and activated. The passwords must only be communi-
cated to authorized personnel. (>>> 9.4.2 "Changing and activating the
password" Page 171)
User groups The following user groups are available with Sunrise.RolesRights:
Administrator
Only available in Sunrise.Workbench. The Administrator manages the
passwords of the user groups.
The user group is protected by means of a password.
The default password is “kuka”.
Operator
The user group “Operator” is the default user group.
Expert
Additional protected functions that may not be performed by the “Operator” are available to the user group “Expert”.
The user group is protected by means of a password.
The default password is “kuka”.
Safety maintenance technician
The user “Safety maintenance” is responsible for starting up the safety
equipment of the industrial robot. All functions of the user group “Expert”
are available to the user group “Safety maintenance”. Users in this user
group can additionally modify the safety configuration on the robot control-
ler.
The user group is protected by means of a password.
The default password is “argus”.
Functions Depending on the installed software, the users can execute the following func-
tions:
Function Operator Expert Safety maintenance
Project synchronization with unchanged safety configuration
Project synchronization with changed safety configuration
Selecting/deselecting an application
Pausing an application
Teaching frames
Robot mastering/unmastering
Backing up data manually
Description The passwords for the user groups on the robot controller are defined in the
project settings in Sunrise.Workbench. If these passwords are changed, they
can only be activated by an installation of the system software on the robot
controller.
Procedure 1. Right-click on the desired project in the Package Explorer view and select
Sunrise > Change project settings from the context menu.
The Properties for [Project] window opens.
2. Select Sunrise > Passwords in the directory in the left area of the win-
dow.
3. Click on Login and enter the Administrator password. Confirm the pass-
word with OK.
4. Select the user group for which the password is to be changed.
5. Enter the new password twice.
For security reasons, the entries are displayed encrypted. Upper and low-
er case are distinguished.
Authorization If the option package Sunrise.RolesRights is installed, only the user groups
“Expert” and “Safety maintenance” are authorized to transfer a project to the
robot controller by means of synchronization:
User group Expert
Default user group for performing project synchronization
User group Safety maintenance technician
Only required if the safety configuration has been changed.
The Authorization dialog is automatically opened when project synchroniza-
tion is performed. The required user group is already preset here. It only re-
mains to enter the correct user password.
Description The procedure described here applies if no project is on the robot controller
yet or if there is a different project from the one to be transferred.
4. Click on Execute.
5. If the safety configuration or I/O configuration is modified, a dialog indi-
cates that the robot controller must be rebooted in order to complete the
synchronization.
Click on OK to transfer the project to the robot controller. Once the
transfer is completed, the robot controller automatically reboots.
Transfer of the project can be stopped with Cancel.
6. Only if the option package Sunrise.RolesRights is used: The “Authoriza-
tion” dialog opens. The required user group is preset.
Enter password and confirm with OK.
7. The progress of the project transfer is displayed in a dialog both in Sun-
rise.Workbench and on the smartPAD. Once the transfer is completed, the
dialog is closed and the robot controller automatically reboots.
Description The procedure described here applies if the same project exists in Sun-
rise.Workbench and on the robot controller, but in different versions.
8. Only for transfer to the robot controller and if the option package Sun-
rise.RolesRights is used: The “Authorization” dialog opens. The required
user group is preset.
Enter password and confirm with OK.
9. The progress of the project transfer is displayed in a dialog both in Sun-
rise.Workbench and on the smartPAD. When the transfer is completed,
the dialog is automatically closed. If the safety configuration or I/O config-
uration is modified, the robot controller is automatically rebooted.
If the transfer fails, a corresponding dialog is displayed both in Sun-
rise.Workbench and on the smartPAD. In addition, the cause of the error
is displayed in Sunrise.Workbench.
Confirm the dialog in Sunrise.Workbench and on the smartPAD with OK.
10. If the safety configuration is modified, activate this on the robot controller.
Description A project can be loaded from the robot controller if the project is not located in
the workspace of Sunrise.Workbench.
Procedure 1. Select the menu sequence File > New > Sunrise project. The project cre-
ation wizard opens.
2. Enter the IP address of the robot controller from which the project is to be
loaded in the IP address of controller: box.
Procedure
Open the station configuration:
1. In the Package Explorer view, open the node of the project that is to be
configured.
2. Double-click on the file StationSetup.cat. The file is opened in the editor.
The file contains the station configuration of the project.
The station configuration can be edited and installed using the following tabs:
Topology The Topology tab displays the hardware components of the station. The to-
pology can be restructured or modified.
Software The Software tab displays the software catalog of Sunrise.Workbench. Cata-
log elements to be installed or uninstalled in the project can be selected here.
The elements that can be selected depend on the topology and the option
packages installed in Sunrise.Workbench.
Configuration The Configuration tab displays the configuration of the robot controller. The
configuration can be changed. The parameters that can be configured depend
on the topology and the installed option packages.
IP address and subnet mask of the robot controller
IP address range for KUKA Line Interface (KLI)
Manual guidance support
General safety settings
Parameters for calibration
Type of media flange (if present on robot)
Installation direction (default: floor-mounted installation)
The installation and use of option packages in the project may cause
further parameters to be added.
Installation The system software is installed on the robot controller via the Installation
tab.
Procedure 1. Open the Network and Sharing Center via the Windows Control Panel or
Windows Explorer.
2. In the top left-hand area, click the Change adapter settings entry. The
network connections are displayed.
Description A software catalog containing errors prevents installation of the System Soft-
ware on the robot controller. The errors must be eliminated before installation.
Possible causes of errors are:
Missing reference to a catalog element
Some catalog elements are dependent on others. If a catalog element that
is required by another one is deselected in the software catalog or re-
moved by being uninstalled, the remaining catalog element is marked in
red.
Catalog element used, but not installed
If a catalog element that is not installed in Sunrise.Workbench is used in a
project, this catalog element is indicated and marked in red.
Item Description
1 The catalog element Manual guidance support is not available
because the catalog element Robotics API has been deselected.
Possible remedies:
Deselect the catalog element that is not available (deactivate
check box) and save the station configuration.
Select the required catalog element (activate check box) and
save the station configuration.
2 The project uses functions of the safety option KUKA Sun-
rise.HRC. The catalog element Human Robot Collaboration is
not available because the option is not installed in Sunrise.Work-
bench.
Possible remedies:
Deselect the catalog element that is not available (deactivate
check box) and save the station configuration.
Install the safety option in Sunrise.Workbench (only necessary
if the safety configuration has not yet been completed and
AMFs of the safety option are required for the configuration).
Description The KLI is the Ethernet interface of the robot controller for external communi-
cation. In order for external PCs, e.g. the development computer with KUKA
Sunrise.Workbench, to be able to connect to the robot controller via a network,
the KLI must be configured accordingly.
The following IP address ranges are used internally by the robot controller.
192.*.*.*
172.16.*.*
172.17.*.*
If one or more KLI network devices (e.g. the robot controller, bus devices or
other network devices) use IP addresses from one of these ranges, this IP ad-
dress range must be set. Sunrise then reconfigures the internal network to en-
sure that there are no IP address conflicts.
Parameter Description
IP address range for KUKA Line Interface: The following IP address ranges are available:
192.*.*.*
172.16.*.*
172.17.*.*
Other
Default: Other
Field buses How the KLI has to be configured depends, among other things, on whether
an Ethernet-based field bus is installed on the robot controller.
Ethernet-based field buses are:
KUKA Sunrise.ProfiNet M/S
Robots that have a hand guiding device with a safety-oriented enabling device
can be guided manually if no application is selected or if an application is
paused.
An application is paused if it has one of the following states:
Selected
Motion paused
Error
Manual guidance is supported as standard in all operating modes except CRR
mode. It is possible to configure manual guidance as not allowed in Test mode
and/or Automatic mode.
Configuration parameters under Manual guidance support:
Parameter Description
Enable manual guidance in Automatic mode: Manual guidance in Automatic mode
True: Manual guidance is allowed in Automatic mode.
False: Manual guidance is not allowed in Automatic mode.
Default: True
Enable manual guidance in the test modes: Manual guidance in Test mode (T1, T2)
True: Manual guidance is allowed in Test mode.
False: Manual guidance is not allowed in Test mode.
Default: True
Parameter Description
smartPAD unplugging allowed: Unplugging the smartPAD
True: Unplugging of the smartPAD is allowed. The robot can be moved with the smartPAD unplugged.
False: Unplugging of the smartPAD is not allowed. The robot cannot be moved with the smartPAD unplugged. An EMERGENCY STOP is triggered.
Default: True
Parameter Description
Minimum calibration point distance (tool) in mm: Minimum distance which must be maintained between 2 measuring points (XYZ 4-point and ABC 2-point methods) during tool calibration
0 … 200; Default: 8
Maximum calculation error in mm: Maximum translational calculation error during tool calibration up to which the quality of the calibration is considered sufficient
0 … 200; Default: 5
Minimum calibration point distance (base) in mm: Minimum distance which must be maintained between 2 measuring points during base calibration
0 … 200; Default: 50
Minimum angle in °: Minimum angle to be maintained between the straight lines which are defined by the 3 measuring points during base calibration (3-point method)
0 … 360; Default: 2.5
Parameter Description
Automatic backup active/inactive: Activation/deactivation of automatic backup
active: The robot controller automatically carries out backups. The following parameters determine the time and the interval:
Time [hh:mm]: Time of backup (default: 00:00:00)
Time interval [days]: Backup interval in days (default: 7)
Note: If the robot controller was switched off at the configured time, it carries out a data backup as soon as it is switched on at the next configured time. It only carries out one backup, even if the time was missed more than once.
inactive: No automatic backup.
Default: inactive
Backup mode: Target and source directory for backups and restorations
Local: The target directory for backups and the source directory for restorations is the directory D:\ProjectBackup on the robot controller.
Note: If the backup of the projects and user data takes up too much memory, the local memory may be full before the maximum configured number of backup copies has been reached. In this case, no further backup is possible.
network storage: The target directory for backups and the source directory for restorations is a network path:
Network path
If during backup and/or restoration the robot controller must access the network and an authentication is required, the user name and password for the network path must be specified:
User name for network path
Password for network path
Note: Any other network path can be set on the robot controller for restorations.
Default: Local
Parameter Description
Maximum number of backups: Maximum number of backup copies (1 … 50)
Once the maximum number of backup copies has been reached, the oldest backup copy is overwritten.
If more backup copies than the permissible number are present, e.g. because the maximum number has been reduced, the excess backup copies will be deleted next time a backup is made (starting with the oldest).
Default: 1
Restore-configuration file: Path to a file with network configurations for restorations
The file must be present in CSV format and copied manually to the robot controller.
Note: It is advisable to save the file on drive D:\. If it is saved on C:\, it is not possible to rule out the possibility of it being overwritten in the case of a restoration or installation.
CSV file Network configurations for restorations must be entered in a CSV file and
saved on the robot controller. The data set with the network configurations can
then be loaded using the Backup Manager and the source directory from
which the data are to be restored can be selected.
Example of a CSV file:
IP_adress;subnetmask;BM_Username;BM_Password;BM_ProjectRestoreDirecto
ryPath;Server
192.168.0.131;255.255.0.0;User41;pwd82p;\\Server\Path\Restore;Restore
3Backup857
192.168.0.239;255.255.0.0;User66;pwd24ppp;\\Server\Path\Restore;Resto
re0Remote415
192.168.0.151;255.255.0.0;User38;pwd75ppp;\\Server\Path\Lokal;Lokal1R
estore705
...
The following points must be observed when creating the CSV file:
The header data set must contain the columns specified in the example
file.
The column names must not be modified.
The columns can be saved in any order.
Further columns can be added, e.g. to save additional information.
Description During installation, all configuration data relevant for operation of the industrial
robot are transferred from Sunrise.Workbench to the robot controller. These
include:
Station configuration
Safety configuration
Passwords for user groups
The following points must be observed during installation:
The robot type and media flange (if present) set in the station configuration
must match the robot connected to the robot controller (see identification
plate). If the data do not match, the robot cannot be moved after installa-
tion.
The safety configuration is not yet active after installation. The robot can-
not be moved until the safety configuration has been activated.
(>>> 13.11.1 "Activating the safety configuration" Page 244)
Reinstallation If the station configuration or the password for a user group on the robot con-
troller changes, installation must be carried out again:
Change to the station configuration on the Topology tab
Change to the station configuration on the Software tab
Examples:
Installation of additional option packages
Incompatible version changes of existing software packages
Incompatible version changes can occur if a project that was created
with an older version of Sunrise.Workbench is loaded into the work-
space.
Error message The following error message may be generated on saving the station configu-
ration:
Not all parameters could be converted to the current version of the safe-
ty configuration. Please check that the safety configuration is complete
and correct before installation.
This message is displayed if safety settings have been added or removed in
the new software version, e.g.:
Allow muting via input (available from software version 1.10 or higher)
Allow external position referencing (available from software version
1.11 or higher)
Safety-oriented workpieces no longer configurable as object templates
(software version 1.12 or higher)
The message is only displayed if at least one safety-oriented workpiece is
configured in the loaded project. Instead, only the mass of the heaviest
workpiece now needs to be specified in the safety-oriented project set-
tings.
Procedure 1. Select the menu sequence Help > Install new software .... The Install
window is opened.
2. To the right of the Work with box, click on Add …. The Add repository
window is opened.
Alternatively: Drag the ZIP archive of the option into the window, then con-
tinue with step 5.
3. Click on Archive …, navigate to the directory in which the ZIP archive of
the option is located and select the archive.
4. Confirm your selection with Open. The Position box now displays the in-
stallation path. Confirm the path with OK.
5. In the Install window, the installation path is adopted in the Work with
box.
The window now also displays a check box with the name of the option.
Activate the check box.
6. Leave the other settings in the Install window as they are and click on
Next >.
7. An installation details overview is displayed. Click on Next >.
8. A license agreement is displayed. In order to be able to install the soft-
ware, the agreement must be accepted. Then click on Finish. The instal-
lation is started.
9. A safety warning concerning unsigned contents is displayed. Confirm with
OK.
10. A message indicates that Sunrise.Workbench must be restarted in order
to apply the changes. Click on Restart now.
11. Sunrise.Workbench restarts. This completes installation in Sunrise.Work-
bench.
12. Open the station configuration of the desired project (file StationSet-
up.cat). The new software entries are displayed on the Software tab.
13. If the check mark is set in the Install column for the new entries, the new
software has automatically been selected for installation.
If not, set the check mark for the new entries.
14. Save the station configuration. The system asks whether the modifications
to the project should be accepted. Click on Save and apply.
15. Install the system software on the robot controller. Once the robot control-
ler has been rebooted, the new software is available for the station.
Description Once the virus scanner has been installed on the robot controller, a tile for the
virus scanner is available on the smartHMI. This tile can be used, for example,
to display the version of the installed virus scanner and messages about virus-
es that have been found.
(>>> 20.4 "Displaying messages of the virus scanner" Page 533)
Description The user interface on the smartHMI is available in the following languages:
Languages which are only available after software is delivered can be installed
later if required.
Description Option packages that are no longer required can be uninstalled in Sun-
rise.Workbench.
Precondition Option package has been removed from the robot controller.
Procedure 1. Select the menu sequence Help > Install new software .... The Install
window is opened.
2. Click on the link by What is already installed?. The Installation details
for Sunrise Workbench window is opened.
3. Select the Installed software tab (if it is not already selected).
4. In the list of installed software, select the option that is no longer required.
Description In order to remove an option package from the robot controller, the system
software must be reinstalled.
11 Bus configuration
Step Description
1 Install the Sunrise option package in WorkVisual.
The option package is available as a KOP file and is supplied
together with Sunrise.Workbench (file Sunrise.kop in the
directory WorkVisual AddOn).
Note: The option package supplied with Sunrise.Workbench
must always be used. If an old version of Sunrise.Workbench
is uninstalled and a new version installed, the option package
must also be exchanged in WorkVisual.
2 Terminate WorkVisual and create a new I/O configuration in
Sunrise.Workbench or open an existing I/O configuration.
WorkVisual is started automatically and the WorkVisual proj-
ect corresponding to the I/O configuration is opened.
(>>> 11.3 "Creating a new I/O configuration" Page 190)
(>>> 11.4 "Opening an existing I/O configuration" Page 190)
3 Only necessary if devices are used for which no device
description files have yet been imported:
1. Close the WorkVisual project.
2. Import the required device description files.
3. Reopen the WorkVisual project.
4 Configure the field bus.
(>>> 11.2 "Overview of field buses" Page 189)
5 Create the Sunrise I/Os and map them.
(>>> 11.5 "Creating Sunrise I/Os" Page 191)
(>>> 11.6.3 "Mapping Sunrise I/Os" Page 198)
6 Export the I/O configuration to the Sunrise project.
(>>> 11.7 "Exporting the I/O configuration to the Sunrise proj-
ect" Page 198)
7 Transfer the I/O configuration to the robot controller by means
of project synchronization and reboot the robot controller.
(>>> 9.5 "Project synchronization" Page 171)
The following field buses are supported by Sunrise and can be configured with
WorkVisual:
The I/O configuration is created automatically for the media flange set
in the project. If a media flange with an EtherCAT output (e.g. media
flange IO pneumatic) is used and additional EtherCAT devices are
connected, these must be configured using WorkVisual.
Procedure 1. Select an input or output module of the configured bus on the Field buses
tab in the top right-hand corner of the I/O Mapping window.
(>>> 11.6.1 "I/O Mapping window" Page 196)
2. Select the Sunrise I/Os tab in the top left-hand corner of the I/O Mapping
window.
3. In the bottom left-hand corner of the I/O Mapping window, click on the
Creates signals at the provider button. The Create I/O signals window
is opened.
(>>> 11.5.1 "“Create I/O signals” window" Page 192)
4. Create an I/O group and inputs/outputs within the group.
(>>> 11.5.2 "Creating an I/O group and inputs/outputs within the group"
Page 194)
5. Click on OK. The Sunrise I/Os are saved. The Create I/O signals window
is closed.
The created I/O group is displayed on the Sunrise I/Os tab of the I/O Mapping
window. The signals can now be mapped.
(>>> 11.6.3 "Mapping Sunrise I/Os" Page 198)
Overview
The window for creating and editing the Sunrise I/Os and Sunrise I/O groups
consists of the following areas:
Area Description
Edit I/O group In this area, I/O groups are created and edited. It is also possi-
ble to save I/O groups as a template or to import previously cre-
ated templates.
Edit I/O signals In this area, the input/output signals of an I/O group are dis-
played.
Edit I/O In this area, the inputs/outputs of an I/O group are created and
edited.
Input boxes are displayed with a red frame if values must be entered
or if incorrect values have been entered. A help text is displayed when
the mouse pointer is moved over the box.
Signal properties In the Edit I/O area, new signals can be created and the signal properties de-
fined:
Property Description
I/O name Name of the input/output
Description Description for the input/output (optional)
Direction Signal direction
Input: Signal is an input.
Output: Signal is an output.
Type Signal type
Analog: Signal is an analog signal.
Digital: Signal is a digital signal.
Data type Data type of the signal
In WorkVisual, a total of 15 different data types are available for
selection. For use with Java, these data types are mapped to
the following data types:
integer, long, double, boolean
Bit width Number of bits that make up the signal. With the data type
BOOL, the bit width is always 1.
Note: The value must be a positive integer which does not
exceed the maximum permissible length of the selected data
type.
The following signal properties are only relevant for analog inputs/outputs:
Property Description
Start range The smallest possible value of an analog connection without a
physical unit. This is the value to which the smallest possible
number that can be generated on the bus is mapped.
Note: The start range must be lower than the end range. It is
also possible to enter decimal values.
End range The largest possible value of an analog connection without a
physical unit. This is the value to which the largest possible
number that can be generated on the bus is mapped.
Note: The end range must be greater than the start range. It is
also possible to enter decimal values.
Signed Defines whether the number generated on the bus is interpreted
as signed or unsigned.
Check box active: Signed
Check box not active: Unsigned
Examples:
In the case of an analog output module with a measurement
range from 0 to 10 V, 0 must be specified for the start range
and +10 for the end range.
In the case of an analog input module with a measurement
range from 4 to 20 mA, 4 must be specified for the start
range and 20 for the end range.
In the case of an analog input module with a measurement
range of +/-10 V plus 1.76 V overflow, -11.76 must be spec-
ified for the start range and +11.76 for the end range.
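The start and end range thus describe a linear mapping of the raw bus value onto the configured value range. The following lines merely illustrate this arithmetic for an unsigned raw value; the conversion itself is performed by the I/O system, and the helper shown here (including the 16-bit width used in the example) is an assumption for illustration and not part of the delivered software.

public class AnalogRangeExample {

    // Illustration only: linear mapping of an unsigned raw bus value to the
    // configured start/end range (e.g. start = 4, end = 20 for a 4 … 20 mA input).
    static double toPhysicalValue(long rawValue, int bitWidth, double startRange, double endRange) {
        double maxRaw = Math.pow(2, bitWidth) - 1;  // largest number that can be generated on the bus
        return startRange + (rawValue / maxRaw) * (endRange - startRange);
    }

    public static void main(String[] args) {
        // A 16-bit module configured with start range 0 and end range 10:
        System.out.println(toPhysicalValue(0, 16, 0.0, 10.0));      // 0.0  (smallest bus value -> start range)
        System.out.println(toPhysicalValue(65535, 16, 0.0, 10.0));  // 10.0 (largest bus value -> end range)
    }
}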
4. Click on Create. The I/O group is created and displayed in the selection
menu I/O group.
5. In the Edit I/O area, enter a name for the input or output of the group and
define the signal properties.
(>>> "Signal properties" Page 192)
6. In the Edit I/O group area, click on Create. The input or output signal is
created and displayed in the Edit I/O Signals area.
7. Repeat steps 5 and 6 to define further inputs/outputs in the group.
Procedure 1. Select the desired I/O group from the I/O group selection menu.
2. Click on Edit. The Rename I/O group window is opened.
3. Change the name of the I/O group and/or the corresponding description
(optional). Confirm with Apply.
Procedure 1. Select the desired I/O group from the I/O group selection menu.
2. Click on Delete. If signals have already been created for the I/O group, a
request for confirmation is displayed.
3. Reply to the request for confirmation with Yes. The I/O group is deleted.
Procedure 1. Select the I/O group of the signal from the I/O group selection menu.
2. In the Edit I/O Signals area, click on the desired input or output.
3. In the Edit I/O area, edit the signal properties as required.
(>>> "Signal properties" Page 192)
Procedure 1. Select the I/O group of the signal from the I/O group selection menu.
2. In the Edit I/O Signals area, click on the desired input or output.
3. Click on Delete.
Description I/O groups can be saved as a template. The template contains all the in-
puts/outputs belonging to the saved I/O group. This enables I/O groups, once
created, to be reused. The mapping of the inputs and outputs is not saved,
however.
After exporting the template, the templates created in WorkVisual are avail-
able in Sunrise.Workbench in the IOTemplates folder of the project.
Procedure 1. In the Edit I/O group area, select the I/O group that is to be exported as a
template.
2. Click on Export as template. The Save I/O group as template window is
opened.
3. Enter a name for template.
If a template with the same name already exists in the Sunrise proj-
ect, it will be overwritten during the export operation.
Procedure 1. In the Edit I/O group area, click on Import from template. The Import I/O
group from template window is opened.
2. In the selection list Used template, select the template to be imported.
3. Enter a name in the I/O-group name box for the I/O group to be created.
4. Click on Import. An I/O group configured in accordance with the template
is imported and can be edited.
Overview
Item Description
1 Displays the Sunrise I/O groups
The signals in the I/O group selected here are displayed in the
overviews lower down.
2 Displays the inputs/outputs of the bus modules
The signals in the module selected here are displayed in the over-
views lower down.
3 Connection overview
Displays the mapped signals. These are the signals of the I/O
group selected under Sunrise I/Os, which are mapped to the bus
module selected under Field buses.
4 Signal overview
Here the signals can be mapped.
(>>> 11.6.3 "Mapping Sunrise I/Os" Page 198)
Item Description
5 The arrow buttons allow the connection and signal overviews to
be collapsed and expanded independently of one another.
Collapse connection view (left-hand arrow symbol pointing
up)
Expand connection view (left-hand arrow symbol pointing
down)
Collapse signal view (right-hand arrow symbol pointing up)
Expand signal view (right-hand arrow symbol pointing down)
6 Buttons for creating and editing the Sunrise I/Os
7 Displays how many bits the selected signals contain.
For the I/O mapping in Sunrise, only the Sunrise I/Os and Field bus-
es tabs are relevant.
Some of these buttons are displayed in several places. In such cases, they re-
fer to the side of the I/O Mapping window on which they are located.
Edit
Button Name/description
Creates signals at the provider
Opens the Create I/O signals window.
(>>> 11.5.1 "“Create I/O signals” window" Page 192)
The button is only active if an input or output module is
selected on the Field buses tab and a signal of the I/O group
is selected in the signal overview.
Edit signals at the provider
Opens the Edit I/O signals window.
The button is only active if an I/O group is selected on the
Sunrise I/Os tab and a signal of the I/O group is selected in
the signal overview.
Deletes signals at the provider
Deletes all the selected inputs/outputs. If all the inputs/outputs
of a group are selected, the I/O group is also deleted.
The button is only active if an I/O group is selected on the
Sunrise I/Os tab and a signal of the I/O group is selected in
the signal overview.
Mapping
Button Name/description
Disconnect
Disconnects the selected mapped signals. It is possible to
select and disconnect a number of connections simultane-
ously.
Connect
Connects signals which are selected in the signal overview.
Description This procedure is used to map the Sunrise I/Os to the inputs/outputs of the
field bus module. It is only possible to map inputs to inputs and outputs to out-
puts if they are of the same data type. For example, it is possible to map BOOL
to BOOL or INT to INT, but not BOOL to INT or BYTE.
Precondition The robot controller has been set as the active controller.
Procedure 1. On the Sunrise I/Os tab in the left-hand half of the window, select the I/O
group for which the I/Os are to be mapped.
The signals of the group are displayed in the bottom area of the I/O Map-
ping window.
2. On the Field buses tab in the right-hand half of the window, select the de-
sired input or output module.
The signals of the selected field bus module are displayed in the bottom
area of the I/O Mapping window.
3. Drag the signal of the group onto the input or output of the module. (Or al-
ternatively, drag the input or output of the device onto the signal of the
group.)
The signals are now mapped. Mapped signals are indicated by green ar-
rows.
Alternative procedure for mapping:
Select the signals to be mapped and click on the Connect button.
Description When exporting an I/O configuration from WorkVisual, a separate Java class
is created for each I/O group in the corresponding Sunrise project. Each of
these Java classes contains the methods required for programming, in order
to be able to read the inputs/outputs of an I/O group and write to the outputs
of an I/O group.
The classes and methods are saved in the Java package com.kuka.generat-
ed.ioAccess in the source folder of the Sunrise project.
The structure of the Sunrise project after exporting an I/O configuration is de-
scribed here:
(>>> 15.11 "Using inputs/outputs in the program" Page 378)
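As an illustration, a generated class for a hypothetical I/O group named "Gripper" with a digital input "PartDetected" and a digital output "CloseGripper" could be used as sketched below. The class and accessor names shown are assumptions; the actual names are derived from the I/O group and signal names defined in WorkVisual. It is also assumed here that the generated class can be injected into the application.

package application;

import javax.inject.Inject;

import com.kuka.generated.ioAccess.GripperIOGroup;  // generated for the hypothetical I/O group "Gripper"
import com.kuka.roboticsAPI.applicationModel.RoboticsAPIApplication;

public class IOAccessExample extends RoboticsAPIApplication {

    @Inject
    private GripperIOGroup gripperIO;  // injected instance of the generated I/O group class

    @Override
    public void run() {
        // Read a digital input of the I/O group (accessor name assumed).
        boolean partDetected = gripperIO.getPartDetected();

        // Write a digital output of the I/O group (accessor name assumed).
        if (partDetected) {
            gripperIO.setCloseGripper(true);
        }
    }
}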
Precondition The robot controller has been set as the active controller.
The automatic change recognition is activated in Sunrise.Workbench.
(>>> 5.9 "Activating the automatic change recognition" Page 65)
Procedure 1. Select the menu sequence File > Import / Export. The import/export wiz-
ard for files opens.
2. Select Export the I/O configuration to the Sunrise Workbench project.
3. Click on Next > and then on Finish. The configuration is exported to the
Sunrise project.
It is not essential to map all the Sunrise I/Os that have been created.
12 External control
Default application
A default application must be assigned to every project that is to be controlled externally.
The default application has the following characteristics:
It is automatically selected when the operating mode is switched to Auto-
matic.
It can only be started via the input signal App_Start (not by means of the
Start key on the smartPAD).
It cannot be deselected again in Automatic mode.
Interfaces External controller and robot controller can communicate via the following in-
terfaces:
I/O system of the robot controller
Network protocol UDP
The input/output signals for communication are predefined:
The external controller can start, pause and resume the default application
via the input signals.
(>>> 12.4 "External controller input signals" Page 202)
The output signals can be used to provide information about the status of
the default application and the station to the external controller.
(>>> 12.5 "External controller output signals" Page 203)
The following steps are required for configuring the external controller via the
I/O system:
Step Description
1 Configure and map inputs/outputs for communication with the
external controller in WorkVisual.
(>>> 12.4 "External controller input signals" Page 202)
(>>> 12.5 "External controller output signals" Page 203)
2 Export I/O configuration from WorkVisual to Sunrise.Work-
bench.
3 Create the default application for the external controller.
4 Configure the external controller in the project settings.
(>>> 12.7 "Configuring the external controller in the project
settings" Page 205)
5 Transfer the project to the robot controller by means of syn-
chronization.
The following steps are required for configuring the external controller via the
UDP network protocol:
Step Description
1 Create the default application for the external controller.
2 Configure the external controller in the project settings.
(>>> 12.7 "Configuring the external controller in the project
settings" Page 205)
3 Transfer the project to the robot controller by means of syn-
chronization.
App_Start The input signal is absolutely vital for an externally controlled project.
The default application is started and resumed in Automatic mode by the ex-
ternal controller by means of a rising edge of the signal (change from FALSE
to TRUE).
Get_State The input signal is only available if the UDP interface is used.
The external controller can use this signal to request application and station
statuses from the robot controller. The value of the signal can be TRUE or
FALSE.
System response The input signal App_Enable has a higher priority than the input signal
App_Start. If the input signal App_Enable is configured, the default applica-
tion can only be started if App_Enable has a HIGH level or is TRUE.
The following table describes the system behavior when the App_Enable sig-
nal is configured.
AutExt_Active The output signal has a HIGH level or is TRUE if Automatic mode is active and
the project on the robot controller can be controlled externally via the interface.
AutExt_AppReadyToStart The output signal has a HIGH level or is TRUE if the default application is ready to start.
The application is ready to start in the following states:
Selected
Motion paused
DefaultApp_Error The output signal has a HIGH level or is TRUE if an error occurred when the
default application was run.
Station_Error The output signal has a HIGH level or is TRUE if the station is in an error state.
There is an active error state in the following cases:
Motion enable signal is not present.
Drive error or bus error active.
At least one robot axis is not mastered and the operating mode is not set
to T1.
Procedure 1. Right-click on the desired project in the Package Explorer view and select
Sunrise > Change project settings from the context menu.
The Properties for [Sunrise Project] window opens.
2. Select Sunrise > External control in the directory in the left area of the
window.
3. Make the settings for external control of the project in the right-hand area
of the window.
Set the check mark at Project is controlled externally.
In the Default application area, select the default application.
Under Input interface:, select the interface for the external communi-
cation.
Configure the input/output parameters for the interface.
(>>> 12.7.1 "Input/output parameters of the I/O interface" Page 207)
(>>> 12.7.2 "Input/output parameters of the UDP interface"
Page 207)
4. Click on OK to save the settings and close the window.
Description
Item Description
1 Directory of the project settings
2 “Default application” area
All robot applications of the project are available for selection as
the default application.
3 “Input configuration” area
The interface for the external communication is selected here:
IO Groups: I/O interface
UDP: UDP interface
The configurable input parameters depend on the specific inter-
face.
4 “Output configuration” area
The configurable output parameters are not dependent on the in-
terface selected for the inputs. The values of the outputs can also
be requested via UDP, for example, if the I/O interface has been
configured for the inputs.
Button Description
Restoring default values: Resets the window to the default settings. All user settings will be lost.
Apply: Saves the user settings. The window remains open.
OK: Saves the user settings and closes the window.
Cancel: Closes the window without saving.
If the I/O interface is used, mapped inputs/outputs of an I/O group must be as-
signed to the required input/output signals.
The input App_Start is absolutely vital for external control of a project. The in-
put App_Enable and the signal outputs can optionally be configured.
Column Description
I/O group All I/O groups of the I/O configuration of the project are available.
Boolean input All inputs of the I/O group selected in the I/O group column are
available.
Boolean output All outputs of the I/O group selected in the I/O group column are
available.
Parameter Description
with App_Enable supported: Use of the input signal App_Enable
Check box not active (default): App_Enable is not evaluated.
Check box active: App_Enable is evaluated.
IP of controlling client: IP address of the client configured for external control of the project
IPs of state receivers: List of clients to receive status information (optional)
For each client, the IP address must be specified in the following format together with the corresponding port:
IP_address_1:Port_1;IP_address_2:Port_2;...
Note: It is advisable to specify the IP address and port of the controlling client in order to inform it of changes of state.
Form and length of the UDP data packets for the data exchange are pre-
defined:
UTF-8 coding
Data arrays are separated by a semicolon.
Description In the case of the UDP interface, application and station statuses are trans-
ferred from the robot controller to an external controller by means of so-called
status messages.
In the following cases, the robot controller sends status messages to the cli-
ents that are configured as recipients of status messages in the project set-
tings:
Following receipt of the control message from an external client
Example 1449066055468;7;2;1;true;false;false;false;RUNNING;false;false
If more than one fault occurs simultaneously, the fault with the highest
priority is transferred. A fault with the ID -3, for example, has a higher
priority than a fault with the ID -4.
Precondition When a controller message is sent, the following target address and port must
always be specified:
IP address of the robot controller (see Configuration tab in the station
configuration)
Port 30300 (fixed port of the robot controller)
Description With the UDP interface, input signals are set via so-called controller messages
that the external controller must send to the robot controller. This client data
packet must contain the following data arrays:
Example 1449066055468;1;App_Start;true
App_Enable If the input signal App_Enable is evaluated, the following points must be taken
into consideration when sending controller messages:
The application can only be started by the input signal App_Start if the ro-
bot controller has received a message with …App_Enable;true in the
last 100 ms.
The input signal App_Enable functions like a heartbeat signal.
The application is executed as long as the robot controller receives a con-
troller message, e.g. …App_Enable;true, at least every 100 ms. If no
message is received, the application is paused.
When sending the controller messages, the client must take the net-
work delay into account.
If the external client sets the input signal App_Enable from TRUE to
FALSE within the 100 ms, this also pauses the application.
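The following sketch illustrates, on the side of the external client, how such a heartbeat could be generated. It is an illustration only: the IP address is an example and must be replaced by the IP address of the robot controller, and it is assumed that the data packet counter is incremented with every controller message and continued from the current counter value.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class AppEnableHeartbeat {

    public static void main(String[] args) throws Exception {
        InetAddress controller = InetAddress.getByName("192.168.0.2"); // example; use the IP address of the robot controller
        int port = 30300;   // fixed port of the robot controller
        long counter = 1;   // continue from the current data packet counter (can be queried with Get_State)

        DatagramSocket socket = new DatagramSocket();

        // Send App_Enable;true more often than every 100 ms so that the limit
        // is observed even with some network delay.
        while (true) {
            String message = System.currentTimeMillis() + ";" + counter++ + ";App_Enable;true";
            byte[] data = message.getBytes(StandardCharsets.UTF_8);  // UTF-8 coding, fields separated by ";"
            socket.send(new DatagramPacket(data, data.length, controller, port));
            Thread.sleep(50);
        }
    }
}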
The example shows how a robot application can be started from a PC via UDP
and what start-up steps and programming are required for this.
The input signal App_Enable is not used in this example. This example can
thus not be used to pause an application and does not claim to be comprehen-
sive.
The following steps are required for starting up the external controller:
1. Connect the PC to the robot controller via the Ethernet interface KLI.
2. Assign a fixed network IP address to the PC, e.g. 192.168.0.10.
On the PC used for external control, there must be a program that can gener-
ate and send UDP data packets.
If a firewall is used, it must be ensured that it does not block the in-
coming and outgoing UDP data packets.
Precondition The correct target address and port have been assigned to the data pack-
ets that are to be sent:
IP address of the robot controller (see Configuration tab in the station
configuration)
Port 30300 (fixed port of the robot controller)
Description Following a reboot of the robot controller, the robot application can be started
with a controller message with App_Start:
1457449078435;1;App_Start;true
The first number in the packet is the time stamp that must be used to
document when the packet was sent. Here, and in the following code
examples, this number must always be replaced with a current time
stamp in milliseconds. (When using Java, such a number can be generated,
for example, with java.lang.System.currentTimeMillis().)
If the value to be transferred for the counter is not known, a socket on the PC
must be opened that can receive UDP messages at port 30333. Get_State
can then be used to request the current counter value:
1457450539457;1;Get_State;true
If the socket is now checked for received messages, a status message should
now be present as the answer from the robot controller, e.g.:
1457450539459;4;1337;-3;true;true;false;false;IDLE;false;true
The received message shows that the current value of the data packet counter
is 1337. The counter value 1338 must therefore be transferred in the next data
packet.
In order to restart a robot application, the state of the signal from App_Start
must change from FALSE to TRUE. For this purpose, the following packets
are sent:
1457450539511;1338;App_Start;false
1457450539511;1339;App_Start;true
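The packets shown in this example could, for instance, be generated with a small Java program on the PC. The sketch below is an illustration only: the IP address of the robot controller is an example, it is assumed that the PC (e.g. 192.168.0.10:30333) has been entered under IPs of state receivers in the project settings, and the position of the data packet counter in the status message (third field, 1337 in the example above) is taken from that example.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpAppStartExample {

    private static final int CONTROLLER_PORT = 30300;  // fixed port of the robot controller
    private static final int STATUS_PORT = 30333;      // port on the PC for status messages

    public static void main(String[] args) throws Exception {
        InetAddress controller = InetAddress.getByName("192.168.0.2"); // example; see Configuration tab
        // Bind to port 30333 so that status messages from the robot controller can be received.
        DatagramSocket socket = new DatagramSocket(STATUS_PORT);

        // 1. Query the current data packet counter with Get_State.
        send(socket, controller, System.currentTimeMillis() + ";1;Get_State;true");
        String status = receive(socket);
        System.out.println("Status message: " + status);

        // The counter is assumed here to be the third field of the status message
        // (1337 in the example above); the next packet must use counter + 1.
        long counter = Long.parseLong(status.split(";")[2]) + 1;

        // 2. Generate a rising edge of App_Start (FALSE -> TRUE) to start the application.
        send(socket, controller, System.currentTimeMillis() + ";" + counter++ + ";App_Start;false");
        send(socket, controller, System.currentTimeMillis() + ";" + counter++ + ";App_Start;true");

        socket.close();
    }

    private static void send(DatagramSocket socket, InetAddress controller, String message) throws Exception {
        byte[] data = message.getBytes(StandardCharsets.UTF_8);  // UTF-8 coding, fields separated by ";"
        socket.send(new DatagramPacket(data, data.length, controller, CONTROLLER_PORT));
    }

    private static String receive(DatagramSocket socket) throws Exception {
        byte[] buffer = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        socket.receive(packet);
        return new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
    }
}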
12.10 Configuring the signal outputs for a project that is not externally controlled
Description The predefined output signals for the external controller can also be used to
signal application and station statuses in projects that are not externally con-
trolled.
The application statuses always refer to the default application selected in the
project settings.
Precondition In the case of communication via the I/O system of the robot controller:
The I/O configuration of the project contains the outputs configured and
mapped in WorkVisual.
(>>> 12.5 "External controller output signals" Page 203)
Procedure 1. Right-click on the desired project in the Package Explorer view and select
Sunrise > Change project settings from the context menu.
The Properties for [Sunrise Project] window opens.
2. Select Sunrise > General in the directory in the left area of the window.
3. Make the general settings for the project in the right-hand area of the win-
dow.
If application statuses are to be signaled: In the Default application
area, select the desired default application.
In the Output configuration area, configure the output parameters re-
quired by the communications interface.
(>>> 12.10.1 "Output parameters of the I/O interface" Page 214)
(>>> 12.10.2 "Output parameters of the UDP interface" Page 214)
4. Click on OK to save the settings and close the window.
If the I/O interface is used, mapped outputs of an I/O group must be assigned
to the required output signals.
Column Description
I/O group All I/O groups of the I/O configuration of the project are available.
Boolean output All outputs of the I/O group selected in the I/O group column are
available.
Parameter Description
IPs of state receivers: List of clients to receive status information
For each client, the IP address must be specified in the following
format together with the corresponding port:
IP_address_1:Port_1;IP_address_2:Port_2;...
13 Safety configuration
The safety configuration defines the safety-oriented functions in order to inte-
grate the industrial robot safely into the system. Safety-oriented functions
serve to protect human operators when they work with the robot.
The safety configuration is an integral feature of a Sunrise project and is man-
aged in tabular form. The individual safety functions are grouped in KUKA
Sunrise.Workbench on an application-specific basis. The safety configuration
is then transferred with the project to the controller and activated there.
The system integrator must verify that the safety configuration suffi-
ciently reduces risks during collaborative operation (HRC).
It is advisable to perform this verification in accordance with the infor-
mation and instructions for operating collaborative robots in ISO/TS 15066.
States with various safety settings are defined in the safety configu-
ration as part of the ESM mechanism (Event-Driven Safety Monitor-
ing). It is possible to switch between these in the application. Since
switching between these states is carried out by means of non-safety-orient-
ed signals, all configured states must be consistent. This means that each
state must ensure a sufficient degree of safety, regardless of the time or
place of activation (i.e. regardless of the current process step).
Overview The safety configuration must implement all safety functions which are re-
quired to operate the industrial robot. A safety function monitors the entire sys-
tem on the basis of specific criteria. These are described by individual
monitoring functions, so-called AMFs (Atomic Monitoring Functions). To con-
figure a safety function, several AMFs can be linked to form complex safety
monitoring functions. In addition, the safety function defines a suitable reaction
which is triggered in case of error.
Example: In a specific area of the robot's workspace, the velocity at the TCP
must not exceed 500 mm/s (“Workspace monitoring” and “Velocity monitoring”
monitoring functions). Otherwise, the robot must stop immediately (reaction in
case of error).
PSM and ESM The Sunrise safety concept provides 2 different monitoring mechanisms:
Permanent safety-oriented monitoring
The safety functions of the PSM mechanism (Permanent Safety Monitor-
ing) are always active. It is only possible to deactivate individual safety
functions by changing the safety configuration.
The PSM mechanism is used to constantly monitor the system. It imple-
ments basic safety settings which are independent of the process step be-
ing carried out. These include, for example, EMERGENCY STOP
functions, the enabling switch on the smartPAD, the definition of a cell
area or safety functions that depend on the operating mode.
Event-dependent safety-oriented monitoring
The ESM mechanism (Event-driven Safety Monitoring) defines safe
states. It is possible to switch between these in the application. A safe
ESM state contains the safety functions required in the corresponding pro-
cess step.
Since switching is carried out by means of non-safety-oriented signals, the
defined state must ensure a sufficient degree of safety, regardless of the
time or place of activation.
The ESM mechanism allows specific safety functions to be adapted for
specific processes. This is of particular importance for human-robot collab-
oration applications, as these often require various safety settings de-
pending on the situation. The required parameters, such as permissible
velocity, collision values or spatial limits, can be individually defined for
each process step using an ESM state.
AMF The smallest unit of a safety monitoring function is called an Atomic Monitoring
Function (AMF).
Each AMF supplies an elementary, safety-relevant item of information, e.g.
whether a safety-oriented input is set or whether the Automatic operating
mode is selected.
Atomic Monitoring Functions can have 2 different states and are LOW-active.
This means that if a monitoring function is violated, the state switches from “1”
to “0”.
State “0”: The AMF is violated.
State “1”: The AMF is not violated.
The AMF smartPAD Emergency Stop is violated, for example, if the EMER-
GENCY STOP device on the operator panel is pressed.
Safety function A safe ESM state is defined with up to 20 safety functions. The safety func-
tions of the ESM mechanism use exactly one AMF. If this AMF is violated, the
safety function and thus the entire ESM state is considered to be violated.
For safety functions of the PSM mechanism, up to 3 AMFs are logically linked
to one another. This allows complex safety monitoring functions to be imple-
mented. If all AMFs of a safety function of the PSM mechanism are violated,
the entire safety function is considered to be violated.
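The AND logic of a PSM safety function can be pictured with a short sketch. The following Java fragment is purely illustrative; the class and method names are invented for this example and are not part of the Sunrise API. It models up to 3 LOW-active AMFs whose "violated" states are combined: the row counts as violated only if every configured AMF reports state "0".
// Conceptual sketch only, not the safety controller implementation:
// models the AND logic of a PSM table row with up to 3 LOW-active AMFs.
public class PsmRowSketch {
   // Each AMF reports "1" (true) = not violated, "0" (false) = violated.
   static boolean rowViolated(boolean... amfStates) {
      for (boolean notViolated : amfStates) {
         if (notViolated) {
            return false;   // at least one AMF not violated -> row not violated
         }
      }
      return true;           // all configured AMFs violated -> row violated, reaction triggered
   }

   public static void main(String[] args) {
      // AMF 1 and AMF 2 violated, AMF 3 not violated -> safety function not violated
      System.out.println(rowViolated(false, false, true));   // false
      // All three AMFs violated -> safety function violated
      System.out.println(rowViolated(false, false, false));  // true
   }
}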
A suitable reaction is defined for each safety function. This reaction must take
place in the case of an error and put the system into a safe state.
Brake is triggered.
The manipulator is braked on the path as long as the safety monitoring is
violated. The braking process is monitored by the safety controller. Safety
stop 1 is executed in the event of a fault.
Unlike the safety stop, the braking process does not lead to a complete
standstill or deactivation of the drives. The velocity reduction and safety-
oriented monitoring of the braking process are only carried out until the
safety monitoring is no longer violated.
Conditions for use of the “Brake” safety reaction:
Only available with the KUKA Sunrise.EnhancedVelocityController op-
tion
Only available for safety functions of the PSM mechanism
Only compatible with safety functions that contain Cartesian velocity
monitoring
Not compatible with safety functions that contain an extended AMF
Only compatible with position-controlled spline motions
(>>> 17.2 "“Brake” safety reaction" Page 487)
Safety-oriented output is set to “0” (LOW level).
Setting a safety-oriented output is a safety reaction that is only available
for safety functions of the PSM mechanism.
The reactions can be used for any number of safety functions. A reaction is
triggered once one of the safety functions using this reaction is violated. This
makes it possible, for example, to inform a higher-level controller via a safe
output when specific errors occur.
With the PSM mechanism, it is possible to trigger several different reactions
when a specific combination of AMFs is violated. For example, a safety stop
can be triggered as well as a safe output. To do so, 2 safety functions must be
configured with identical AMF combinations.
If 2 safety functions differ only in the type of stop configured, a violation trig-
gers the stronger stop reaction. In other words, it triggers the stop reaction
which causes an earlier safety-oriented disconnection of the drives. If several
safety functions use the same output signal as a reaction, this signal is set to
“0” once one of the safety functions is violated.
If a safety function which uses a safety output as a reaction is violated, this out-
put is immediately set to LOW.
If the violation state is cancelled, the output is only set to HIGH again when the
following conditions have been met:
The safety function is not violated for at least 24 ms. The reaction to can-
cellation of the violation state is always delayed.
If an Ethernet safety interface is used:
The output has the LOW level for at least 500 ms beforehand. If the LOW
level has not yet been present for this time, the level change to HIGH waits
until the 500 ms has elapsed.
If the discrete safety interface is used:
The output has the LOW level for at least 200 ms beforehand. If the LOW
level has not yet been present for this time, the level change to HIGH waits
until the 200 ms has elapsed.
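These reset conditions can be illustrated with a small calculation. The following Java sketch is illustrative only (class, method and parameter names are invented for this example, not part of the Sunrise API); it returns the earliest time at which the output may return to HIGH, given the time at which the violation state was cancelled and the time at which the output was set to LOW.
// Conceptual sketch only: earliest time at which a safe output used as a
// reaction may return to HIGH, per the rules described above.
public class SafeOutputResetSketch {
   // violationClearedMs: time at which the safety function stopped being violated
   // outputWentLowMs:    time at which the output was set to LOW
   // ethernetInterface:  true = Ethernet safety interface (500 ms minimum LOW),
   //                     false = discrete safety interface (200 ms minimum LOW)
   static long earliestHighMs(long violationClearedMs, long outputWentLowMs,
                              boolean ethernetInterface) {
      long notViolatedDelay = violationClearedMs + 24;              // 24 ms without violation
      long minimumLowTime = outputWentLowMs + (ethernetInterface ? 500 : 200);
      return Math.max(notViolatedDelay, minimumLowTime);
   }

   public static void main(String[] args) {
      // Output went LOW at t = 0 ms, violation cleared at t = 100 ms:
      System.out.println(earliestHighMs(100, 0, true));   // 500 (Ethernet safety interface)
      System.out.println(earliestHighMs(100, 0, false));  // 200 (discrete safety interface)
   }
}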
Description The safety functions of the PSM mechanism (Permanent Safety Monitoring)
are permanently active and use the criteria defined by these functions to en-
sure that the overall system is constantly monitored.
For a safety function of the PSM mechanism, up to 3 AMFs (Atomic Monitoring
Functions) can be linked to one another. The entire safety function is only con-
sidered violated if all of these AMFs are violated. The safety function also de-
fines a reaction. This is triggered if the entire safety function is violated.
Categories For diagnosis in case of error, a category is assigned to each safety function
of the PSM mechanism. Depending on the category, errors are displayed on
the smartPAD and saved in the LOG file. For this reason, it is advisable to se-
lect these carefully.
The following categories are available:
Parameterizable AMFs
Extended AMFs
Overview KUKA Sunrise contains a basic package of AMFs. These include, for example,
all standard AMFs. The following safety options are also available and can be
used to install further AMFs:
KUKA Sunrise.SafeOperation (SOP)
KUKA Sunrise.HRC: safety option for HRC applications
Automatic mode
Test mode
High-velocity mode
Reduced-velocity mode
Input signal
Motion enable
Position referencing
Time delay
Tool orientation
Collision detection
Torque referencing
AMF Task
smartPAD Emergency Stop Monitors the EMERGENCY STOP device on the smartPAD
smartPAD enabling switch inactive Checks whether the enabling signal has not been issued on the smartPAD.
smartPAD enabling switch panic active Checks whether an enabling switch on the smartPAD has been pressed down fully (panic position).
AMF Task
Test mode Checks whether a test operating mode is active (T1, T2, CRR)
Automatic mode Checks whether the Automatic operating mode is active (AUT)
Reduced-velocity mode Checks whether an operating mode with reduced velocity is
active (T1, CRR)
Note: In the case of a mobile platform, the velocity is not
reduced in T1 and CRR mode.
High-velocity mode Checks whether an operating mode with programmed velocity
is active (T2, AUT)
AMF Task
Motion enable Monitors the motion enable signal
Motion enable is withdrawn if a safety stop is active.
Note: The “Brake” safety reaction does not lead to withdrawal
of motion enable.
AMF Task
Input signal Monitors a safety-oriented input
(>>> 13.12.4 "Monitoring safe inputs" Page 246)
AMFs for evaluating the enabling signal on the hand guiding device:
AMF Task
Hand guiding device enabling inactive Checks whether the enabling signal has not been issued on the hand guiding device.
Hand guiding device enabling active Checks whether the enabling signal has been issued on the hand guiding device.
The AMF is used to activate further monitoring functions during
manual guidance with an enabling device.
(>>> 13.12.5 "Manual guidance with enabling device and velocity monitoring"
Page 246)
AMFs for evaluating the referencing status:
AMF Task
Position referencing Monitors the referencing status of the position values for the
axes of a kinematic system
(>>> 13.12.6 "Evaluating the position referencing" Page 249)
Torque referencing Monitors the referencing status of the joint torque sensors of the
axes of a kinematic system
(>>> 13.12.7 "Evaluating the torque referencing" Page 250)
AMF Task
Axis velocity monitoring Monitors the velocity of an axis
(>>> 13.12.8.1 "Defining axis-specific velocity monitoring"
Page 251)
Cartesian velocity monitoring Monitors the Cartesian translational velocity at defined points of
a kinematic system
(>>> 13.12.8.2 "Defining Cartesian velocity monitoring"
Page 252)
Tool-related velocity component Monitors the Cartesian translational velocity in a specific, defined direction.
(>>> 13.12.8.3 "Direction-specific monitoring of Cartesian
velocity" Page 254)
AMF Task
Cartesian workspace monitoring Checks whether a part of the structure of a kinematic system being monitored is located outside of its permissible workspace
(>>> 13.12.9.1 "Defining Cartesian workspaces" Page 261)
AMF Task
Cartesian protected space monitoring Checks whether a part of the structure of a kinematic system being monitored is located within a non-permissible protected space
(>>> 13.12.9.2 "Defining Cartesian protected spaces"
Page 263)
Axis range monitoring Monitors the position of one of the axes of a kinematic system
(>>> 13.12.9.3 "Defining axis-specific monitoring spaces"
Page 265)
AMF Task
Tool orientation Checks whether the orientation of the tool of a kinematic system
is outside a permissible range
(>>> 13.12.10 "Monitoring the tool orientation" Page 267)
AMF Task
Axis torque monitoring Monitors the measured torque of an axis
(>>> 13.12.13.1 "Axis torque monitoring" Page 271)
Collision detection Monitors the external axis torques of all axes of a kinematic sys-
tem
(>>> 13.12.13.2 "Collision detection" Page 272)
TCP force monitoring Monitors the external force acting on the tool or robot flange of a
kinematic system
(>>> 13.12.13.3 "TCP force monitoring" Page 273)
Base-related TCP force component Monitors a component of the external force acting on the tool or robot flange of a kinematic system relative to a base coordinate system.
(>>> 13.12.13.4 "Direction-specific monitoring of the external
force on the TCP" Page 275)
Description An extended AMF differs from a standard AMF and a parameterizable AMF in
that monitoring parameters are only defined during operation. The parameters
are set at the time of activation. For the AMF Standstill monitoring of all axes,
for example, the axis angles are set as reference angles for monitoring at the
time of activation.
An extended AMF is activated if all other AMFs used by the safety function are
violated. As long as at least one of the other AMFs is not violated, the extend-
ed AMF is not active and not evaluated.
Extended AMFs are only evaluated one cycle after they are activated.
This can result in an extension of the reaction time by up to 12 ms.
Extended AMFs are not available for the safety functions of the ESM
mechanism.
AMF Task
Standstill monitoring of all axes Monitors the standstill of all axes of a kinematic system.
(>>> 13.12.11 "Standstill monitoring (safe operational stop)"
Page 269)
AMF Task
Time delay Delays the triggering of the reaction of a safety function for a
defined time.
(>>> 13.12.12 "Activation delay for safety function" Page 270)
Description Some safety monitoring functions (AMFs) provided by the System Software
are kinematic-specific. Kinematic-specific means that the kinematic system to
be monitored must be selected during configuration of these AMFs. (Param-
eter Monitored kinematic system with the values First kinematic system …
Fourth kinematic system)
If kinematic-specific AMFs are used in the safety configuration, the kinematic
system that is to be monitored must be specified as follows:
First kinematic system: An LBR is monitored.
Second kinematic system: A mobile platform is monitored.
Third kinematic system: Not currently assigned to a kinematic system
Fourth kinematic system: Not currently assigned to a kinematic system
Overview Not all kinematic-specific AMFs are available for monitoring a KMP, as the re-
quired sensor information is not available. If an AMF cannot be used, it is al-
ways violated. The following kinematic-specific AMFs cannot be used for monitoring a mobile platform:
Torque referencing
Tool orientation
Collision detection
13.8 Worst-case reaction times of the safety functions in the case of a single fault
The reaction time describes the time between the following events:
Time at which the event occurs that is to trigger a safety reaction, e.g. vi-
olation of a monitored axis range or setting of an EMERGENCY STOP in-
put
Time at which the safety reaction is initiated, e.g. stop reaction is initiated
or an output has been deactivated
The reaction time thus contains fault detection times and delays before initia-
tion of the safety reaction. The worst-case reaction time in the case of a single
fault considers the presence of an individual fault and is thus greater than the
reaction time typically expected for the safety function. The reaction time does
not include the time between initiation of a stop reaction and the kinematic sys-
tem coming to a standstill.
1 Reaction time
2 Braking time
3 Stopping time = Reaction time + Braking time
v Velocity
t Time
t0 Time at which the triggering event occurs
t1 Time at which the safety reaction is initiated
t2 Time at which the kinematic system comes to a standstill
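Worked example (the braking time used here is an assumed value, not a specified one): for the AMF Cartesian velocity monitoring with the reaction Stop 0, the tables below specify a worst-case reaction time of 32 ms. If the braking process itself takes e.g. 500 ms for the motion in question, the resulting stopping time is 32 ms + 500 ms = 532 ms. The actual braking time depends on the velocity, payload and axis position and is not specified in this documentation.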
For the stop reactions, the reaction time for the safety stop 0 is specified in
each case. For safety stop 1 and safety stop 1 (path-maintaining), the reaction
time may be longer in the case of defective stopping with the drives. This fault
is detected by monitoring the braking ramps. The reaction time thus depends
on the actual motion up to triggering of the braking ramp monitoring. Deacti-
vation of the motor power can be delayed by a maximum of 1 second for safety
stop 1 and safety stop 1 (path-maintaining).
Incorrect braking by the drives is also detected by means of braking ramp
monitoring in the case of the “Brake” safety reaction. For this reason, the re-
action time in the event of a fault, as in the case of safety stop 1 and safety
stop 1 (path-maintaining), depends on the actual motion up to triggering of this
monitoring function. The limit value for this monitoring is continuously reduced
to 0 mm/s over a period of 1 s.
If multiple monitoring functions (AMFs) are combined in a PSM table row, the
monitoring function with the longest reaction time determines the reaction time
of the safety function.
Axis range monitoring
Reaction Reaction time
Stop 0 27 ms
CIB_SR output 258 ms
PROFIsafe output 49 ms + PROFIsafe master watchdog time
FSoE output 49 ms + FSoE master watchdog time
Input signal CIB_SR
Reaction Reaction time
Stop 0 174 ms
CIB_SR output 328 ms
PROFIsafe output 143 ms + PROFIsafe master watchdog time
FSoE output 143 ms + FSoE master watchdog time
Input signal PROFIsafe
Reaction Reaction time
Stop 0 67 ms + y*
CIB_SR output 245 ms + y*
PROFIsafe output Watchdog time * [2 + Ceil(24 ms / watchdog time)]
FSoE output 36 ms + y* + FSoE master watchdog time
*: For FSoE inputs, delay y must additionally be taken into consideration. This
delay is dependent on the watchdog time of the FSoE slave and is set by the
FSoE master.
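Worked example for the watchdog-based formula above (the watchdog time used here is an assumed example value; the actual value is set in the master configuration): with a watchdog time of 20 ms, the reaction time for the output is 20 ms * [2 + Ceil(24 ms / 20 ms)] = 20 ms * 4 = 80 ms.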
Input signal media flange "Touch"
Reaction Reaction time
Stop 0 174 ms
CIB_SR output 351 ms
PROFIsafe output 143 ms + PROFIsafe master watchdog time
FSoE output 143 ms + FSoE master watchdog time
smartPAD Emergency Stop
Reaction Reaction time
Stop 0 174 ms
CIB_SR output 351 ms
PROFIsafe output 143 ms + PROFIsafe master watchdog time
FSoE output 143 ms + FSoE master watchdog time
smartPAD enabling switch panic active
Reaction Reaction time
Stop 0 174 ms
CIB_SR output 351 ms
PROFIsafe output 143 ms + PROFIsafe master watchdog time
FSoE output 143 ms + FSoE master watchdog time
smartPAD enabling switch inactive
Reaction Reaction time
Stop 0 174 ms
CIB_SR output 351 ms
PROFIsafe output 143 ms + PROFIsafe master watchdog time
FSoE output 143 ms + FSoE master watchdog time
Axis velocity monitoring
Reaction Reaction time
Stop 0 32 ms
CIB_SR output 258 ms
PROFIsafe output 49 ms + PROFIsafe master watchdog time
FSoE output 49 ms + FSoE master watchdog time
Cartesian workspace monitoring
Reaction Reaction time
Stop 0 27 ms
CIB_SR output 258 ms
PROFIsafe output 49 ms + PROFIsafe master watchdog time
FSoE output 49 ms + FSoE master watchdog time
Cartesian velocity monitoring
Reaction Reaction time
Stop 0 32 ms
CIB_SR output 258 ms
PROFIsafe output 49 ms + PROFIsafe master watchdog time
FSoE output 49 ms + FSoE master watchdog time
Cartesian protected space monitoring
Reaction Reaction time
Stop 0 27 ms
CIB_SR output 258 ms
Standstill monitoring of all axes
Reaction Reaction time
Stop 0 27 ms
CIB_SR output 258 ms
PROFIsafe output 49 ms + PROFIsafe master watchdog time
FSoE output 49 ms + FSoE master watchdog time
Tool-related velocity component
Reaction Reaction time
Stop 0 32 ms
CIB_SR output 258 ms
PROFIsafe output 49 ms + PROFIsafe master watchdog time
FSoE output 49 ms + FSoE master watchdog time
Tool orientation
Reaction Reaction time
Stop 0 27 ms
CIB_SR output 258 ms
PROFIsafe output 49 ms + PROFIsafe master watchdog time
FSoE output 49 ms + FSoE master watchdog time
Axis torque monitoring
Reaction Reaction time
Stop 0 27 ms
CIB_SR output 294 ms
PROFIsafe output 85 ms + PROFIsafe master watchdog time
FSoE output 85 ms + FSoE master watchdog time
Base-related TCP force component
Reaction Reaction time
Stop 0 27 ms + x***
CIB_SR output 258 ms + x***
PROFIsafe output 49 ms + PROFIsafe master watchdog time + x***
FSoE output 49 ms + FSoE master watchdog time + x***
***: With this monitoring function, an additional detection time x must be taken
into account for collision detection, as the collision forces are not measured di-
rectly. Detection of the actual collision forces is carried out approximately with
a delay of a PT1 element with the time constant T=1/30 s.
TCP force monitoring
Reaction Reaction time
Stop 0 27 ms + x***
CIB_SR output 258 ms + x***
PROFIsafe output 49 ms + PROFIsafe master watchdog time + x***
FSoE output 49 ms + FSoE master watchdog time + x***
***: With this monitoring function, an additional detection time x must be taken
into account for collision detection, as the collision forces are not measured di-
rectly. Detection of the actual collision forces is carried out approximately with
a delay of a PT1 element with the time constant T=1/30 s.
Hand guiding device enabling active The reaction time depends on the input used to connect the enabling device on the hand guiding device to the robot controller. The reaction time corresponds to the reaction time of the corresponding AMF Input signal.
Hand guiding device enabling inactive The reaction time depends on the input used to connect the enabling device on the hand guiding device to the robot controller. The reaction time corresponds to the reaction time of the corresponding AMF Input signal.
Use Deactivation of safety functions may be used, for example, for freeing persons
in a crushing situation.
To cancel a safety stop triggered by one of the defined AMFs, the config-
ured input must be set to HIGH.
As long as the input is HIGH, the robot can be moved for a maximum of 5
seconds. Every further safety stop triggered by one of the defined AMFs
in this time does not become active.
After this time, the input must be reset and set again.
Velocity monitoring While the safety functions are deactivated, all axis-specific velocity monitoring functions and the Cartesian velocity monitoring function remain active.
For all kinematic systems, safety-oriented monitoring of the Cartesian velocity
of 250 mm/s of the robot and tools is additionally active. This additional Carte-
sian velocity monitoring is active irrespective of the operating mode.
Procedure 1. Right-click on the desired project in the Package Explorer view and select
Sunrise > Change project settings from the context menu.
The enabling device of the hand guiding device can be used as an in-
put for deactivating safety functions. In this case, it must be taken into
consideration in the risk assessment that every time the enabling
switch on the hand guiding device is used, a safety stop that is active at the
time of the enabling can be cancelled if it was triggered by one of the defined
safety functions.
Further safety functions and safe ESM states can be configured. Safety-ori-
ented tools can also be mapped.
After the safety configuration has been transferred to the robot controller, it
must be activated and safety acceptance must be carried out.
Step Description
1 Open the safety configuration.
(>>> 13.10.2 "Opening the safety configuration" Page 232)
2 Edit the safety functions in the Customer PSM table or create
new safety functions.
(>>> 13.10.3 "Configuring the safety functions of the PSM
mechanism" Page 235)
3 Configure event-dependent monitoring functions if required.
To do so, create safe ESM states and corresponding safety
functions. Existing ESM states can be changed by adapting
safety functions which are already configured or by adding
new ones.
(>>> 13.10.4 "Configuring the safe states of the ESM mecha-
nism" Page 237)
Step Description
4 If necessary, map safety-oriented tools.
(>>> 13.10.5 "Mapping safety-oriented tools" Page 241)
5 Save safety configuration.
6 When using the ESM mechanism
Program the necessary switch between the safe states in the
robot application and/or background applications.
(>>> 13.10.4.8 "Switching between ESM states" Page 240)
7 When using position-based AMFs (>>> "Position-based
AMFs" Page 291)
Create the application prepared by KUKA for the position and
torque referencing of the LBR iiwa or an application of your
own for the reference run.
(>>> 13.14.1 "Position referencing" Page 283)
8 When using axis torque-based AMFs (>>> "Axis torque-
based AMFs" Page 291)
Create the application prepared by KUKA for the position and
torque referencing of the LBR iiwa and integrate the safety-
oriented tool into the application. Further adaptations in the
application may be necessary.
(>>> 13.14.2 "Torque referencing" Page 284)
9 Transfer the safety configuration to the robot controller.
By installation of the system software or by project syn-
chronization
10 Reboot the robot controller to apply the safety configuration.
11 Activate the safety configuration on the robot controller
(>>> 13.11.1 "Activating the safety configuration" Page 244)
12 When using position-based AMFs (>>> "Position-based
AMFs" Page 291)
Carry out position referencing.
13 When using axis torque-based AMFs (>>> "Axis torque-
based AMFs" Page 291)
Carry out torque referencing.
14 Carry out safety acceptance.
(>>> 13.15 "Safety acceptance overview" Page 287)
When the safety configuration is evaluated, the Customer PSM and KUKA
PSM tables are always checked simultaneously. It is possible for the two ta-
bles to contain identical safety functions with different reactions. If different
stop reactions are configured, a violation triggers the stronger stop reaction. In
other words, it triggers the stop reaction which causes an earlier safety-orient-
ed disconnection of the drives.
If the ESM mechanism is used, all safety functions of the currently active ESM
state are additionally monitored.
Item Description
1 Selected table
Contains the configured safety functions of the selected PSM
table or of the selected ESM state.
2 Selection table
With respect to the cell selected in the highlighted table row, the
category, AMF or reaction of a safety function can be selected
here.
3 Instance table
This area displays the instances of the AMF marked in the selec-
tion table as well as the table rows in which they are used.
4 Parameter table
The parameter values of the AMF instance selected in the
instance table are displayed here. The values can be changed.
5 Information display
Information about the selected category, AMF or reaction
6 List of tables
In this area, the desired tables can be selected and new ESM
states can be added.
7 Computing time utilization of the safety controller
Indicates the percentage of the computing time used for the open
safety configuration, including all changes that have not been
saved.
List of tables The list of tables in the lower area of the Editor is used to select the table to be
displayed and edited.
Item Description
1 “Tool selection table” tab
Opens the Tool selection table table. Safety-oriented tools can be
mapped.
2 “KUKA PSM” tab
Opens the KUKA PSM table. The parameters of the parameteriz-
able AMFs used can be changed.
3 “Customer PSM” tab
Opens the Customer PSM table. Safety functions can be modified
and created.
4 Tab for an ESM state
Opens the ESM state. The ESM state can be edited.
5 Add new ESM state button
Adds a new ESM state. The new state is automatically opened
and can be edited.
The PSM mechanism defines safety monitoring functions which are perma-
nently active.
The safety functions are displayed in tabular form. Each row in the table con-
tains a safety function.
In the PSM table Customer PSM, new safety functions are added and existing
settings are adapted. This means that the category, the Atomic Monitoring
Functions (AMFs) used, the parameterization of the AMF instances and the re-
action can be changed. Individual safety functions can be activated or deacti-
vated.
Item Description
1 Active column
Defines whether the safety function is active. Deactivated safety
functions are not monitored.
Check box active: safety function is active.
Check box not active: safety function is deactivated.
2 Category column
Defines the category of the safety function. In the event of an
error, the category is shown on the smartHMI as the cause of
error.
3 Columns AMF 1, AMF 2, AMF 3
Define the individual AMFs of the safety function. Up to 3 AMFs
can be used. The safety function is violated if all of the AMFs used
are violated.
4 Reaction column
Defines the reaction of the safety function. It is triggered if the
safety function is violated.
Item Description
5 Number of safety functions currently configured
A total of 100 rows are available for configuring the user-specific
safety monitoring functions.
6 Buttons for editing the table
7 Selected row
The row containing the currently selected safety function is high-
lighted in gray.
Button Description
Add row
Adds a new row to the table (only possible when the
non-configured blank rows are hidden). The new row
has the standard configuration and is activated auto-
matically.
Reset row
Resets the configuration of the selected row to the
standard configuration. The safety function is deacti-
vated.
Show empty rows/Hide empty rows
All empty rows which are not configured are deacti-
vated and preset with the standard configuration.
Category: None
AMF 1, AMF 2, AMF 3: None
Reaction: Stop 1
The empty rows can be shown or hidden. The empty
rows are hidden by default.
Procedure 1. In the table, select the row with the safety function to be deleted.
2. Click on Reset row. The safety function is deactivated and is given the
standard configuration (Category: None; AMF 1, AMF 2, AMF 3: None; Reaction: Stop 1).
Using the ESM mechanism, various safety settings are defined by configu-
rable safe states. Up to 10 safe states can be created. The states are num-
bered sequentially from 1 to 10 and can therefore be identified unambiguously.
A safe state is defined in a table with up to 20 safety functions. These safety
functions define the safety settings which must be valid for the state.
A safe state is represented in a table. Each row in the table contains a safety
function.
Use of the ESM mechanism is optional. The ESM mechanism is activated if at
least one ESM state is configured. If no ESM states are configured, the mech-
anism is deactivated.
If the ESM mechanism is active, exactly one safe state is valid. The safety
functions of this state are monitored in addition to the permanently active safe-
ty functions. Depending on the situation, it is possible to switch between the
configured safe states. Switching can be carried out in the robot application or
in a background application.
Up to 10 safe states can be created for the ESM mechanism. If this number is
reached, the tab for adding new states is hidden.
Item Description
1 Active column
Defines whether the safety function is active. Deactivated safety
functions are not monitored.
Check box active: safety function is active.
Check box not active: safety function is deactivated.
The safety function in the first row of the table is always active. It
cannot be deactivated (indicated by the lock icon).
2 AMF column
Defines the AMF of the safety function. Only one AMF is used for
safety functions of ESM states. If this AMF is violated, the safety
function and thus the entire state is violated.
Item Description
3 Reaction column
Defines the reaction of the safety function. It is triggered if the
safety function is violated.
4 Number of safety functions currently configured
A total of 20 rows are available for configuring the safety monitor-
ing functions of an ESM state.
5 Buttons for editing the table
6 Selected row
The row containing the currently selected safety function is high-
lighted in gray.
Button Description
Delete state
Deletes the entire state. The delete operation must be
confirmed via a dialog.
Add row
Adds a new row to the table (only possible when the
non-configured blank rows are hidden). The new row
has the standard configuration and is activated auto-
matically.
Reset row
Resets the configuration of the selected row to the
standard configuration. The safety function is deacti-
vated (exception: the first row of the table is always
active).
Show empty rows/Hide empty rows
All empty rows which are not configured are deacti-
vated and preset with the standard configuration.
AMF: None
Reaction: Stop 1
The empty rows can be shown or hidden. The empty
rows are hidden by default.
Procedure 1. In the table, select the row with the safety function to be deleted.
2. Click on Reset row. The safety function is deactivated and is given the
standard configuration (AMF: None; Reaction: Stop 1).
Description The setESMState(…) method can be used to activate an ESM state and
switch between the different ESM states. The method belongs to the LBR
class and can be used in both a robot application and a background applica-
tion.
Syntax robot.setESMState(state);
Explanation of the syntax
Element Description
robot Type: LBR
Name of the robot for which the ESM state is activated
state Type: String
Number of the ESM state which is activated
1 … 10
If a non-configured ESM state is specified, the robot stops
with a safety stop 1.
Example In an application, the LBR iiwa is to be guided by hand. For this purpose, a
suitable start position is addressed. In order to address the start position, ESM
state 3 must be activated. ESM state 3 ensures sensitive collision detection
and monitors the Cartesian velocity.
Manual guidance is to begin once the start point has been reached. ESM
state 8 must be activated for manual guidance. ESM state 8 requires enabling
on the hand guiding device but permits a higher Cartesian velocity than ESM
state 3.
@Inject
private LBR robot;
@Override
public void run() {
   // ...
   // ESM state 3: sensitive collision detection while addressing the start position
   robot.setESMState("3");
   // ... (motion to the start position)
   // ESM state 8: manual guidance with enabling on the hand guiding device
   robot.setESMState("8");
   robot.move(handGuiding());
   // ...
}
Description Each kinematic system can be assigned a maximum of one fixed safety-ori-
ented tool that is always active and one or more safety-oriented tools that can
be activated via an input.
Assignment of a fixed tool (always active)
A fixed tool is coupled to the flange of the configured kinematic system and
cannot be uncoupled or changed. The fixed tool can be a machining tool,
a tool for picking up workpieces or a tool that can pick up other tools, e.g.
a tool changer.
The assignment of multiple fixed tools to a kinematic system is not al-
lowed. In this case, all tool-dependent monitoring functions of this kinemat-
ic system enter the safe state.
Assignment of tools that can be activated (via an input)
The tool is activated when the configured input signal is HIGH.
If a fixed tool is configured for this kinematic system, the activatable tool is
coupled to the pickup frame of the fixed tool (standard frame for motions).
If no fixed tool is configured for a kinematic system, it is coupled to the
flange of the kinematic system.
If an activatable tool is configured for a kinematic system, exactly one ac-
tivatable tool must always be active for this kinematic system. This means
that exactly one of the input signals configured for this kinematic system
must be HIGH.
If multiple activatable tools are active simultaneously, or if none of the ac-
tivatable tools is active, all tool-dependent monitoring functions of this ki-
nematic system enter the safe state. For this reason, the tool No tool must
be activated if the activatable tool is uncoupled.
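The rule for activatable tools can be summarized as "exactly one activation input HIGH per kinematic system". The following Java sketch is illustrative only (class and method names are invented for this example, not part of the Sunrise API); it simply counts the HIGH activation inputs and reports whether the tool selection is valid.
// Conceptual sketch only: tool-dependent monitoring functions of a kinematic
// system stay valid only if exactly one activation input is HIGH.
public class ToolActivationSketch {
   static boolean toolSelectionValid(boolean... activationInputs) {
      int activeTools = 0;
      for (boolean inputHigh : activationInputs) {
         if (inputHigh) {
            activeTools++;
         }
      }
      return activeTools == 1;   // several or no active tools -> safe state
   }

   public static void main(String[] args) {
      System.out.println(toolSelectionValid(true, false, false));  // true: exactly one tool active
      System.out.println(toolSelectionValid(false, false, false)); // false: "No tool" must be activated
   }
}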
Overview
Item Description
1 State of the mapped tool
Check box active: The tool is always active or activatable.
Check box not active: The tool is deactivated.
2 Kinematic system to which the tool is assigned
First kinematic system: Robot
Second kinematic system: Mobile platform
Third kinematic system: No function
Fourth kinematic system: No function
3 Tool assigned to the kinematic system
No tool: No tool is assigned to the kinematic system.
All safety-oriented tools defined in the object templates are
available for selection.
4 Activation of the tool
Always active: The tool is always active.
A maximum of 1 fixed tool can be assigned to each kinematic
system.
The tool can be activated via a safe input
The safe inputs of the Ethernet safety interface used are avail-
able.
5 Number of tools currently mapped
A total of 50 rows are available for mapping.
6 Buttons for editing the table
Item Description
7 Information display
Information about the selected parameter
8 Selection table
The table contains the values available for the parameter selected
in the configuration line.
Button Description
Add row
Adds a new row to the table (only possible when the
non-configured blank rows are hidden). The new row
has the standard configuration and is activated auto-
matically.
Reset row
Resets the configuration of the selected row to the
standard configuration. The mapped tool is activated.
Show empty rows/Hide empty rows
All empty rows which are not configured are deacti-
vated and preset with the standard configuration.
Assigned kinematic system: First kinematic system
Selected tool: No tool
Activation signal: Always active
The empty rows can be shown or hidden. The empty
rows are hidden by default.
Description If a new safety configuration is transferred to the robot controller, but is not to
be activated, the most recently active safety configuration can be restored.
Up to 100 instances are available for each parameterizable AMF. As the pro-
cessing power of the safety controller is limited, this quantity cannot be used
to the full in practice.
Each instance of the AMF used in the safety configuration requires part of
the available processing power. The processing time required by an AMF
instance depends, for example, on the number of parameters and the
complexity of the corresponding calculations.
How often an AMF instance is used in the safety configuration, how many
lines are used in the Customer PSM table and how many ESM states are
used are not relevant for the processing power.
Response if the processing power of the safety controller is exceeded:
The required processing time of a safety configuration is calculated auto-
matically on saving the safety configuration. If it is too great, a warning is
displayed. It is nonetheless saved.
The transfer of an excessively large safety configuration to the robot con-
troller is prevented. Project synchronization and installation of the system
software are canceled in this case with a corresponding error message.
AMF Description
smartPAD Emergency Stop The AMF is violated if the EMERGENCY STOP device on the
smartPAD is pressed.
smartPAD enabling switch inactive The AMF is violated if no enabling signal is issued on the smartPAD (no enabling switch is pressed on the smartPAD or an enabling switch is fully pressed).
smartPAD enabling switch panic active The AMF is violated if an enabling switch on the smartPAD is fully pressed (panic position).
The set operating mode has a powerful effect on the behavior of the industrial
robot and determines which safety precautions are required.
The following standard AMFs are available for configuring a safety function to
evaluate the set operating mode:
AMF Description
Test mode The AMF is violated if a test operating mode is active (T1, T2,
CRR).
Automatic mode The AMF is violated if the active operating mode is an automatic
mode (AUT).
Reduced-velocity mode The AMF is violated if an operating mode is active whose veloc-
ity is reduced to a maximum of 250 mm/s (T1, CRR).
Note: In the case of a mobile platform, the velocity is not
reduced in T1 and CRR mode.
High-velocity mode The AMF is violated if an operating mode is active in which the
robot is moved with a programmed velocity (T2, AUT).
Description The robot cannot be moved without the motion enable. The motion enable can
be cancelled for various reasons, e.g. if enabling is not issued in Test mode or
if the EMERGENCY STOP is pressed on the smartPAD.
The AMF for motion enable functions like a group signal for all configured stop
conditions. In particular, it can be used for switching off peripheral devices.
For safety functions which evaluate the motion enable, a safe output should
therefore be configured as the reaction. If a safety stop is set as
the reaction, the robot cannot be moved.
AMF Description
Motion enable The AMF is violated if the motion enable is not issued due to a
stop request.
Note: This AMF is only suitable for use with an output as a reac-
tion.
Description The inputs of the discrete safety interface and of the Ethernet safety interface
can be used as safe inputs as long as they are configured in WorkVisual.
(>>> 13.4 "Safety interfaces" Page 218)
Safety equipment can be connected to the safe inputs, e.g. external EMER-
GENCY STOP devices or safety gates. The AMF Input signal is used to eval-
uate the associated input signal.
AMF Description
Input signal The AMF is violated if the safe input used is low (state “0”).
If a robot with a media flange Touch is used, the safe inputs at which
enabling and panic switches for the media flange are connected can
be used in the AMF.
Parameter Description
Input for safety signal Safe input to be monitored
Description The AMF Hand guiding device enabling inactive serves to evaluate 3-step en-
abling devices. Up to 3 enabling switches and 3 panic switches can be config-
ured. 3-step enabling devices with only one output which process the panic
signal internally can also be evaluated.
The AMF fulfils the following normative requirements and measures against
predictable misuse:
If the enabling switch has been fully pressed down, the signal will not be
issued if the switch is released to the center position.
The signal is cancelled in case of a stop request. To issue the signal again,
the enabling switch must be released and pressed again.
The signal is only issued 100 ms after the enabling switch has been
pressed.
The following applies if several enabling switches are used:
If all 3 enabling switches of an enabling device are held simultaneously in
the center position, a safety stop 1 is triggered.
It is possible to hold 2 enabling switches of an enabling device in the center
position simultaneously for up to 15 seconds. This makes it possible to ad-
just grip from one enabling switch to another one. If the enabling switches
are held simultaneously in the center position for longer than 15 seconds,
this triggers a safety stop 1.
If the enabling switches of different enabling devices are pressed simulta-
neously, e.g. an enabling switch on the smartPAD and an enabling switch
on the hand guiding device, a safety stop 1 (path-maintaining) is triggered.
AMF Description
Hand guiding device enabling inactive The AMF is violated in the following cases:
All safe inputs to which an enabling switch is connected have the signal level LOW (state “0”)
At least one of the safe inputs to which a panic switch is connected has the signal level LOW (state “0”)
If a robot with a media flange Touch is used, the safe inputs at which
enabling and panic switches for the media flange are connected can
be used in the AMF.
Parameter Description
Enabling switch 1 used, Enabling switch 2 used, Enabling switch 3 used Indicates whether the enabling switch is connected to a safe input
true: An input is connected.
false: No input is connected.
Default: false
Enabling switch 1 input signal, Enabling switch 2 input signal, Enabling switch 3 input signal Safe input to which the enabling switch is connected
The inputs of the discrete safety interface or the safe inputs of the Ethernet safety interface can be used as safe inputs as long as they are configured in WorkVisual.
(>>> 13.4 "Safety interfaces" Page 218)
Panic switch 1 used, Panic switch 2 used, Panic switch 3 used Indicates whether the panic switch is connected to a safe input
true: An input is connected.
false: No input is connected.
Default: false
Panic switch 1 input signal, Panic switch 2 input signal, Panic switch 3 input signal Safe input to which the panic switch is connected
The inputs of the discrete safety interface or the safe inputs of the Ethernet safety interface can be used as safe inputs as long as they are configured in WorkVisual.
(>>> 13.4 "Safety interfaces" Page 218)
Description The standard AMF Hand guiding device enabling active makes it possible to
implement safety functions that activate other monitoring functions during
manual guidance with the enabling device, e.g. Cartesian velocity monitoring.
AMF Description
Hand guiding device enabling active This AMF is violated if the enabling signal for manual guidance is issued.
The AMF Hand guiding device enabling active represents the
inverse state of the AMF Hand guiding device enabling inactive:
The AMF Hand guiding device enabling active is violated if
the AMF Hand guiding device enabling inactive is not violat-
ed.
The AMF Hand guiding device enabling active is not violated
as long as the AMF Hand guiding device enabling inactive is
violated.
The AMF Hand guiding device enabling active takes into
account the enabling device configured for the AMF Hand guid-
ing device enabling inactive.
Example Space and velocity monitoring during manual guidance with enabling device
(category: Workspace monitoring, Velocity monitoring)
During manual guidance of a robot with an enabling device, the robot must not
leave a defined workspace. Furthermore, the robot is to move with a maximum
velocity of 600 mm/s during manual guidance. If the workspace is left while en-
abling is active, or if the velocity limit is exceeded, a safety stop 1 (path-main-
taining) is to be executed.
Description Position referencing checks whether the saved zero position of the motor of
an axis (= saved mastering position) corresponds to the actual mechanical
zero position.
The safety integrity of the safety functions based upon this is limited until the
position referencing test has been performed. This includes, for example,
safely monitored Cartesian and axis-specific robot positions, safely monitored
Cartesian velocities, TCP force monitoring and collision detection.
The AMF Position referencing can be used to check whether the position val-
ues of all axes are referenced.
AMF Description
Position referencing The AMF is violated in the following cases:
The position of at least one axis of the monitored kinematic
system is not referenced.
The position referencing of at least one axis has failed.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
Description The referencing test of the joint torque sensors checks whether the expected
external torque, which can be calculated for an axis based on the robot model
and the given load data, corresponds to the value determined on the basis of
the measured value of the joint torque sensor. If the difference between these
values exceeds a certain tolerance value, the referencing of the torque sen-
sors has failed.
The safety integrity of the safety functions based upon this is limited until the
torque referencing test has been performed successfully. This includes, for ex-
ample, axis torque and TCP force monitoring as well as collision detection.
The AMF Torque referencing can be used to check whether the joint torque
sensors of all axes are referenced.
AMF Description
Torque referencing The AMF is violated in the following cases:
The joint torque sensor of at least one axis of the monitored
kinematic system is not referenced.
The referencing of at least one joint torque sensor has failed.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
Since the safety integrity of this function is only ensured for successfully refer-
enced joint torque sensors, the referencing status of the sensors must be mon-
itored simultaneously. As soon as at least one joint torque sensor has not been
referenced or referencing has failed, a safety stop 1 (path-maintaining) is to be
triggered in high-velocity operating modes (T2 and AUT).
AMF Description
Axis velocity monitoring The AMF is violated if the absolute velocity of the monitored
axis of the monitored kinematic system exceeds the configured
limit.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: Mobile platform
Third kinematic system: No function
Fourth kinematic system: No function
Monitored axis Axis of the kinematic system to be monitored
Axis1 … Axis16
Axis1 … Axis7 are used for an LBR.
In the case of a mobile platform, the axes are assigned as fol-
lows:
Axis1: front left drive
Axis2: front right drive
Axis3: rear left drive
Axis4: rear right drive
Maximum velocity [°/s] Maximum permissible velocity at which the monitored axis may
move in the positive and negative direction of rotation
1 … 500 °/s
Description The AMF Cartesian velocity monitoring is used to define a Cartesian velocity
monitoring function.
In the case of a robot, the translational Cartesian velocity can be moni-
tored at all axis center points as well as at the robot flange.
If a safety-oriented tool is active on the robot controller, the velocity at the
center points of the spheres which are used to configure the safety-orient-
ed tool can also be monitored.
(>>> 9.3.9 "Safety-oriented tools" Page 162)
The system does not monitor the entire structure of the robot and tool
against the violation of a velocity limit, but rather only the center
points of the monitoring spheres. In particular with protruding tools
and workpieces, the monitoring spheres of the safety-oriented tool must be
planned and configured in such a way as to assure the safety integrity of the
velocity monitoring.
AMF Description
Cartesian velocity monitoring The AMF is violated if the Cartesian translational velocity at at
least one point of the monitored kinematic system exceeds the
defined limit.
The AMF is additionally violated in the following cases:
An axis is not mastered.
The referencing of a mastered axis has failed.
Note: If an AMF is violated due to loss of mastering, the robot
can only be moved and mastered again by switching to CRR
mode.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: Mobile platform
Third kinematic system: No function
Fourth kinematic system: No function
Monitored structure Structure to be monitored
Kinematic system to be monitored is a robot
Robot and tool: The center points of the axes on the robot
and the center points of the spheres used to configure the
active safety-oriented tool are monitored (default).
Robot: The center points of the axes on the robot are moni-
tored.
Tool: The center points of the spheres used to configure the
active safety-oriented tool are monitored.
Note: If no safety-oriented tool is active and the tool is selected
as the structure to be monitored, the center point of the robot
flange is monitored.
(>>> "Spheres on the robot" Page 259)
Kinematic system to be monitored is a mobile platform
Robot and tool: The 4 corner points of the platform and the
center points of the spheres used to configure the active
safety-oriented tool are monitored (default).
Robot: The 4 corner points of the platform are monitored.
Tool: The center points of the spheres used to configure the
active safety-oriented tool are monitored.
Note: If no safety-oriented tool is active and the tool is selected
as the structure to be monitored, the frame at the center point of
the platform is monitored.
Maximum velocity [mm/s] Maximum permissible Cartesian velocity which must not be
exceeded at any monitored point
1 … 10,000 mm/s
Description The AMF Tool-related velocity component is used to check whether the Car-
tesian translational velocity in a specific direction is below the configurable lim-
it value.
The AMF can be used, for example, to ensure that the velocity in the working
direction of a sharp-pointed tool is not too high. The AMF can also be used to
monitor the motion direction.
The AMF monitors the velocity on a reference frame of the last active safety-
oriented tool of the kinematic chain. The position and orientation of the refer-
ence frame are defined in the properties of the tool by means of safety-orient-
ed frames. The following safety parameters are available for this in the
properties of the safety-oriented tool:
Point for tool-related velocity: The safety-oriented frame set here deter-
mines the position of the reference frame.
If no point is defined for the tool-related velocity, the reference frame is the
pickup frame of the active safety-oriented tool.
If only one safety-oriented tool is active, the reference frame is the
flange coordinate system. The velocity is monitored at the origin of the
flange coordinate system.
If a safety-oriented tool is active and coupled to the fixed tool, the ref-
erence frame is the standard frame for motions of the fixed tool. The
velocity is monitored at the origin of the standard frame for motions.
Orientation for tool-related velocity: The safety-oriented frame set here
determines the orientation of the reference frame.
If no orientation is defined for the tool-related velocity, the reference frame
is the pickup frame of the active safety-oriented tool.
If only one safety-oriented tool is active, the reference frame is the
flange coordinate system. The orientation of the flange coordinate sys-
tem determines the monitoring direction.
If a safety-oriented tool is active and coupled to the fixed tool, the ref-
erence frame is the standard frame for motions of the fixed tool. The
orientation of the standard frame for motions determines the monitor-
ing direction.
(>>> 9.3.9 "Safety-oriented tools" Page 162)
Fig. 13-11: Reference frame at the center point of the mobile platform
AMF Description
Tool-related velocity component The AMF is violated if the configured component of the velocity vector in the coordinate system of the reference frame of the monitored kinematic system exceeds the maximum defined value.
In the case of an LBR, the AMF is additionally violated in the fol-
lowing cases:
An axis is not mastered.
The referencing of a mastered axis has failed.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: Mobile platform
Third kinematic system: No function
Fourth kinematic system: No function
Maximum velocity [mm/s] Maximum Cartesian velocity for the monitored component of
the velocity vector
1 … 10000 mm/s
Note: When selecting the maximum velocity, it must be noted
that, particularly in the case of highly dynamic motions, low
velocities against the commanded direction of motion may
occur due to overshoot. For this reason, it is recommended that
the maximum velocity should not be set too low.
Component of the velocity vector Monitored component of the velocity vector (direction of monitoring)
X positive or X negative
Y positive or Y negative
Z positive or Z negative
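The monitoring principle can be illustrated with a short sketch. The following Java fragment is illustrative only (class and method names are invented for this example, not part of the Sunrise API) and assumes the translational velocity vector has already been transformed into the coordinate system of the reference frame; it checks one signed component of that vector against the configured maximum.
// Conceptual sketch only: checks the monitored component of the translational
// velocity vector (in the reference frame) against the configured maximum.
public class VelocityComponentSketch {
   // velocityInRefFrame: {vx, vy, vz} in mm/s, expressed in the reference frame
   // monitoredAxis: 0 = X, 1 = Y, 2 = Z
   // positiveDirection: true = positive component, false = negative component
   static boolean violated(double[] velocityInRefFrame, int monitoredAxis,
                           boolean positiveDirection, double maxVelocity) {
      double component = velocityInRefFrame[monitoredAxis];
      double monitored = positiveDirection ? component : -component;
      return monitored > maxVelocity;   // only motion in the monitored direction counts
   }

   public static void main(String[] args) {
      double[] v = { 10.0, 0.0, 40.0 };                 // mm/s in the reference frame
      System.out.println(violated(v, 2, true, 25.0));   // true: Z positive exceeds 25 mm/s
      System.out.println(violated(v, 2, false, 25.0));  // false: the Z negative component is -40
   }
}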
Item Description
1 Reference frame for the tool-specific velocity component
2 Velocity vector of the translational Cartesian velocity
3 Maximum permissible velocity for the positive Z component of the
velocity vector
Item Description
4 Positive Z component of the velocity vector
The velocity is below the maximum permissible velocity; the AMF
is not violated.
5 Positive Z component of the velocity vector
The velocity is above the maximum permissible velocity; the AMF
is violated.
The safety function configured with the AMF monitors the positive Z compo-
nent of the velocity vector. If the maximum velocity of 25 mm/s is exceeded by
the monitored component in Automatic mode, a safety stop 1 (path-maintain-
ing) is to be executed.
Category: Velocity monitoring
Example 2 In order to keep the dimensions of the protected space of a mobile platform to
the rear and to both sides as small as possible, the direction of motion of the
platform must be monitored in such a way that only forward motions can be
carried out at high velocity.
Configuration:
3 instances of the AMF Tool-related velocity component are required. Ref-
erence frame is the center point of the platform in all cases.
The motion to the left and right is to be carried out with a maximum velocity
of 50 mm/s. For this, the positive and negative Y components of the veloc-
ity vector are limited to 50 mm/s.
The backward motion is to be carried out with a maximum velocity of
20 mm/s. For this, the negative X component of the velocity vector is lim-
ited to 20 mm/s.
Parameterization of the configured instances:
Instance 1:
Monitored kinematic system: Second kinematic system
Maximum velocity [mm/s]: 50
Component of the velocity vector: Y positive
Instance 2:
Monitored kinematic system: Second kinematic system
Maximum velocity [mm/s]: 50
Component of the velocity vector: Y negative
Instance 3:
Monitored kinematic system: Second kinematic system
Maximum velocity [mm/s]: 20
Component of the velocity vector: X negative
Item Description
1 Approximate dimensions of the desired protected space
2 Reference frame for the tool-specific velocity component
3 Velocity vector of the translational Cartesian velocity
4 Maximum permissible velocity for the negative Y component of the
velocity vector
5 Maximum permissible velocity for the negative X component of the
velocity vector
6 Maximum permissible velocity for the positive Y component of the
velocity vector
7 Positive Y component of the velocity vector
The velocity is above the maximum permissible velocity of instance
1; the AMF is violated.
8 Negative Y component of the velocity vector
The velocity is below the maximum permissible velocity of instance
3; none of the 3 instances of the AMF is violated.
3 safety functions are configured, each of which uses one of the 3 instances.
If the configured maximum velocity is exceeded in at least one of the 3 mon-
itored components in Automatic mode, a safety stop 1 (path-maintaining) is to
be executed.
Category: Velocity monitoring
Description The robot environment can be divided into areas in which it must remain for
execution of the application, and areas which it must not enter or may only en-
ter under certain conditions. The system must then continuously monitor
whether the robot is within or outside of such a monitoring space.
A monitoring space can be defined as a Cartesian cuboid or by means of indi-
vidual axis ranges.
A Cartesian monitoring space can be configured as a workspace in which the
robot must remain, or as a protected space which it must not enter.
Via the link to other safety monitoring functions, it is possible to define further
conditions which must be met when a monitoring space is violated. For exam-
ple, a monitoring space can be activated by a safe input or applicable in Auto-
matic mode only.
If the robot has violated a monitoring space and been stopped by the
safety controller, the robot can be moved out of the violated area in
CRR mode.
(>>> 6.7 "CRR mode – controlled robot retraction" Page 82)
Spheres on the robot: Spheres are modeled around selected points on the robot, enclosing and moving with the robot. These spheres are predefined and are monitored, as standard, against the limits of activated Cartesian monitoring spaces.
The centers and radii of the monitored spheres are defined in the machine
data of the robot. A sphere is defined for each robot axis, for the robot base
and for the robot flange. The sphere center lies on the center point of each ax-
is, of the robot base and of the robot flange.
The dimensions of the monitored spheres vary according to robot type and the
media flange used:
r = sphere radius
z, y = sphere center point relative to the robot base coordinate system
Variant 1: LBR iiwa 7 R800 with media flange Touch
Base A1 A2 A3 A4 A5 A6 A7 Flange
r [mm] 135 90 125 90 125 90 80 85 65
z [mm] 50 90 340 538 740 935 1140 1130 1240
y [mm] -30
Variant 2: LBR iiwa 7 R800 with media flange (all variants except media
flange Touch)
Base A1 A2 A3 A4 A5 A6 A7 Flange
r [mm] 135 90 125 90 125 90 80 85 65
z [mm] 50 90 340 538 740 935 1140 1130 1220
y [mm] -30
Variant 3: LBR iiwa 14 R820 with media flange Touch
Base A1 A2 A3 A4 A5 A6 A7 Flange
r [mm] 150 100 140 90 131 90 80 85 65
z [mm] 50 160 360 580 780 980 1180 1170 1280
y [mm] -30
Variant 4: LBR iiwa 14 R820 with media flange (all variants except media
flange Touch)
Base A1 A2 A3 A4 A5 A6 A7 Flange
r [mm] 150 100 140 90 131 90 80 85 65
z [mm] 50 160 360 580 780 980 1180 1170 1260
y [mm] -30
Spheres on tool If a safety-oriented tool is active on the robot controller, the system not only
monitors the spheres on the robot as standard, but also the spheres used to
configure the safety-oriented tool.
(>>> 9.3.9 "Safety-oriented tools" Page 162)
The system does not monitor the entire structure of the robot and tool
against the violation of a space, but rather only the monitoring
spheres. In particular with protruding tools and workpieces, the mon-
itoring spheres of the safety-oriented tool must be planned and configured in
such a way as to assure the safety integrity of workspaces and protected
spaces.
Selecting monitoring spheres: It is not necessary or appropriate to include all robot and tool spheres in the Cartesian workspace monitoring of every application. Example: If the entry of a tool into a protected space is programmed to activate further monitoring functions, only the tool spheres must be monitored.
The structure to be monitored can be selected when configuring Cartesian
monitoring spaces:
Robot and tool (default)
Only tool
Only robot
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
Monitored structure Structure to be monitored
Robot and tool: The spheres on the robot and the spheres
used to configure the safety-oriented tool are monitored.
(Default)
Robot: The spheres on the robot are monitored.
Tool: The spheres used to configure the safety-oriented tool
are monitored.
Note: If no safety-oriented tool is configured and the tool is
selected as the structure to be monitored, the sphere on the
robot flange is monitored. (>>> "Spheres on the robot"
Page 259)
One corner of the cuboid is defined relative to the world coordinate system.
This is the origin of the workspace and is defined by the following parameters:
Parameter Description
X, Y, Z [mm] Offset of the origin of the workspace along the X, Y and Z axes
of the world coordinate system
-100,000 mm … +100,000 mm
A, B, C [°] Orientation of the origin of the workspace about the axes of the
world coordinate system, specified by the rotational angles A, B,
C
0° … 359°
Based on this defined origin, the size of the workspace is determined along the
coordinate axes:
Parameter Description
Length [mm] Length of the workspace (= distance along the positive X axis of
the origin)
0 mm … 100,000 mm
Width [mm] Width of the workspace (= distance along the positive Y axis of
the origin)
0 mm … 100,000 mm
Height [mm] Height of the workspace (= distance along the positive Z axis of
the origin)
0 mm … 100,000 mm
Example The diagram shows an example of a Cartesian workspace. Its origin is offset
in the negative X and Y directions with reference to the world coordinate sys-
tem.
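The containment test implied by such a workspace can be sketched in plain Java for the simple case A = B = C = 0° (axis-aligned cuboid). This is an illustrative calculation, not KUKA code; the class name, the example workspace dimensions and the sphere position are assumptions, and only the flange sphere radius of 65 mm is taken from the tables above. For a rotated origin, the sphere center would first have to be transformed into the coordinate system of the origin.

public class WorkspaceContainmentExample {
    // Checks whether a monitoring sphere lies completely inside an axis-aligned cuboid workspace.
    // ox, oy, oz: origin (corner) of the workspace in mm; length/width/height along +X/+Y/+Z of the origin.
    static boolean sphereInsideWorkspace(double cx, double cy, double cz, double radius,
                                         double ox, double oy, double oz,
                                         double length, double width, double height) {
        return cx - radius >= ox && cx + radius <= ox + length
            && cy - radius >= oy && cy + radius <= oy + width
            && cz - radius >= oz && cz + radius <= oz + height;
    }

    public static void main(String[] args) {
        // Example workspace with its origin offset in the negative X and Y directions
        double ox = -500.0, oy = -400.0, oz = 0.0;
        double length = 1500.0, width = 1200.0, height = 1000.0;

        // Flange sphere of an LBR iiwa 7 R800 (r = 65 mm) at an example position
        boolean inside = sphereInsideWorkspace(200.0, 100.0, 800.0, 65.0,
                                               ox, oy, oz, length, width, height);
        System.out.println("Sphere completely inside workspace: " + inside);
    }
}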
Description A Cartesian protected space is defined as a cuboid whose position and orien-
tation in space are defined relative to the world coordinate system.
The monitoring spheres on the robot and, if configured, on the safety-oriented tool are monitored against the limits of activated protected spaces and must remain completely outside of these protected spaces.
The AMF Cartesian protected space monitoring is used to define a Cartesian
protected space. The AMF is violated as soon as one of the monitored spheres
is no longer completely outside of the defined protected space.
The AMF is additionally violated in the following cases:
An axis is not mastered.
The referencing of a mastered axis has failed.
If a very narrow protected space is configured, the robot may be able to move
into and out of the protected space without the space violation being detected.
Possible cause: Due to a very high tool velocity, the protected space is only
violated during a very short interval.
Assuming that the following minimum values are configured:
Radius of tool sphere: 25 mm
Thickness of protected space: 0 mm
In this case, tool velocities of over 4 m/s are required for the robot to pass
through the protected space without detection.
The following measures are recommended in order to prevent robots from
passing through protected spaces undetected:
Configure Cartesian velocity monitoring (do not allow a value greater than
4 m/s).
OR: When configuring the protected space, select sufficient values for the
length, width and height of the protected space.
OR: When configuring the tool spheres, select sufficient values for the ra-
dius.
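The 4 m/s figure quoted above can be reproduced with a rough calculation. Assuming that a space violation must persist for at least one monitoring cycle of approximately 12 ms (an assumption based on the 12 ms raster used elsewhere in the safety configuration), the following illustrative Java sketch estimates the velocity above which the violation could fall entirely between two cycles.

public class UndetectedPassThroughExample {
    public static void main(String[] args) {
        double sphereRadiusMm = 25.0;      // minimum tool sphere radius from the example above
        double spaceThicknessMm = 0.0;     // minimum protected space thickness
        double assumedCycleTimeS = 0.012;  // assumed monitoring cycle of 12 ms (assumption, see text)

        // Travel distance over which the space is violated: sphere diameter plus space thickness
        double violationDistanceMm = 2.0 * sphereRadiusMm + spaceThicknessMm;

        // Minimum velocity at which the violation can fall entirely between two monitoring cycles
        double minUndetectedVelocity = (violationDistanceMm / 1000.0) / assumedCycleTimeS;
        System.out.printf("Undetected pass-through possible above approx. %.1f m/s%n", minUndetectedVelocity);
        // Prints approx. 4.2 m/s, which is consistent with the 4 m/s guideline above
    }
}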
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
Monitored structure Structure to be monitored
Robot and tool: The spheres on the robot and the spheres
used to configure the safety-oriented tool are monitored.
(Default)
Robot: The spheres on the robot are monitored.
Tool: The spheres used to configure the safety-oriented tool
are monitored.
Note: If no safety-oriented tool is configured and the tool is
selected as the structure to be monitored, the sphere on the
robot flange is monitored. (>>> "Spheres on the robot"
Page 259)
One corner of the cuboid is defined relative to the world coordinate system.
This is the origin of the protected space and is defined by the following param-
eters:
Parameter Description
X, Y, Z [mm] Offset of the origin of the protected space along the X, Y and Z
axes of the world coordinate system
-100,000 mm … +100,000 mm
A, B, C [°] Orientation of the origin of the protected space about the axes
of the world coordinate system, specified by the rotational
angles A, B, C
0° … 359°
Based on this defined origin, the size of the protected space is determined
along the coordinate axes:
Parameter Description
Length [mm] Length of the protected space (= distance along the positive X
axis of the origin)
0 mm … 100,000 mm
Parameter Description
Width [mm] Width of the protected space (= distance along the positive Y
axis of the origin)
0 mm … 100,000 mm
Height [mm] Height of the protected space (= distance along the positive Z
axis of the origin)
0 mm … 100,000 mm
Example The diagram shows an example of a Cartesian protected space. Its origin is
offset in the negative X and positive Y directions with reference to the world
coordinate system.
Description The axis limits can be defined individually and safely monitored for each axis.
The axis angle must lie within the defined axis range.
The AMF Axis range monitoring is used to define an axis-specific monitoring
space. The AMF is violated if an axis is not inside the defined axis range.
The AMF is additionally violated in the following cases:
An axis is not mastered.
The referencing of a mastered axis has failed.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
Monitored axis Axis to be monitored
Axis1 … Axis16
Note: Axis1 … Axis7 are used for an LBR.
Lower limit [°] Lower limit of the allowed axis range in which the monitored
axis may move
-180° … +180°
Upper limit [°] Upper limit of the allowed axis range in which the monitored
axis may move
-180° … +180°
For personnel protection, only the position of the axis is relevant. For
this reason, the positions are converted to the axis range -180° …
+180°, even for axes which can rotate more than 360°.
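The conversion described in the note can be illustrated with a small Java helper (illustrative only, not part of the KUKA API) that maps an arbitrary axis angle into the range -180° … +180°.

public class AxisAngleNormalizationExample {
    // Maps an arbitrary axis angle in degrees into the range (-180, 180].
    static double normalize(double angleDeg) {
        double a = angleDeg % 360.0;
        if (a > 180.0)   a -= 360.0;
        if (a <= -180.0) a += 360.0;
        return a;
    }

    public static void main(String[] args) {
        System.out.println(normalize(450.0));   // 90.0
        System.out.println(normalize(-270.0));  // 90.0
        System.out.println(normalize(185.0));   // -175.0
    }
}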
Example Axes A1, A2 and A4 are to be monitored so that the robot may only be moved
in a limited space. The monitoring is activated by a safe input. The permitted
range of each axis is defined by an upper and lower limit, and is shown in
green in the corresponding chart in the PSM table.
As soon as one of the monitored axis ranges is violated, a safety stop 1 (path-
maintaining) is triggered. For this purpose, an individual table row must be
used for each axis.
The AMF Tool orientation can be used to monitor the orientation of a safety-
oriented tool. It checks whether a specific axis of the tool orientation frame is
within a permissible direction range.
This function can for example be used to prevent dangerous parts of the
mounted tool, e.g. sharp edges, from pointing towards humans in HRC appli-
cations.
The following tool orientations are monitored, depending on the tool configu-
ration:
As standard, the orientation of the Z axis of the tool orientation frame of
the last active safety-oriented tool of the kinematic chain is monitored.
(>>> 9.3.9 "Safety-oriented tools" Page 162)
If no tool orientation frame is defined, the Z axis of the pickup frame of the
last active safety-oriented tool of the kinematic chain is monitored.
If only one fixed safety-oriented tool is active, the pickup frame is the
flange coordinate system. The Z axis of the flange coordinate system
is monitored.
If a safety-oriented tool is active and coupled to the fixed tool, the pick-
up frame is the standard frame for motions of the fixed tool. The Z axis
of the standard frame for motions of the fixed tool is monitored.
If no safety-oriented tool is active, the Z axis of the flange coordinate sys-
tem is monitored.
The permissible range for the orientation angle is defined by a reference vec-
tor with a fixed orientation relative to the world coordinate system and a per-
missible deviation angle of this reference vector.
The reference vector is defined by the rotation of the unit vector of the Z axis
of the world coordinate system about the 3 Euler angles A, B and C relative to
the world coordinate system. A monitoring cone is extended around the refer-
ence vector. The opening of the cone is defined by a configurable deviation
angle. The deviation angle defines the permissible angle between the tool ori-
entation and reference vector. The values of the angle of the reference vector
and the deviation angle are defined in the parameterization of the AMF.
The monitoring cone defines the permissible range for the tool orientation.
Item Description
1 Axes of the world coordinate system
2 Reference vector
The reference vector defines a fixed orientation relative to the
world coordinate system.
3 Monitoring cone
Defines the permissible range for the tool orientation.
4 Deviation angle
The deviation angle determines the opening of the monitoring
cone.
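At its core, this AMF is an angle comparison. The following plain Java sketch illustrates the principle (illustrative only, not KUKA code); for simplicity, the reference vector is passed in directly instead of being derived from the Euler angles A, B and C.

public class ToolOrientationConeExample {
    // Angle in degrees between two 3D vectors.
    static double angleBetweenDeg(double[] u, double[] v) {
        double dot = u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
        double nu = Math.sqrt(u[0] * u[0] + u[1] * u[1] + u[2] * u[2]);
        double nv = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return Math.toDegrees(Math.acos(dot / (nu * nv)));
    }

    public static void main(String[] args) {
        double[] referenceVector = {0.0, 0.0, -1.0}; // e.g. tool Z axis should point downwards
        double[] toolZAxis      = {0.2, 0.0, -0.98}; // current Z axis of the tool orientation frame
        double deviationAngle   = 30.0;              // configured operating angle in degrees

        double angle = angleBetweenDeg(referenceVector, toolZAxis);
        boolean violated = angle > deviationAngle;
        System.out.printf("Deviation: %.1f deg, AMF violated: %b%n", angle, violated);
    }
}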
AMF Description
Tool orientation The AMF is violated if the angle between the reference vector
and Z axis of the tool orientation frame is greater than the con-
figured deviation angle.
The AMF is additionally violated in the following cases:
An axis is not mastered.
The position referencing of a mastered axis has failed.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
A [°] Rotation of the reference vector about the Z axis of the world
coordinate system
0° … 359°
Parameter Description
B [°] Rotation of the reference vector about the Y axis of the world
coordinate system
0° … 359°
C [°] Rotation of the reference vector about the X axis of the world
coordinate system
0° … 359°
Operating angle [°] Workspace of the tool orientation
Defines the maximum permissible deviation angle between the
reference vector and the Z axis of the tool orientation frame.
1° … 179°
Item Description
1 Robot is not violating the AMF Tool orientation.
The Z axis of the tool orientation frame is within the range defined
by the monitoring cone.
2 Origin of the tool orientation frame
3 Monitoring cone
4 Z axis of the tool orientation frame
5 Robot is violating the AMF Tool orientation.
The Z axis of the tool orientation frame is outside of the range de-
fined by the monitoring cone.
Description If, under certain conditions, the robot must not move but must remain under
servo-control, the standstill of all axes must be safely monitored. The AMF
Standstill monitoring of all axes is used for this purpose.
This AMF is an extended AMF, meaning that the monitoring only begins when
all other AMFs of the safety function are violated.
Extended AMFs are not available for the safety functions of the ESM
mechanism.
AMF Description
Standstill monitoring of all axes: The AMF is violated as soon as the joint value of an axis is outside of a tolerance range of +/- 0.1° of the value saved when standstill monitoring was activated, or if one of the axes moves at an absolute value of more than 1 °/s.
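The two violation conditions can be illustrated with a short Java sketch (illustrative only; class and method names are invented). It checks each axis against the saved joint value and against the 1 °/s velocity limit.

public class StandstillMonitoringExample {
    static final double POSITION_TOLERANCE_DEG = 0.1; // +/- 0.1 deg around the saved value
    static final double MAX_VELOCITY_DEG_PER_S = 1.0; // absolute axis velocity limit

    // savedDeg: joint values saved when the monitoring was activated
    static boolean isViolated(double[] savedDeg, double[] currentDeg, double[] velocityDegPerS) {
        for (int i = 0; i < savedDeg.length; i++) {
            if (Math.abs(currentDeg[i] - savedDeg[i]) > POSITION_TOLERANCE_DEG) return true;
            if (Math.abs(velocityDegPerS[i]) > MAX_VELOCITY_DEG_PER_S) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        double[] saved   = { 0.0, 30.0, 0.0, -60.0, 0.0, 90.0, 0.0 }; // 7 axes of an LBR
        double[] current = { 0.0, 30.05, 0.0, -60.0, 0.0, 90.0, 0.0 };
        double[] vel     = { 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0 };
        System.out.println("Violated: " + isViolated(saved, current, vel)); // false
    }
}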
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
Description The AMF Time delay can be used to delay the triggering of the reaction of a
safety function for a defined time.
This AMF is an extended AMF, meaning that the delay time only starts running
when all other AMFs of the safety function are violated.
Extended AMFs are not available for the safety functions of the ESM
mechanism.
AMF Description
Time delay This AMF is violated if the set time has expired.
If the same instance of the AMF is used for several safety functions,
the delay time begins running from the first activation.
Parameter Description
Delay time Amount of time by which the triggering of the reaction of a
safety function is delayed.
12 ms … 24 h
The time can be entered in milliseconds (ms), seconds (s), min-
utes (min) and hours (h). Each delay is a multiple of 12 ms,
meaning that it is rounded up to the next multiple of 12.
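The rounding rule can be reproduced with a one-line calculation; the following illustrative Java snippet rounds a requested delay up to the next multiple of 12 ms.

public class DelayTimeRoundingExample {
    // Rounds a requested delay in milliseconds up to the next multiple of 12 ms.
    static long effectiveDelayMs(long requestedMs) {
        return ((requestedMs + 11) / 12) * 12;
    }

    public static void main(String[] args) {
        System.out.println(effectiveDelayMs(12));   // 12
        System.out.println(effectiveDelayMs(100));  // 108
        System.out.println(effectiveDelayMs(500));  // 504
    }
}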
An LBR is fitted with position and joint torque sensors in all axes. These make
it possible to measure and react to external forces and torques.
Axis torque monitoring can limit and monitor the torques of individual axes.
The following points must be observed when using axis torque monitoring:
Successful torque referencing is a precondition.
AMF Description
Axis torque monitoring The AMF is violated if the torque of the monitored axis exceeds
or falls below the configured torque limit.
If the AMF is violated and a safety stop triggered, the interaction forc-
es may continue to increase due to the stopping distances of the ro-
bot. For this reason, the AMF may only be used in collaborative
operation at reduced velocity. For this, the AMF can be combined with the
AMF Cartesian velocity monitoring, Axis velocity monitoring or Tool-related
velocity component.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
Monitored axis Axis to be monitored
Axis1 … Axis16
Note: Axis1 … Axis7 are used for an LBR.
Minimum torque [Nm] Minimum permissible torque for the given axis
-500 … 500 Nm
Maximum torque [Nm] Maximum permissible torque for the given axis
-500 … 500 Nm
13.12.13.2 Collision detection
Collision detection monitors the external axis torques against a definable limit
value.
The external axis torque is that part of the torque of an axis which is generated
from the forces and torques occurring as the robot interacts with its environ-
ment. The external axis torque is not measured directly but is rather calculated
using the dynamic robot model. The accuracy of the calculated values de-
pends on the dynamics of the robot motion and of the interaction forces of the
robot with its environment.
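The underlying principle, external torque as the difference between the measured joint torque and the torque predicted by the dynamic model, can be captured in a short illustrative Java sketch (not KUKA code). The model torques are passed in as precomputed values here; in the safety controller this calculation is based on the full dynamic robot model.

public class CollisionDetectionExample {
    // Estimated external torque: measured joint torque minus the torque expected from the dynamic model.
    static double externalTorque(double measuredNm, double modelPredictedNm) {
        return measuredNm - modelPredictedNm;
    }

    static boolean collisionDetected(double[] measuredNm, double[] modelPredictedNm, double limitNm) {
        for (int i = 0; i < measuredNm.length; i++) {
            if (Math.abs(externalTorque(measuredNm[i], modelPredictedNm[i])) > limitNm) {
                return true; // external torque of at least one axis exceeds the limit
            }
        }
        return false;
    }

    public static void main(String[] args) {
        double[] measured = { 5.0, 42.0, 3.0, 18.0, 1.0, 2.0, 0.5 };
        double[] model    = { 5.2, 24.0, 2.8, 17.5, 1.1, 2.0, 0.4 };
        System.out.println(collisionDetected(measured, model, 15.0)); // true: axis 2 deviates by 18 Nm
    }
}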
The following points must be observed when using collision detection:
Successful position and torque referencing are preconditions.
The load data of safety-oriented tools are taken into consideration (if ac-
tive).
If a safety-oriented fixed tool is configured, it must also be mounted on the
robot flange.
The load data of workpieces that are picked up are only taken into consid-
eration if the current workpiece is communicated to the safety controller.
(>>> 15.10.5 "Transferring workpiece load data to the safety controller"
Page 376)
The weight of the heaviest workpiece whose mass is configured in the
safety-oriented project settings is taken into consideration.
The AMF Collision detection does not automatically take into consid-
eration possible errors in the workpiece load data.
When configuring the collision detection, it is therefore necessary to
set the lowest possible values for the maximum permissible external torque.
In this way, significant deviations in the load data are interpreted as a colli-
sion and cause a violation of the AMF.
Workpieces that have been picked up must not come loose uninten-
tionally and fall down while the monitoring is active. The user must en-
sure this when using the AMF.
AMF Description
Collision detection This AMF is violated if the external torque of at least one axis
exceeds the configured limit value.
If the AMF is violated and a safety stop triggered, the interaction forc-
es may continue to increase due to the stopping distances of the ro-
bot. For this reason, the AMF may only be used in collaborative
operation at reduced velocity. For this, the AMF can be combined with the
AMF Cartesian velocity monitoring, Axis velocity monitoring or Tool-related
velocity component.
External forces on the robot or tool with short distances to the robot
axes can only cause slight external torques in the robot axes under
certain circumstances. If the AMFs are used, this can pose a safety
risk, particularly in potential crushing situations during collaborative opera-
tion. Critical crushing situations can arise on the robot itself, between the ro-
bot and the surroundings or between the tool and the surroundings.
It is therefore advisable to avoid potentially critical incidents of crushing by
using suitable equipment for the robot cell and/or by using one of the follow-
ing AMFs: Cartesian workspace monitoring, Cartesian protected space mon-
itoring, Axis range monitoring or Tool orientation.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
Maximum external torque [Nm] Maximum permissible external torque
0 … 30 Nm
Description In TCP force monitoring, the external force acting on the tool or robot flange is
monitored against a definable limit value.
The external force on the TCP is not measured directly but is rather calculated
using the dynamic robot model. The accuracy of the calculated external force
depends on the dynamics of the robot motion and of the actual force, among
other things.
The following points must be observed when using TCP force monitoring:
Successful position and torque referencing are preconditions.
The load data of safety-oriented tools are taken into consideration (if ac-
tive).
If a safety-oriented fixed tool is configured, it must also be mounted on the
robot flange.
The load data of workpieces that are picked up are only taken into consid-
eration if the current workpiece is communicated to the safety controller.
(>>> 15.10.5 "Transferring workpiece load data to the safety controller"
Page 376)
The weight of the heaviest workpiece whose mass is configured in the
safety-oriented project settings is taken into consideration.
Workpieces that have been picked up must not come loose uninten-
tionally and fall down while the monitoring is active. The user must en-
sure this when using the AMF.
AMF Description
TCP force monitoring This AMF is violated if the external force acting on the tool or
robot flange exceeds the configured limit value.
If the AMF is violated and a safety stop triggered, the interaction forc-
es may continue to increase due to the stopping distances of the ro-
bot. For this reason, the AMF may only be used in collaborative
operation at reduced velocity. For this, the AMF can be combined with the
AMF Cartesian velocity monitoring, Axis velocity monitoring or Tool-related
velocity component.
External forces on the robot with short distances to the robot axes can
only cause slight external torques in the robot axes under certain cir-
cumstances. If the AMFs are used, this can pose a safety risk, partic-
ularly in potential crushing situations during collaborative operation. Critical
crushing situations can arise on the robot itself or between the robot and the
surroundings.
It is therefore advisable to avoid potentially critical incidents of crushing by
using suitable equipment for the robot cell and/or by using one of the follow-
ing AMFs: Cartesian workspace monitoring, Cartesian protected space mon-
itoring, Axis range monitoring or Tool orientation.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
Maximum TCP force [N] Maximum permissible external force on the TCP
50 … 1,000 N
Accuracy of force detection: The accuracy of TCP force detection is dependent on the robot pose. The safety controller recognizes non-permissible poses and sets the AMF TCP force monitoring to “violated” with a corresponding error message.
Non-permissible poses are those in which it is possible for TCP forces to
have a short distance to all robot axes. This applies to singularity poses
and poses near singularities.
(>>> 14.11 "Singularities" Page 331)
External forces on the robot reduce the accuracy of TCP force detection.
In many cases, the safety controller can automatically detect the external
forces acting on the robot. The AMF TCP force monitoring is violated in
this case.
It is not possible to guarantee that the safety controller will always au-
tomatically detect external forces acting on the robot. The user must
ensure that the external forces act exclusively on the TCP in order to
assure the safety integrity of the AMF TCP force monitoring.
Description The AMF Base-related TCP force component is used to monitor the external
force acting in a specific direction on the tool or on the robot flange relative to
a base coordinate system against a definable limit value.
As standard, the world coordinate system is used as the base coordinate sys-
tem. No other base coordinate system can currently be defined for this moni-
toring function.
The AMF monitors the force along the component of a reference coordinate
system. The orientation of the reference coordinate system corresponds as
standard to the orientation of the base coordinate system. The orientation of
the reference coordinate system relative to the base coordinate system can be
modified in the AMF.
The external force on the TCP is not measured directly but is rather calculated
using the dynamic robot model. The accuracy of the calculated external force
depends on the dynamics of the robot motion and of the actual force, among
other things.
The following points must be observed when monitoring base-related TCP
force components:
Successful position and torque referencing are preconditions.
The load data of safety-oriented tools are taken into consideration (if ac-
tive).
If a safety-oriented fixed tool is configured, it must also be mounted on the
robot flange.
The load data of the heaviest safety-oriented workpiece are taken into
consideration (if configured).
The load data of workpieces that are picked up are only taken into consid-
eration if the current workpiece is communicated to the safety controller.
(>>> 15.10.5 "Transferring workpiece load data to the safety controller"
Page 376)
The weight of the heaviest workpiece whose mass is configured in the
safety-oriented project settings is taken into consideration.
The AMF Base-related TCP force component may only be used if the
direction in which hazardous forces can arise is known. At the same
time, it must be ensured that no hazardous forces can arise in the
non-monitored directions. If this is not the case, either the AMF TCP force
monitoring must be used, or the other directions must also be monitored us-
ing the AMF Base-related TCP force component.
Workpieces that have been picked up must not come loose uninten-
tionally and fall down while the monitoring is active. The user must en-
sure this when using the AMF.
AMF Description
Base-related TCP force component: The AMF is violated if the external force acting along the monitored component of the TCP force vector exceeds the configured limit value.
If the AMF is violated and a safety stop triggered, the interaction forc-
es may continue to increase due to the stopping distances of the ro-
bot. For this reason, the AMF may only be used in collaborative
operation at reduced velocity. For this, the AMF can be combined with the
AMF Cartesian velocity monitoring, Axis velocity monitoring or Tool-related
velocity component.
External forces on the robot with short distances to the robot axes can
only cause slight external torques in the robot axes under certain cir-
cumstances. If the AMFs are used, this can pose a safety risk, partic-
ularly in potential crushing situations during collaborative operation. Critical
crushing situations can arise on the robot itself or between the robot and the
surroundings.
It is therefore advisable to avoid potentially critical incidents of crushing by
using suitable equipment for the robot cell and/or by using one of the follow-
ing AMFs: Cartesian workspace monitoring, Cartesian protected space mon-
itoring, Axis range monitoring or Tool orientation.
Parameter Description
Monitored kinematic system Kinematic system to be monitored
First kinematic system: Robot
Second kinematic system: No function
Third kinematic system: No function
Fourth kinematic system: No function
Maximum TCP force [N] Maximum external force acting along the monitored component
of the TCP force vector
50 … 1,000 N
A [°] Rotation of the TCP force vector about the Z axis of the base
coordinate system
0° … 359°
B [°] Rotation of the TCP force vector about the Y axis of the base
coordinate system
0° … 359°
C [°] Rotation of the TCP force vector about the X axis of the base
coordinate system
0° … 359°
Component of the TCP force vector: Component of the TCP force vector that is monitored (direction of monitoring)
X positive or X negative
Y positive or Y negative
Z positive or Z negative
Accuracy of force detection: The accuracy of TCP force detection is also dependent on the robot pose. The safety controller recognizes non-permissible poses and sets the AMF Base-related TCP force component to “violated” with a corresponding error message.
Non-permissible poses are those in which it is possible for TCP forces to
have a short distance to all robot axes. This applies to singularity poses
and poses near singularities.
(>>> 14.11 "Singularities" Page 331)
Depending on the direction of the monitored force component, the AMF
Base-related TCP force component can be used closer to singularities
than the AMF TCP force monitoring. This can result in a larger workspace.
External forces on the robot reduce the accuracy of TCP force detection.
In many cases, the safety controller can automatically detect the external
forces acting on the robot. The AMF Base-related TCP force component
is violated in this case.
It is not possible to guarantee that the safety controller will always au-
tomatically detect external forces acting on the robot. The user must
ensure that the external forces act exclusively on the TCP in order to
assure the safety integrity of the AMF Base-related TCP force component.
The AMF Base-related TCP force component has the following parameters:
Monitored kinematic system: First kinematic system
Maximum TCP force: 50 N
Component of the TCP force vector: Z positive
A = B = C: 0°
13.13.1 Task
13.13.2 Requirement
The following safety functions are required as part of the risk assessment for
the above-described process:
1. It must be possible to stop the robot by pressing an external EMERGEN-
CY STOP switch within reach of the operator.
2. The robot must not leave a defined workspace. The collaboration space is
part of the workspace.
3. A transfer motion between the start position and pre-position can cause
unintentional collisions with the operator. However, the space is designed
in such a way that the human cannot be crushed. For this reason, the max-
imum permissible robot velocity for this space has been defined as
500 mm/s.
4. Collisions must be safely recognized during the transfer motion and cause
the robot to come to a standstill if a torque of 15 Nm is exceeded on at
least one axis.
5. Motions between the pre-position and the workpiece pick-up position can
cause the hand and arm of the operator to be crushed. In order to ensure
that the operator can respond appropriately to a robot motion and that the
braking distances are sufficiently short, the robot velocity must not exceed
100 mm/s.
6. Furthermore, the robot must be brought to a standstill if crushing forces of
more than 50 N arise during motions between the pre-position and the
workpiece pick-up position. Force values of 20 N or more cause the low-
ering motion in the process to be aborted and are thus sufficiently below
the latter limit.
Permanent safety monitoring: The EMERGENCY STOP function must be active throughout operation and the robot must not leave the workspace. Corresponding safety functions are configured in the Customer PSM table.
Line Description
1 External EMERGENCY STOP
Implements requirement 1
An external EMERGENCY STOP is connected to a safe
input. If the operator actuates the EMERGENCY STOP, a
safety stop 1 (path-maintaining) is executed.
2 Cartesian workspace monitoring 2
Implements requirement 2
The workspace is represented by a safely monitored Carte-
sian workspace. If the robot leaves the configured space, a
safety stop 1 (path-maintaining) is executed.
ESM state for transfer motion: An ESM state is defined for the transfer motion through the collaboration space between the start and pre-position. This is activated in the application before the transfer motion begins.
Velocity monitoring and collision detection must be active during the transfer
motion in order to sufficiently reduce the danger of a collision between human
and robot.
In order to avoid crushing at all times, an additional protected space is defined.
This brings the robot to a standstill as soon as the distance between the robot
or tool and the workpiece pick-up position becomes less than 15 cm.
Line Description
1 Cartesian velocity monitoring
Implements requirement 3
If a Cartesian velocity exceeds 500 mm/s, a safety stop 1
(path-maintaining) is executed.
2 Collision detection
Implements requirement 4
If a collision causes an external torque of more than 15 Nm in
at least one robot axis, a safety stop 1 (path-maintaining) is
executed.
3 Protected space monitoring
Implements the safety of the state regardless of the time and
place of activation
The safely monitored protected space encompasses the
space above the workpiece pick-up position. As soon as the
robot or the safely monitored tool enters this space, a safety
stop 1 (path-maintaining) is executed.
ESM state for workpiece pick-up position: A specific ESM state is defined for the motions between the pre-position and the workpiece pick-up position. This is activated in the application before the lowering motion begins.
Velocity monitoring and force monitoring must be active during the motion in
order to sufficiently reduce the danger of crushing the operator’s hand or lower
arm.
The state must ensure a sufficient degree of safety, regardless of the time or
place of activation. The low permissible velocity and the active force monitor-
ing mean that no further measures are necessary.
Line Description
1 Cartesian velocity monitoring
Implements requirement 5
If a Cartesian velocity exceeds 100 mm/s, a safety stop 1
(path-maintaining) is executed.
2 Force monitoring
Implements requirement 6
If a contact situation causes a force of more than 50 N to be
exerted at the TCP, a stop 0 is executed.
Description Position referencing checks whether the saved zero position of the motor of
an axis (= saved mastering position) corresponds to the actual mechanical
zero position.
In the case of an LBR iiwa, referencing is carried out continuously by the sys-
tem when an axis moves at less than 30 °/s. Referencing is successful when
the mastering sensor detects the mechanical zero position of the axis in a nar-
row range around the saved zero position of the motor.
Referencing fails in the following cases:
The mastering sensor does not detect the mechanical zero position of the
axis in the range around the saved zero position of the motor.
The mastering sensor detects the mechanical zero position of the axis at
an unexpected point.
For other robots, the axis positions can only be referenced via an external sys-
tem. The interface for external position referencing must be configured.
(>>> 13.14.4 "External position referencing" Page 286)
The safety integrity of the safety functions based upon this is limited until the
position referencing test has been performed. This includes, for example,
safely monitored Cartesian and axis-specific robot positions, safely monitored
Cartesian velocities, TCP force monitoring and collision detection.
If position referencing fails on at least one axis, all AMFs based on safe axis
positions are violated. (>>> "Position-based AMFs" Page 291)
Requirement The position of an axis is not referenced after the following events:
Robot controller is rebooted.
The axis is remastered.
Torque referencing of the axis fails.
The maximum torque of the joint torque sensor of the axis has been ex-
ceeded.
These events do not lead to a violation of the safe position-based safety func-
tions. The robot can be moved, but the safety integrity of the safety functions
is no longer assured.
The safety functions based on safe positions are only violated after these
events if the position referencing of an axis fails. Referencing must be suc-
cessfully carried out before safety-critical applications can be executed.
The position referencing status can be used as an AMF in the safety configu-
ration. (>>> 13.12.6 "Evaluating the position referencing" Page 249)
Precondition The position of an axis is referenced when the axis is moved over the saved
zero position of the motor and the mastering sensor detects the zero position
of the axis in a range of 0° +/- 0.5°.
Preconditions for this:
The velocity at which the axis is moved over the zero position must be
< 30 °/s.
At the very least, a defined axis-specific range before and after the zero
position must be passed through. The motion direction is not relevant.
The axis-specific range of motion is robot-specific:
Robot variant A1 A2 A3 A4 A5 A6 A7
LBR iiwa 7 R800 ±10.5° ±10.5° ±10.5° ±10.5° ±10.5° ±14° ±14°
LBR iiwa 14 R820 ±9.5° ±9.5° ±10.5° ±10.5° ±10.5° ±14° ±14°
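The preconditions for a valid zero-position crossing can be illustrated with a small Java sketch (illustrative only, not KUKA code). It checks whether the monitored axis covered the required range around the zero position and crossed it at less than 30 °/s.

public class PositionReferencingConditionExample {
    // Checks whether a zero-position crossing of one axis fulfils the referencing preconditions:
    // the axis passed from beyond -range to beyond +range (or vice versa) and crossed the zero
    // position at less than the maximum referencing velocity.
    static boolean crossingValid(double startAngleDeg, double endAngleDeg,
                                 double velocityAtZeroDegPerS, double rangeDeg) {
        boolean rangeCovered = Math.min(startAngleDeg, endAngleDeg) <= -rangeDeg
                            && Math.max(startAngleDeg, endAngleDeg) >= rangeDeg;
        boolean slowEnough = Math.abs(velocityAtZeroDegPerS) < 30.0;
        return rangeCovered && slowEnough;
    }

    public static void main(String[] args) {
        // A1 of an LBR iiwa 7 R800: required range of +/- 10.5 degrees around the zero position
        System.out.println(crossingValid(-15.0, 12.0, 20.0, 10.5)); // true
        System.out.println(crossingValid(-15.0, 12.0, 45.0, 10.5)); // false: crossing too fast
        System.out.println(crossingValid(-8.0, 12.0, 20.0, 10.5));  // false: range not covered
    }
}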
Execution Position referencing of all axes is continuously performed by the system when
the above conditions are met. Position referencing can also be carried out in a targeted manner in the following ways:
Automatically while the program is running, when an axis moves over the
zero position at less than 30 °/s.
Jogging each axis individually over the zero position.
Executing the application prepared by KUKA. The axes are moved over
the zero position from the vertical stretch position.
An application for position and torque referencing of the LBR iiwa is avail-
able from Sunrise.Workbench. Position and torque referencing can be car-
ried out simultaneously with this application.
(>>> 13.14.3 "Creating an application for position and torque referencing"
Page 286)
Description The LBR iiwa has a joint torque sensor in each axis which reliably determines
the torque currently acting on the axis. These data are used for calculating and
monitoring externally acting torques or Cartesian forces, for example.
During referencing of the joint torque sensors, the system checks whether the
expected external torque of an axis matches the actual external torque of the
axis:
The expected torque is calculated using the robot model and the specified
load data for each axis.
The actual torque is determined on the basis of the measured value of the
joint torque sensor for each axis.
If the difference between the expected torque and the actual torque exceeds
a certain tolerance value, the referencing of the torque sensors has failed.
The safety integrity of the safety functions based upon this is limited until the
torque referencing test has been performed successfully. This includes, for ex-
ample, axis torque and TCP force monitoring as well as collision detection.
If torque referencing fails on at least one axis, all AMFs based on safe torque
values are violated. (>>> "Axis torque-based AMFs" Page 291)
Requirement The joint torque sensor of an axis is not referenced after the following events:
Robot controller is rebooted.
Position referencing of the axis fails.
The maximum torque of the joint torque sensor of the axis has been ex-
ceeded.
These events do not lead to a violation of the safety functions based on safe
torque values. The robot can be moved, but the safety integrity of the safety
functions is no longer assured.
The safety functions based on safe torque values are only violated after these
events if torque referencing of one axis fails. Referencing must be successfully
carried out before safety-critical applications can be executed.
The torque referencing status can be used as an AMF in the safety configura-
tion. (>>> 13.12.7 "Evaluating the torque referencing" Page 250)
Execution An application for position and torque referencing of the LBR iiwa is available
from Sunrise.Workbench. Position and torque referencing can be carried out
simultaneously with this application.
(>>> 13.14.3 "Creating an application for position and torque referencing"
Page 286)
A total of 10 measured joint torque values must be given for each axis. For this
purpose, 5 measurement poses are defined in the application, each of which
can be addressed with positive and negative directions of axis rotation. If the
poses cannot be addressed, they must be adapted in the application.
The safety controller evaluates the external torque for all 10 measured values
and determines the mean value of the external torque for each axis. Referenc-
ing is successful if this mean value is below a defined tolerance. Otherwise,
referencing has failed.
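The evaluation can be illustrated with a short Java sketch (illustrative only). It averages the external torque measurements of one axis and compares the mean against a tolerance; the tolerance value used here is a placeholder, as the actual value is not specified in this documentation.

public class TorqueReferencingEvaluationExample {
    // Evaluates one axis: referencing succeeds if the mean external torque over all
    // measurements is below the tolerance.
    static boolean referencingSuccessful(double[] externalTorquesNm, double toleranceNm) {
        double sum = 0.0;
        for (double t : externalTorquesNm) {
            sum += t;
        }
        double mean = sum / externalTorquesNm.length;
        return Math.abs(mean) < toleranceNm;
    }

    public static void main(String[] args) {
        // 10 measured external torque values of one axis (5 poses, each approached in both directions)
        double[] measurements = { 0.4, -0.3, 0.5, -0.2, 0.1, 0.3, -0.4, 0.2, -0.1, 0.3 };
        System.out.println(referencingSuccessful(measurements, 2.0)); // placeholder tolerance of 2 Nm
    }
}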
Before performing the torque referencing, the user must ensure the
following points:
The load data of the fixed tool mounted on the robot flange must
match the load data with which the fixed safety-oriented tool is config-
ured.
The load data of the tool that is coupled to the fixed tool (if present) must
match the load data of the activated safety-oriented tool.
The load data of the workpiece that is picked up (if present) must match
the load data of the activated workpiece.
There must be no supplementary loads mounted on the robot, e.g. dress
packages.
If one of these points is not met, the safety integrity of the referencing of the
joint torque sensors is not given.
Description The following points must be observed if the application for torque referencing
needs to be edited due to measurement poses which cannot be addressed:
The joint torque values must be measured while the robot is stationary.
A wait time of at least 2.5 seconds in which the robot does not move is re-
quired between the moment the measurement pose is reached and the
measurement itself. Wait times which are too short can reduce the refer-
encing accuracy due to oscillations on the robot.
The measurement is started with the method sendSafetyCommand().
There may be a maximum of 15 s between 2 consecutive measurements.
Procedure 1. In the Package Explorer view, select the desired project or package in
which the application is to be created.
2. Select the menu sequence File > New > Sunrise application. The wizard
for creating a new Sunrise application is opened.
3. In the folder Application examples > LBR iiwa, select the application Po-
sition and GMS referencing and click on Finish.
The application PositionAndGMSReferencing.java is created in the
source folder of the project and opened in the editor area of Sunrise.Work-
bench.
4. If measurement poses cannot be addressed due to the system configura-
tion, adapt them in the application.
5. Perform project synchronization in order to transfer the application to the
robot controller.
Description The user has the possibility of implementing his own test method or an exter-
nal system for position referencing, e.g. a tracker, a navigation system or an
absolute encoder. Confirmation that the external position referencing has
been successfully carried out must be communicated to the robot controller via
a safety-oriented input.
The input for external position referencing can be configured in the safety-ori-
ented project settings. If the external signal at this input changes from LOW to
HIGH and back to LOW within 2 seconds, the position referencing has been
successfully confirmed.
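The acceptance criterion for the confirmation signal can be illustrated with a plain Java sketch (illustrative only, not the safety controller implementation) that evaluates the timestamps of the two signal edges.

public class ExternalReferencingPulseExample {
    // risingEdgeMs / fallingEdgeMs: timestamps of the LOW->HIGH and HIGH->LOW transitions in milliseconds.
    static boolean referencingConfirmed(long risingEdgeMs, long fallingEdgeMs) {
        long pulseWidth = fallingEdgeMs - risingEdgeMs;
        return pulseWidth > 0 && pulseWidth <= 2000; // complete pulse within 2 seconds
    }

    public static void main(String[] args) {
        System.out.println(referencingConfirmed(1000, 1800)); // true: 0.8 s pulse
        System.out.println(referencingConfirmed(1000, 3500)); // false: HIGH for 2.5 s
    }
}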
Description The safety-oriented input that allows external position referencing is config-
ured in the safety-oriented project settings.
Procedure 1. Right-click on the desired project in the Package Explorer view and select
Sunrise > Change project settings from the context menu.
The Properties for [Sunrise Project] window opens.
2. Select Sunrise > Safety in the directory in the left area of the window.
3. Make the following settings in the right-hand part of the window:
Set the check mark at Allow external position referencing.
Select the input that is to be used for external position referencing.
The inputs of the discrete safety interface and of the Ethernet safety
interface can be used as long as they are configured in WorkVisual.
(>>> 13.4 "Safety interfaces" Page 218)
The enabling device of the hand guiding device can also be used as
an input.
4. Click on OK to save the settings and close the window.
The system must not be put into operation until the safety acceptance proce-
dure has been completed successfully. For successful safety acceptance, the
points in the checklists must be completed fully and confirmed in writing by the
safety maintenance technician.
The following checklists must be used to verify whether the configured safety
parameters have been correctly transferred.
The checklists must be processed in the following order:
1. Checklist for basic test of the safety configuration
(>>> 13.15.1 "Checklist – System safety functions" Page 288)
2. Checklists for checking the mapped safety-oriented tools
(>>> 13.15.2 "Tool selection table checklist" Page 292)
(>>> 13.15.3 "Checklists for safety-oriented tools" Page 293)
3. Checklist for checking the rows used in the KUKA PSM table and in the
Customer PSM table
(>>> 13.15.4 "Checklist for rows used in PSM tables" Page 297)
4. Checklists for checking the ESM states which have been used and not
used
(>>> 13.15.5 "Checklists for ESM states" Page 298)
5. Checklists for checking the AMFs used
(>>> 13.15.6 "Checklists for AMFs used" Page 300)
6. Checklists for checking the safety-oriented project settings
(>>> 13.15.7 "Checklists – safety-oriented project settings" Page 308)
It is possible to create a report of the current safety configuration.
(>>> 13.15.8 "Creating a safety configuration report" Page 310)
No. Activity Yes Not relevant
1 Operator safety: is all operator safety equipment configured,
properly connected and tested for correct function?
2 Operator safety: a stop is triggered if AUT or T2 mode is active
with the operator safety open.
3 Operator safety: a manual reset function is present and acti-
vated.
4 Brake test: is a brake test planned and has an application
been created for this purpose?
5 Hand guiding device enabling state: is the enabling device of
the hand guiding device configured, properly connected and
tested for correct function?
6 Local EMERGENCY STOP: are all local EMERGENCY STOP
devices configured, properly connected and tested for correct
function?
7 External EMERGENCY STOP: are all external EMERGENCY
STOP devices configured, properly connected and tested for
correct function?
8 Local and external EMERGENCY STOP: are the local and
external EMERGENCY STOPs each configured as an individ-
ual AMF in a row of the PSM table?
9 If unplugging of the smartPAD is allowed in the station configu-
ration: is at least one external EMERGENCY STOP device
installed?
10 Safety stop: is all operator safety equipment configured, prop-
erly connected and tested for correct function?
11 Safe operational stop: is all equipment for the safe operational
stop configured, properly connected and tested for correct
function?
12 When using position-based AMFs: is the limited safety integ-
rity of the position-based AMFs taken into consideration in the
absence of position referencing?
(>>> "Position-based AMFs" Page 291)
Note: Initiation of the safe state in the absence of position ref-
erencing can be configured by using the AMF Position refer-
encing.
13 When using position-based AMFs: has position referencing
been carried out successfully?
14 If external position referencing is used: has a suitable test
method for position mastering been provided?
15 If external position referencing is used: has it been ensured
that the input is only set after successful testing?
16 Velocity monitoring: have all necessary velocity monitoring
tests been configured and tested?
17 Manual guidance: has it been configured in such a way that
appropriate velocity monitoring is active in every operating
mode for manual guidance?
18 If using the enabling device of the hand guiding device as an
input for deactivating safety functions:
Has it been taken into consideration that using the enabling
device as an input may result in safety functions being deacti-
vated during manual guidance?
19 Workspace monitoring: have all necessary workspace moni-
toring tests been configured and tested?
20 Cartesian workspace monitoring functions: has it been taken
into consideration that the system does not monitor the entire
structure of the robot, tool and workpiece against the space
violation, but only the monitoring spheres on the robot and
tool?
21 Collision detection: have all necessary HRC functionalities
been configured?
22 Collision detection: has it been configured in such a way that
velocity monitoring is also always active when collision detec-
tion is active?
23 Collision detection: has it been configured in such a way that
velocity monitoring is also always active when TCP force mon-
itoring or monitoring of a base-related TCP force component is
active?
24 Collision detection: When using the AMF Base-related TCP
force component:
has it been ensured that no hazardous forces can arise in the
non-monitored directions?
25 Collision detection: is a safety stop 0 configured for all safety
monitoring functions in order to detect crushing situations?
26 When using axis torque-based AMFs: is the limited safety
integrity of the axis torque-based AMFs taken into consider-
ation in the absence of position referencing and/or torque ref-
erencing?
(>>> "Axis torque-based AMFs" Page 291)
Note: Initiation of the safe state in the absence of position
and/or torque referencing can be configured by using the AMF
Position referencing and/or the AMF Torque referencing.
27 In the configuration of all rows in the PSM table and all ESM
states, has it been taken into account that the safe state of the
AMFs is the “violated” state (state “0”)?
Note: In the event of an error, an AMF goes into the safe state.
28 PSM configuration: in the configuration of output signals, has it
been taken into account for the safety reaction that an output
is LOW (state “0”) in the safe state?
29 PSM configuration: Was a check carried out during configura-
tion of the “Brake” safety reaction to see whether there could
be an increased risk due to rapid switching to and from the vio-
lation state of the AMFs with which the Cartesian velocity mon-
itoring is linked?
Note: In the case of rapid switching between the states, it is
possible that the “Brake” safety reaction could lead to no
reduction in velocity.
30 ESM configuration: are all ESM states consistent, i.e. does
each individual ESM state sufficiently reduce all dangers?
31 Have torque and position referencing been carried out suc-
cessfully?
32 If the monitored kinematic system is fastened to a carrier kine-
matic system (e.g. mobile platform, linear unit):
Has the fact been taken into consideration that, with the AMF
Cartesian workspace monitoring / Cartesian protected space
monitoring, the monitoring space is defined relative to the base
of the monitored kinematic system and moves with the carrier
kinematic system?
33 If the monitored kinematic system is fastened to a carrier kine-
matic system (e.g. mobile platform, linear unit):
Has the fact been taken into consideration that, with the AMF
Cartesian velocity monitoring, it is not the absolute velocity, but
the velocity of the monitored kinematic system relative to the
carrier kinematic system that is monitored?
34 If the monitored kinematic system is fastened to a carrier kine-
matic system (e.g. mobile platform, linear unit):
Has the fact been taken into consideration that, with the AMF
Tool-related velocity component, it is not the absolute velocity,
but the velocity of the monitored kinematic system relative to
the carrier kinematic system that is monitored?
35 If the monitored kinematic system is fastened to a carrier kine-
matic system (e.g. mobile platform):
Has the fact been taken into consideration that, with the AMF
Tool orientation, the reference orientation is defined relative to
the carrier kinematic system and moves with the carrier kine-
matic system?
36 If the monitored kinematic system is fastened to a carrier kine-
matic system (e.g. mobile platform):
Has the fact been taken into consideration that, with the AMF
Base-related TCP force component, the reference coordinate sys-
tem is defined relative to the robot base and the monitored
direction of the force component moves with the carrier kine-
matic system?
37 If the monitored kinematic system is fastened to a carrier kine-
matic system (e.g. mobile platform, linear unit):
Has the fact been taken into consideration that the safety
integrity of the AMFs Collision detection, TCP force monitoring
and Base-related TCP force component is only assured as
long as the carrier kinematic system is at a standstill?
Place, date
Signature
Position-based AMFs: The safety integrity of position-based AMFs is only given without limitations when position referencing has been carried out successfully. (Position-based AMFs are only supported by robot types that have corresponding sensor systems, e.g. an LBR.)
Tool orientation
Axis torque-based AMFs: The safety integrity of axis torque-based AMFs is only given without limitations when position and/or torque referencing has been carried out successfully. (Axis torque-based AMFs are only supported by robot types that have corresponding sensor systems, e.g. an LBR.)
Collision detection
Description If one of the following AMFs is used in the safety configuration, the mapped
safety-oriented tools must be checked:
Cartesian velocity monitoring
Only if the monitoring spheres on the tool are configured as a structure to
be monitored.
Tool-related velocity component
Cartesian workspace monitoring / Cartesian protected space monitoring
Only if the monitoring spheres on the tool are configured as a structure to
be monitored.
Tool orientation
Collision detection
TCP force monitoring
Base-related TCP force component
Torque referencing
For each activated row of the tool selection table, it is necessary to check
whether the selected tool has been correctly assigned to the kinematic sys-
tem. This can be done, for example, using a suitable test for verification of the
tool parameters. A test is suitable if it checks a tool parameter, the value of
which differs considerably from that of the other safety-oriented tools:
In the case of considerably different geometric dimensions, it is advisable
to check whether the geometric tool data have been specified correctly.
(>>> 13.15.3.5 "Geometry data of the tool" Page 296)
In the case of considerably different load data, it is advisable to check
whether the load data of the tool have been specified correctly.
(>>> 13.15.3.6 "Load data of the tool" Page 297)
In the case of considerably different parameters when using the AMF Tool
orientation, it is advisable to check whether the tool orientation that is to be
monitored has been configured correctly.
(>>> 13.15.3.3 "Tool orientation" Page 295)
In the case of considerably different parameters when using the AMF Tool-
related velocity component, it is advisable to check whether the velocity
component that is to be monitored has been configured correctly.
(>>> 13.15.3.4 "Tool-related velocity component" Page 295)
For each row in the tool selection table, the points in the checklist
must be executed and separately documented.
Precondition If the tool is activated via an input: the configured input is HIGH.
If the tool is always active: only the fixed tool is mounted on the kinematic
system.
Description If the fixed tool of a kinematic system can pick up activatable tools, and if one
of the following AMFs is used simultaneously in the safety configuration, the
position and orientation of the pickup frame of the fixed tool (= default frame
for motions of the fixed tool) must be checked:
Cartesian velocity monitoring
Only if the monitoring spheres on the tool are configured as a structure to
be monitored.
Tool-related velocity component
Cartesian workspace monitoring / Cartesian protected space monitoring
Only if the monitoring spheres on the tool are configured as a structure to
be monitored.
Tool orientation
Collision detection
TCP force monitoring
Base-related TCP force component
Torque referencing
If the fixed tool of a kinematic system can pick up workpieces, and if one of the
following AMFs is used simultaneously in the safety configuration, the position
and orientation of the pickup frame of the fixed tool must again be checked:
Collision detection
TCP force monitoring
Base-related TCP force component
Torque referencing
If the fixed tool is used for picking up workpieces (no activatable tool can be
coupled), the pickup frame of the fixed tool must be verified. For this purpose,
check whether the load data of the tool have been specified correctly. The test
must be carried out with as heavy a workpiece as possible.
(>>> 13.15.3.6 "Load data of the tool" Page 297)
If the fixed tool is used for picking up an activatable tool (e.g. in the case of a
tool changer), the pickup frame of the fixed tool must be verified by means of
a suitable test with the activatable tool coupled to the fixed tool. A test is suit-
able if the parameters of the pickup frame have a major influence on the test
result:
In the case of large values for the position of the pickup frame and/or a pro-
truding coupled tool, it is advisable to check whether the geometric tool
data have been specified correctly.
(>>> 13.15.3.5 "Geometry data of the tool" Page 296)
If the tool is relevant for the monitoring of a tool-related velocity compo-
nent, it is advisable to check whether the velocity component to be moni-
tored has been configured correctly.
(>>> 13.15.3.4 "Tool-related velocity component" Page 295)
In the case of large values for the position of the pickup frame and a heavy
coupled tool, it is advisable to check whether the load data of the tool have
been specified correctly.
(>>> 13.15.3.6 "Load data of the tool" Page 297)
If only the tool orientation is monitored for a kinematic system, the orienta-
tion of the pickup frame can be verified. The test is only suitable if none of
the other AMFs mentioned above is used in the safety configuration for
this kinematic system.
(>>> 13.15.3.3 "Tool orientation" Page 295)
For each configured fixed tool in the tool selection table, the points in
the checklist must be executed and separately documented if the fol-
lowing preconditions are met:
The tool can pick up workpieces or activatable tools.
AND: One of the AMFs listed here is used in the safety configuration for
the kinematic system to which the tool is assigned.
Description If an activatable tool of a kinematic system can pick up a workpiece and, at the
same time, one of the following AMFs is used in the safety configuration, the
position and orientation of the pickup frame of the activatable tool must be
checked:
Collision detection
TCP force monitoring
Base-related TCP force component
Torque referencing
The pickup frame of the tool can be verified by checking whether the load data
of the tool have been specified correctly. The test must be carried out with as
heavy a workpiece as possible.
(>>> 13.15.3.6 "Load data of the tool" Page 297)
For each configured activatable tool in the tool selection table, the
points in the checklist must be executed and separately documented
if the following preconditions are met:
The tool can pick up workpieces.
AND: One of the AMFs listed here is used in the safety configuration for
the kinematic system to which the tool is assigned.
13.15.3.3 Tool orientation
Description If one of the following AMFs is used in the safety configuration, it is necessary
to check whether the tool orientation that is to be monitored has been config-
ured correctly:
Tool orientation
(>>> 13.15.6.24 "AMF Tool orientation" Page 306)
Description If one of the following AMFs is used in the safety configuration, it is necessary
to check whether the tool-related velocity component that is to be monitored
has been configured correctly:
Tool-related velocity component
(>>> 13.15.6.25 "AMF Tool-related velocity component" Page 308)
Precondition Position referencing has been carried out successfully (not necessary in
the case of a mobile platform).
The correct safety-oriented tool is active.
If a fixed tool is checked: no activatable tool is coupled.
Description If one of the following AMFs is used in the safety configuration, it is necessary
to check that the geometric tool data have been entered correctly:
Cartesian velocity monitoring
Only if the monitoring spheres on the tool are configured as a structure to
be monitored.
Cartesian workspace monitoring / Cartesian protected space monitoring
Only if the monitoring spheres on the tool are configured as a structure to
be monitored.
The geometric tool data can be tested by intentionally violating one of the con-
figured monitoring spaces with each tool sphere and checking the reaction.
If no space monitoring functions are used, only the position of the sphere cen-
ter points is relevant. The configured Cartesian velocity limit can be tested by
intentionally exceeding this velocity for each tool sphere and checking the re-
action.
Precondition Position referencing has been carried out successfully (not necessary in
the case of a mobile platform).
The correct safety-oriented tool is active.
If the geometry data of a fixed tool are checked: no activatable tool is cou-
pled.
No. Activity Yes Not relevant
1 Tool sphere (frame name) _____________
Have the radius and position of the tool sphere been correctly
entered and checked?
2 Tool sphere (frame name) _____________
Have the radius and position of the tool sphere been correctly
entered and checked?
3 Tool sphere (frame name) _____________
Have the radius and position of the tool sphere been correctly
entered and checked?
4 Tool sphere (frame name) _____________
Have the radius and position of the tool sphere been correctly
entered and checked?
5 Tool sphere (frame name) _____________
Have the radius and position of the tool sphere been correctly
entered and checked?
6 Tool sphere (frame name) _____________
Have the radius and position of the tool sphere been correctly
entered and checked?
Description If any of the following AMFs is used in the safety configuration, it is necessary
to check that the load data of the safety-oriented tool have been entered cor-
rectly:
Collision detection
TCP force monitoring
Base-related TCP force component
Torque referencing
It is advisable to check the load data by performing torque referencing in sev-
eral suitable poses. Suitable poses include those with similar axis angles in the
horizontal extended position and the following properties:
Axes A2, A4 and A6 are loaded.
The poses differ in their axis value of A7 by 90°.
If the load data are correct, torque referencing must be successful.
Precondition Position and torque referencing have been carried out successfully.
The correct safety-oriented tool is active.
If the load data of a fixed tool are being checked: No activatable tool is cou-
pled.
If a workpiece is picked up by the tool to check the load data: In the appli-
cation, the correct workpiece has been transferred to the safety controller.
Description Each row in the PSM table KUKA PSM and in the PSM table Customer PSM
must be tested to verify that the expected reaction is triggered. If the reaction
is to switch off an output, the test must also ensure that the output is correctly
connected.
The “Brake” reaction can be checked by moving the robot at a velocity that ex-
ceeds the limit value of the Cartesian velocity monitoring. As soon as all other
AMFs of the PSM row are violated, the velocity must be reduced to a value
below the limit value. The robot must not be braked to a complete standstill.
A row in the PSM table can be tested by violating 2 of its AMFs at a time. It is
then possible to test the remaining AMF separately in a targeted manner. If
fewer than 3 AMFs are used in a row, the unassigned columns are regarded
as violated AMFs.
(>>> 13.15.6 "Checklists for AMFs used" Page 300)
For each row in the PSM table, the points in the checklist must be ex-
ecuted and separately documented.
No. Activity Yes Not relevant
1 AMF 1 was tested successfully. Precondition: AMF 2 and
AMF 3 are violated.
AMF 1: ______________________________________
2 AMF 2 was tested successfully. Precondition: AMF 1 and
AMF 3 are violated.
AMF 2: ______________________________________
3 AMF 3 was tested successfully. Precondition: AMF 1 and
AMF 2 are violated.
AMF 3: ______________________________________
Description Each row in the ESM state must be tested to verify that the expected reaction
is triggered when the configured AMF is violated.
(>>> 13.15.6 "Checklists for AMFs used" Page 300)
For each ESM state, the points in the checklist must be executed and
separately documented.
No. Activity Yes Not relevant
1 AMF row 1 was tested successfully.
AMF row 1: _________________________________
2 AMF row 2 was tested successfully.
AMF row 2: _________________________________
3 AMF row 3 was tested successfully.
AMF row 3: _________________________________
4 AMF row 4 was tested successfully.
AMF row 4: _________________________________
5 AMF row 5 was tested successfully.
AMF row 5: _________________________________
6 AMF row 6 was tested successfully.
AMF row 6: _________________________________
7 AMF row 7 was tested successfully.
AMF row 7: _________________________________
8 AMF row 8 was tested successfully.
AMF row 8: _________________________________
No. Activity Yes Not relevant
9 AMF row 9 was tested successfully.
AMF row 9: _________________________________
10 AMF row 10 was tested successfully.
AMF row 10: _________________________________
11 AMF row 11 was tested successfully.
AMF row 11: _________________________________
12 AMF row 12 was tested successfully.
AMF row 12: _________________________________
13 AMF row 13 was tested successfully.
AMF row 13: _________________________________
14 AMF row 14 was tested successfully.
AMF row 14: _________________________________
15 AMF row 15 was tested successfully.
AMF row 15: _________________________________
16 AMF row 16 was tested successfully.
AMF row 16: _________________________________
17 AMF row 17 was tested successfully.
AMF row 17: _________________________________
18 AMF row 18 was tested successfully.
AMF row 18: _________________________________
19 AMF row 19 was tested successfully.
AMF row 19: _________________________________
20 AMF row 20 was tested successfully.
AMF row 20: _________________________________
Description All ESM states which are not used must be tested as to whether a safety stop
is triggered when the ESM state is selected.
Checklist
No. Activity Yes Not relevant
1 Selection of non-used ESM state 1 was tested successfully.
2 Selection of non-used ESM state 2 was tested successfully.
3 Selection of non-used ESM state 3 was tested successfully.
4 Selection of non-used ESM state 4 was tested successfully.
5 Selection of non-used ESM state 5 was tested successfully.
6 Selection of non-used ESM state 6 was tested successfully.
7 Selection of non-used ESM state 7 was tested successfully.
8 Selection of non-used ESM state 8 was tested successfully.
9 Selection of non-used ESM state 9 was tested successfully.
10 Selection of non-used ESM state 10 was tested successfully.
An AMF which is used in more than one row in the PSM table must be sepa-
rately tested in each row.
Checklist
No. Activity Yes
1 The configured reaction is triggered by pressing the E-STOP on the smart-
PAD.
Checklist
No. Activity Yes
1 The configured reaction is triggered by releasing an enabling switch on the
smartPAD.
Checklist
No. Activity Yes
1 The configured reaction is triggered by pressing an enabling switch down
fully on the smartPAD.
All enabling switches and panic switches configured for the hand guiding de-
vice must be tested.
No. Activity Yes Not relevant
1 The configured reaction is triggered by releasing enabling
switch 1.
2 The configured reaction is triggered by pressing fully down on
enabling switch 1 (panic position).
3 The configured reaction is triggered by releasing enabling
switch 2.
4 The configured reaction is triggered by pressing fully down on
enabling switch 2 (panic position).
5 The configured reaction is triggered by releasing enabling
switch 3.
6 The configured reaction is triggered by pressing fully down on
enabling switch 3 (panic position).
All enabling switches configured for the hand guiding device must be tested.
No. Activity Yes Not relevant
1 The configured reaction is triggered by pressing enabling
switch 1.
2 The configured reaction is triggered by pressing enabling
switch 2.
3 The configured reaction is triggered by pressing enabling
switch 3.
Checklist
No. Activity Yes
1 The configured reaction is triggered in T1.
2 The configured reaction is triggered in T2.
3 The configured reaction is triggered in CRR.
Checklist
No. Activity Yes
1 The configured reaction is triggered in AUT.
Checklist
No. Activity Yes
1 The configured reaction is triggered in T1.
2 The configured reaction is triggered in CRR.
Checklist
No. Activity Yes
1 The configured reaction is triggered in T2.
2 The configured reaction is triggered in AUT.
Checklist
No. Activity Yes
1 The configured reaction is triggered if, for example, the E-STOP is pressed
on the smartPAD.
Description The AMF can be tested by displaying the current measured axis torques on
the smartPAD and then subjecting the monitored axis to gravitational force or
manual loading.
Description The AMF can be tested by moving the monitored axis at a velocity of approx.
10% over the configured velocity limit.
Description The AMF can be tested by moving a monitored point of the monitored kinemat-
ic system at a Cartesian velocity of approx. 10% over the configured velocity
limit.
It must also be tested whether the structure to be monitored is correctly con-
figured. This involves violating the velocity monitoring, both with the monitor-
ing spheres on the robot and on the tool (if both structures are monitored), or
just with the monitoring spheres on the robot or on the tool.
No. Activity Yes Not relevant
1 The configured reaction is triggered if the maximum permissi-
ble Cartesian velocity is exceeded at a monitored point.
2 The configured reaction is triggered if the velocity monitoring is
violated by the monitoring spheres on the robot.
3 The configured reaction is triggered if the velocity monitoring is
violated only by the monitoring spheres on the tool.
Description The first step is to test whether the orientation of the monitoring space is cor-
rectly configured. This involves violating 2 adjoining space surfaces at a mini-
mum of 3 different points in each case.
The second step is to test whether the size of the monitoring space is correctly
configured. This involves violating the other space surfaces at a minimum of
1 point in each case. In total, at least 10 points must be addressed.
The third step is to test whether the structure to be monitored is correctly con-
figured. This involves violating the space monitoring, both with the monitoring
spheres on the robot and on the tool (if both structures are to be monitored),
or just with the monitoring spheres on the robot or on the tool.
No. Activity Yes Not relevant
1 The correct configuration of the orientation of the monitoring
space has been tested as described above. The configured
reaction is triggered every time a monitoring space is violated.
2 The correct configuration of the size of the monitoring space
has been tested as described above. The configured reaction
is triggered every time a monitoring space is violated.
3 The configured reaction is triggered if the space monitoring is
violated on the monitoring spheres on the robot.
4 The configured reaction is triggered if the space monitoring is
violated on the monitoring spheres on the tool.
Description The AMF can be tested by displaying the current measured external axis
torques on the smartPAD and then loading the individual axes.
Description In order to test the AMF, suitable measuring equipment is required, e.g. a
spring balance.
During the test, it must be noted that the monitoring function automatically
takes into consideration possible errors in the workpiece load data. This
means that the response may be triggered before the permissible external
TCP force has been reached.
Premature triggering of the response can be prevented by performing the test
as follows:
Tool has picked up no workpiece.
In the application, transfer no workpiece to the safety controller.
Apply the TCP force in the direction of gravitational acceleration (vertically
downwards) or perpendicular to gravitational acceleration.
Description In order to test the AMF, suitable measuring equipment is required, e.g. a
spring balance.
For the test, a force that is just above the configured maximum permissible
TCP force must be exerted on the tool or robot flange in 2 different directions:
Along the direction of the configured force component
In a direction perpendicular to the direction of the configured force compo-
nent
This is to ensure that the AMF is only violated if an excessive force is applied
along the direction of the configured force component.
During the test, it must be noted that the monitoring function automatically
takes into consideration possible errors in the workpiece load data. This
means that the response may be triggered before the permissible external
TCP force has been reached.
If, for example, no workpiece is picked up during the test, and if no workpiece
has been transferred to the safety controller in the application, this force that
is additionally taken into consideration corresponds to the weight of the heavi-
est workpiece configured in the safety-oriented project settings. The force that
is taken into consideration counteracts gravitational acceleration (it is applied
vertically upwards).
Description In order to test the AMF, the permissible orientation cone must be violated at
3 straight lines offset by approx. 120° to one another. This ensures that the
permissible orientation angle, the orientation of the reference vector and the
tool orientation are correctly configured.
The orientation angles of the Z axis of the tool orientation frame are defined
using 3 straight lines situated on the edge of the monitoring cone and offset at
120° to one another. These orientation angles must be set in order to test the
AMF Tool orientation. The AMF must be violated when all 3 orientation angles
are exceeded.
Procedure The procedure describes an example of how the correct configuration of the
monitoring cone can be tested.
1. Orient the Z axis of the tool orientation frame according to the reference
vector relative to the world coordinate system.
2. Exceed the permissible deviation angle by tilting the tool orientation frame
in B or C.
The configured reaction must be triggered.
3. Orient the Z axis of the tool orientation frame according to the reference
vector relative to the world coordinate system.
If a stop reaction has been configured, the robot must be switched to CRR
mode in order for it to be moved.
4. Rotate the tool orientation frame by 120° in A.
5. Exceed the permissible deviation angle by tilting the tool orientation frame
in B or C.
The configured reaction must be triggered.
6. Orient the Z axis of the tool orientation frame according to the reference
vector relative to the world coordinate system.
If a stop reaction has been configured, the robot must be switched to CRR
mode in order for it to be moved.
7. Rotate the tool orientation frame by 120° in A.
8. Exceed the permissible deviation angle by tilting the tool orientation frame
in B or C.
The configured reaction must be triggered.
The test must be carried out for every instance of the AMF and for ev-
ery tool that is mapped in the tool selection table to a kinematic sys-
tem for which the AMF Tool orientation is configured.
(>>> 13.15.3.3 "Tool orientation" Page 295)
If a fixed tool is configured that can pick up activatable tools, the fixed
tool must be checked in addition to the activatable tools without an ac-
tivatable tool being coupled.
Example: 4 instances of the AMF are configured, with 5 different tools that
can be selected for fastening to the fixed tool. At least 6 tests are necessary
to verify all AMF instances and tool orientations. If, on the other hand, only 2
different tools are available for selection for fastening on the fixed tool, 4 tests
are sufficient.
Description For the test, a motion with the configured point for the tool-related velocity
component must be programmed. The test motion must include a reorienta-
tion of the tool in order to check the correct configuration of the monitored
point.
The test must be performed twice:
Once at a velocity slightly above the maximum permissible velocity
Once at a velocity slightly below the maximum permissible velocity
This is to ensure that the velocity limit is only violated by the configured mon-
itored point.
The test must be carried out for every instance of the AMF and for ev-
ery tool that is mapped in the tool selection table to a kinematic sys-
tem for which the AMF Tool-related velocity component is configured.
(>>> 13.15.3.4 "Tool-related velocity component" Page 295)
If a fixed tool is configured that can pick up activatable tools, the fixed
tool must be checked in addition to the activatable tools. When check-
ing the fixed tool, no activatable tool may be coupled.
Example: 4 instances of the AMF are configured, with 5 different tools that
can be selected for fastening to the fixed tool. At least 6 tests are necessary
to verify all AMF instances and tool-related velocity components. If, on the
other hand, only 2 different tools are available for selection for fastening on
the fixed tool, 4 tests are sufficient.
Description The safety parameter smartPAD unplugging allowed in the station configu-
ration determines whether it is possible to move the robot with the smartPAD
unplugged. The configured response must be tested while the robot is moving
in Automatic mode.
Description If an input that allows the deactivation of safety functions is configured in the
safety-oriented project settings, a safety stop triggered by one of the following
AMFs can be briefly cancelled:
Axis range monitoring
Cartesian workspace monitoring
Cartesian protected space monitoring
Tool orientation
Tool-related velocity component
Standstill monitoring of all axes
Position referencing
Torque referencing
Axis torque monitoring
Collision detection
TCP force monitoring
Base-related TCP force component
The configured input must be tested. For this, a safety stop must be triggered
using at least one of the above AMFs, e.g. by violating a workspace or activat-
ing a standstill monitoring function.
Deactivation of safety functions via an input not allowed:
If the configured input is set to HIGH and retains this value, the robot can-
not be moved when the corresponding AMF is violated.
Deactivation of safety functions via an input allowed:
If the configured input is set to HIGH and retains this value, the robot can
be moved for 5 seconds even though the corresponding AMF is violated.
Description If an input that allows external position referencing is configured in the safety-
oriented project settings, this input must be tested.
The axis positions are not referenced after a reboot of the robot controller. If
the safety configuration contains a position-based AMF, the warning “Axis not
referenced” is displayed. The warning may no longer be displayed if the input
via which the external position referencing is carried out is set to HIGH for less
than 2 seconds.
Description A report of the current safety configuration can be created and displayed in the
Editor. The report can be edited and printed for documentation purposes.
The safety configuration report contains the following information for the un-
ambiguous assignment of the safety configuration:
Name of the Sunrise project to which the safety configuration belongs
Safety version used
Safety ID (checksum of the safety configuration)
The safety ID must match the ID of the safety configuration which is acti-
vated on the robot controller and is to be tested.
Date and time of the last modification to the safety configuration
Checklists The report provides the following checklists matching the safety configuration:
Checklist for checking the rows used in the Customer PSM table
Checklists for checking the ESM states which have been used and not
used
Checklists for checking the AMFs used
Checklists for checking the safety-oriented project settings
The checklists provided by the safety configuration report are not suf-
ficient for a complete safety acceptance procedure. The following ad-
ditional checklists must be used for complete safety acceptance:
Checklist for basic test of the safety configuration
Checklists for checking the safety-oriented tool
Checklist for checking the tool selection table
Warnings The safety configuration is checked. There are warnings for the following situ-
ations:
One row in the Customer PSM table is deactivated.
One row in an ESM state is deactivated.
Unplugging of the smartPAD is allowed, but no external EMERGENCY
STOP is used.
The input for deactivating safety functions is used in the tool selection ta-
ble.
Warning of the possible need to perform the brake test if a position-based
or torque-based monitoring function is configured.
The “Brake” safety reaction is configured.
A check must be carried out to ensure that there is no increased risk due
to rapid switching to and from the violation state of the AMFs with which
the Cartesian velocity monitoring is linked.
The safety maintenance technician must give reasons why a warning may be
ignored.
Procedure Right-click on the desired project in the Package Explorer view and select
Sunrise > Create safety configuration report from the context menu.
The report of the current safety configuration is created and opened in the
editor area.
14 Basic principles of motion programming
The start point of a motion is always the end point of the previous mo-
tion.
The robot guides the TCP along the fastest path to the end point. The fastest
path is generally not the shortest path in space and is thus not a straight line.
As the motions of the robot axes are simultaneous and rotational, curved paths
can be executed faster than straight paths.
PTP is a fast positioning motion. The exact path of the motion is not predict-
able, but is always the same, as long as the general conditions are not
changed.
The robot guides the TCP at the defined velocity along a straight path in space
to the end point.
In a LIN motion, the robot configuration of the end pose is not taken into ac-
count.
The robot guides the TCP at the defined velocity along a circular path to the
end point. The circular path is defined by a start point, auxiliary point and end
point.
In a CIRC motion, the robot configuration of the end pose is not taken into ac-
count.
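As an illustration, these motion types could be commanded in the run() method of a robot application as in the following sketch. The frame names and velocity values are assumptions, as are the motion parameters setJointVelocityRel(…) and setCartVelocity(…) used for the velocity settings.
// Sketch: PTP, LIN and CIRC motions to taught frames (names assumed)
robot.move(ptp(getApplicationData().getFrame("/P1"))
        .setJointVelocityRel(0.25));    // fast positioning, path not predictable
robot.move(lin(getApplicationData().getFrame("/P2"))
        .setCartVelocity(150.0));       // straight path at 150 mm/s
// CIRC: first the auxiliary point, then the end point
robot.move(circ(getApplicationData().getFrame("/Aux"),
        getApplicationData().getFrame("/P3"))
        .setCartVelocity(150.0));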
The motion type SPL enables the generation of curved paths. SPL motions are
always grouped together in spline blocks. The resulting paths run smoothly
through the end points of the SPL motion.
In an SPL motion, the robot configuration of the end pose is not taken into ac-
count.
Spline is a motion type that is particularly suitable for complex, curved paths.
With a spline motion, the robot can execute these complex paths in a contin-
uous motion. Such paths can also be generated using approximated LIN and
CIRC motions, but splines have advantages.
Splines are programmed in spline blocks. A spline block is used to group to-
gether several individual motions as an overall motion. The spline block is
planned and executed by the robot controller as a single motion block.
The motions contained in a spline block are called spline segments.
A CP spline block can contain SPL, LIN and CIRC segments.
A JP spline block can contain PTP segments.
In a Cartesian spline motion, the robot configuration of the end pose is not tak-
en into account.
The configuration of the end pose of a spline segment depends on the robot
configuration at the start of the spline segment.
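A CP spline block could, for example, be built up and executed as in the following minimal sketch. The frame names are assumptions, and the Spline motion container is assumed to accept the individual segments in its constructor.
// Sketch: CP spline block consisting of SPL and LIN segments
Spline mySpline = new Spline(
        spl(getApplicationData().getFrame("/P1")),
        spl(getApplicationData().getFrame("/P2")),
        lin(getApplicationData().getFrame("/P3")),
        spl(getApplicationData().getFrame("/P4")));
// The spline block is planned and executed as a single motion block
robot.move(mySpline);
A JP spline block with PTP segments can be grouped together analogously.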
Path of a spline block
The path is defined by means of points that are located on the path. These
points are the end points of the individual spline segments.
All points are passed through without exact positioning.
Exception: The velocity is reduced to 0.
(>>> 14.6.1 "Velocity profile for spline motions" Page 316)
If all points are situated in a plane, then the path is also situated in this
plane.
If all points are situated on a straight line, then the path is also a
straight line.
There are a few cases in which the velocity is reduced.
(>>> 14.6.1 "Velocity profile for spline motions" Page 316)
The path always remains the same, irrespective of the override setting, ve-
locity or acceleration.
Circles and tight radii are executed with great precision.
The robot controller already takes the physical limits of the robot into consid-
eration during planning. The robot moves as fast as possible within the con-
straints of the programmed velocity, i.e. as fast as its physical limits will allow.
The path always remains the same, irrespective of the override setting, veloc-
ity or acceleration.
Only dynamic effects, such as those caused by high tool loads or the installa-
tion angle of the robot, may result in slight path deviations.
Reduction of the velocity
With spline motions, the velocity falls below the programmed velocity in the fol-
lowing cases:
Tight corners, e.g. due to abrupt change in direction
Major reorientation
Motion in the vicinity of singularities
Reduction of the velocity due to major reorientation can be avoided with spline
segments by programming the orientation control SplineOrientationType.Ignore.
(>>> 14.9 "Orientation control with LIN, CIRC, SPL" Page 324)
Reduction of the velocity to 0
With spline motions, exact positioning is carried out in the following cases:
Successive spline segments with the same end points
Successive LIN and/or CIRC segments. Cause: inconstant velocity direc-
tion.
Exceptions:
In the case of successive LIN segments that result in a straight line and in
which the orientations change uniformly, the velocity is not reduced.
Example: a spline block mySpline is moved through the following points (code excerpt):
...
robot.move(mySpline);

Frame   X       Y     Z
P2      100.0   0.0   0.0
P3      102.0   0.0   0.0
P4      104.0   0.0   0.0
P5      204.0   0.0   0.0

The points P2 to P4 lie close together, while P5 is considerably further away. If one of the closely spaced points is offset only slightly, e.g. P3 by 1 mm in the Y direction, the path can deviate noticeably from the intended course:

Frame   X       Y     Z
P3      102.0   1.0   0.0

Remedy:
Distribute the points more evenly.
Program straight lines (except very short ones) as LIN segments.
If an SPL segment connects two LIN segments, the path remains inside the smaller angle between the two straight lines if the following conditions are met:
The extensions of the two LIN segments intersect.
2/3 ≤ a/b ≤ 3/2
a = distance from the start point of the SPL segment to the intersection of the LIN segments
b = distance from the intersection of the LIN segments to the end point of the SPL segment
Description The robot can be guided using a hand guiding device. The hand guiding device is equipped with an enabling device and is required for the manual guidance of the robot.
Manual guidance mode can be switched on in the application using the motion
command handGuiding(). Manual guidance begins at the actual position
which was reached before the mode was switched on.
(>>> 15.9 "Programming manual guidance" Page 362)
In Manual guidance mode, the robot reacts compliantly to outside forces and
can be manually guided to any point in the Cartesian space. The impedance
parameters are automatically set when the robot is switched to Manual guid-
ance mode. The impedance parameters for manual guidance cannot be mod-
ified.
A manual guidance motion command can only be executed by an application
in Automatic mode. If the application is paused in Manual guidance mode, e.g.
because of a safety stop triggered by an EMERGENCY STOP, the manual
guidance motion is terminated. When the application is resumed, the next mo-
tion command is executed directly.
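A minimal sketch of how manual guidance might be embedded in the program flow (the start frame name is an assumption):
// Sketch: move to a defined position, then switch to manual guidance
robot.move(ptp(getApplicationData().getFrame("/HandGuidingStart")));
// The robot can now be guided by hand; the impedance parameters
// for manual guidance are set automatically and cannot be modified
robot.move(handGuiding());
// Execution continues here with the next motion command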
Approximate positioning means that the motion does not stop exactly at the
end point of the programmed motion, allowing continuous robot motion. Dur-
ing motion programming, different parameters can influence the approximate
positioning.
The point at which the original path is left and the approximate positioning arc
begins is referred to as the approximate positioning point.
PTP motion
The TCP leaves the path that would lead directly to the end point and moves,
instead, along a path that allows it to pass the end point without exact position-
ing. The path thus goes past the point and no longer passes through it.
During programming, the relative maximum distance from the end point at
which the TCP may deviate from its original path in axis space is defined. A
relative distance of 100% corresponds to the entire path from the start point to
the end point of the motion.
The approximation contour executed by the TCP is not necessarily the shorter
path in Cartesian space. The approximated point can thus also be located
within the approximate positioning arc.
LIN motion
The TCP leaves the path that would lead directly to the end point and moves
along a shorter path. During programming of the motion, the maximum dis-
tance from the end point at which the TCP may deviate from its original path
is defined.
CIRC motion
The TCP leaves the path that would lead directly to the end point and moves
along a shorter path. During programming of the motion, the maximum dis-
tance from the end point at which the TCP may deviate from its original path
is defined.
The auxiliary point may fall within the approximate positioning range and not
be passed through exactly. This is dependent on the position of the auxiliary
point and the programmed approximation parameters.
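As an illustration, approximate positioning might be parameterized as in the following sketch. The frame names are assumptions; it is also assumed that the relative blending distance for PTP is set with setBlendingRel(…), the Cartesian blending distance for LIN/CIRC with setBlendingCart(…), and that the motions are commanded asynchronously with moveAsync(…) so that the controller can plan the following motion in time.
// Sketch: blending (approximate positioning) between consecutive motions
robot.moveAsync(ptp(getApplicationData().getFrame("/P1"))
        .setBlendingRel(0.3));      // leave the path at up to 30 % of the distance to the end point
robot.moveAsync(lin(getApplicationData().getFrame("/P2"))
        .setBlendingCart(20.0));    // leave the path at most 20 mm before the end point
robot.move(lin(getApplicationData().getFrame("/P3")));   // last motion: exact positioning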
All spline blocks and all individual motions can be approximated with one an-
other. It makes no difference whether they are CP or JP spline blocks, nor is
the motion type of the individual motion relevant.
The motion type of the approximate positioning arc always corresponds to the
second motion. In the case of PTP-LIN approximation, for example, the ap-
proximate positioning arc is of type CP.
If a spline block is approximated, the entire last segment is approximated. If
the spline block only consists of one segment, a maximum of half the segment
is approximated (this also applies for PTP, LIN and CIRC).
Approximate positioning not possible due to time:
If approximation is not possible due to delayed motion commanding, the robot
waits at the start of the approximate positioning arc. The robot moves again as
soon as it has been possible to plan the next block. The robot then executes
the approximate positioning arc. Approximate positioning is thus technically
possible; it is merely delayed.
No approximate positioning in Step mode:
In Step mode, the robot stops exactly at the end point, even in the case of ap-
proximated motions.
In the case of approximate positioning from one spline block to another spline
block, the result of this exact positioning is that the path is different in the last
segment of the first block and in the first segment of the second block in rela-
tion to the path in standard mode.
In all other segments of both spline blocks, the path is identical in both pro-
gram run modes.
Approximated motions which were sent to the robot controller asynchronously
before Step mode was activated and which are waiting there to be executed
will stop at the approximate positioning point. For these motions, the approxi-
mate positioning arc will be executed when the program is resumed.
Description The orientation of the TCP can be different at the start point and end point of
a motion. During motion programming, it is possible to define how to deal with
the different orientations.
Orientation control is set as a motion parameter using the setOrientationType(…) method; its value is of the enum type SplineOrientationType.
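For example, the orientation control could be set on LIN motions as in the following sketch (frame names are assumptions; only the enum values mentioned in this documentation are used):
// Sketch: setting the orientation control as a motion parameter
// Ignore: the end orientation is not taken into account; this also avoids
// a velocity reduction caused by major reorientation
robot.move(lin(getApplicationData().getFrame("/P10"))
        .setOrientationType(SplineOrientationType.Ignore));
// OriJoint: axis-specific orientation control (can also lead to a change
// of Status, see the Status section below)
robot.move(lin(getApplicationData().getFrame("/P11"))
        .setOrientationType(SplineOrientationType.OriJoint));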
CIRC motion It is possible to define for CIRC motions whether the orientation control is to
be space-related or path-related.
(>>> 14.9.1 "CIRC – reference system for the orientation control" Page 326)
During CIRC motions, the robot controller only takes the orientation of the end
point into consideration. It is possible to define whether, and to what extent,
the orientation of the auxiliary point is to be taken into consideration. The ori-
entation behavior at the end point can also be defined.
Description It is possible to define for CIRC motions whether the orientation control is to
be space-related or path-related.
Reference system   Description
Base               Base-related orientation control during the circular motion
Path               Path-related orientation control during the circular motion
14.9.2 CIRC – combinations of reference system and type for the orientation control
For a given axis position of a robot, the resulting point in Cartesian space at
which the TCP is located is unambiguously defined. Conversely, however, the
axis position of the robot cannot be unambiguously determined from the Car-
tesian position X, Y, Z and orientation A, B, C of the TCP. A Cartesian point
can be reached with multiple axis configurations. In order to determine an un-
ambiguous configuration, the Status parameter must be specified.
Robots with 6 axes already have ambiguous axis positions for a given Carte-
sian point. With its additional 7th axis, an LBR is able to reach a given position
and orientation with a theoretically unlimited number of axis poses. To unam-
biguously determine the axis pose for an LBR, the redundancy angle must be
specified in addition to the Status.
The Turn parameter is required for axes which can exceed the angle ±180°. In
PTP motions, this helps to unambiguously define the direction of rotation of the
axes. Turn has no influence on CP motions.
Status, Turn and the redundancy angle are saved during the teaching of a
frame. They are managed as arrays of the data type AbstractFrame.
Programming The Status of a frame is only taken into account in PTP motions to this frame.
With CP motions, the Status given by the axis configuration at the start of the
motion is used.
In order to avoid an unpredictable motion at the start of an application and to
define an unambiguous axis configuration, it is advisable to program the first
motion in an application with one of the following instructions (a short sketch
follows the list). The axis configuration should not be in the vicinity of a singular
axis position.
PTP motion to a specified axis configuration with specification of all axis
values:
ptp(double a1, double a2, double a3, double a4, double
a5, double a6, double a7)
PTP motion to a specified axis configuration:
ptp(JointPosition joints)
PTP motion to a taught frame (AbstractFrame type):
ptp(getApplicationData().getFrame(String frameName));
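A sketch of such a first motion; the axis values are arbitrary and are assumed here to be specified in radians, in line with the JointPosition data type from the list above.
// Sketch: first motion to an explicit, unambiguous axis configuration
robot.move(ptp(0.0, Math.toRadians(30.0), 0.0, Math.toRadians(-60.0),
        0.0, Math.toRadians(90.0), 0.0));

// Alternatively with a JointPosition object
JointPosition startPosition = new JointPosition(0.0, Math.toRadians(30.0),
        0.0, Math.toRadians(-60.0), 0.0, Math.toRadians(90.0), 0.0);
robot.move(ptp(startPosition));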
With its 7th axis, an LBR is able to reach a point in space with a theoretically
unlimited number of different axis configurations. An unambiguous pose is de-
fined via the redundancy angle.
In an LBR, the redundancy angle has the value of the 3rd axis.
The following applies for all motions:
The redundancy angle of the end frame is taken into account when the ro-
bot that was used when teaching the frame also executes the motion com-
mand. In particular, the robot name defined in the station configuration
must match the device specified in the frame properties.
If the robots do not match or if calculated frames are used, the redundancy
angle given at the start of motion by the axis configuration is retained.
14.10.2 Status
The Status specification prevents ambiguous axis positions. The Status is de-
scribed by a binary number with 3 bits.
Bit 0 Specifies the position of the wrist root point (intersection of axes A5, A6, A7)
with reference to the X axis of the coordinate system of axis A1. The alignment
of the A1 coordinate system is identical to the robot base coordinate system if
axis A1 is at 0°. It moves with axis A1.
Position Value
Overhead area Bit 0 = 1
The robot is in the overhead area if the X value of the
position of the wrist root point, relative to the A1 coordi-
nate system, is negative.
Basic area Bit 0 = 0
The robot is in the basic area if the X value of the posi-
tion of the wrist root point, relative to the A1 coordinate
system, is positive.
Position Value
A4 < 0° Bit 1 = 1
A4 ≥ 0° Bit 1 = 0
Position Value
A6 ≤ 0° Bit 2 = 1
A6 > 0° Bit 2 = 0
The Status of the end frame is not taken into account. The Status given by
the axis configuration at the start of the motion is retained.
Exception: A change of Status is possible if the end frame is addressed
with the SplineOrientationType.OriJoint orientation control. The
status of the end frame is not taken into consideration in this case either.
The Status at the end of the motion is determined by the path planning,
which selects the shortest route to the end frame.
14.10.3 Turn
The Turn specification makes it possible to move axes through angles greater
than +180° or less than -180° without the need for special motion strategies
(e.g. auxiliary points). The Turn is specified by a binary number with 7 bits.
With rotational axes, the individual bits determine the sign before the axis val-
ue in the following way:
Bit = 0: Angle ≥ 0°
Bit = 1: Angle < 0°
The Turn is not taken into account in an LBR because none of its axes can
rotate over ±180°.
14.11 Singularities
Cartesian motions of the robot may be limited by the axis position. In certain
combinations of axis positions of the entire robot, no motion can be transferred
from the drives to the flange (or to an object on the flange, e.g. a tool) in at
least one Cartesian direction. In this case, or if very slight Cartesian changes
require very large changes to the axis angles, one speaks of singularity
positions.
The flexibility due to the redundancy of a 7-axis robot means that, in contrast
to a 6-axis robot, 2 or more kinematic conditions (e.g. extended position, 2 rota-
tional axes coincide) must be active at the same time in order to reach a singularity
position. There are 4 different robot positions in which flange motion in one
Cartesian direction is no longer possible. Here only the position of 1 or 2 axes
is important in each case. The other axes can take any position.
A4 singularity This kinematic singularity is given when A4 = 0°. It is called the extended po-
sition.
Motion is blocked in the direction of the robot base or parallel to axis A3 or A5.
An additional kinematic condition for this singularity is reaching the workspace
limit. It is automatically met through A4 = 0°.
An extended robot arm causes a degree of freedom for the motion of the wrist
root point to be lost (it can no longer be moved along the axis of the robot arm).
The position of axes A3 and A5 can no longer be resolved.
A4/A6 singularity This kinematic singularity is given when A4 = 90° and A6 = 0°.
A2/A3 singularity This kinematic singularity is given when A2 = 0° and A3 = ±90° (π/2).
A5/A6 singularity This kinematic singularity is given when A5 = ±90° (π/2) and A6 = 0°.
The redundant configuration of the LBR with its 7th axis allows the robot arm
to move without the flange moving. In this null space motion, all axes move ex-
cept A4, the “elbow axis”. In addition to the normal redundancy, it is possible,
under certain circumstances, that only subchains of the robot can move and
not all axes.
All of the robot positions in this category have in common that slight Cartesian
changes result in very large changes to the axis angles. They are very similar
to the singularities in 6-axis robots since, in the LBR too, a division is made
into the position part and orientation part of the wrist root point.
Wrist axis singularity Wrist axis singularity means the axis position A6 = 0°. The position of axes A5
and A7 can thus no longer be resolved. There are an infinite number of ways
to position these two axes to generate the same position on the flange.
A1 singularity If the wrist root point is directly over A1, no reference value can be specified
for the redundancy circle according to the definition above. The reason for this
is that any A1 value is permissible here for A3 = 0°.
Every axis position of A1 can be compensated for with a combination of A5,
A6 and A7 so that the flange position remains unchanged.
A2 singularity With an extended “shoulder”, the position of axes A1 and A3 can no longer be
resolved according to the pattern above.
A2/A4 singularity If A1 and A7 coincide, the position of axes A1 and A7 can no longer be re-
solved according to the pattern above.
15 Programming
Description The Java Editor allows more than one file to be open simultaneously. If re-
quired, they can be displayed side by side or one above the other. This pro-
vides a convenient way of comparing contents, for example.
Alternative procedure Right-click on the Java file and select Open or Open With > Java Editor
from the context menu.
Item Description
1 This line contains the name of the package in which the robot ap-
plication is located.
2 The import section contains the imported classes which are re-
quired for programming the robot application.
Note: Clicking on the “+” icon opens the section, displaying the im-
ported classes.
3 Header of the robot application (contains the class name of the
robot application)
(>>> "Header" Page 336)
Item Description
4 Declaration section
The data arrays of the classes required in the robot application
are declared here.
When the robot application is created, instances of the necessary
classes are automatically integrated by means of dependency
injection. As standard, this is the instance of the robot used, here
an LBR.
(>>> 15.3.3 "Dependency Injection" Page 346)
5 initialize() method
In this method, initial values are assigned to data arrays that have
been created in the declaration section and are not integrated us-
ing dependency injection.
6 run() method
The programming of the robot application begins in this method.
When the robot application is created, a motion instruction which
moves the robot to the HOME position is automatically inserted.
(>>> 15.15 "HOME position" Page 390)
Element Description
public The keyword public designates a class which is publicly
visible. Public classes can be used across all packages.
class The keyword class designates a Java class. The name of
the class is derived from the name of the application.
extends The application is subordinate to the RoboticsAPIApplication class.
Description A variable name can be changed in a single action at all points where it occurs.
15.1.3.2 Auto-complete
Description The “Auto-complete” function suggests available code elements, such as classes and methods, for example. All that is then required is to enter the variable elements in the syntax manually.
When entering a dot operator for a data array or enum, the “Auto-
complete” list is automatically displayed. The list contains the follow-
ing entries:
Available methods of the corresponding class (only for data arrays)
Available constants of the corresponding class
2. Press CTRL + space bar. The “Auto-complete” list containing the available
entries is displayed.
If the list contains only one matching entry, this can automatically be
inserted into the program code by pressing CTRL + space bar.
3. Select the appropriate entry from the list and press the Enter key. The en-
try is inserted in the program code.
If an entry is selected, the Javadoc information on this entry is displayed
automatically.
(>>> 15.1.4 "Displaying Javadoc information" Page 339)
4. Complete the syntax if necessary.
Navigating and filtering There are various ways to navigate to the “Auto-complete” list and to filter the
available entries:
Use the arrow keys on the keyboard to move from one entry to the next
(up or down)
Scrolling
Complete the entered code with additional characters. The list is filtered
and only the entries which correspond to the characters are displayed.
Press CTRL + space bar. Only the available template suggestions are dis-
played.
Description Templates for fast entry are available for common Java statements, e.g. FOR
loops.
Description User-specific templates can be created, e.g. templates for motion blocks with
specific motion parameters which are used frequently during programming.
Procedure 1. In the Templates view, select the context in which the template is to be in-
serted.
2. Right-click on the context and select New... from the context menu.
or: Click on the Create a New Template icon.
The New Template window opens.
3. Enter a name for the template in the Name box.
4. Enter a description in the Description box (optional).
5. In the Pattern box, enter the desired code.
6. Confirm the template properties with OK. The template is created and in-
serted into the Templates view.
Description Parts of the program code can be extracted from the robot application and
made available as a separate method; a short sketch follows the table below.
This makes particular sense for frequently recurring tasks, as it increases
clarity within the robot application.
Access modifier This option defines which classes can call the extracted method.
Option Description
private This method can only be called by the corre-
sponding class itself.
default The following classes can call the method:
The corresponding class
The inner classes of the corresponding class
All classes of the package in which the corre-
sponding class is located
protected The following classes can call the method:
The corresponding class
The subclasses of the corresponding class
(inheritance)
The inner classes of the corresponding class
All classes of the package in which the corre-
sponding class is located
public All classes can call the method, regardless of the
relationship to the corresponding class and of
the package assignment.
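A sketch of an extracted method; the method name, the frames used and the choice of the access modifier private are arbitrary examples.
@Override
public void run() {
    robot.move(ptpHome());
    moveToPickPosition();   // call of the extracted method
    // ...
}

// Extracted method; with "private" it can only be called by this class
private void moveToPickPosition() {
    robot.move(ptp(getApplicationData().getFrame("/PrePick")));
    robot.move(lin(getApplicationData().getFrame("/Pick")));
}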
2. In order to pin the window in the editor area, press the tab key or click in-
side the window.
Pinning the window makes it possible to navigate to the Javadoc descrip-
tion, e.g. by scrolling.
Displaying Javadoc information using the mouse pointer:
Move the mouse pointer to the desired element name in the program code.
The associated Javadoc information is automatically displayed in a win-
dow in the editor area.
The following elements react to the mouse pointer:
Methods
Classes (data types, not user-defined data arrays)
Interfaces
ENUM arrays
Navigation
Item Description
1 Linked class
Left-clicking on the linked class displays the complete Javadoc in-
formation relating to this class in the Javadoc browser.
Note: If the corresponding link in the Javadoc view is selected, the
complete Javadoc information is displayed in the view itself.
2 Show in Javadoc View button
The window in the editor section closes and the Javadoc informa-
tion is displayed in the Javadoc view.
3 Open Attached Javadoc Browser button
The window in the editor section closes and the complete Java-
doc information relating to the corresponding class is displayed in
the Javadoc browser.
The configuration of the Javadoc browser is described briefly using the exam-
ple of the LBR class.
Item Description
1 Navigation
2 Class hierarchy
(>>> Fig. 15-6 )
The inheritance relationships of the class are displayed here.
3 Description of the class
The task of the class and its functionality is described here. Special
aspects of using the class are normally indicated in this area. It
may also contain short examples for using the class.
The earliest library version in which the class is available is normal-
ly specified at the end of the description. The description may ad-
ditionally contain a list of references to further classes or methods
which may be of interest.
4 Overviews
Field Summary
Overview of the data fields which belong to the class
The data fields inherited from a parent class are listed here.
Constructor Summary
Overview of the constructors which belong to the class
Method Summary
Overview of the methods which belong to the class
The methods inherited from a parent class are listed here.
The overviews contain short descriptions of the data fields, con-
structors and methods of the class, provided that these were spec-
ified during the creation of Javadoc. Inherited data fields and
methods are only listed.
Detailed descriptions on the data fields, constructors and methods
can be found in the Details area. Click on the respective name to
directly access the detailed description.
5 Details
Field Detail
Detailed description of the data fields which belong to the class
Constructor Detail
Detailed description of the constructors which belong to the
class
Method Detail
Detailed description of the methods which belong to the class
The detailed description may, for example, contain a list and de-
scription of the transferred parameters and return value. Provided
there are any, the exceptions which may occur when executing a
method or constructor are also named here.
Item Description
1 Name of the package to which the class belongs
2 Name of the class
3 Class hierarchy (parentage of the class)
4 List of interfaces implemented by the class
5 List of subclasses derived from the class
The following symbols and fonts are used in the syntax descriptions:
The names of the primitive data types are displayed in violet in the
Java Editor.
15.3.1 Declaration
Description To allow programming in Java, the necessary objects must first be created
(declared), i.e. the data type and identifier must be defined.
Explanation of the syntax
Element     Description
Data type   Data type of the variable
Name        Name of the variable
15.3.2 Initialization
Before an object can be used in the program, it must be assigned an initial val-
ue.
Description In the case of primitive data types, the assignment is done by the operator =
followed by the desired value.
Primitive data types can be created and used in the run() method of an appli-
cation, for example.
Example The variables a and b are created in an application and assigned an initial val-
ue. Subsequently, the variable c is created and assigned the sum of the vari-
ables a and b.
@Override
public void run() {
// ...
int a = 3;
int b = 5;
// ...
int c = a + b;
// ...
}
Description Complex data types are always instanced by the call of a constructor in con-
junction with the keyword new. The instancing can take place either directly or
within a method that supplies an object of the data type as the return value.
Depending on the specific implementation of the associated class, parameters
for the first instancing can be transferred to the constructor if required.
Further values are assigned to the properties by the methods provided by the
class.
In robot applications, complex data types are usually created after the header
and initialized in the initialize() method.
Example In an application, data arrays for a Cartesian impedance mode and a force
break condition are created and initialized.
public class ExampleApplication extends RoboticsAPIApplication {
// ...
private CartesianImpedanceControlMode softInToolX;
private ForceCondition contactForceReached;
@Override
public void initialize() {
softInToolX = new CartesianImpedanceControlMode();
softInToolX.setDampingToDefaultValue();
// ...
contactForceReached =
ForceCondition.createSpatialForceCondition(…);
}
@Override
public void run() {
// ...
robot.move(ptp(getFrame("/P20")).
breakWhen(contactForceReached));
}
}
Description With the aid of dependency injection, it is no longer necessary to actually gen-
erate instances of certain object types. It is sufficient to provide the points
where the objects are to be used with an appropriate annotation so that the
runtime system performs the generation. This allows an application that is
based on multiple Java classes to access common objects without having to
transfer the objects to the classes in each case.
Dependency injection can only be used in classes that are themselves gener-
ated by dependency injection. If such a class is instanced with new, the corre-
sponding points remain non-initialized (“null”). As the runtime system
generates robot and background applications with dependency injection, the
function can be used there.
Syntax @Inject
<Modifier> Data type Name;
Explanation of the syntax
Element     Description
@Inject Annotation for initializing the array of type Data type with
dependency injection.
Modifier If required, valid modifiers can be used here for the array
declaration, e.g.:
public, private, protected, etc.
The modifier static cannot be used for arrays with
@Inject and final should also be avoided.
Data type Data type of the array
Name Name of the array
Description The most important types in Sunrise can be integrated using dependency in-
jection. This applies to the following types, among others:
Controller
Robot
LBR
Tool
Workpiece
ITaskLogger
IStatusController
IApplicationData
All generated I/O groups
All classes created in Sunrise.Workbench which are derived from Tool or
Workpiece and have been configured as Class of Template in the prop-
erties of an object template.
Examples An LBR iiwa and a gripper are integrated in a robot application by means of
dependency injection. An object template with the name “Gripper” has been
created for the gripper. The gripper is attached to the robot during initialization.
Motions with both devices are executed in the application.
In addition, a logger object is integrated which is used to display LOG informa-
tion of the smartHMI.
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private ITaskLogger logger;
@Inject
private IApplicationData data;
@Inject
private LBR robot;
@Inject
@Named("Gripper")
private Tool gripper;
@Override
public void initialize() {
// initialize your application here
gripper.attachTo(robot.getFlange());
logger.info("Application initialized!");
}
@Override
public void run() {
// your application execution starts here
robot.move(ptpHome());
robot.move(ptp(data.getFrame("/Start")));
// ...
logger.info("Move gripper");
gripper.move(linRel().setXOffset(25.0));
// ...
}
}
The signals of an I/O group are to be used in both the robot application and a
background application.
Use in robot application:
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private LBR robot;
@Inject
private LEDsIOGroup appLEDs;
@Override
public void initialize() {
// initialize your application here
}
@Override
public void run() {
// your application execution starts here
// ...
appLEDs.setBlueLight(true);
robot.move(handGuiding());
appLEDs.setBlueLight(false);
// ...
}
}
Use in a background application (excerpt):
@Override
public void initialize() {
// initialize your task here
initializeCyclic(0, 500, TimeUnit.MILLISECONDS,
CycleBehavior.BestEffort);
}
@Override
public void runCyclic() {
// ...
if (appRunning) {
// If application is running,
// LED changes its state continuously
bgtLEDs.setYellowLight(!bgtLEDs.getYellowLight());
}
else {
// If application is not running, LED remains off
bgtLEDs.setYellowLight(false);
}
// ...
}
}
Description A class can be used via dependency injection if it meets one of the following
conditions:
The class has a public constructor without parameters. An @Inject anno-
tation on the constructor is not absolutely essential in this case.
The class has a public constructor with an @Inject annotation which either
contains no parameters or for which all parameters can be obtained via
dependency injection.
All classes that are present in an application and meet the specified conditions
can be integrated in all constituent parts of the application using @Inject. A
new instance of the class is generated as standard for each integration using
@Inject.
State variables, e.g. of tool and workpiece classes, can be used by various
program sections through this mechanism.
(>>> 15.10.4 "Integrating dedicated object classes with dependency injec-
tion" Page 373)
Example The classes Vehicle, Motor and Wheel are used in a project. The classes
Motor and Wheel are to be available in the Vehicle class via dependency
injection. As a vehicle usually only has one motor (or engine), the Motor class
is to be defined as a singleton.
2 objects of each of the classes Motor and Wheel are integrated in the Ve-
hicle class. Comparison of the objects is then intended to show that the ob-
jects of the Motor class refer to the same instance whereas the objects of the
Wheel class refer to different instances.
The Vehicle class is likewise integrated in a robot application using depen-
dency injection. An object of the ITaskLogger class is integrated in both the
robot application and the Vehicle class by means of dependency injection.
Integrating the ITaskLogger interface via dependency injection also enables
information from the Vehicle class to be displayed on the smartHMI.
Wheel class:
public class Wheel
{
@Inject
public Wheel() {}
// ...
}
Motor class:
@Singleton
public class Motor
{
@Inject
public Motor() {}
// ...
}
Vehicle class:
public class Vehicle {
@Inject
private ITaskLogger logger;
@Inject
private Wheel frontWheel;
@Inject
private Wheel rearWheel;
@Inject
private Motor motor;
@Inject
private Motor additionalMotor;
// ...
@Inject
private Vehicle() {
}
// ...
}
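Robot application (the surrounding class declaration and the logger injection are a sketch; the class name is an assumption, and the ITaskLogger is injected here because the description above states that it is integrated in the robot application as well):
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private ITaskLogger logger;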
@Inject
private Vehicle myNewCar;
@Override
public void initialize() {
myNewCar.setName("Isolde");
// ...
}
@Override
public void run() {
logger.info("Name of vehicle:" + myNewCar.getName());
myNewCar.setCarStatus();
myNewCar.printCarStatus();
}
}
The screenshot (>>> Fig. 15-7 ) shows the information displayed on the smartHMI when the robot application is executed. Besides the displays relating
to the robot application it also contains information from the Vehicle class.
This was made possible through integration of the ITaskLogger interface by
means of dependency injection.
Instances of Wheel
The compared objects are not identical. The result of the ELSE branch
was displayed on the smartHMI and the names of the 2 objects are differ-
ent.
Instances of Motor
The result of the IF branch was displayed on the smartHMI. As both ob-
jects refer to the same instance due to the @Singleton annotation, the
name is changed twice and corresponds to the one last set (here “Addi-
tionalMotor”).
Methods which request data from a frame generally return an object of the
Vector class (package: com.kuka.roboticsAPI.geometricModel.math). The
components of the vector can be requested individually.
Method Description
getX() Return value type: double
Requests the X component of the vector
getY() Return value type: double
Requests the Y component of the vector
getZ() Return value type: double
Requests the Z component of the vector
get(index) Return value type: double
Requests the components determined by the index param-
eter
Values of index (type: int):
0: X component of the vector
1: Y component of the vector
2: Z component of the vector
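Example (a sketch; it uses the getExternalForceTorque(…) request described later in this chapter to obtain a Vector):
Vector force = robot.getExternalForceTorque(robot.getFlange()).getForce();
double fx = force.getX(); // X component
double fz = force.get(2); // Z component, requested via the index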
Certain ports are enabled on the robot controller for communication with ex-
ternal devices via UDP or TCP/IP.
The following port numbers (client or server socket) can be used in a robot ap-
plication:
30000 to 30010
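Example (a minimal sketch using the standard java.net API; the port 30001 is chosen arbitrarily from the enabled range):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
// ...
try (DatagramSocket socket = new DatagramSocket(30001)) { // port within the enabled range
byte[] buffer = new byte[1024];
DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
socket.receive(packet); // blocks until a datagram arrives from the external device
// process packet.getData() ...
} catch (java.io.IOException e) {
// handle communication errors
}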
Description In Sunrise, motion commands can be used for all movable objects of a station.
A movable object can be a robot, for example, but also a tool which is attached
to the robot flange or a workpiece held by a tool (e.g. a gripper).
Motion commands can be executed synchronously and asynchronously. The
following methods are available for this:
move(…) for synchronous execution
Synchronous means that the motion commands are sent in steps to the
real-time controller and executed. The further execution of the program is
interrupted until the motion has been executed. Only then is the next com-
mand sent.
moveAsync(…) for asynchronous execution
Asynchronous means that the next program line is executed directly after
the motion command is sent. The asynchronous execution of motions is
required for approximating motions, for example.
The way in which the different motion types are programmed is shown by way
of example for the object “robot”.
Motion programming for tools and workpieces is described here:
(>>> 15.10.3 "Moving tools and workpieces" Page 372)
Explanation of the syntax
Element Description
Object Object of the station which is being moved
The variable name of the object declared and initialized in
the application is specified here.
Motion Motion which is being executed
The motion to be executed is defined by the following ele-
ments:
Motion type or block: ptp, lin, circ, spl or spline,
splineJP, batch
Target position
Further optional motion parameters
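Example (a sketch; the frames /P1 and /P2 are assumptions):
robot.move(ptp(getApplicationData().getFrame("/P1"))); // synchronous: program waits here
robot.moveAsync(ptp(getApplicationData().getFrame("/P2"))
.setBlendingRel(0.3)); // asynchronous: program execution continues immediately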
15.6.2 PTP
Description Executes a point-to-point motion to the end point. The coordinates of the end
point are absolute.
The end point can be programmed in the following ways:
Insert a frame from the application data in a motion instruction.
Create a frame in the program and use it in the motion instruction.
The redundancy information for the end point – Status, Turn and re-
dundancy angle – must be correctly specified. Otherwise, the end
point cannot be correctly addressed.
Specify the angles of axes A1 … A7. All axis values must always be spec-
ified.
Explanation of the syntax
Element Description
End point Path of the frame in the frame tree or variable name of the
frame (if created in the program)
A1 … A7 Axis angles of axes A1 … A7 (type: double; unit: rad)
Motion parameters Further motion parameters, e.g. velocity or acceleration
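Examples (sketches; the frame /P10 and the axis angles shown are assumptions):
PTP motion to a frame from the application data:
robot.move(ptp(getApplicationData().getFrame("/P10")));
PTP motion specified via the angles of axes A1 … A7 (in rad):
robot.move(ptp(0.0, Math.toRadians(30), 0.0, Math.toRadians(-60),
0.0, Math.toRadians(90), 0.0));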
15.6.3 LIN
Description Executes a linear motion to the end point. The coordinates of the end point are
Cartesian and absolute.
The end point can be programmed in the following ways:
Insert a frame from the application data in a motion instruction.
Create a frame in the program and use it in the motion instruction.
Explanation of the syntax
Element Description
End point Path of the frame in the frame tree or variable name of the
frame (if created in the program)
The redundancy information for the end point – Status and
Turn – are ignored in the case of LIN (and CIRC) motions.
Only the redundancy angle is taken into account.
Motion parameters Further motion parameters, e.g. velocity or acceleration
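Example (a sketch; the frame /P11 is an assumption, the velocity parameter is optional):
robot.move(lin(getApplicationData().getFrame("/P11")).setCartVelocity(150.0));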
15.6.4 CIRC
Description Executes a circular motion. An auxiliary point and an end point must be spec-
ified in order for the controller to be able to calculate the circular motion. The
coordinates of the auxiliary point and end point are Cartesian and absolute.
The auxiliary point and end point can be programmed in the following ways:
Insert a frame from the application data in a motion instruction.
Create a frame in the program and use it in the motion instruction.
Explanation of the syntax
Element Description
Auxiliary point Path of the frame in the frame tree or variable name of the
frame (if created in the program)
The redundancy information for the end point – Status,
Turn and redundancy angle – are ignored.
End point Path of the frame in the frame tree or variable name of the
frame (if created in the program)
The redundancy information for the end point – Status and
Turn – are ignored in the case of CIRC (and LIN)
motions. Only the redundancy angle is taken into account.
Motion parameters Further motion parameters, e.g. velocity or acceleration
Examples CIRC motion to the end frame “/Table/P4” via the auxiliary frame “/Table/P3”:
robot.move(circ(getApplicationData().getFrame("/Table/P3"),
getApplicationData().getFrame("/Table/P4")));
Description Executes a linear motion to the end point. The coordinates of the end point are
relative to the end position of the previous motion, unless this previous motion
is terminated by a break condition. In this case, the coordinates of the end
point are relative to the position at which the motion was interrupted.
In a relative motion, the end point is offset as standard in the coordinate sys-
tem of the moved frame. Another reference coordinate system in which to ex-
ecute the relative motion can optionally be specified. The coordinates of the
end point then refer to this reference coordinate system. This can for example
be a frame created in the application data or a calibrated base.
The end point can be programmed in the following ways:
Enter the Cartesian offset values individually.
Use a frame transformation of type Transformation. The frame transforma-
tion has the advantage that the rotation can also be specified in degrees.
Explanation of the syntax
Element Description
x, y, z Offset in the X, Y and Z directions (type: double, unit: mm)
a, b, c Rotation about the Z, Y and X axes (type: double)
The unit depends on the method used:
Offset values and Transformation.ofRad: rad
Transformation.ofDeg: degrees
Reference system Type: AbstractFrame
Reference coordinate system in which the motion is executed
Examples The moving frame is the TCP of a gripper. This TCP moves 100 mm in the X
direction and 200 mm in the negative Z direction in the tool coordinate system
from the current position. The orientation of the TCP does not change.
gripper.getFrame("/TCP2").move(linRel(100, 0, -200));
The robot moves 10 mm from the current position in the coordinate system of
the P1 frame. The robot additionally rotates 30° about the Z and Y axes of the
coordinate system of the P1 frame.
robot.move(linRel(Transformation.ofDeg(10, 10, 10, 30, 30, 0),
getApplicationData().getFrame("/P1")));
15.6.6 MotionBatch
Description Several individual motions can be grouped in a MotionBatch and thus trans-
mitted to the robot controller at the same time. As a result, motions can be ap-
proximated within the MotionBatch.
The motion parameters, e.g. velocity, acceleration, orientation control, etc.
can be programmed for the entire batch or per motion.
Both variants can appear together, e.g. to assign another parameter value to
an individual motion than to the batch.
Syntax Object.move(batch(
Motion,
Motion,
…
Motion,
Motion
)<.Motion parameter>);
Explanation of the syntax
Element Description
Object Object of the station which is being moved
The variable name of the object declared and initialized in
the application is specified here.
Motion Motion with or without motion parameters
ptp, lin, circ or spline
Motion parameters Motion parameters which are programmed at the end of the batch apply to the entire batch.
Only axis-specific motion parameters can be programmed!
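Example (a sketch; the frames are assumptions; the relative axis velocity set at the end applies to the entire batch):
robot.move(batch(
ptp(getApplicationData().getFrame("/P1")),
lin(getApplicationData().getFrame("/P2")).setBlendingCart(20.0),
lin(getApplicationData().getFrame("/P3"))
).setJointVelocityRel(0.3));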
A collision can be avoided by inserting a LIN segment before the work sur-
face. Observe the recommendations for the LIN-SPL-LIN transition.
(>>> 14.6.3 "LIN-SPL-LIN transition" Page 320)
Avoid using SPL segments if the robot moves near the workspace limit. It
is possible to exceed the workspace limit with SPL, even though the robot
can reach the end frame in another motion type or by means of jogging.
Description A CP spline block can be used to group together several SPL, LIN and/or
CIRC segments to an overall motion.
A spline block must not include any other instructions, e.g. variable assign-
ments or logic statements.
The motion parameters, e.g. velocity, acceleration, orientation control, etc.
can be programmed for the entire spline block or per segment. Both variants
can appear together, e.g. to assign a different parameter value to an individual
segment than to the block.
Segment,
…
Segment,
Segment
)<.Motion parameter>;
Explanation of the syntax
Element Description
Name Name of the spline block
Segment Motion with or without motion parameters
spl, lin or circ
Motion parameters Motion parameters which are programmed at the end of the spline block apply to the entire spline block.
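Example (a sketch; the frames are assumptions):
Spline mySpline = new Spline(
spl(getApplicationData().getFrame("/P1")),
spl(getApplicationData().getFrame("/P2")),
lin(getApplicationData().getFrame("/P3"))
).setCartVelocity(200.0);
robot.move(mySpline);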
Description A JP spline block can be used to group together several PTP segments as an
overall motion.
A spline block must not include any other instructions, e.g. variable assign-
ments or logic statements.
The motion parameters, e.g. velocity, acceleration, etc. can be programmed
for the entire spline block or per segment. Both variants can appear together,
e.g. to assign a different parameter value to an individual segment than to the
block.
Segment,
…
Segment,
Segment
)<.Motion parameter>;
Explanation of the syntax
Element Description
Name Name of the spline block
Segment PTP motion with or without motion parameters
Motion parameters Motion parameters which are programmed at the end of the spline block apply to the entire spline block.
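Example (a sketch; the frames are assumptions):
SplineJP myJPSpline = new SplineJP(
ptp(getApplicationData().getFrame("/P1")),
ptp(getApplicationData().getFrame("/P2"))
).setJointVelocityRel(0.4);
robot.move(myJPSpline);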
Description The spline motion programmed in a spline block is used as the motion type in
the motion instruction.
Explanation of the syntax
Element Description
Object Object of the station which is being moved
The variable name of the object declared and initialized in
the application is specified here.
Spline block Name of the spline block
Example robot.move(mySpline);
The required motion parameters can be added in any order to the motion in-
struction. Dot operators and “set” methods are used for this purpose.
Method Description
setCartVelocity(…) Absolute Cartesian velocity (type: double, unit: mm/s)
> 0.0
This value specifies the maximum Cartesian velocity at which the robot
may move during the motion. Due to limitations in path planning, the
maximum velocity may not be reached and the actual velocity may be
lower.
If no velocity is specified, the motion is executed with the fastest possi-
ble velocity.
Note: This parameter cannot be set for PTP motions.
setJointVelocityRel(…) Axis-specific relative velocity (type: double, unit: %)
0.0 … 1.0
Refers to the maximum value of the axis velocity in the machine data.
(>>> 15.8.1 "Programming axis-specific motion parameters" Page 362)
setCartAcceleration(…) Absolute Cartesian acceleration (type: double, unit: mm/s²)
> 0.0
If no acceleration is specified, the motion is executed with the fastest
possible acceleration.
Note: This parameter cannot be set for PTP motions.
setJointAccelerationRel(…) Axis-specific relative acceleration (type: double, unit: %)
0.0 … 1.0
Refers to the maximum value of the axis acceleration in the machine data.
(>>> 15.8.1 "Programming axis-specific motion parameters" Page 362)
Method Description
setCartJerk(…) Absolute Cartesian jerk (type: double, unit: mm/s³)
> 0.0
If no jerk is specified, the motion is executed with the fastest possible
change in acceleration.
Note: This parameter cannot be set for PTP motions.
setJointJerkRel(…) Axis-specific relative jerk (type: double, unit: %)
0.0 … 1.0
Refers to the maximum value of the axis-specific change in acceleration
in the machine data.
(>>> 15.8.1 "Programming axis-specific motion parameters" Page 362)
setBlendingRel(…) Relative approximation distance (type: double)
0.0 … 1.0
The relative approximation distance is the furthest distance before the
end point at which approximate positioning can begin. If “0.0” is set, the
approximation parameter does not have any effect.
The maximum distance (= 1.0) is always the length of the individual
motion or the length of the last segment in the case of splines. For
motions which are not commanded within a spline, only the range
between 0% and 50% is available for approximate positioning. In this
case, if a value greater than 50% is parameterized, approximate posi-
tioning nevertheless begins at 50% of the block length.
setBlendingCart(…) Absolute approximation distance (type: double, unit: mm)
≥ 0.0
The absolute approximation distance is the furthest distance before the
end point at which approximate positioning can begin. If “0.0” is set, the
approximation parameter does not have any effect.
setBlendingOri(…) Orientation parameter for approximate positioning (type: double, unit:
rad)
≥ 0.0
Approximation starts, at the earliest, when the absolute difference of the
dominant orientation angle for the end orientation falls below the value
set here. If “0.0” is set, the approximation parameter does not have any
effect.
setOrientationType(…) Orientation control (type: Enum)
Constant
Ignore
OriJoint
VariableOrientation (default)
(>>> 14.9 "Orientation control with LIN, CIRC, SPL" Page 324)
setOrientationReferenceSystem(…) Only relevant for CIRC motions: Reference system of orientation control (type: Enum)
Base
Path
(>>> 14.9.1 "CIRC – reference system for the orientation control"
Page 326)
Axis A5 moves at 50%, all other axes move at 20% of maximum velocity:
double[] velRelJoints = {0.2, 0.2, 0.2, 0.2, 0.5, 0.2, 0.2};
robot.move(ptp(getApplicationData().getFrame("/P1"))
.setJointVelocityRel(velRelJoints));
Axis A4 moves at 50% of maximum velocity, all other axes move at maximum
velocity:
robot.move(ptp(getApplicationData().getFrame("/P1"))
.setJointVelocityRel(JointEnum.J4, 0.5));
Description The robot can be guided using a hand guiding device. Manual guidance mode
can be switched on in the application using the motion command handGuid-
ing(). Manual guidance begins at the actual position which was reached before
the mode was switched on.
If Manual guidance mode is used in the application, at least 2 ESM states must
be configured:
ESM state for manual guidance motion
The ESM state contains the AMF Hand guiding device enabling inactive,
which checks whether the enabling signal has not been issued on the
hand guiding device.
(>>> 13.12.5 "Manual guidance with enabling device and velocity moni-
toring" Page 246)
It is advisable to configure a safety stop 1 (path-maintaining) as the stop
reaction for the AMF Hand guiding device enabling inactive. Following a
Preparation The handGuiding() motion command belongs to the HRCMotions class. The
class must be manually inserted into the import section of the robot applica-
tion. The following line must be programmed:
import static com.kuka.roboticsAPI.motionModel.HRCMotions.*;
Syntax Object.move(handGuiding());
Explanation of the syntax
Element Description
Object Object of the station which is being moved
The variable name of the object declared and initialized in
the application is specified here.
Example 1 robot.setESMState("1");
2 robot.move(ptp(getApplicationData().getFrame("/P1")));
3 robot.setESMState("2");
4 robot.move(handGuiding());
5 robot.setESMState("1");
6 robot.move(ptp(getApplicationData().getFrame("/P2")));
Line Description
1 ESM state 1 is activated for the robot. In this example, ESM
state 1 monitors the operator safety.
2 Frame "/P1" is addressed with a PTP motion.
3 ESM state 2 is activated for the robot. ESM state 2 monitors
the enabling switch on the hand guiding device.
If a signal has not yet been issued via the switch, the config-
ured stop reaction is triggered and the application is paused.
4 Manual guidance mode is activated.
The robot can be guided manually as soon as the enabling
switch on the hand guiding device is pressed and held in the
center position.
When the signal for manual guidance has been cancelled, e.g.
by releasing the enabling switch, Manual guidance mode has
ended. The stop reaction configured for ESM state 2 is trig-
gered and motion execution is paused.
5 ESM state 1 is activated for the robot. In this example, ESM
state 1 monitors the operator safety.
Motion execution remains paused. The Start key must be
pressed in order to resume the application.
6 Frame "/P2" is addressed with a PTP motion.
For manual guidance, axis limits and velocity limits can be programmed. The
required motion parameters can be added in any order to the motion com-
mand handGuiding(). Dot operators and “set” methods are used for this pur-
pose.
Axis limits:
Method Description
setJointLimitsEnabled(…) Activation of the axis limits for manual guidance (type: boolean[])
true: Axis limit active
false: Axis limit not active
Note: This method refers to the limits that the user can set using the
methods setJointLimitsMax(…) and setJointLimitsMin(…). The outer-
most axis limits of the robot (software limit switches) are always moni-
tored.
setJointLimitsMax(…) Upper axis limits (type: double[]; unit: rad)
setJointLimitsMin(…) Lower axis limits (type: double[]; unit: rad)
Note: The lower axis limit must always be lower than the corresponding
upper axis limit.
setJointLimitViolationFreezesAll(…) Response if an axis limit is reached (type: boolean)
true: If an axis limit is reached, all axes involved in the motion work
against a further motion towards the limit switch.
false: If an axis limit is reached, only the affected axis works against
a further motion towards the limit switch.
Default: true
If this value is not set, the default value is automatically applied.
setPermanentPullOnViolationAtStart(…) Response if an axis limit is already exceeded at the start of manual guidance (type: boolean)
true: When the enabling signal for manual guidance is issued, the
axis is moved automatically out of the non-permissible range. When
the permissible range is reached, the motion is stopped automatical-
ly.
false: When the enabling signal for manual guidance is issued, the
axis does not move. It must be moved out of the non-permissible
range manually.
Default: false
If this value is not set, the default value is automatically applied.
Velocity limits:
Method Description
setCartVelocityLimit(…) Cartesian velocity limit (type: double, unit: mm/s)
> 0.0
Default: 500.0
If the velocity limit is exceeded, increasing torques act against the
motion and cushion it.
setJointVelocityLimit(…) Velocity limitation for all axes (type: double; unit: rad/s)
> 0.0
Default: 1.0
If the velocity limit is exceeded, increasing torques act against the
motion and cushion it.
Description The motion range of each axis is limited by means of software limit switches.
For manual guidance, additional axis limitations can be programmed, thereby
further restricting the motion range:
setJointLimitsMin(…), setJointLimitsMax(…)
Define a lower and upper axis limit that must be specified individually for
each axis.
Defining a lower and an upper axis limit results in a permissible axis range,
in which manual guidance is freely possible, and 2 non-permissible axis
ranges between the upper/lower axis limit and the respective software limit
switch.
setJointLimitsEnabled(…)
The defined axis limitation must be activated or deactivated individually for
each axis.
If one of the axis limits is reached during manual guidance, a virtual spring
damper system is tensioned. This generates a resistance against any further
motion towards the limit switch, with the resistance becoming greater the near-
er an axis comes to the limit switch.
The following applies as standard:
If an axis limit is reached, all axes involved in the motion work against a
further motion towards the limit switch.
With setJointLimitViolationFreezesAll(false), it is possible
to define that only the axis that has reached the limit works against a fur-
ther motion towards the limit switch.
If an axis limit is already exceeded at the start of manual guidance, the af-
fected axis must be moved manually out of the non-permissible range.
With setPermanentPullOnViolationAtStart(true), it is possible
to define that the axis is to move automatically out of the non-permissible
range.
Example @Inject
private LBR robot;
private HandGuidingMotion motion;
// ...
motion = handGuiding()
.setJointLimitsMax(+1.407, +0.872, +0.087, -0.785, +0.087,
+1.571, +0.087)
.setJointLimitsMin(-1.407, +0.175, -0.087, -1.571, -0.087,
-1.571, -0.087)
.setJointLimitsEnabled(false, true, false, true, false,
true, false)
.setJointLimitViolationFreezesAll(false)
.setPermanentPullOnViolationAtStart(true);
robot.move(motion);
Description In addition to the axis limitation, velocity limits that may not be exceeded can
be programmed for manual guidance:
setCartVelocityLimit(…)
Defines the maximum permissible Cartesian velocity. It is monitored both
at the robot flange and at the TCP.
setJointVelocityLimit(…)
Defines the maximum permissible axis velocity for all axes.
As soon as the operator exceeds one of these maximum velocity limits during
manual guidance, increasing torques act against the motion and cushion it.
Axis velocity reduction before axis limitation:
If the programmed maximum permissible axis velocity is exceeded near the
axis limits during manual guidance, the axis velocity is continuously reduced
to a minimum axis velocity specified by KUKA. This ensures that the manual
guidance motion is automatically decelerated before the axis limits and the op-
erator can only approach the axis limits at reduced velocity.
The method setJointLimitViolationFreezesAll(…) determines whether only the
velocity of the affected axis is reduced, or the velocity of all axes involved in
the motion.
Example @Inject
private LBR robot;
private HandGuidingMotion motion;
// ...
motion = handGuiding()
.setJointLimitsMax(+1.407, +0.872, +0.087, -0.785, +0.087,
+1.571, +0.087)
.setJointLimitsMin(-1.407, +0.175, -0.087, -1.571, -0.087,
-1.571, -0.087)
.setJointLimitsEnabled(false, true, false, true, false,
true, false)
.setJointVelocityLimit(0.5)
.setCartVelocityLimit(500.0)
.setJointLimitViolationFreezesAll(false)
.setPermanentPullOnViolationAtStart(true);
robot.move(motion);
Data types The data types for the objects in a station are predefined in the RoboticsAPI,
e.g.:
Description Tools and workpieces created in the object templates can be integrated into
robot and background applications using dependency injection. The name of
the template is specified by means of an additional annotation.
Syntax @Inject
@Named("Template name")
private Data type Object name;
Explanation of the syntax
Element Description
@Inject Annotation for integrating resources by means of depen-
dency injection
@Named Annotation for specifying the object template to be used
Template name Name of the object template as specified in the Object templates view
private The keyword designates locally valid variables. Locally
valid means that the variable can only be used by the
corresponding class.
Data type Class of the resource (Tool or Workpiece) that is to be inte-
grated
Object name Name of the identifier, as it is to be used in the application
The annotation @Named may be omitted for tools if there is only one
single object template for a tool. The annotation is always required for
workpieces.
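Example (the surrounding class declaration is a sketch added for completeness; the class name is an assumption):
public class ExampleApplication extends RoboticsAPIApplication {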
@Inject
@Named("GuidingTool")
private Tool guidingTool;
@Inject
@Named("Pen")
private Workpiece pen;
@Override
public void initialize() {
// initialize your application here
}
@Override
public void run() {
// your application execution starts here
}
}
Workpieces are indirectly attached to the robot via a tool or another work-
piece.
As soon as a tool or workpiece is attached to the robot via the method attach-
To(…), the load data from the robot controller are taken into account. In addi-
tion, all frames of the attached object can be used for the motion programming.
(>>> 9.3.8 "Load data" Page 161)
Description Via the method attachTo(…), the origin frame of a tool is attached to the flange
of a robot used in the application. The robot flange is accessed via the method
getFlange().
Syntax Tool.attachTo(Robot.getFlange());
Explanation of the syntax
Element Description
Tool Name of the tool variable
Robot Name of the robot variable
@Inject
private LBR robot;
@Inject
private Tool guidingTool;
// ...
@Override
public void initialize() {
// ...
guidingTool.attachTo(robot.getFlange());
// ...
Description As standard, the origin frame of the workpiece is used to attach it to the frame
of another object.
However, every other frame created for a workpiece can also be used as a ref-
erence point for attaching to another object.
Frames for tools and workpieces can be created in the Object templates
view.
Explanation of the syntax
Element Description
Workpiece Name of the workpiece variable
Reference Reference frame of the workpiece which is used for the
frame attachment to the other object
End frame Frame of the object to which the reference frame of the
workpiece is attached
After attachment, the reference frame of the workpiece and the end
frame of the object it is attached to coincide.
Example 1 A pen is attached to the gripper frame via its origin frame.
@Inject
private LBR robot;
@Inject
private Tool gripper;
@Inject
@Named("Pen")
private Workpiece pen;
// ...
@Override
public void run() {
// ...
pen.attachTo(gripper.getFrame("/TCP1"));
// ...
Example 2 A 2nd frame is defined at the tip of the gripper. If this is to be used to grip the
pen, attachment via the origin frame of the pen is not possible. For this pur-
pose, a grip point was created on the pen. This is used as the reference frame
for the attachment to the gripper.
@Inject
private LBR robot;
@Inject
private Tool gripper;
@Inject
@Named("Pen")
private Workpiece pen;
// ...
@Override
public void run() {
// ...
pen.getFrame("/Grip").attachTo(gripper.getFrame("/TCP2"));
// ...
15.10.2.3 Detaching objects
Description If a tool is removed or a workpiece is set down, the object must also be de-
tached in the application. The method detach() is used for this purpose.
Syntax Object.detach();
Explanation of the syntax
Element Description
Object Name of the object variable
Description Every movable object in a station can be moved with move(…) and move-
Async(…). The reference point of the motion is dependent on the object type:
If a robot is moved, the reference point is always the robot flange center
point.
If a tool or workpiece is moved, the standard reference point is the default
motion frame which was defined for this object in the Object templates
view.
(>>> 9.3.7 "Defining a default motion frame" Page 160)
In this case, the tool or workpiece is linked directly to the motion command
via the variable name declared in the application.
However, any other frame created for a tool or workpiece can also be pro-
grammed as a reference point of the motion.
In this case, using the method getFrame(…), the path to the frame of the
object used for the motion must be specified (on the basis of the origin
frame of the object).
Syntax To use the default frame of the object for the motion:
Object.move(Motion);
To use a different frame of the object for the motion:
Object.getFrame("Moved frame").move(Motion);
Explanation of the syntax
Element Description
Object Object of the station which is being moved
The variable name of the object declared and initialized in
the application is specified here.
Moved frame Path to the frame of the object which is used for the motion
Motion Motion which is being executed
Examples The PTP motion to point P1 is executed with the default frame of the gripper.
gripper.attachTo(robot.getFlange());
gripper.move(ptp(getApplicationData().getFrame("/P1")));
The PTP motion to point P1 is executed with a different frame than the default
frame of the gripper, here TCP1:
gripper.attachTo(robot.getFlange());
gripper.getFrame("/TCP1").move(ptp(getApplicationData().getFrame("/P1
")));
A pen is gripped. The next motion is a PTP motion to point P20. This point is
executed with the default frame of the workpiece “pen”.
gripper.attachTo(robot.getFlange());
// ...
pen.attachTo(gripper.getFrame("/TCP1"));
pen.move(ptp(getApplicationData().getFrame("/P20")));
Description Tools and workpieces created in the object templates are based on the class-
es Tool and Workpiece. Specific properties or functions that tools and work-
pieces generally have are not considered by these basic classes. For a
gripper, examples might include functions for opening and closing.
Such specific object properties and functions can be defined in separate object
classes. The following steps are required in order to be able to use the user-
defined object classes in the same way as the basic classes in applications:
Step Description
1 Derive a new object class from a suitable basic class:
Basic class for tools:
com.kuka.roboticsAPI.geometricModel.Tool
Basic class for workpieces:
com.kuka.roboticsAPI.geometricModel.Workpiece
The constructor of the created object class must have the fol-
lowing properties:
Visibility level public
Transfer parameter of type String (name of the object tem-
plate is transferred)
Must not be annotated with @Inject
2 Define object properties and functions in the new object class.
3 In the object templates, assign the new object class to the
desired objects. For this, enter the full identifier (Package
name.Class name) of the object class under Template class
in the Properties view.
Note: Object templates that use an object class derived from
a basic class are integrated into an application such as this by
means of dependency injection.
Singletons Object classes that are derived from Tool and used in a Sunrise project as de-
scribed here are always integrated as singletons. This means that each object
annotated with the type of the object class refers to the same instance.
As standard, object classes that are derived from Workpiece are not single-
tons. When annotating, a new instance is therefore created every time. Work-
pieces can be made singletons by placing the annotation @Singleton before
the header of the class.
7. Confirm the selection with OK. The name of the basic class is now dis-
played in the Superclass: box.
8. Click on Finish. The Java package with the newly created class is inserted
into the source folder of the Sunrise project and opened in the editor area.
9. Create a constructor with the desired properties.
10. The required variables and methods can now be defined.
Example Step 1:
For a gripper, the object class Gripper is created using the procedure de-
scribed above. The class Gripper is derived from the basic class Tool and ex-
pands the basic class to include the functions for opening and closing the
gripper.
1 package tools;
2 import com.kuka.roboticsAPI.geometricModel.Tool;
3 public class Gripper extends Tool {
4 @Inject
5 private ITaskLogger logger;
6
7 public Gripper(String name) {
8 super(name);
9 }
10
11 /**
12 * Opens the gripper
13 */
14 public void openGripper(){
15 // ...
16 logger.info("Gripper is open.");
17 }
18
19 /**
20 * Closes the gripper
21 */
22 public void closeGripper(){
23 // ...
24 logger.info("Gripper is closed.");
25 }
26 }
Line Description
1 Name of the Java package that contains the class Gripper
4…5 Integration of the ITaskLogger interface by means of depen-
dency injection
7…9 Standard constructor of the class Gripper (adopted from Tool)
14 … 17 Method openGripper() for opening the gripper
22 … 25 Method closeGripper() for closing the gripper
16, 24 Information displayed on smartHMI with the aid of the ITask-
Logger interface
Step 2:
An object template with the name “ExampleGripper” is created for the gripper.
The object class Gripper is assigned to the object template:
Entry under Template class in the Properties view: tools.Gripper
The name of the Java package (here “tools”) that contains the class Grip-
per must be specified.
Step 3:
The object class Gripper and the corresponding functions can be used in the
robot application.
1 public class ExampleApplication extends RoboticsAPIApplication {
2 @Inject
3 private LBR robot;
4 @Inject
5 private Gripper gripper;
6
7 @Override
8 public void initialize() {
9 // initialize your application here
10 // ...
11 gripper.attachTo(robot.getFlange());
12 // ...
13 }
14
15 @Override
16 public void run() {
17 // your application execution starts here
18 // ...
19 gripper.openGripper();
20 gripper.move(lin(getFrame("/GripPos")));
21 gripper.closeGripper();
22 // ...
23 }
24 }
Line Description
4…5 A tool of type Gripper is integrated.
The tool has the functions defined in the object class Gripper.
11 The tool is attached to the robot flange.
19 … 21 The functions defined in the object class Gripper are used to
program a gripping process:
Open gripper, move to grip position, close gripper.
Description If a workpiece load-specific AMF is used in the safety configuration and, at the
same time, workpieces are picked up, the user must use the setSafetyWork-
piece(…) method to communicate to the safety controller which workpiece is
currently being used. Workpiece load-specific AMFs include:
TCP force monitoring
Base-related TCP force component
Collision detection
(>>> 9.3.10 "Safety-oriented use of workpieces" Page 166)
The setSafetyWorkpiece(…) method belongs to the LBR class and can be
used in both robot applications and background applications.
setSafetyWorkpiece(…) is used to transfer the workpiece load data to the
safety controller. If a workpiece is set down again and there are no longer any
workpiece load data to be taken into consideration, the value null must be
transferred.
The workpiece load data transferred to the safety controller using set-
SafetyWorkpiece(…) are not safety-oriented. For this reason, in the
event of an error, the AMF Collision detection may use load data
which deviate from the actual workpiece load. These deviations are misinter-
preted as external axis torques.
Syntax robot.setSafetyWorkpiece(Workpiece);
Explanation of the syntax
Element Description
robot Type: LBR
Name of the robot
Workpiece Type: PhysicalObject
Workpiece whose load data are to be transferred to the
safety controller
If no workpiece is to be taken into consideration any longer,
null must be transferred.
Example A safety-oriented tool and 2 workpieces are created in the object templates.
The tool contains the frame “GrippingPoint”, which serves as a gripping point
for workpieces and which is selected as the standard frame for motions.
In the application, the workpiece “ComponentA” is picked up and set down.
The workpiece “ComponentB” is then picked up. All load changes are to be
taken into consideration in both the safety-oriented and non-safety-oriented
part of the robot controller.
public class ChangeOfLoadExample extends RoboticsAPIApplication {
@Inject
private LBR robot;
@Inject
private Tool gripper;
@Inject
@Named("ComponentA")
private Workpiece componentA;
@Inject
@Named("ComponentB")
private Workpiece componentB;
@Override
public void initialize() {
// ...
// attach gripper to robot flange
gripper.attachTo(robot.getFlange());
}
@Override
public void run() {
// ...
// after pick-up, attach workpiece to set load data for
// motion control
componentA.attachTo(gripper.getDefaultMotionFrame());
// set load data for safety controller
robot.setSafetyWorkpiece(componentA);
// ...
// after putting it down, detach workpiece to no longer
// consider its load for motion control
componentA.detach();
// ...
// pick-up of second workpiece
componentB.attachTo(gripper.getDefaultMotionFrame());
robot.setSafetyWorkpiece(componentB);
// ...
}
}
To use the inputs/outputs of an I/O group in the application, the user must in-
tegrate the I/O group by means of dependency injection.
Item Description
1 com.kuka.generated.ioAccess Java package
The class created for an I/O group and the methods of this class
are saved in the package.
The Java class NameIOGroup.java (here: LampSwitchIOGroup.java) contains the following elements:
Class name of the I/O group: NameIOGroup
Constructor for assigning the robot controller to the I/O group:
NameIOGroup(Controller)
get and set methods for every configured output: getOutput(),
setOutput(Value)
“Get” method for every configured input: getInput()
2 generatedFiles folder
IODescriptions folder
The data in an I/O group are saved in an XML file. The XML file
can be displayed but not edited.
3 IOTemplates folder
The data of an I/O group saved as a template are saved in an
XML file. The XML file can be displayed but not edited.
A template can be copied into another Sunrise project in order to
be used there. The template can then be imported into WorkVi-
sual, edited there and re-exported.
(>>> 11.5.8 "Importing an I/O group from a template" Page 195)
(>>> 11.5.7 "Exporting an I/O group as a template" Page 195)
Description I/O groups can be integrated into robot and background applications by means
of dependency injection. As a result, the Java package com.kuka.generat-
ed.ioAccess is automatically imported with the classes and methods of the I/O
group.
Syntax @Inject
private Data type Group name;
Explanation of the syntax
Element Description
@Inject Annotation for integrating resources by means of depen-
dency injection
private The keyword designates locally valid variables. Locally
valid means that the variable can only be used by the
corresponding class.
Data type Class of the resource (I/O group) that is to be integrated
Class name of the I/O group:
NameIOGroup
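Group name Name of the identifier of the I/O group, as it is to be used in the application
Example (the surrounding class declaration is a sketch added for completeness; the class name is an assumption, the generated class LampSwitchIOGroup is taken from the table above and the identifier switchLamp matches the “set” method example further below):
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private LampSwitchIOGroup switchLamp;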
@Override
public void initialize() {
// initialize your application here
}
@Override
public void run() {
// your application execution starts here
}
}
Description The “get” method of an input/output is used to request the state of the in-
put/output.
Explanation of the syntax
Element Description
Group name Name of the identifier of the I/O group
Input Name of the input (as defined in WorkVisual)
Output Name of the output (as defined in WorkVisual)
Example The state of the switch at input “Switch1” and of the lamp at output “Lamp1” is
requested.
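A sketch of the request, assuming the identifier switchLamp and the generated getter names getSwitch1() and getLamp1() (following the getInput()/getOutput() naming convention):
boolean switchState = switchLamp.getSwitch1();
boolean lampState = switchLamp.getLamp1();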
Description The “set” method of an output is used to change the value of the output.
No “set” methods are available for inputs. They can only be read.
Explanation of the syntax
Element Description
Group name Name of the identifier of the I/O group
Output Name of the output (as defined in WorkVisual)
Value Value of the output
The data type of the value to be transferred depends on
the output type.
Example The lamp at output “Lamp1” is switched on and then switched off after
2000 ms.
public void run() {
// ...
switchLamp.setLamp1(true);
ThreadUtil.milliSleep(2000);
switchLamp.setLamp1(false);
// ...
}
Description Certain robot types, e.g. the LBR iiwa, have a joint torque sensor in each axis
which measures the torque acting on the axis. The interface ITorqueSensiti-
veRobot contains the methods required for requesting sensor data from the ro-
bot.
getMeasuredTorque()
The measured torque values can be requested and evaluated in the appli-
cation via the method getMeasuredTorque().
getExternalTorque()
Frequently, it is not the pure measured values which are of interest but
rather only the externally acting torques, without the component resulting
from the weight of the robot and mass inertias during motion. These values
are referred to as external torques. These external torques can be accessed
via the method getExternalTorque().
getSingleTorqueValue(…), getTorqueValues()
The methods getMeasuredTorque() and getExternalTorque() return an object
of the type TorqueSensorData containing the torque sensor data of all axes.
From this object, it is then possible to request either all values as an array
with getTorqueValues() or a single axis value with getSingleTorqueValue(…).
When requesting the torque sensor data with Java, no real-time be-
havior is available. This means that the data supplied by the system
in the program were already created several milliseconds earlier.
Explanation of the syntax
Element Description
measuredData Type: TorqueSensorData
Variable for the return value of getMeasuredTorque(). The
return value contains the measured sensor data.
externalData Type: TorqueSensorData
Variable for the return value of getExternalTorque(). The
return value contains the externally acting torques.
robot Type: LBR
Name of the robot from which the sensor data are
requested
Element Description
allValues Type: double[]; unit: Nm
Array with all torque values which are requested from the
sensor data
singleValue Type: double; unit: Nm
Torque value of the axis which is requested from the sen-
sor data
joint Type: JointEnum
Axis whose torque value is to be requested
Example For a specific process step, the measured and externally acting torques are
requested in all axes and saved in an array to be evaluated later. The mea-
sured torque in axis A2 is read and displayed on the smartHMI. For output pur-
poses, a logger object has been integrated with dependency injection.
TorqueSensorData measuredData = robot.getMeasuredTorque();
TorqueSensorData externalData = robot.getExternalTorque();
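Continuation of the example (a sketch; the logger field is assumed to have been injected as described above):
double torqueA2 = measuredData.getSingleTorqueValue(JointEnum.J2);
logger.info("Measured torque A2: " + torqueA2 + " Nm");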
Certain robot types, e.g. an LBR, have a joint torque sensor in each axis which
measures the torque acting on the axis. The robot controller calculates the
Cartesian forces and torques using the measured torques.
The interface IForceSensitiveRobot contains the methods for requesting the
external Cartesian forces and torques currently acting on the robot flange, the
TCP of a tool or any point of a gripped workpiece.
The following points must be taken into consideration:
The Cartesian forces and torques are estimated based on the measured
values of the joint torque sensors.
A force application point must be specified for the calculation. The external
Cartesian forces and torques calculated for the force application point are
only meaningful in terms of the physics involved if there are no external
forces acting on any other points on the robot.
The reliability of the calculated values can decrease considerably in ex-
treme poses, e.g. extended positions or singularities.
The quality and validity of the calculated values can be checked.
When changing the load data, e.g. with the attachTo command, the re-
quest can only be executed after the motion command has been sent to
the robot controller. For this purpose, a null space motion or the motion
command positionHold(…) is sufficient.
Description The method getExternalForceTorque(…) is used by the robot to read the ex-
ternal Cartesian forces and torques currently acting on the robot flange, the
TCP of a tool or any point of a gripped workpiece.
The method receives a frame as the transfer parameter. The transferred frame
is the reference frame for calculating the forces and torques, e.g. the tip of a
probe. The method calculates the externally applied forces and torques for the
position described by the frame.
For a meaningful calculation in terms of the physics involved, the transferred
frame must describe a point which is mechanically fixed to the flange. The giv-
en frame must also be statically connected to the robot flange frame in the
frame structure.
Optionally, a second frame can be transferred to the method as a parameter.
This frame specifies the orientation of a coordinate system in which the forces
and torques are represented.
Explanation of the syntax
Element Description
data Type: ForceSensorData
Variable for the return value of getExternalForce-
Torque(…). The return value contains the calculated Carte-
sian forces and torques.
robot Type: LBR
Name of the robot
measureFrame Type: AbstractFrame
Reference frame for calculation of the Cartesian forces and
torques.
orientationFrame Type: AbstractFrame
Optional: Orientation of the frame in which the forces and
torques are represented.
Examples Requesting the external forces and torques acting on the robot flange:
ForceSensorData data =
robot.getExternalForceTorque(robot.getFlange());
Requesting the external forces and torques acting on the robot flange with the
orientation of the world coordinate system:
ForceSensorData data =
robot.getExternalForceTorque(robot.getFlange(),
World.Current.getRootFrame());
Description The external Cartesian forces and torques requested with getExternalForce-
Torque() can be requested separately from one another. The class ForceSen-
sorData provides the following methods for this:
getForce()
getTorque()
The result of these requests is a vector in each case. The values for each de-
gree of freedom can be requested individually with the methods of the Vector
class.
(>>> 15.4 "Requesting individual values of a vector" Page 352)
Explanation of the syntax
Element Description
force Type: vector (com.kuka.roboticsAPI.geometricModel.math)
Vector with the Cartesian forces which act in the X, Y and Z
directions (unit: N)
torque Type: vector (com.kuka.roboticsAPI.geometricModel.math)
Vector with the Cartesian torques which act about the X, Y
and Z axes (unit: Nm)
data Type: ForceSensorData
Variable for the return value of getExternalForce-
Torque(…). The return value contains the calculated Carte-
sian forces and torques.
Example Requesting the Cartesian force which is currently acting on the robot flange in
the X direction:
ForceSensorData data =
robot.getExternalForceTorque(robot.getFlange());
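Continuation of the example (a sketch):
double forceX = data.getForce().getX();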
Description In unfavorable robot positions, the calculated Cartesian forces and torques
can deviate from the actual forces and torques applied. In particular near sin-
gularities, several of the calculated values are highly unreliable and can be in-
valid. Depending on the axis position, this only applies to some of the
calculated values.
The quality and validity of the calculated values can be evaluated and request-
ed in the program. The class ForceSensorData provides the following meth-
ods for this:
getForceInaccuracy(), getTorqueInaccuracy()
The inaccuracy of the calculated force and torque values can be request-
ed.
The result of these requests is a vector in each case. The values for each
degree of freedom can be requested individually with the methods of the
Vector class.
(>>> 15.4 "Requesting individual values of a vector" Page 352)
Depending on the axis position, the quality of the calculated values for the
individual degrees of freedom may be different. By requesting the individ-
ual values, it is possible to determine the degrees of freedom for which the
calculation of forces and torques in the current pose supplies valid values.
isForceValid(…), isTorqueValid(…)
The validity of the calculated force and torque values can be requested.
A limit value for the maximum permissible inaccuracy up to which the cal-
culated values are still valid is transferred as a parameter for each method.
Explanation of the syntax
Element Description
force Type: vector (com.kuka.roboticsAPI.geometricModel.math)
Vector with the values for the inaccuracy with which the
Cartesian forces acting in the X, Y and Z directions are cal-
culated (unit: N)
torque Type: vector (com.kuka.roboticsAPI.geometricModel.math)
Vector with the values for the inaccuracy with which the
Cartesian torques acting about the X, Y and Z axes are cal-
culated (unit: Nm)
data Type: ForceSensorData
Variable for the return value of getExternalForce-
Torque(…). The return value contains the calculated Carte-
sian forces and torques.
tolerance Type: double; unit: N or Nm
Limit value for the maximum permissible inaccuracy up to
which the calculated Cartesian forces and torques are still
valid
valid Type: boolean
Variable for the return value of isForceValid(…) or
isTorqueValid(…)
true: The inaccuracy value in all Cartesian directions is
less than or equal to the limit value defined with toleran-
ce.
false: The inaccuracy value in one or more Cartesian
directions exceeds the tolerance value
Example A certain statement block should only be executed if the external Cartesian
forces acting along the axes of the flange coordinate system have been calcu-
lated with an accuracy of 20 N or better.
ForceSensorData data =
robot.getExternalForceTorque(robot.getFlange());
if (data.isForceValid(20)){
//do something
}
The axis-specific and Cartesian robot position can be requested in the appli-
cation. It is possible to request the actual and the setpoint position for each.
Method Description
getCommandedCartesianPosition(…) Return value type: Frame
Requests the Cartesian setpoint position
getCommandedJointPosition() Return value type: JointPosition
Requests the axis-specific setpoint position
Method Description
getCurrentCartesianPosition(…) Return value type: Frame
Requests the Cartesian actual position
getCurrentJointPosition() Return value type: JointPosition
Requests the axis-specific actual position
getPositionInformation(…) Return value type: PositionInformation
Requests the Cartesian position information
The return value contains the following information:
Axis-specific actual position
Axis-specific setpoint position
Cartesian actual position
Cartesian setpoint position
Cartesian setpoint/actual value difference (rotational)
Cartesian setpoint/actual value difference (translational)
Description For requesting the axis-specific actual or setpoint position of the robot, the po-
sition of the robot axes is first saved in a variable of type JointPosition.
From this variable, the positions of individual axes can then be requested. The
axis whose position is to be requested can be specified using either its index
or the Enum JointEnum.
Explanation of the syntax
Element Description
position Type: JointPosition
Variable for the return value. The return value contains the
requested axis positions.
robot Type: Robot
Name of the robot from which the axis positions are
requested
value Type: double; unit: rad
Position of the requested axis
axis Type: int or JointEnum
Index or JointEnum of the axis whose position is requested
0 … 11: Axis A1 … Axis A12
JointEnum.J1 … JointEnum.J12: Axis A1 … Axis A12
Example First the axis-specific actual position of the robot and then the position of axis
A3 are requested via the index of the axis. The angle for axis A3 is displayed
in degrees on the smartHMI. For output purposes, a logger object has been
integrated with dependency injection.
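A sketch of the request (the logger field is assumed to have been injected; index 2 corresponds to axis A3):
JointPosition position = robot.getCurrentJointPosition();
double angleA3 = position.get(2);
logger.info("A3: " + Math.toDegrees(angleA3) + " degrees");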
Description It is possible to request the Cartesian actual or setpoint position of the robot
flange as well as any other frame below it. This means every frame of an object
which is attached to the robot flange via the attachTo command, e.g. the TCP
of a tool or the frame of a gripped workpiece.
As standard, the result of the request, i.e. the Cartesian position, refers to the
world coordinate system. Optionally, it is possible to specify another reference
coordinate system relative to which the Cartesian position is requested. This
can for example be a frame created in the application data or a calibrated
base.
The result of the request is saved in a variable of type Frame and contains all
the necessary redundancy information (redundancy angle, Status and Turn).
From this variable, the position (X, Y, Z) and orientation (A, B, C) of the frame
can be requested via the type-specific get methods.
Explanation of the syntax
Element Description
position Type: Frame
Variable for the return value. The return value contains the
requested Cartesian position.
robot Type: Robot
Name of the robot from which the Cartesian position is
requested
frameOnFlange Type: ObjectFrame
Robot flange or a frame subordinated to the flange whose
Cartesian position is requested
referenceFrame Type: AbstractFrame
Reference coordinate system relative to which the Carte-
sian position is requested. If no reference coordinate sys-
tem is specified, the Cartesian position refers to the world
coordinate system.
Examples Cartesian actual position of the robot flange with reference to the world coor-
dinate system:
Frame cmdPos = robot.getCurrentCartesianPosition(robot.getFlange());
Description The Cartesian setpoint/actual value difference (= difference between the pro-
grammed and measured position) can be requested with the getPositionInfor-
mation(…) method.
The result of the request is saved in a variable of type PositionInformation.
From this variable, the translational and rotational setpoint/actual value differ-
ences can be requested separately from each other.
Explanation of the syntax
Element Description
info Type: PositionInformation
Variable for the return value. The return value contains the
requested position information.
robot Type: Robot
Name of the robot from which the position information is
requested
frameOnFlange Type: ObjectFrame
Robot flange or a frame subordinated to the flange whose
position information is being requested
referenceFrame Type: AbstractFrame
Reference coordinate system relative to which the position
information is requested. If no reference coordinate system
is specified, the position information refers to the world
coordinate system.
Element Description
translatoryDiff Type: vector (com.kuka.roboticsAPI.geometricModel.math)
Translational setpoint/actual value difference in the X, Y, Z
directions (type: double, unit: mm)
The offset values for each degree of freedom can be
requested individually with the “get” methods of the Vector
class.
(>>> 15.4 "Requesting individual values of a vector"
Page 352)
rotatoryDiff Type: Rotation (com.kuka.roboticsAPI.geometricModel.math)
Setpoint/actual value difference of the orientation angles A, B, C (type: double, unit: rad)
The offset values for each degree of freedom can be requested individually with the “get” methods of the Rotation class: getAlphaRad(), getBetaRad(), getGammaRad().
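A minimal sketch of such a request relative to the world coordinate system (only the request itself is shown; the individual differences are then read from the returned object via its get methods as described in the table above):
PositionInformation info = robot.getPositionInformation(robot.getFlange());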
By default, the HOME position corresponds to the following axis position:
Axis A1 A2 A3 A4 A5 A6 A7
Pos. 0° 0° 0° 0° 0° 0° 0°
Syntax robot.setHomePosition(home);
Explanation of the syntax
Element Description
robot Type: Robot
Name of the robot to which the new HOME position refers
home Type: JointPosition; unit: rad
1st option: transfer the axis position of the robot in the new
HOME position.
Type: AbstractFrame
2nd option: transfer a frame as the new HOME position.
Note: The frame must contain all redundancy information
so that the axis positions of the robot in the HOME position
are unambiguous. This is the case with a taught frame, for
example.
To transfer the taught frame as the HOME position and move to it with ptpHome():
@Inject
private LBR robot;
// ...
ObjectFrame newHome = getApplicationData().getFrame("/Homepos");
robot.setHomePosition(newHome);
robot.moveAsync(ptpHome());
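For the 1st option, the axis position can be transferred directly as a JointPosition. A minimal sketch (the axis values and the JointPosition constructor with one value per axis are illustrative assumptions):
JointPosition newHomeJoints = new JointPosition(
0.0, Math.toRadians(30), 0.0, Math.toRadians(-60), 0.0, Math.toRadians(90), 0.0);
robot.setHomePosition(newHomeJoints);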
Different system states can be requested from the robot and processed in the
application. The requesting of system states is primarily required when using
a higher-level controller so that the controller can react to changes in state.
Description The following methods of the Robot class are available for requesting the
HOME position:
getHomePosition()
Requests the HOME position currently defined for the robot
isInHome()
Checks whether the robot is currently in the HOME position
Explanation of the syntax
Element Description
homePos Type: JointPosition
Variable for the return value of getHomePosition(). The
return value contains axis angles of the requested HOME
position.
robot Type: Robot
Name of the robot from which the HOME position is
requested
result Type: boolean
Variable for the return value of isInHome(). The return
value is true when the robot is in the HOME position.
Example As long as the robot is not yet in the HOME position, a certain statement block
is to be executed.
@Inject
private LBR robot;
// ...
while(! robot.isInHome()){
//do something
}
Description The method isMastered() is available for requesting the mastering state. The
method belongs to the Robot class.
Explanation of the syntax
Element Description
robot Type: Robot
Name of the robot whose mastering state is requested
result Type: Boolean
Variable for the return value
true: All axes are mastered.
false: One or more axes are unmastered.
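A minimal usage sketch (assuming an injected logger as in the other examples):
if(! robot.isMastered()){
logger.warn("At least one axis is unmastered.");
}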
Description The method isReadyToMove() is available for checking whether the robot is
ready for motion. The method belongs to the Robot class. It returns the value
“true” if the robot is ready to move.
The robot is ready to move if the following conditions are met:
No safety stop is active.
The drives are in an error-free state.
Automatic mode is set.
OR:
In mode T1 or T2, the enabling signal is issued via the smartPAD (enabling
switch in center position).
If the check returns the value “true”, this does not necessarily mean
that the brakes are open and that the robot is under active servo con-
trol.
Explanation of the syntax
Element Description
robot Type: Robot
Name of the robot which is checked as to whether it is
ready for motion
result Type: boolean
Variable for the return value
true: Robot is ready for motion.
false: Robot is not ready for motion.
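A minimal usage sketch:
if(robot.isReadyToMove()){
// motion commands may be executed here
}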
Description There is a notification service of the Controller class in RoboticsAPI which re-
ports changes in the “ready for motion” signal. To register for the service,
transfer an IControllerStateListener object to the Controller attribute in the ro-
bot application. The method addControllerListener(…) is used for this pur-
pose.
The method onIsReadyToMoveChanged(…) is called every time the “ready to
move” signal changes. The reaction to the change can be programmed in the
body of the method onIsReadyToMoveChanged(…).
Syntax kuka_Sunrise_Cabinet.addControllerListener(new
IControllerStateListener() {
...
@Override
public void onIsReadyToMoveChanged(Device device,
boolean isReadyToMove) {
// Reaction to change
}
...
});
Explanation of the syntax
Element Description
kuka_Sunrise_Cabinet Type: Controller
Controller attribute of the robot application (= name of the
robot controller in the application)
Description A robot is active if a motion command is active. This affects both motion com-
mands from the application and jogging commands.
The method hasActiveMotionCommand() is available for checking whether
the robot is active. The method belongs to the Robot class.
The request does not provide any information as to whether the robot is cur-
rently in motion:
If the request returns the value “false” (no motion command active), this
does not necessarily mean that the robot is stationary. For example, robot
activity may be checked directly after a synchronous motion command
with a break condition. If the break condition occurs, the request supplies
the value “false” even though the robot is still being braked and is therefore still moving.
If the request returns the value “true” (motion command active), this does
not necessarily mean that the robot is moving. For example, the request
returns the value “true” if a position-controlled robot executes the motion
command positionHold(…) and is stationary.
Explanation of the syntax
Element Description
robot Type: Robot
Name of the robot whose activity is checked
result Type: boolean
Variable for the return value
true: A motion command is active.
false: No motion command is active.
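A minimal sketch in combination with an asynchronous motion command (the frame name "/P1" is illustrative):
robot.moveAsync(ptp(getApplicationData().getFrame("/P1")));
while(robot.hasActiveMotionCommand()){
// perform other tasks while the motion command is active
}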
Description The state of the following safety signals can be requested and evaluated in an
application:
Active operating mode
Enabling
Local EMERGENCY STOP
External EMERGENCY STOP
“Operator safety” signal
Stop request (safety stop)
Referencing state of position and joint torque sensors
The state of the different safety signals is first requested via the method getSafetyState() and grouped in an object of type ISafetyState.
From this object, the states of individual safety signals can then be requested.
The interface ISafetyState contains the methods required for this.
Explanation of the syntax
Element Description
currentState Type: ISafetyState
Variable for the return value. The return value contains the
state of the safety signals at the time of requesting with
getSafetyState().
Note: This does not apply to the referencing states. Refer-
encing states are not requested until the corresponding
methods of the ISafetyState object are called.
kinematics Type: MovableDevice
Kinematic system for which the state of the safety signals
is requested
Precondition The EMERGENCY STOP signal and the “Operator Safety” signal can only be
evaluated if the following conditions are met in the safety configuration:
The selected category matches the safety function:
Category Local EMERGENCY STOP for local EMERGENCY STOP
Category External EMERGENCY STOP for external EMERGENCY
STOP
Category Operator safety for operator safety
The configured reaction is a safety stop (no output).
Method Description
getEmergencyStopInt() Return value type: Enum of type EmergencyStop
Checks whether a local E-STOP is activated.
ACTIVE: Local E-STOP is activated.
INACTIVE: Local E-STOP is not activated.
NOT_CONFIGURED: Not relevant, as a local E-STOP is al-
ways configured.
getEmergencyStopEx() Return value type: Enum of type EmergencyStop
Checks whether an external E-STOP is activated.
ACTIVE: External E-STOP is activated.
INACTIVE: External E-STOP is not activated.
NOT_CONFIGURED: No external EMERGENCY STOP is
configured.
getEnablingDeviceState() Return value type: Enum of type EnablingDeviceState
Checks whether an enabling switch is pressed.
HANDGUIDING: Enabling switch on the hand guiding de-
vice is pressed.
NORMAL: Enabling switch on the smartPAD is pressed.
NONE: No enabling switch is pressed or a safety function
has been violated and is blocking motion enable.
getOperationMode() Return value type: Enum of type OperationMode (package:
com.kuka.roboticsAPI.deviceModel)
Checks which operating mode is active.
T1, T2, AUT, CRR
getOperatorSafetyState() Return value type: Enum of type OperatorSafety
Checks the “Operator safety” signal.
OPERATOR_SAFETY_OPEN: Operator safety is violated
(e.g. safety gate is open).
OPERATOR_SAFETY_CLOSED: Operator safety is not vi-
olated.
NOT_CONFIGURED: No operator safety is configured.
getSafetyStopSignal() Return value type: Enum of type SafetyStopType
Checks whether a safety stop is activated.
NOSTOP: No safety stop is activated.
STOP0: A safety stop 0 or a safety stop 1 is activated.
STOP1: A safety stop 1 (path-maintaining) is activated.
STOP2: This value is currently not returned.
The methods for requesting the referencing state are described here:
(>>> 15.16.5.1 "Requesting the referencing state" Page 396)
Example The system checks whether a safety stop is activated. If this is the case, the
operator safety is then checked. If this is violated, a message is displayed on
the smartHMI. For output purposes, a logger object has been integrated with
dependency injection.
ISafetyState safetyState = robot.getSafetyState();
SafetyStopType safetyStop = safetyState.getSafetyStopSignal();
if(safetyStop != SafetyStopType.NOSTOP){
OperatorSafety operatorSafety =
safetyState.getOperatorSafetyState();
if(operatorSafety == OperatorSafety.OPERATOR_SAFETY_OPEN) {
logger.warn("The safety gate is open!");
}
}
Description An LBR has position and joint torque sensors that can be referenced. The referencing state of these sensors can be requested from the robot, e.g. to check whether referencing needs to be carried out again.
If a robot has no position or joint torque sensors that can be referenced, the
request returns the value “false”.
Method Description
isAxisGMSReferenced(…) Return type: Boolean
Checks whether the joint torque sensor of a specific robot axis
is referenced. The axis to be checked is transferred as a param-
eter (type: JointEnum).
true: Joint torque sensor of the axis is referenced.
false: Joint torque sensor of the axis is not referenced or the
robot has no joint torque sensors that can be referenced.
If an invalid axis is transferred, i.e. an axis that is not present on
the robot, an Illegal Argument Exception is triggered.
areAllAxesGMSReferenced() Return type: Boolean
Checks whether all joint torque sensors of the robot are refer-
enced.
true: All joint torque sensors are referenced.
false: At least 1 joint torque sensor is not referenced or the
robot has no joint torque sensors that can be referenced.
isAxisPositionReferenced(…) Return type: Boolean
Checks whether the position sensor of a specific robot axis is
referenced. The axis to be checked is transferred as a parame-
ter (type: JointEnum).
true: Position sensor of the axis is referenced.
false: Position sensor of the axis is not referenced or the ro-
bot has no position sensors that can be referenced.
If an invalid axis is transferred, i.e. an axis that is not present on
the robot, an Illegal Argument Exception is triggered.
areAllAxesPositionReferenced() Return type: Boolean
Checks whether all position sensors of the robot are referenced.
true: All position sensors are referenced.
false: At least 1 position sensor is not referenced or the robot
has no position sensors that can be referenced.
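A minimal sketch which checks the referencing state via the ISafetyState object described above (axis J3 and the logger are illustrative):
ISafetyState state = robot.getSafetyState();
if(! state.areAllAxesGMSReferenced()){
logger.info("At least 1 joint torque sensor is not referenced.");
}
boolean a3Referenced = state.isAxisPositionReferenced(JointEnum.J3);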
Description There is a notification service of the Controller class in RoboticsAPI which re-
ports changes in the state of safety signals. This service enables a direct re-
action to the change in a signal state.
To register for the service, transfer an ISunriseControllerStateListener object
to the Controller attribute in the robot application. The method addControllerListener(…) is used for this purpose.
The method onSafetyStateChanged(…) is called every time the state of a
safety signal changes. The reaction to the change can be programmed in the
body of the method onSafetyStateChanged(…).
Syntax kuka_Sunrise_Cabinet.addControllerListener(new
ISunriseControllerStateListener() {
...
@Override
public void onSafetyStateChanged(Device device,
SunriseSafetyState safetyState) {
// Reaction to change in state
}
});
Explanation of the syntax
Element Description
kuka_Sunrise_Cabinet Type: Controller
Controller attribute of the robot application (= name of the
robot controller in the application)
Example If the state of a safety signal changes, the operator safety is checked via the method onSafetyStateChanged(…). If this is violated, a message is displayed on the smartHMI. For output purposes, a logger object has been integrated with dependency injection.
kuka_Sunrise_Cabinet.addControllerListener(new
ISunriseControllerStateListener() {
// ...
@Override
public void onSafetyStateChanged(Device device,
SunriseSafetyState safetyState) {
OperatorSafety operatorSafety =
safetyState.getOperatorSafetyState();
if(operatorSafety == OperatorSafety.OPERATOR_SAFETY_OPEN){
logger.warn("The saftey gate is open!");
}
}
});
Description The program run mode can be changed and requested via the methods setExecutionMode(…) and getExecutionMode() of the SunriseExecutionService.
The SunriseExecutionService itself is requested by the Controller.
Explanation of the syntax
Element Description
service Type: SunriseExecutionService
Variable for the return value (contains the SunriseExecutionService requested by the Controller)
newMode Type: Enum of type ExecutionMode
New program run mode
ExecutionMode.Step: Step mode (program sequence
with a stop after each motion command)
ExecutionMode.Continuous: Standard mode (contin-
uous program sequence without stops)
currentMode Type: ExecutionMode
Variable for the return value (contains the program run
mode requested by the SunriseExecutionService)
The system first switches to Step mode and then back to standard mode.
public void run() {
// ...
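// "serv" is assumed to be the SunriseExecutionService previously requested from the controller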
serv.setExecutionMode(ExecutionMode.Step);
// ...
serv.setExecutionMode(ExecutionMode.Continuous);
// ...
}
Method Description
getApplicationOverride() Return value type: double
Requests the application override
getManualOverride() Return value type: double
Requests the manual override
getEffectiveOverride() Return value type: double
Requests the effective program override
Method Description
setApplicationOverride(…) Sets the application override to the specified value (type: dou-
ble)
0…1
clipApplicationOverride(…) Reduces the application override to the specified value (type:
double)
0…1
If a value is specified that is higher than the value currently programmed for the application override, the statement clipApplicationOverride(…) is ignored.
clipManualOverride(…) Reduces the manual override to the specified value (type: dou-
ble)
0…1
If a value is specified that is higher than the currently programmed manual override, the statement clipManualOverride(…) is ignored.
Example getApplicationControl().setApplicationOverride(0.5);
// ...
double actualOverride =
getApplicationControl().getEffectiveOverride();
Description It is possible for an application to be notified when an override changes. A listener of type IApplicationOverrideListener must be defined and registered for this purpose.
When an override changes, the method overrideChanged(…) is called. The reaction to the change can be programmed in the body of the method overrideChanged(…).
IApplicationOverrideListener overrideListener =
new IApplicationOverrideListener(){
@Override
public void overrideChanged(double effectiveOverride,
double manualOverride, double applicationOverride) {
// Reaction to override change
};
};
Registering a listener:
getApplicationControl().
addOverrideListener(overrideListener);
Removing a listener:
getApplicationControl().
removeOverrideListener(overrideListener);
Explanation of the syntax
Element Description
overrideListener Type: IApplicationOverrideListener
Name of the listener
Often, values are to be monitored in applications and if definable limits are ex-
ceeded or not reached, specific reactions are to be triggered. Possible sources
for these values include the sensors of the robot or configured inputs. The
progress of a motion can also be monitored. Possible reactions are the termi-
nation of a motion being executed or the execution of a handling routine.
A condition can have 2 states: It is met (state = TRUE) or not met (state =
FALSE). To define a condition, an expression is formulated. In this expression,
data, such as measurements provided by the system, are compared with a
permissible limit value. The result of the evaluation of the expression defines
the state of the condition.
Since different system data can be used for formulating conditions, there are
different kinds of conditions. Each condition type is made available as its own
class in the RoboticsAPI. They belong to the com.kuka.roboticsAPI.conditionModel package and implement the ICondition interface.
Some system data, e.g. axis torques or Cartesian forces and torques on the
robot flange, are only available for sensitive robot types equipped with corre-
sponding sensor systems. These sensitive robot types include the LBR iiwa.
Condition types using forces or torques are only supported by these sensitive
robot types. If these condition types are applied to robots that do not provide
information about forces or torques, this results in a runtime error (Exception).
Categories The various condition types can be subdivided into the following categories:
Sensor-related conditions
Path-related conditions
Distance-related conditions
I/O-related conditions
Sensor-related conditions: axis torque condition (JointTorqueCondition), force conditions (ForceCondition), force component condition (ForceComponentCondition), Cartesian torque conditions (CartesianTorqueCondition) and torque component condition
Path-related conditions: motion path condition (MotionPathCondition)
Distance-related conditions: distance condition and distance component condition
I/O-related conditions: Boolean signal condition (BooleanIOCondition) and condition for the range of values of a signal (IORangeCondition)
Operators
Operator Description/syntax
NOT Inversion of the calling ICondition object
ICondition invert();
XOR EITHER/OR operation linking the calling ICondition object
with a further condition
ICondition xor(ICondition other);
other: further condition
AND AND operation linking the calling ICondition object with one
or more additional conditions
ICondition and(ICondition other1, ICondition
other2, …);
// NOT A
combi1 = condA.invert();
// A AND B AND C
combi2 = condA.and(condB, condC);
// (A OR B) AND C
combi3 = condA.or(condB).and(condC);
// (A OR B) AND (C OR D)
combi4 = condA.or(condB).and(condC.or(condD));
Description The axis torque condition is used to check whether the external torque deter-
mined in an axis lies outside of a defined range of values.
(>>> 15.12 "Requesting axis torques" Page 382)
Element Description
joint Axis whose torque value is to be checked
minTorque Lower limit value for the axis torque (unit: Nm)
The condition is met if the torque is less than or equal to
minTorque.
maxTorque Upper limit value for the axis torque (unit: Nm)
The condition is met if the torque is greater than or equal to maxTorque.
The following must apply when determining the upper and lower limit values
for the torque: minTorque ≤ maxTorque.
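A minimal sketch (axis and limit values are illustrative):
// Met if the external torque in axis A4 is less than or equal to -10 Nm
// or greater than or equal to 10 Nm
JointTorqueCondition torqueA4 = new JointTorqueCondition(JointEnum.J4, -10.0, 10.0);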
Description The force condition can be used to check whether a Cartesian force exerted
on a frame below the robot flange exceeds a defined limit value.
For example, it is possible to react to the force generated when the robot
presses on a surface using a tool mounted on the flange. For the force condi-
tion, the projections of the force vector exerted on a frame below the flange
are considered. The position of this frame is defined by the point of application
of the force (here the tool tip). The orientation of the frame should correspond
to the orientation of the surface.
Normal force N:
The normal force is the part of the force exerted vertically on the surface. For example, pressure is exerted via the normal force in order to fit a component.
Shear force S:
The shear force is the projection of the force exerted on the surface. This
results in the part of the force exerted parallel to the surface. The shear
force is generated by friction.
Methods Force conditions are of the data type ForceCondition. ForceCondition contains
the following static methods for programming conditions:
createSpatialForceCondition(…): Condition for Cartesian force from all di-
rections
createNormalForceCondition(…): Condition for normal force
createShearForceCondition(…): Condition for shear force
To formulate the condition, a frame below the flange coordinate system (e.g.
the tip of a tool) is defined as a reference system. The forces which are exerted
relative to this frame are determined. The orientation of the reference system
can be optionally defined via an orientation frame. This can be used, for ex-
ample, to define the position of the surface on which the force is exerted.
A limit value is defined to determine the minimum force magnitude which
meets the condition.
The Cartesian force is calculated from the values of the joint torque sensors.
The reliability of the calculated force values varies depending on the axis con-
figuration. If the quality of the force calculation is also to be taken into account,
it is possible to specify a value for the maximum permissible inaccuracy. If the
system calculates an inaccuracy exceeding this value, the force condition is
also met.
Syntax ForceCondition.createSpatialForceCondition(
AbstractFrame measureFrame<, AbstractFrame orientationFrame>,
double threshold<, double tolerance>)
Explanation of the syntax
Element Description
measureFrame Frame below the robot flange relative to which the exerted force is determined.
The position of the point of application of the force is defined using this parameter.
orientationFrame Optional. The orientation of the reference system is defined using this parameter.
If the orientationFrame parameter is not specified, measureFrame defines the orientation of the reference system.
threshold Maximum magnitude of force which may act on the refer-
ence system (unit: N).
≥ 0.0
The condition is met if the magnitude of force exerted on
the reference system from any direction exceeds the value
specified here.
tolerance Optional. Maximum permissible inaccuracy of the calcu-
lated values (unit: N).
> 0.0
Default: 10.0
The condition is met if the inaccuracy of the force calcula-
tion is greater than or equal to the value specified here.
If the parameter is not specified, the default value is auto-
matically used.
Example The condition is met as soon as the magnitude of the force acting from any
direction on the TCP of a tool exceeds 30 N.
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private LBR robot;
@Inject
private Tool gripper;
// ...
@Override
public void initialize() {
// ...
gripper.attachTo(robot.getFlange());
// ...
}
@Override
public void run() {
// ...
ForceCondition spatialForce_tcp = ForceCondition.
createSpatialForceCondition(
gripper.getFrame("/TCP"),
30.0);
// ...
}
}
Description A condition for the normal force can be defined via the static method createNormalForceCondition(…). The component of the force exerted along a definable axis of a frame below the flange (e.g. along an axis of the TCP) is
considered here. This axis is generally defined so that it is perpendicular to the
surface on which the force is exerted (surface normal).
Syntax ForceCondition.createNormalForceCondition(AbstractFrame
measureFrame<, AbstractFrame orientationFrame>, CoordinateAxis
direction, double threshold<, double tolerance>)
Explanation of the syntax
Element Description
measureFrame Frame below the robot flange relative to which the exerted force is determined.
The position of the point of application of the force is defined using this parameter.
orientationFrame Optional. The orientation of the reference system is defined using this parameter.
If the orientationFrame parameter is not specified, measureFrame defines the orientation of the reference system.
direction Coordinate axis of the reference system.
The force component acting on the axis specified here is
checked with the condition.
CoordinateAxis.X
CoordinateAxis.Y
CoordinateAxis.Z
threshold Maximum magnitude of force which may act along the axis
of the reference system (unit: N).
≥ 0.0
The condition is met if the magnitude of force exceeds the
value specified here.
tolerance Optional. Maximum permissible inaccuracy of the calcu-
lated values (unit: N).
> 0.0
Default: 10.0
The condition is met if the inaccuracy of the force calcula-
tion is greater than or equal to the value specified here.
If the parameter is not specified, the default value is auto-
matically used.
Example A gripper mounted on the flange presses on a table plate. The robot is to react
to that part of the force exerted at the TCP of the gripper which acts vertically
on the table plate. The reference system is therefore defined such that its Z
axis runs along the surface normal of the table plate.
The condition is met as soon as the normal force exceeds a magnitude of
45 N. The condition is also to be considered met if the inaccuracy value of the calculated data exceeds 8 N.
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private LBR robot;
@Inject
private Tool gripper;
// ...
@Override
public void initialize() {
// ...
gripper.attachTo(robot.getFlange());
// ...
}
@Override
public void run() {
// ...
ForceCondition normalForce_z = ForceCondition.
createNormalForceCondition(
gripper.getFrame("/TCP"),
getFrame("/Table/Edge/Tabletop"),
CoordinateAxis.Z,
45.0,
8.0);
// ...
}
}
Description A condition for the shear force can be defined via the static method createShearForceCondition(…). The component of the force acting parallel to a plane
is considered here. The position of the plane is determined by specifying the
axis which is vertical to the plane.
Syntax ForceCondition.createShearForceCondition(AbstractFrame
measureFrame<, AbstractFrame orientationFrame>, CoordinateAxis
normalDirection, double threshold<, double tolerance>)
Explanation of the syntax
Element Description
measureFrame Frame below the robot flange relative to which the exerted force is determined.
The position of the point of application of the force is defined using this parameter.
orientationFrame Optional. The orientation of the reference system is defined using this parameter.
If the orientationFrame parameter is not specified, measureFrame defines the orientation of the reference system.
normalDirection Coordinate axis of the reference system.
The axis specified here defines the surface normal of a
plane. The force component acting parallel to this plane is
checked.
CoordinateAxis.X
CoordinateAxis.Y
CoordinateAxis.Z
threshold Maximum magnitude of force which may be exerted paral-
lel to the reference system plane defined by its surface nor-
mal (unit: N).
≥ 0.0
The condition is met if the magnitude of force exceeds the
value specified here.
tolerance Optional. Maximum permissible inaccuracy of the calcu-
lated values (unit: N).
> 0.0
Default: 10.0
The condition is met if the inaccuracy of the force calcula-
tion is greater than or equal to the value specified here.
If the parameter is not specified, the default value is auto-
matically used.
Example A gripper mounted on the flange presses on a table plate. The force at the TCP
of the gripper is to be determined using the orientation of the table plate. This
process considers the shear force which acts parallel to the XY plane of the
measurement point, defined by the TCP and the position of the table.
To define the XY plane, the axis perpendicular to this plane must be specified
as a parameter. This is the Z axis.
The condition is met as soon as the shear force exceeds a magnitude of 25 N.
The condition is also to be considered met if the inaccuracy value of the calculated data exceeds 5 N.
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private LBR robot;
@Inject
private Tool gripper;
// ...
@Override
public void initialize() {
// ...
gripper.attachTo(robot.getFlange());
// ...
}
@Override
public void run() {
// ...
ForceCondition shearForce_xyPlane = ForceCondition.
createShearForceCondition(
gripper.getFrame("/TCP"),
getFrame("/Table/Edge/Tabletop"),
CoordinateAxis.Z,
25.0,
5.0);
// ...
}
}
Description The force component condition can be used to check whether the Cartesian
force exerted on a frame below the robot flange (e.g. at the TCP) in the X, Y
or Z direction is outside a defined range.
Explanation of the syntax
Element Description
measureFrame Frame below the robot flange relative to which the exerted force is determined.
The position of the point of application of the force is defined using this parameter.
orientationFrame Optional: The orientation of the reference system is defined using this parameter.
If the orientationFrame parameter is not specified, measureFrame defines the orientation of the reference system.
coordinateAxis Coordinate axis of the frame relative to which the exerted force is determined. Defines the direction from which the acting force is checked.
CoordinateAxis.X
CoordinateAxis.Y
CoordinateAxis.Z
min Lower limit of the range of values for the force exerted
along the coordinate axis of the reference system (unit: N).
The force component condition is met if the force falls
below the value specified here.
max Upper limit of the range of values for the force exerted
along the coordinate axis of the reference system (unit: N).
The force component condition is met if the force exceeds
the value specified here.
Note: The upper limit value must be greater than the lower
limit value: max > min.
tolerance Optional: Maximum permissible inaccuracy of the calcu-
lated values.
> 0.0
Default: 10.0
The force component condition is met if the inaccuracy of
the force calculation is greater than or equal to the value
specified here.
If the parameter is not specified, the default value is auto-
matically used.
@Override
public void run() {
// ...
bolt.attachTo(gripper.getFrame("/Root"));
ForceComponentCondition assemblyForce_inverted =
new ForceComponentCondition(
bolt.getFrame("/Assembly"),
CoordinateAxis.Z,
20.0,
25.0);
Description The condition can be used to check whether a Cartesian torque exerted on a
frame below the robot flange exceeds a defined limit value. The point of appli-
cation of the torque is specified by means of a frame below the robot flange
coordinate system.
One application for this condition is the monitoring of torques that occur in a
screw fastening process.
(Figure: Screw fastening with a power wrench. 1 Power wrench, 2 Screw, 3 Reference frame, here the tip of the power wrench)
The condition for the Cartesian torque can be used to check different projec-
tions of the torque vector acting on the axes of the reference frame:
Torque MTurn
The torque exerted about an axis arises from the projection of the torque
vector on this axis.
Tilting torque MTilt
The tilting torque arises from the projection of the torque vector on a plane.
The torque is applied about the longitudinal axis of the power wrench during a
screw fastening process in order to screw in the screw. If the condition for the
torque is used, it is possible to ensure that the maximum permissible values
are not exceeded when fastening screws.
The tilting torque arises during a screw fastening process as a result of unde-
sired tilting of the power wrench about the longitudinal axis, forwards or to the
side. If the condition for the tilting torque is configured, it is possible to check
whether the tilting torque is within an acceptable range of values.
Methods Conditions for the Cartesian torque are of data type CartesianTorqueCondition. CartesianTorqueCondition contains the following static methods for programming conditions:
createSpatialTorqueCondition(…): Condition for Cartesian torque from all
directions
createTurningTorqueCondition(…): Condition for torque
createTiltingTorqueCondition(…): Condition for tilting torque
To formulate the condition, a frame is defined as a reference system below the
flange coordinate system. The torque is determined at this frame, e.g. at the
tip of a power wrench. The orientation of the reference system can be option-
ally defined via an orientation frame. In this way, the desired orientation of the
screw can be specified, for example.
A limit value is defined to determine the minimum Cartesian torque magnitude
which meets the condition.
The Cartesian torque is calculated from the values of the joint torque sensors.
The reliability of the calculated Cartesian torques varies depending on the axis
configuration. If the quality of the calculation is also to be taken into account,
it is possible to specify a value for the maximum permissible inaccuracy. If the
system calculates an inaccuracy exceeding this value, the condition for the
Cartesian torque is also met.
Syntax CartesianTorqueCondition.createSpatialTorqueCondition(
AbstractFrame measureFrame<, AbstractFrame orientationFrame>,
double threshold<, double tolerance>)
Element Description
measureFrame Frame below the robot flange at which the exerted torque is determined.
The position of the point of application of the torque is defined using this parameter.
orientationFrame Optional. The orientation of the reference system is defined using this parameter.
If the orientationFrame parameter is not specified, measureFrame defines the orientation of the reference system.
threshold Maximum magnitude of the torque which may act on the
reference system (unit: Nm).
≥ 0.0
The condition is met if the magnitude of the torque exerted
on the reference system from any direction exceeds the
value specified here.
tolerance Optional. Maximum permissible inaccuracy of the calcu-
lated values (unit: Nm).
> 0.0
Default: 10.0
The condition is met if the inaccuracy of the torque calcula-
tion is greater than or equal to the value specified here.
If the parameter is not specified, the default value is auto-
matically used.
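A minimal sketch analogous to the spatial force condition example above (the tool object "gripper" and the frame name "/TCP" are illustrative):
// Met as soon as the magnitude of the torque acting on the TCP from any direction exceeds 15 Nm
CartesianTorqueCondition spatialTorque = CartesianTorqueCondition.
createSpatialTorqueCondition(gripper.getFrame("/TCP"), 15.0);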
Description A condition for the torque can be defined via the static method createTurningTorqueCondition(…). The component of the overall torque applied about a definable axis of a frame below the flange (e.g. about an axis of the TCP) is
considered here.
Syntax CartesianTorqueCondition.createTurningTorqueCondition(AbstractFrame measureFrame<, AbstractFrame orientationFrame>, CoordinateAxis direction, double threshold<, double tolerance>)
Element Description
measureFrame Frame below the robot flange at which the exerted torque is determined.
The position of the point of application of the torque is defined using this parameter.
orientationFrame Optional. This parameter defines the orientation of the frame relative to which the torque is determined.
If the orientationFrame parameter is not specified, measureFrame defines the orientation of the reference system.
direction Coordinate axis of the reference system.
The component of the overall torque acting on the axis
specified here of the reference system is checked using
this condition.
CoordinateAxis.X
CoordinateAxis.Y
CoordinateAxis.Z
threshold Maximum magnitude of the torque that may be applied to
the axis of the reference system (unit: Nm).
≥ 0.0
The condition is met if the magnitude of the torque exceeds
the value specified here.
tolerance Optional. Maximum permissible inaccuracy of the calcu-
lated values (unit: Nm).
> 0.0
Default: 10.0
The condition is met if the inaccuracy of the torque calcula-
tion is greater than or equal to the value specified here.
If the parameter is not specified, the default value is auto-
matically used.
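A minimal sketch for the screw fastening use case described above (the tool object "wrench" and the frame name "/WrenchTip" are illustrative):
// Met as soon as the torque about the Z axis of the wrench tip exceeds 6 Nm
CartesianTorqueCondition turningTorque = CartesianTorqueCondition.
createTurningTorqueCondition(wrench.getFrame("/WrenchTip"), CoordinateAxis.Z, 6.0);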
Description A condition for the tilting torque can be defined via the static method createTiltingTorqueCondition(…). The component of the overall torque applied to a
plane of the reference system is considered here. The position of the plane is
determined by specifying the axis which is vertical to the plane (surface nor-
mal).
Syntax CartesianTorqueCondition.createTiltingTorqueCondition(AbstractFrame measureFrame<, AbstractFrame orientationFrame>, CoordinateAxis normalDirection, double threshold<, double tolerance>)
Element Description
measureFrame Frame below the robot flange at which the exerted torque is determined.
The position of the point of application of the torque is defined using this parameter.
orientationFrame Optional. This parameter defines the orientation of the frame relative to which the torque is determined.
If the orientationFrame parameter is not specified, measureFrame defines the orientation of the reference system.
normalDirection Coordinate axis of the reference system.
The axis specified here defines the surface normal of a
plane. The component of the overall torque applied to the
plane is checked.
CoordinateAxis.X
CoordinateAxis.Y
CoordinateAxis.Z
threshold Maximum magnitude of the tilting torque that may be
applied to the plane of the reference system defined by its
surface normal (unit: Nm).
≥ 0.0
The condition is met if the magnitude of the torque exceeds
the value specified here.
tolerance Optional. Maximum permissible inaccuracy of the calcu-
lated values (unit: Nm).
> 0.0
Default: 10.0
The condition is met if the inaccuracy of the torque calcula-
tion is greater than or equal to the value specified here.
If the parameter is not specified, the default value is auto-
matically used.
Description The torque component condition can be used to check whether the Cartesian
torque exerted about the X, Y or Z axis of a frame below the robot flange (e.g.
about an axis of the TCP) is outside a defined range. It is used for monitoring
the Cartesian torque in a specific direction, e.g. for monitoring screw fastening
processes.
Element Description
measureFrame Frame below the robot flange relative to which the exerted torque is determined.
The position of the point of application of the torque is defined using this parameter.
orientationFrame Optional. This parameter defines the orientation of the frame relative to which the torque is determined.
If the orientationFrame parameter is not specified, measureFrame defines the orientation of the reference system.
coordinateAxis Coordinate axis of the frame relative to which the exerted torque is determined. Defines the direction in which the acting torque is checked.
CoordinateAxis.X
CoordinateAxis.Y
CoordinateAxis.Z
min Lower limit of the range of values for the torque exerted
about the coordinate axis of the reference system (unit:
Nm).
The torque component condition is met if the torque falls
below the value specified here.
max Upper limit of the range of values for the torque exerted
about the coordinate axis of the reference system (unit:
Nm).
The torque component condition is met if the torque
exceeds the value specified here.
Note: The upper limit value must be greater than the lower
limit value: max > min.
tolerance Optional. Maximum permissible inaccuracy of the calcu-
lated values (unit: Nm).
> 0.0
Default: 10.0
The condition is met if the inaccuracy of the torque calcula-
tion is greater than or equal to the value specified here.
If the parameter is not specified, the default value is auto-
matically used.
Description Path-related conditions are always used in conjunction with a motion com-
mand. They serve as break conditions or triggers for path-related switching ac-
tions.
The condition defines a point on the planned path (switching point) on which
a motion is to be terminated or a desired action is to be triggered. If the switch-
ing point is reached, the condition is met.
The braking process or the defined action is only triggered when the
switching point is reached. When using a path-related condition as a
break condition, this results in the robot coming to a standstill after the
switching point rather than directly at it.
The switching point can be defined by a shift in space and/or time. The shift
can optionally refer to the start or end point of a motion.
Static methods A MotionPathCondition object can also be created via one of the following stat-
ic methods:
MotionPathCondition.createFromDelay(ReferenceType reference, long delay)
MotionPathCondition.createFromDistance(ReferenceType reference, double distance)
Element Description
reference Data type: com.kuka.roboticsAPI.conditionModel.ReferenceType
Reference point of the condition
ReferenceType.START: Start point
ReferenceType.DEST: End point
distance Offset in space relative to the reference point of the condi-
tion.
For CP motions, distance specifies the Cartesian distance
between the switching point and reference point (= dis-
tance along the path which connects the switching point
and reference point) and not the shortest distance between
these points. (unit: mm)
For PTP motions, distance does not specify a Cartesian dis-
tance but rather a path parameter without a unit.
Negative value: Offset contrary to the direction of mo-
tion
Positive value: Offset in the direction of motion
(>>> "Maximum offset" Page 420)
delay Offset in time relative to the path point defined by distance.
Or if distance is not defined, to the reference point of the
condition. (unit: ms)
Negative value: Offset contrary to the direction of mo-
tion
Positive value: Offset in the direction of motion
Phases in which the application is paused are not included
in the time measurement.
(>>> "Maximum offset" Page 420)
Maximum offset The switching point can only be offset within certain limits. The limits apply to
the entire offset, comprising the shift in space and time.
Negative offset, at most to the start point of the motion
Positive offset, at most to the end point of the motion
The following parameterizations may not be used, as they will inevitably lead to an offset beyond the permissible limits and thus to a runtime error: a negative offset relative to the start point (ReferenceType.START) and a positive offset relative to the end point (ReferenceType.DEST).
Even if a valid value combination has been used, the switching point can nev-
ertheless be offset beyond the permissible limits. In these cases, the response
is as follows:
A condition which is met before the start of the motion triggers at the start point.
A condition which is met after the end of the motion is never a trigger.
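A minimal sketch (the frame name "/P2" is illustrative; "action" stands for an ITriggerAction defined as described in the section on path-related switching actions):
// Switching point 50 mm (path distance) before the end point of the LIN motion
MotionPathCondition nearEnd = MotionPathCondition.createFromDistance(ReferenceType.DEST, -50.0);
robot.move(lin(getApplicationData().getFrame("/P2")).triggerWhen(nearEnd, action));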
Description The distance condition can be used to check whether the Cartesian distance
between 2 frames is less than a defined distance.
One of the frames must be a movable frame that is located beneath the
robot flange, e.g. a TCP on the tool or the frame of a gripped workpiece.
The other frame must be a static frame.
The movable frame must be linked to the robot flange if the condition is to
be used in a motion command or monitored with a listener.
(Figure: Distance condition. 1 Movable frame, 2 Static frame, 3 Minimum distance between the 2 frames)
Explanation of the syntax
Element Description
frameA One of the frames whose distance from another frame is
being checked
frameB One of the frames whose distance from another frame is
being checked
distanceThreshold Minimum distance between the 2 frames (unit: mm)
> 0.0
The distance condition is met if the distance between the 2
frames is less than the specified minimum distance.
Description The distance component condition, like the distance condition, can be used to check the Cartesian distance between 2 frames. Additionally, different projections of the distance vector onto the coordinate axes of an orientation frame can be taken into account.
Explanation of the syntax
Element Description
frameA One of the frames whose distance from another frame is
being checked
frameB One of the frames whose distance from another frame is
being checked
distance Minimum distance between the 2 frames (unit: mm)
> 0.0
The distance component condition is met if the distance
between the 2 frames is less than the specified minimum
distance.
orientationFrame Frame which specifies the orientation of the distance vector
Note: The frame must be a static frame and must not be
connected to the robot flange.
coordinateAxes Coordinate axes of the orientation frame to be considered
CoordinateAxis.X
CoordinateAxis.Y
CoordinateAxis.Z
At least one coordinate axis must be specified.
Description The Boolean signal condition can be used to check Boolean digital inputs or
outputs. The condition is met if a Boolean input or output has a specific state.
Boolean signal conditions are of data type BooleanIOCondition.
Explanation of the syntax
Element Description
booleanSignal Boolean input/output signal that is checked
booleanIOValue State of the input/output signal with which the condition is met
true, false
Example A Boolean digital input signal is supplied by a switch. In order to react to the signal in an application, a Boolean signal condition is to be formulated. The condition is to be met as soon as the switch is activated and a high level (state TRUE) is present.
public class ExampleApplication extends RoboticsAPIApplication {
// ...
@Inject
private SwitchesIOGroup switches;
// ...
@Override
public void run() {
// ...
AbstractIO switch_1 = switches.getInput("Switch1");
BooleanIOCondition switch1_active =
new BooleanIOCondition(switch_1, true);
}
}
Description The value of a digital or analog input or output can be checked with the condi-
tion for the range of values of a signal. The condition is met if the value of the
signal lies within a defined range.
Conditions for ranges of values are of data type IORangeCondition.
Explanation of the syntax
Element Description
signal Analog or digital input/output signal that is checked
minValue Lower limit of the range of values in which the condition is
met
The value returned by the signal must be greater than or
equal to minValue.
maxValue Upper limit of the range of values in which the condition is
met
The value returned by the signal must be less than or equal
to maxValue.
Example A temperature sensor returns an analog input signal whose value can lie in the
range between 0 °C and 2000 °C. As soon as a threshold of 35 °C is exceed-
ed, a condition for monitoring the sensor signal should be met.
public class ExampleApplication extends RoboticsAPIApplication {
// ...
@Inject
private SensorIOGroup sensors;
// ...
@Override
public void run() {
// ...
AbstractIO temperatureSensor =
sensors.getInput("TemperatureSensor2");
IORangeCondition tempHigher35 =
new IORangeCondition(temperatureSensor, 35.0, 2000.0);
}
}
For certain processes a planned motion must not be fully executed but rather
terminated when definable events occur. For example, in joining processes,
the robot must stop if a force threshold is reached.
Explanation of the syntax
Element Description
motion Type: Motion
Motion for which a break condition is to be defined
Example:
ptp(getApplicationData().getFrame("/P1"))
condition Type: ICondition
Parameterized ICondition object which describes a break
condition
Example A LIN motion is terminated if the torque in axis A3 is less than or equal to -12 Nm or greater than or equal to 0 Nm.
JointTorqueCondition cond_1 = new JointTorqueCondition(JointEnum.J3,
-12.0, 0.0);
robot.move(lin(getApplicationData().getFrame("/P10"))
.breakWhen(cond_1));
Description If break conditions have been defined for a motion command, it is possible to view various information on the termination of a motion. For this purpose, the motion command is temporarily stored in an IMotionContainer variable. Via the method getFiredBreakConditionInfo(), an object of type IFiredConditionInfo, which contains the information about the termination of the motion, can be requested from this variable. If no break condition occurs during the motion, getFiredBreakConditionInfo() returns null.
Explanation of the syntax
Element Description
motion Motion instruction
Example:
lbr.move(ptp(getApplicationData().getFrame("/P1")));
motionCmd Type: IMotionContainer
Temporary memory for the motion command
firedCondInfo Type: IFiredConditionInfo
Information about termination of the motion
Method Description
getFiredCondition() Return value type: ICondition
Requests the condition which caused a motion to be terminated
getPositionInfo() Return value type: PositionInformation
Requests robot position valid at the time when the break condi-
tion was triggered.
getStoppedMotion() Return value type: IMotion
Requests the segment of a spline block or the motion of a
MotionBatch which was terminated
Description The condition which caused the termination of a motion can be requested via
the method getFiredCondition(). The return value is of type ICondition and can
be compared to the transferred break conditions via the equals(…) method.
The request is particularly useful if several break conditions for a motion have
been defined by repeatedly calling the breakWhen(…) method.
Explanation of the syntax
Element Description
firedCondition Type: ICondition
Variable for the return value. The variable contains the
condition which caused the motion to be terminated.
firedCondInfo Type: IFiredConditionInfo
Information about termination of the motion
The break conditions “cond1” and “cond2” are transferred to a LIN motion with
breakWhen(…). The “motionCmd” variable of type IMotionContainer can be
used to evaluate the motion command.
IMotionContainer motionCmd =
robot.move(lin(getApplicationData().getFrame("/P10"))
.breakWhen(cond1).breakWhen(cond2));
IFiredConditionInfo firedInfo = motionCmd.getFiredBreakConditionInfo();
if(firedInfo != null){
ICondition firedCond = firedInfo.getFiredCondition();
if(firedCond.equals(cond1)){
// ...
}
// ...
}
Description The robot position at the time when the break condition was triggered can be
requested via the method getPositionInfo().
The following position information can be accessed via the return value of type
PositionInformation.
Axis-specific actual position
Cartesian actual position
Axis-specific setpoint position
Cartesian setpoint position
Explanation of the syntax
Element Description
firedPosInfo Type: PositionInformation
Variable for the return value. The return value contains the
position information at the time when the break condition
was triggered.
firedCondInfo Type: IFiredConditionInfo
Information about termination of the motion
Example The Cartesian actual position of the robot at the time when the break condition
was triggered is requested via the method getCurrentCartesianPosition().
PositionInformation firedPosInfo = firedInfo.getPositionInfo();
Frame firedCurrPos = firedPosInfo.getCurrentCartesianPosition();
Description Break conditions can be defined for an entire spline block or MotionBatch. If a
break condition occurs, the entire spline block or MotionBatch is terminated.
The method getStoppedMotion() can be used to check which spline segment
or which motion of a MotionBatch has been terminated. The return value is of
type IMotion.
Explanation of the syntax
Element Description
stoppedMotion Type: IMotion
Variable for the return value. The variable contains the
terminated motion.
firedCondInfo Type: IFiredConditionInfo
Information about termination of the motion
IFiredConditionInfo firedInfoSpline =
splineCont.getFiredBreakConditionInfo();
if(firedInfoSpline != null){
IMotion stoppedMotion = firedInfoSpline.getStoppedMotion();
// ...
}
Description Events which activate path-related switching actions are called triggers.
Events are defined using conditions. An event occurs if the defined condition
already has the state TRUE before the start of the motion or if it switches to
the state TRUE during the motion.
Conditions are defined as objects of type ICondition. The available condition
types belong to the package com.kuka.roboticsAPI.conditionModel.
An overview of the available condition types can be found here:
To program a trigger, an object of the desired condition type and an ITriggerAction object which describes the action to be executed are transferred to the
motion command via the method triggerWhen(…).
triggerWhen(…) can be called several times when programming a motion
command to define different triggers for a motion. The execution of the corre-
sponding switching actions is only dependent on whether the triggering event
occurs, and is not influenced by the order of calling via triggerWhen(…).
Explanation of the syntax
Element Description
motion Type: Motion
Motion for which a trigger must be defined
Example:
ptp(getApplicationData().getFrame("/P1"))
condition Type: ICondition
Parameterized ICondition object which describes the con-
dition for the trigger
action Type: ITriggerAction
ITriggerAction object which describes the action to be exe-
cuted
(>>> 15.21.2 "Programming a path-related switching
action" Page 429)
Description The path-related action to be executed when an event occurs is defined via an
ITriggerAction object. ITriggerAction is an interface from the com.kuka.robot-
icsAPI.conditionModel package. This interface currently does not provide any
methods.
The ICallbackAction interface, which is derived from ITriggerAction, can be
used for programming actions. The interface has the method onTriggerFired(…). The action to be carried out when the trigger is activated can be programmed in the body of the method onTriggerFired(…).
An ICallbackAction object can be used in any number of triggers.
Explanation of the syntax
Element Description
action Type: ICallbackAction
ICallbackAction object which describes the action transferred with triggerWhen(…)
onTriggerFired(…) Method whose execution is fired by the trigger
triggerInformation Type: IFiredTriggerInfo
Contains information about the firing trigger
(>>> 15.16.5.2 "Reacting to a change in state of safety
signals" Page 397)
Example During motion to point “P1”, output “DO1” is always switched at the moment
when input “DI1” is TRUE.
//set trigger action
ICallbackAction toggleOut_1 = new ICallbackAction() {
@Override
public void onTriggerFired(IFiredTriggerInfo triggerInformation)
{
//toggle output state when trigger fired
if(IOs.getDO1())
{
IOs.setDO1(false);
}
else
{
IOs.setDO1(true);
}
}
};
Method Description
getFiredCondition() Return value type: ICondition
Requests the condition which fired the trigger
getMissedEvents() Return value type: int
Checks how many times the event which fired the trigger still occurred
while the triggered action was being executed
Note: The triggering event cannot re-trigger an action while it is being
executed.
getMotionContainer() Return value type: IMotionContainer
Requests the motion command, during the execution of which the trig-
ger was fired
getPositionInformation() Return value type: PositionInformation
Requests position information valid at the time when the trigger was
fired.
The return value contains the following position information:
Axis-specific actual position
Cartesian actual position
Axis-specific setpoint position
Cartesian setpoint position
Setpoint/actual value difference (translational)
Setpoint/actual value difference (rotational)
getTriggerTime() Return value type: java.util.Date
Requests the time at which the trigger was fired
Method Description
getCommandedCartesianPosition() Return value type: Frame
Requests the Cartesian setpoint position at triggering time
getCommandedJointPosition() Return value type: JointPosition
Requests the axis-specific setpoint position at triggering time
getCurrentCartesianPosition() Return value type: Frame
Requests the Cartesian actual position at triggering time
getCurrentJointPosition() Return value type: JointPosition
Requests the axis-specific actual position at triggering time
Example 1 When the trigger is fired, the triggering time and condition are displayed on the
smartHMI. For output purposes, a logger object has been integrated with de-
pendency injection.
BooleanIOCondition in1 = new BooleanIOCondition(_input_1, true);
ICallbackAction ica = new ICallbackAction() {
@Override
public void onTriggerFired(IFiredTriggerInfo triggerInformation)
{
logger.info("TriggerTime: "+ triggerInformation
.getTriggerTime().toString());
logger.info("TriggerCondition: "+ triggerInformation
.getFiredCondition().toString());
}
};
robot.move(ptp(getApplicationData().getFrame("/P1"))
.triggerWhen(in1, ica));
Example 2 The axis-specific and Cartesian robot position at triggering time are requested.
BooleanIOCondition in1 = new BooleanIOCondition(_input_1, true);
ICallbackAction ica = new ICallbackAction() {
@Override
public void onTriggerFired(IFiredTriggerInfo triggerInformation)
{
PositionInformation posInfo = triggerInformation
.getPositionInformation();
posInfo.getCommandedCartesianPosition();
posInfo.getCommandedJointPosition();
posInfo.getCurrentCartesianPosition();
posInfo.getCurrentJointPosition();
};
};
robot.move(ptp(getApplicationData().getFrame("/P1"))
.triggerWhen(in1, ica));
Listeners can be used in an application to react to specific events. These events are changes in state of defined conditions. The listener monitors the state of the condition. If the state of the condition changes, the listener is notified and the predetermined handling routine is triggered as a reaction.
During execution of a handling routine, the listener is not informed if further
events occur. Once the handling routine has been completed, these events
are only transferred to the listener and handled if the appropriate notification
type has been defined.
(>>> 15.22.3 "Registering a listener for notification of change in state"
Page 433)
Overview The following programming steps are required in order to be able to react to
the change in state of a condition:
Step Description
1 Create a listener object to monitor the condition.
(>>> 15.22.2 "Creating a listener object to monitor the condi-
tion" Page 433)
2 Program the desired handling routine in the listener method.
3 Register the listener for notification in case of a change in
state of the condition.
(>>> 15.22.3 "Registering a listener for notification of change
in state" Page 433)
4 If this has not already been done by the method selected for
registration, activate the notification service for the listener.
(>>> 15.22.4 "Activating or deactivating the notification ser-
vice for listeners" Page 435)
Description The syntax of a listener object is described here using the listener
IAnyEdgeListener as an example. The listener method onAnyEdge(…), which is auto-
matically declared when the object is created, has input parameters. These
input parameters contain information about the event triggered by the execu-
tion of the method, and can be requested and evaluated.
The listener objects of the other listener types are created in the same way and
are structured analogously.
Explanation of the syntax
Element Description
condListener Type: IAnyEdgeListener
Name of the listener object
conditionObserver Type: ConditionObserver
Object notified by the listener
time Type: Date
Date and time the listener was notified
missedEvents Type: int
Number of changes in state which have occurred but not
been handled.
Possible causes of non-handled events:
The notification service was deactivated when the trig-
gering event occurred.
The handling routine was being executed when the trig-
gering event occurred again.
These events can be handled using the notification type
NotificationType.MissedEvents. (>>> "NotificationType"
Page 434)
conditionValue Type: Boolean
Only present with the listener method onAnyEdge(…).
Specifies the edge via which the method was called.
true: rising edge (change in state FALSE > TRUE)
false: falling edge (change in state TRUE > FALSE)
The ObserverManager class provides various methods for creating the re-
quired object.
createAndEnableConditionObserver(…)
The notification service for the listener is active immediately.
createConditionObserver(…)
The notification service for the listener is not active immediately, but rather
must be explicitly activated.
(>>> 15.22.4 "Activating or deactivating the notification service for listen-
ers" Page 435)
The transferred parameters in each case are identical for both methods.
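The following sketch contrasts the two registration variants. It assumes that a condition (ICondition) and a listener have already been defined, and that the observer manager is obtained via getObserverManager() in the application (this accessor is an assumption and is not part of the syntax shown here):
// Variant 1: the notification service is active immediately
ConditionObserver observer1 = getObserverManager()
   .createAndEnableConditionObserver(condition, NotificationType.EdgesOnly, listener);
// Variant 2: the notification service must be activated explicitly
ConditionObserver observer2 = getObserverManager()
   .createConditionObserver(condition, NotificationType.EdgesOnly, listener);
observer2.enable();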
Explanation of the syntax
Element Description
myObserver Type: ConditionObserver
Object which monitors the defined condition
condition Type: ICondition
Condition which is monitored
notificationType Type: Enum of type NotificationType
Notification type
Defines the events at which the listener is to be notified in
order to execute the desired handling routine.
(>>> "NotificationType" Page 434)
listener Type: IRisingEdgeListener, IFallingEdgeListener or
IAnyEdgeListener
Listener object which is registered
Value Description
EdgesOnly The listener is only notified in the event of an edge
change (according to the listener type used).
OnEnable The listener is notified in the event of an edge change
(according to the listener type used).
In addition, the state of the monitored condition is
checked upon activation of the listener. Depending on
the listener type, the listener is notified when the follow-
ing events occur:
IRisingEdgeListener: only if the condition is met
upon activation
IFallingEdgeListener: only if the condition is not met
upon activation
IAnyEdgeListener: if the condition is met or not met
upon activation
Value Description
MissedEvents The listener is notified in the event of an edge change
(according to the listener type used).
In addition, following the execution of the handling rou-
tine, the listener is notified if triggering events were
missed. This means that if the triggering edge change
again occurs during execution of the handling routine,
the listener is also notified again, and the handling rou-
tine is executed a second time.
All Combination of OnEnable and MissedEvents
The listener is notified in the case of all events
described under OnEnable and MissedEvents.
Description The methods for activating or deactivating the notification service belong to the
ConditionObserver class.
The notification service only has to be activated explicitly if the method
createConditionObserver(…) was used to register the listener.
Explanation of the syntax
Element Description
myObserver Type: ConditionObserver
Object which monitors the defined condition
// collision is a previously defined ICondition; signals provides access to the warning LED
IRisingEdgeListener collisionListener = new IRisingEdgeListener() {
   @Override
   public void onRisingEdge(ConditionObserver conditionObserver,
      Date time, int missedEvents) {
      signals.setWarningLED(true);
   }
};
ConditionObserver collisionObserver = getObserverManager()
   .createConditionObserver(collision, NotificationType.MissedEvents,
      collisionListener);
collisionObserver.enable();
Latency times may occur while the wait command is being processed.
It is not possible to guarantee that the programmed wait time will be
maintained exactly.
Explanation of the syntax
Element Description
condition Type: ICondition
Condition which is waited for
If the condition is already met when waitFor(…) is called,
the application is immediately resumed.
timeout Type: long
Maximum wait time
If the condition does not occur within the defined wait time,
the application is resumed without the occurrence of the
condition.
timeUnit Type: Enum of type TimeUnit
Unit of the specified wait time
The Enum TimeUnit is an integral part of the standard Java
library.
result Type: boolean
Variable for the return value of waitFor(…). The return
value is true if the condition occurs within the specified wait
time.
Note: If no wait time is defined, waitFor(…) does not sup-
ply a return value.
Example A wait for a Boolean input signal is required in the application. The application
is to be blocked for a maximum of 30 seconds. If the input signal is not sup-
plied within this time, a defined handling routine is then to be executed.
public class ExampleApplication extends RoboticsAPIApplication {
   // ...
   @Inject
   private ExampleIOGroup inputs; // I/O access object (type name is an assumption)
   @Override
   public void run() {
      // ...
      Input input = inputs.getInput("Input");
      BooleanIOCondition inputCondition =
         new BooleanIOCondition(input, true);
      // wait a maximum of 30 seconds for the input signal
      // (the observer manager is assumed to be obtained via getObserverManager())
      boolean result = getObserverManager()
         .waitFor(inputCondition, 30, TimeUnit.SECONDS);
      if(!result){
         //do something
      }
      else{
         //continue program
      }
   }
}
Description For data recording, an object of type DataRecorder must first be created and
parameterized. The following default parameters are set if the standard con-
structor is used for this purpose:
The file name under which the recorded data are saved is created auto-
matically. The name also contains an ID which is internally assigned by the
system: DataRecorderID.log
No recording duration is defined. Data are recorded until the buffer (cur-
rently 16 MB) is full or the maximum number of data sets (currently
30,000) is reached.
The recording rate, i.e. the minimum time between 2 recordings, is 1 ms.
Explanation of the syntax
Element Description
fileName File name (with extension) under which the recorded data
are saved
Example: "Recording_1.log"
timeout Recording duration
-1: No recording duration is defined.
≥1
Default: -1
The time unit is defined with timeUnit.
timeUnit Time unit for the recording duration
Example: TimeUnit.SECONDS
The Enum TimeUnit is an integral part of the standard Java
library.
sampleRate Recording rate (unit: ms)
≥1
Default: 1
Example 1 Data are to be recorded every 100 ms for a duration of 5 s and written to the
file Recording_1.log.
DataRecorder rec_1 = new DataRecorder("Recording_1.log", 5,
TimeUnit.SECONDS, 100);
Example 2 The DataRecorder object is generated using the standard constructor. This
only specifies that data are recorded every 1 ms for an indefinite duration. The
recorded data are to be written to the file Recording_2.log. The file name is de-
fined with the corresponding “set” method.
DataRecorder rec_2 = new DataRecorder();
rec_2.setFileName("Recording_2.log");
Using dot operators and the corresponding “add” method, the data to be re-
corded are added to the DataRecorder object created for this purpose. The si-
multaneous recording of various data is possible.
Overview The following “add” methods of the DataRecorder class are available:
Method Description
addInternalJointTorque(…) Return value type: DataRecorder
Recording of the measured axis torques of the robot which is
transferred as a parameter (type: Robot)
addExternalJointTorque(…) Return value type: DataRecorder
Recording of the external axis torques (adjusted to the model)
of the robot which is transferred as a parameter (type: Robot)
addCartesianForce(…) Return value type: DataRecorder
Recording of the Cartesian forces along the X, Y and Z axes of
the frame which is transferred as a parameter (unit: N).
A second frame can be transferred as a parameter in order to
define the orientation for the force measurement. If no separate
frame is specified for the orientation, null must be transferred.
addCartesianTorque(…) Return value type: DataRecorder
Recording of the Cartesian torques along the X, Y and Z axes of
the frame transferred as a parameter (unit: Nm).
A second frame can be transferred as a parameter in order to
define the orientation for the torque measurement. If no sepa-
rate frame is specified for the orientation, null must be trans-
ferred.
Parameters:
AbstractFrame measureFrame
Frame attached to the robot flange, e.g. the TCP of a tool.
Defines the position of the measurement point.
AbstractFrame orientationFrame
Defines the orientation of the measurement point.
Note: Both parameters must always be transferred together.
The orientation may be null.
addCommandedJointPosition(…) Return value type: DataRecorder
Recording of the axis-specific setpoint position of the robot
which is transferred as a parameter (type: Robot). As a second
parameter, the unit in which the axis angles are recorded must
be transferred (Enum of type: AngleUnit).
addCurrentJointPosition(…) Return value type: DataRecorder
Recording of the axis-specific actual position of the robot which
is transferred as a parameter (type: Robot). As a second param-
eter, the unit in which the axis angles are recorded must be
transferred (Enum of type: AngleUnit).
Parameters:
Robot robot
AngleUnit angleUnit
AngleUnit.Degree: Axis angle in degrees
AngleUnit.Radian: Axis angle in rad
addCommandedCartesianPositionXYZ(…) Return value type: DataRecorder
Recording of the Cartesian setpoint position (translational sec-
tion)
The measurement point and reference coordinate system rela-
tive to which the position is recorded are transferred as parame-
ters.
Method Description
addCurrentCartesianPositionXYZ(…) Return value type: DataRecorder
Recording of the Cartesian actual position (translational sec-
tion)
The measurement point and reference coordinate system rela-
tive to which the position is recorded are transferred as parame-
ters.
Parameters:
AbstractFrame measureFrame
Frame attached to the robot flange, e.g. the TCP of a tool.
Defines the position of the measurement point.
AbstractFrame referenceFrame
Defines the reference coordinate system.
Note: Both parameters must always be transferred together.
None of the parameters may be null.
Example For an LBR iiwa, the following data are to be recorded using a DataRecorder
object:
Axis torques which are measured on the robot
Force on the TCP of a gripper mounted on the robot with the orientation of
a base frame
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private LBR robot;
@Inject
private Tool gripper;
// ...
@Override
public void run() {
// ...
gripper.attachTo(robot.getFlange());
// ...
DataRecorder rec = new DataRecorder();
rec.addInternalJointTorque(robot);
rec.addCartesianForce(gripper.getFrame("TCP"),
getApplicationData().getFrame("/Base"));
// ...
}
}
Synchronous via a trigger A condition of type ICondition and an action must be formulated
for a trigger. When this condition is met, the trigger is fired, causing the action to be carried
out.
(>>> 15.21.1 "Programming triggers" Page 428)
This action starts the data recording. An object of type StartRecordingAction
must be transferred for this purpose. When the object is created, the Da-
taRecorder object to be used for data recording must be specified.
Constructor syntax:
StartRecordingAction(DataRecorder recorder)
The ICondition object and the StartRecordingAction object are subsequently
linked to a motion command with triggerWhen(…).
Example 1 Data recording is to start when the robot has carried out the approach motion
to a pre-position. The DataRecorder object is activated before the pre-position
is addressed so as to reduce the delay when starting the recording.
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private LBR robot;
// ...
@Override
public void run() {
// ...
DataRecorder rec = new DataRecorder();
// ...
rec.enable();
// ...
robot.move(lin(getApplicationData()
.getFrame("/PrePosition")));
rec.startRecording();
// ...
}
}
Example 2 Data recording is to be started via a trigger: a StartRecordingAction is linked to
the motion to the destination position by means of a MotionPathCondition.
public class ExampleApplication extends RoboticsAPIApplication {
   @Inject
   private LBR robot;
   // ...
   @Override
   public void run() {
      // ...
      DataRecorder rec = new DataRecorder();
      // ...
      StartRecordingAction startAct =
         new StartRecordingAction(rec);
      MotionPathCondition startCond = new MotionPathCondition(
         ReferenceType.START, 0.0, 2000);
      robot.move(lin(getApplicationData()
         .getFrame("/Destination"))
         .triggerWhen(startCond, startAct));
      // ...
   }
}
Independent of robot motion Recording can be stopped at any time via the stopRecording()
method.
Synchronous via a trigger A condition of type ICondition and an action must be formulated
for a trigger. When this condition is met, the trigger is fired, causing the action to be carried
out.
(>>> 15.21.1 "Programming triggers" Page 428)
This action ends the data recording. An object of type StopRecordingAction
must be transferred for this purpose. When the object is created, the Da-
taRecorder object to be used for data recording must be specified.
Constructor syntax:
StopRecordingAction(DataRecorder recorder)
The ICondition object and the StopRecordingAction object are linked to a mo-
tion command with triggerWhen(…).
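A minimal sketch of stopping a recording via a trigger; it assumes that rec is an existing, parameterized DataRecorder object, and reuses the condition type and frame name from the StartRecordingAction example above:
StopRecordingAction stopAct = new StopRecordingAction(rec);
MotionPathCondition stopCond = new MotionPathCondition(
   ReferenceType.START, 0.0, 2000);
robot.move(lin(getApplicationData()
   .getFrame("/Destination"))
   .triggerWhen(stopCond, stopAct));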
Method Description
isEnabled() Return value type: Boolean
The system checks whether the DataRecorder object is activated (=
true).
isRecording() Return value type: Boolean
The system checks whether data recording is running (= true).
Method Description
isFileAvailable() Return value type: Boolean
The system checks whether the file with the recorded data is already
saved on the robot controller and whether it is available for evaluation
(= true).
awaitFileAvailable(…) Return value type: Boolean
Blocks the calling application until the defined blocking duration has
expired or until the file with the recorded data is saved on the robot con-
troller and is available for evaluation (= true).
The blocking statement returns the value “false” if the file is not available
within the maximum blocking duration.
Syntax:
awaitFileAvailable(long timeout,
java.util.concurrent.TimeUnit timeUnit)
Parameters:
timeout: maximum blocking duration
timeUnit: time unit for the maximum blocking time
The following are to be recorded during an assembly process: the torques act-
ing externally on the axes of an LBR iiwa and the Cartesian forces acting on
the TCP of a gripper on the robot flange. The data are to be recorded every
10 ms.
Recording is to begin synchronously with robot motion when the force acting
from any direction on the TCP of the gripper exceeds 20 N. When the assem-
bly process ends, recording is to end as well.
The file is then to be evaluated if it is available after a maximum of 5 s.
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private LBR robot;
@Inject
private Tool gripper;
// ...
@Override
public void run() {
// ...
gripper.attachTo(robot.getFlange());
// ...
DataRecorder rec = new DataRecorder();
rec.setFileName("Recording.log");
rec.setSampleRate(10);
rec.addExternalJointTorque(robot);
rec.addCartesianForce(gripper.getFrame("/TCP"), null);
StartRecordingAction startAction =
new StartRecordingAction(rec);
ForceCondition startCondition = ForceCondition
.createSpatialForceCondition(
gripper.getFrame("/TCP"), 20.0);
robot.move(ptp(getApplicationData()
.getFrame("/StartPosition")));
robot.move(lin(getApplicationData()
.getFrame("/MountingPosition"))
.triggerWhen(startCondition, startAction));
robot.move(lin(getApplicationData()
.getFrame("/DonePosition")));
rec.stopRecording();
if (rec.awaitFileAvailable(5, TimeUnit.SECONDS)){
// Evaluation of the file if available
}
// ...
}
}
Description Functions can be freely assigned to the 4 user keys on the smartPAD. For this
purpose, various user key bars can be defined in the source code of the robot
or background applications.
The user keys are assigned functions using the user key bar. One user key on
the bar must be assigned a function, but it is not necessary for all of the keys
to be assigned. In addition, graphical or text elements illustrating the function
of each user key are located on the side panel of the smartHMI screen next to
the user keys.
All the user key bars defined in the running robot application or a running back-
ground application are available to the operator. For example, one user key
bar can be used for controlling a gripper, and in another bar the same keys can
be used to select different program sections.
User key bars are available until the application which created them has end-
ed.
Overview The following steps are required in order to program a user key bar:
Step Description
1 Create a user key bar.
(>>> 15.25.1 "Creating a user key bar" Page 445)
2 Add user keys to the bar (at least one).
(>>> 15.25.2 "Adding user keys to the bar" Page 446)
3 Define the function which is to be executed if the user key is
actuated.
(>>> 15.25.3 "Defining the function of a user key" Page 447)
4 Assign at least one graphical or text element to the area along
the left side panel of the smartHMI next to the user key.
(>>> 15.25.4 "Labeling and graphical assignment of the user
key bar" Page 449)
5 For user keys which trigger functions associated with a risk:
Define the warning message to be displayed when the user
key is actuated. The message appears before the function
can be triggered.
(>>> 15.25.5 "Identifying safety-critical user keys" Page 452)
6 Publish a user key bar.
(>>> 15.25.6 "Publishing a user key bar" Page 453)
Description The following methods are required in order to create a user key bar:
getApplicationUI()
This method is used to access the interface to the smartHMI graphical
user interface from a robot application or a background application. Return
value type: ITaskUI
createUserKeyBar(…)
This method is used to create the user key bar. It is part of the ITaskUI in-
terface.
Explanation of the syntax
Element Description
keybar Type: IUserKeyBar
Name of the user key bar created with createUserKeyBar(…)
name Type: String
Name under which the user key bar is displayed on the
smartHMI (>>> Fig. 6-11 )
The number of characters which can be displayed is limited.
A maximum of 12 to 15 characters is recommended.
Description A newly created user key bar does not have any user keys to start with. The
user keys to be used must be added to the bar.
The IUserKeyBar interface provides the following methods for this purpose:
addUserKey(…)
Adds a single user key to the bar.
addDoubleUserKey(…)
Combines 2 neighboring user keys to a double key and adds this to the
bar. The corresponding areas on the side panel of the smartHMI screen
are also combined into a larger area.
When adding a user key to a bar, the user defines the function to be executed
when the user key is actuated (e.g. opening a gripper, changing a parameter,
etc.). Depending on the programming, both pressing and releasing the user
key can be interpreted as actuation and linked to a function.
A user key bar must have at least one user key. Each user key is assigned a
unique number. This number is transferred when a user key is added.
Explanation of the syntax
Element Description
keybar Type: IUserKeyBar
Name of the user key bar to which a user key is added
key Type: IUserKey
Name of the single key added to the bar
doubleKey Type: IUserKey
Name of the double key added to the bar
Element Description
slot Type: int
Number of the user key which is added.
Single keys:
0…3
Double keys:
0, 2
listener Type: IUserKeyListener
Name of the listener used to define the function to be exe-
cuted when the user key is actuated
(>>> 15.25.3 "Defining the function of a user key"
Page 447)
ignoreEvents Type: boolean
Defines whether there is a reaction if the user key is re-
actuated while the key function is being executed
true: If the key is actuated while the function is being ex-
ecuted, it has no effect.
false: It is counted how many times the key is actuated
while the function is being executed. The function is re-
peated this many times.
Example The user keys are assigned the following functions for controlling a gripper:
The top user key is to be used to open the gripper, and the key below it is
to close the gripper.
The two lower user keys are combined in a double key. This is to be used
to increase and decrease the velocity of the gripper.
The functions for opening and closing the gripper are not to be called again
until the respective function has ended.
IUserKeyBar gripperBar =
getApplicationUI().createUserKeyBar("Gripper");
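A sketch of how the user keys described above could be added to the bar. The listener objects (openListener, closeListener, gripperVelocityListener) are assumptions here; they are defined as described in the following section:
// single keys 0 and 1; renewed actuation has no effect while the function is running
IUserKey openKey = gripperBar.addUserKey(0, openListener, true);
IUserKey closeKey = gripperBar.addUserKey(1, closeListener, true);
// the two lower keys are combined into a double key for the velocity
IUserKey velocityKey = gripperBar.addDoubleUserKey(2, gripperVelocityListener, false);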
Description In order to define which function is to be executed when a user key is actuated,
a listener object of type IUserKeyListener must be created. The
onKeyEvent(…) method is automatically declared when the object is created.
The listener method onKeyEvent(…) is called when the following events oc-
cur:
The user key is pressed.
The user key is released.
Explanation of the syntax
Element Description
listener Type: IUserKeyListener
Name of the listener object
Input parameters of the listener method onKeyEvent(…):
key Type: IUserKey
User key which has been actuated
The parameter can be used to directly access the user key,
for example to change the corresponding labelling or
graphical assignment. In addition, it is possible to deter-
mine which user key has been actuated, especially when
the same reaction is used for different user keys.
event Type: Enum of type UserKeyEvent
Event called by the listener method onKeyEvent(…)
Enum values for single keys:
UserKeyEvent.KeyDown: Key has been pressed.
UserKeyEvent.KeyUp: Key has been released.
Enum values for double keys:
UserKeyEvent.FirstKeyDown: Of the two keys, the
upper one has been pressed.
UserKeyEvent.SecondKeyDown: Of the two keys, the
lower one has been pressed.
UserKeyEvent.FirstKeyUp: Of the two keys, the upper
one has been released.
UserKeyEvent.SecondKeyUp: Of the two keys, the
lower one has been released.
Example The user key bar for controlling a gripper is expanded by a method which can
be used to adapt the velocity of the gripper. The two lower user keys combined
in a double key are used for this purpose.
The attribute velocity is declared for setting the velocity. The attribute spec-
ifies the current velocity as a proportion of the maximum velocity (range of val-
ues: 0.1 … 1.0). Pressing the upper user key increases the value by 0.1 and
pressing the lower user key decreases it by 0.1.
double velocity = 0.1;
// ...
IUserKeyBar gripperBar = ...;
// ...
IUserKeyListener gripperVelocityListener = new IUserKeyListener(){
   @Override
   public void onKeyEvent(IUserKey key, UserKeyEvent event){
      if(event == UserKeyEvent.FirstKeyDown && velocity <= 0.9){
         velocity = velocity + 0.1;
      }
      else if(event == UserKeyEvent.SecondKeyDown && velocity >= 0.2){
         velocity = velocity - 0.1;
      }
   }
};
// ...
IUserKey velocityKey = gripperBar.addDoubleUserKey(2,
gripperVelocityListener, false);
Description At least one graphical or text element must be assigned to the area along the
left side panel of the smartHMI next to the user key. LED icons of various col-
ors and sizes are available as graphical elements. These elements can be
adapted during the runtime of the robot application or the background task.
In order to clearly position the individual elements, the area next to the user
key is divided into a grid with 3x3 spaces. This also applies for user keys that
have been grouped together as a double key. In the case of double keys, the
grid stretches over both fields.
One element can be set in each grid space. This grid space is defined by the
value of the enum UserKeyAlignment. If a new element is allocated to a grid
space which has already been assigned, the existing element is deleted.
Description Each grid space can be assigned a text element. The setText(…) method is
used for this purpose. The method belongs to the IUserKey interface.
Explanation of the syntax
Element Description
key Type: IUserKey
User key to which a text element is assigned
position Type: Enum of type UserKeyAlignment
Position of the element (grid space)
(>>> "UserKey alignment" Page 449)
text Type: String
Text to be displayed
Often, a text length of 2 or more characters will exceed the
size of the grid space. The text display area is then
expanded. However, it is only practical to use a limited
number of characters. The possible number of characters
depends on the text elements of the neighboring grid
spaces and the characters used.
Example The user key bar for controlling a gripper is to be expanded. A suitable label
should be displayed continuously next to each of the user keys.
Label for the user keys for opening and closing the gripper: OPEN and
CLOSE
Label for the user keys for increasing and decreasing the gripper velocity:
Plus sign and minus sign
In addition, the current velocity is to be displayed and automatically updat-
ed each time a change is made.
double velocity = 0.1;
// ...
IUserKeyBar gripperBar = ...;
// ...
IUserKeyListener gripperVelocityListener = new IUserKeyListener(){
   @Override
   public void onKeyEvent(IUserKey key, UserKeyEvent event){
      if(event == UserKeyEvent.FirstKeyDown && velocity <= 0.9){
         velocity = velocity + 0.1;
      }
      else if(event == UserKeyEvent.SecondKeyDown && velocity >= 0.2){
         velocity = velocity - 0.1;
      }
      // The following line formats the velocity display
      // The first three characters are displayed
      String value = String.valueOf(velocity).substring(0, 3);
      key.setText(UserKeyAlignment.Middle, value);
   }
};
IUserKey openKey = ...;
openKey.setText(UserKeyAlignment.TopLeft, "OPEN");
Description Each grid space can be assigned an LED icon. The setLED(…) method is
used for this purpose. The method belongs to the IUserKey interface.
Explanation of the syntax
Element Description
key Type: IUserKey
User key to which a graphical element is assigned
position Type: Enum of type UserKeyAlignment
Position of the element (grid space)
(>>> "UserKey alignment" Page 449)
led Type: Enum of type UserKeyLED
Color of the LED icon
UserKeyLED.Grey: Gray
UserKeyLED.Green: Green
UserKeyLED.Yellow: Yellow
UserKeyLED.Red: Red
size Type: Enum of type UserKeyLEDSize
Size of the LED icon
UserKeyLEDSize.Small: Small
UserKeyLEDSize.Normal: Large
Example The user key bar for controlling a gripper is to be expanded. The user keys for
opening and closing the gripper should each be assigned a small LED icon.
As long as the gripper is opening or closing, the LED icons should be dis-
played in green. If the gripper is stationary, the LED icons should be displayed
in gray.
IUserKeyBar gripperBar = getApplicationUI()
   .createUserKeyBar("Gripper");
// ...
IUserKeyListener closeGripperListener = new IUserKeyListener(){
   @Override
   public void onKeyEvent(IUserKey key, UserKeyEvent event){
      if(event == UserKeyEvent.KeyDown){
         key.setLED(UserKeyAlignment.BottomMiddle, UserKeyLED.Green,
            UserKeyLEDSize.Small);
         closeGripper(); // Method for closing the gripper
         key.setLED(UserKeyAlignment.BottomMiddle, UserKeyLED.Grey,
            UserKeyLEDSize.Small);
      }
   }
};
Description User keys can trigger functions that are associated with a risk. In order to pre-
vent damage caused by the unintentional actuation of such user keys, a warn-
ing message can be added identifying them as safety-critical. The
setCriticalText(…) method is used for this purpose. The method belongs to the
IUserKey interface.
If the operator actuates a user key designated as safety-critical, the message
defined with setCriticalText(…) is displayed on the smartHMI in a window with
the name Critical operation. The user key is then deactivated for approx. 5 s.
Once this time has elapsed, the operator can trigger the desired function by
actuating the user key again within 5 s.
If the user key is not actuated within this time or if an area outside of the Crit-
ical operation window is touched, the window is closed and the user key is
reset to its previous state.
Syntax key.setCriticalText("text");
Explanation of the syntax
Element Description
key Type: IUserKey
User key which is provided with a warning message
text Type: String
Message text displayed when the user key is actuated
Example The user key bar for controlling a gripper is to be expanded. If the user key for
opening the gripper is actuated, a warning message should appear. The oper-
ator is requested to ensure that no damage can result from workpieces falling
out when the gripper is opened.
IUserKeyBar gripperBar =
getApplicationUI().createUserKeyBar("Gripper");
// ...
IUserKey openKey = ...;
openKey.setText...;
openKey.setLED...;
openKey.setCriticalText("Gripper opens when key is actuated again.
Ensure that no damage can result from workpieces falling out!");
Description Once a user key bar has been equipped with all the necessary user keys and
functionalities, it must be published with the publish() method. Only then can
the operator access it on the smartPAD.
Once a user key bar has been published, further user keys may not be added
later in the program sequence. In other words, it is not possible to add an un-
assigned user key and assign a function to it at a later time. It is, however, pos-
sible to change the labeling or graphical element displayed next to the user
key on the smartHMI at a later time.
Syntax keybar.publish();
Explanation of the syntax
Element Description
keybar Type: IUserKeyBar
Name of the user key bar created with createUserKeyBar(…).
Example The user key bar created for controlling a gripper is published.
IUserKeyBar gripperBar =
getApplicationUI().createUserKeyBar("Gripper");
// ...
gripperBar.publish();
Description It is possible to program notification, warning and error messages which are
displayed on the smartHMI and written to the LOG file of the application while
the application is running. In addition, it is possible to program messages
which are not displayed on the smartHMI but are only written to the LOG file.
In order to program a user message, an object of the ITaskLogger class is in-
tegrated by means of dependency injection. At this object, the corresponding
methods can be called in order to generate a message display with the appro-
priate LOG level.
Dependency injection makes it possible for messages to be displayed on the
smartHMI from all classes of an application, including those which are not a
task.
Explanation of the syntax
Element Description
logger Name of the logger object, as it is to be used in the applica-
tion
Message text Text which is to be displayed on the smartHMI and/or writ-
ten to the LOG file
Example Once the robot has reached an end point, a notification message is to be dis-
played. If the motion ended with a collision, a warning notification is displayed
instead.
public class ExampleApplication extends RoboticsAPIApplication {
@Inject
private ITaskLogger logger;
@Inject
private IApplicationData data;
@Inject
private LBR robot;
private ForceCondition collision;
@Override
public void initialize() {
// initialize your application here
collision = ForceCondition
.createSpatialForceCondition(robot.getFlange(), 15.0);
}
@Override
public void run() {
// ...
IMotionContainer motion = robot.move(lin(getFrame("/P20"))
.breakWhen(collision));
if (motion.getFiredBreakConditionInfo() == null){
logger.info("End point reached.");
}
else {
logger.warn("Motion canceled after collision!");
}
// ...
}
}
Description User dialogs can be programmed in an application. These user dialogs are
displayed in a dialog window on the smartHMI while the application is being
run and require user action.
Various dialog types can be programmed via the method
displayModalDialog(…). The following icons are displayed on the smartHMI according to type:
Icon Type
INFORMATION
Dialog with information of which the user must take note
QUESTION
Dialog with a question which the user must answer
WARNING
Dialog with a warning of which the user must take note
ERROR
Dialog with an error message of which the user must take
note
The user answers by selecting a button that can be labeled by the program-
mer. Up to 12 buttons can be defined.
The application from which the dialog was called is stopped until the user re-
acts. How program execution continues can be made dependent on which but-
ton the user selects. The method displayModalDialog(…) returns the index of
the button which the user selects on the smartHMI. The index begins at “0” (=
index of the first button).
Explanation of the syntax
Element Description
Dialog type Type: Enum of type ApplicationDialogType
INFORMATION: The dialog with the information icon is
displayed.
QUESTION: The dialog with the question icon is dis-
played.
WARNING: The dialog with the warning icon is dis-
played.
ERROR: The dialog with the error icon is displayed.
Dialog text Type: String
Text which is displayed in the dialog window on the
smartHMI
Button_1 … Button_12 Type: String
Labeling of buttons 1 … 12 (proceeding from left to right on
the smartHMI)
Example The following user dialog of type QUESTION is to be displayed on the smartH-
MI:
// The dialog text and button labels below are examples
int direction = getApplicationUI().displayModalDialog(
   ApplicationDialogType.QUESTION,
   "In which direction should the robot move?",
   "Left", "Right", "Home");
switch (direction) {
case 0:
robot.move(ptp(getApplicationData().getFrame("/Left")));
break;
case 1:
robot.move(ptp(getApplicationData().getFrame("/Right")));
break;
case 2:
robot.move(ptpHome());
break;
}
Syntax getApplicationControl().halt();
pause() does not cause a blocking wait. The application continues to be exe-
cuted until a synchronous motion command is reached.
Motion execution may only be resumed via the Start key on the smartPAD.
Syntax getApplicationControl().pause();
Description The FOR loop, also called counting loop, repeats a statement block as long as
a defined condition is met.
A counter is defined, which is increased or decreased by a constant value with
each execution of the loop. At the beginning of a loop execution, the system
checks if a defined condition is met. This condition is generally formulated by
comparing the counter with a limit value. If the condition is no longer met, the
loop is no longer executed and the program is continued after the loop.
The FOR loop is generally used if it is known how often a loop must be exe-
cuted.
FOR loops can be nested.
(>>> 15.27.8 "Examples of nested loops" Page 463)
Explanation of the syntax
Element Description
Counter Counter for the number of loops executed
The counter is assigned a start value. With each execution
of the loop, the counter is increased or decreased by a
constant value.
Start value Start value of the counter
Condition Condition for the loop execution
The counter is generally compared with a limit value. The
result of the comparison is always of type Boolean. The
loop is ended as soon as the comparison returns FALSE,
meaning that the condition is no longer met.
Counting The counting statement determines the amount by which
statement the counter is changed with each execution of the loop.
The increment and counting direction can be specified in
different ways.
Examples:
Start value ++|--: With each execution of the loop, the
start value is increased or decreased by a value of 1.
Start value +|- Increment: With each execution of the
loop, the start value is increased or decreased by the
specified increment.
The value of the variable i is increased by 1 with every cycle. The current val-
ue of i is displayed on the smartHMI with every cycle. The loop is executed a
total of 10 times. The values of 0 to 9 are displayed in the process. For output
purposes, a logger object has been integrated with dependency injection.
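A minimal sketch of the loop described above, assuming the injected logger object is named logger:
for (int i = 0; i < 10; i++) {
   // display the current value of i on the smartHMI
   logger.info("Value of i: " + i);
}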
Description The WHILE loop repeats a statement block for as long as a certain condition
is fulfilled. It is also called a rejecting loop because the condition is checked
before every loop execution.
If the condition is no longer met, the statement block of the loop is no longer
executed and the program is resumed after the loop. If the condition is not al-
ready fulfilled before the first execution, the statement block is not executed at
all.
The WHILE loop is generally used if it is unknown how often a loop must be
executed, e.g. because the repetition condition is calculated or is a specific
signal.
WHILE loops can be nested.
(>>> 15.27.8 "Examples of nested loops" Page 463)
Explanation of the syntax
Element Description
Repetition condition Type: boolean
Possible:
Variable of type Boolean
Logic operation, e.g. a comparison, with a result of type
Boolean
Example 1 Before the loop is executed the system checks whether an input signal is set.
As long as this is the case, the loop will be executed again and again and the
smartHMI will display the input as TRUE. If the input signal has been reset, the
loop will not be executed (any longer) and the input will be displayed as
FALSE. For output purposes, a logger object has been integrated with depen-
dency injection.
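A minimal sketch of this loop; isInputSet() is a hypothetical helper standing in for the actual reading of the Boolean input, and logger is the injected logger object:
while (isInputSet()) {
   logger.info("Input: TRUE");
}
logger.info("Input: FALSE");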
Example 2 int w = 0;
Random num = new Random();
With every loop execution, the value of the variable w is increased by a random
number between 1 and 6. As long as the sum of all random numbers is less
than 21, the loop will be executed. It is not possible to predict the exact number
of loop executions in advance.
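A minimal sketch of the loop body, continuing from the declarations shown above (the upper limit of 21 is taken from the description):
while (w < 21) {
   // add a random number between 1 and 6 to the sum
   w = w + num.nextInt(6) + 1;
}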
Description The DO WHILE loop repeats a statement block as long as a certain condition
is fulfilled. It is also called a post-test loop because the condition is only checked
after every loop execution.
The statement block is therefore executed at least once. As soon as the condition
is no longer met, the loop is terminated and the program is resumed after the loop.
The DO WHILE loop is generally used if a loop must be executed at least once,
but it is unknown how often, e.g. because the repetition condition is calculated
or is a specific signal.
DO WHILE loops can be nested.
(>>> 15.27.8 "Examples of nested loops" Page 463)
Syntax do {
Statement_1;
<...
Statement_n;
} while (Repetition condition);
Explanation of the syntax
Element Description
Repetition condition Type: boolean
Possible:
Variable of type Boolean
Logic operation, e.g. a comparison, with a result of type
Boolean
int num;
do {
num = (int) (Math.random()*6+1);
} while (num!=6);
Random numbers between 1 and 6 are generated until the “dice” shows a 6.
The dice must be thrown at least once.
Description The IF ELSE branch is also called a conditional branch. Depending on a con-
dition, either the first statement block (IF block) or the second statement block
(ELSE block) is executed.
The ELSE block is executed if the IF condition is not met. The ELSE block may
be omitted. In that case, if the IF condition is not met, no statements of the
branch are executed.
It is possible to check further conditions and to link them to statements after
the IF block using else if. As soon as one of these conditions is met and
the corresponding statements are executed, the subsequent branches are no
longer checked.
Several IF statements can be nested in each other.
Syntax if (Condition_1){
Statement_1;
<...
Statement_n;
}
<else if (Condition_2){
Statement_1;
<...
Statement_n;
}>
<else {
Statement_1;
<...
Statement_n;
}>
Explanation of the syntax
Element Description
Condition Type: boolean
Possible:
Variable of type Boolean
Logic operation, e.g. a comparison, with a result of type
Boolean
if (a == 17){
b = 1;
}
The loop is executed 5 times. If variable a has the value 3, the value of a is
increased by 5 once only.
The values 1, 2, 8, 9 and 10 are displayed on the smartHMI. For output pur-
poses, a logger object has been integrated with dependency injection.
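A minimal sketch that reproduces the values described above, assuming that a starts at 0 and is increased by 1 at the beginning of each cycle:
int a = 0;
for (int i = 0; i < 5; i++) {
   a = a + 1;
   if (a == 3) {
      a = a + 5;
   }
   logger.info("Value of a: " + a);
}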
In a program, a test run for a vehicle is to be carried out. This test run is only
meaningful at a specific command velocity.
The IF statement checks whether the actual velocity velAct is lower than the
command velocity velDesired. If this is the case, the vehicle accelerates. If
this is not the case, it continues with else if.
The IF ELSE statement checks whether the actual velocity velAct is higher
than the command velocity velDesired. If this is the case, the vehicle is
braked. If this is not the case, the ELSE block is executed with the test run.
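A minimal sketch of this branch; accelerate(), brake() and executeTestRun() are hypothetical methods standing in for the actual vehicle commands:
if (velAct < velDesired) {
   accelerate();
}
else if (velAct > velDesired) {
   brake();
}
else {
   executeTestRun();
}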
Description The SWITCH branch is also called a multiple branch. Generally, a SWITCH
branch corresponds to a multiply nested IF branch.
In a SWITCH block, different CASE blocks can be executed which are desig-
nated by CASE labels (jump labels). Depending on the result of an expression,
the corresponding CASE block is selected and executed. The program jumps
to the CASE label and is resumed at this point.
The keyword break at the end of a CASE block means that the SWITCH
block is left. If no break follows at the end of an instruction block, all subse-
quent instructions (not only instructions with CASE labels) are executed until
either a break statement is reached or all instructions have been executed.
A DEFAULT block can optionally be programmed. If no condition is met for
jumping to a CASE label, the DEFAULT block is executed.
Syntax switch (Expression){
case Constant_1:
Statement_1;
<...
Statement_n;>
< break;>
<...
case Constant_n:
Statement_1;>
<...
Statement_n;>
< break;>
< default:
Statement_1;>
<...
Statement_n;>
< break;>
}
Explanation of the syntax
Element Description
Expression Type: int, byte, short, char, enum
Constant Type: int, byte, short, char, enum
The data type of the constant must match the data type of
the expression.
Note: Constants of type char must be specified with ' , e.g.
case 'a'
Example 1 If variable a has the value 1, the program jumps to the label case 1. The vari-
able b is assigned the value 10. The BREAK instruction causes the SWITCH
block to be left. Program execution is resumed with the next command after
the closing bracket of the SWITCH block.
If variable a has the value 2 at the start, variable b is assigned the value 20. If
a has the value 3 at the start, b is assigned the value 30.
The DEFAULT statement is optional. It is nonetheless advisable for it always
to be set. If variable a has a value at the start that is not covered by a CASE
statement (e.g. 0 or 5), the instructions in the DEFAULT block are executed.
In this example, this means that variable b is assigned the value 40.
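A minimal sketch of the SWITCH block described above; the initial value of a is an assumption:
int a = 1;
int b;
switch (a){
   case 1:
      b = 10;
      break;
   case 2:
      b = 20;
      break;
   case 3:
      b = 30;
      break;
   default:
      b = 40;
      break;
}
// next command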
Example 2 The keyword break may be omitted in a CASE statement. Cases in which this
is practically applied include the following:
The identical statement is to be executed in multiple CASE instances (e.g.
a = 1, 2 or 3). See SWITCH statement with fall-through (variant 1).
For a CASE instance, specific statements and additional statements appli-
cable to another instance are to be executed. See SWITCH statement with
fall-through (variant 2).
SWITCH statement with fall-through (variant 1):
// ...
int a, b;
switch (a){
case 1:
// fall-through
case 2:
// fall-through
case 3:
b = 20;
break;
case 4:
b = 30;
break;
default:
b = 40;
break;
}
// next command
In variant 1, the statements to be executed are only written to the last of the
grouped CASE blocks. Omission of the BREAK statement in case 1 and
case 2 makes the assignment of variable b in these CASE blocks obsolete
too, as variable b will be overwritten in case 3 anyway. To make it evident
that the BREAK statement has not been forgotten but intentionally omitted,
fall-through is entered as a comment.
SWITCH statement with fall-through (variant 2):
// ...
int a, b, c;
switch (a){
case 1:
b = 10;
// fall-through
case 2:
c = 20;
break;
case 3:
b = 30;
break;
case 4:
c = 30;
break;
default:
b = 40;
c = 40;
break;
}
// next command
The outer loop is first executed until the inner loop is reached. The inner loop
is then executed completely. The outer loop is then executed until the end, and
the system checks whether the outer loop must be executed again. If this is
the case, the inner loop must also be executed again.
There is no limit on the nesting depth of loops. The inner loops are run through
completely for each execution of the outer loop.
The outer loop determines that the inner loop is executed 3 times. The counter
of the outer loop starts with the value i = 1.
Once the smartHMI has displayed the start of the 1st cycle, the counter of the
inner loop starts with the value k = 10. The value of variable k is decreased
by 1 with every cycle. The current value of k is displayed on the smartHMI with
every cycle. If variable k has the value 1, the inner loop will be executed for
the last time.
Then the first cycle of the outer loop is completed and the value of variable i is
increased by 1. The 2nd cycle begins. For output purposes, a logger object has
been integrated with dependency injection.
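A minimal sketch of the nested loops described above, assuming the injected logger object is named logger:
for (int i = 1; i <= 3; i++) {
   logger.info("Start of cycle " + i);
   for (int k = 10; k >= 1; k--) {
      // display the current value of k on the smartHMI
      logger.info("Value of k: " + k);
   }
}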
Overview The IRecovery interface provides methods for requesting information about
whether robots must be repositioned in order to resume a paused application
and which return strategy is applied.
Method Description
isRecoveryRequired() Return value type: Boolean
Checks whether one or more robots used in the application
must be repositioned in a paused application.
true: At least one robot must be repositioned for the application
to be resumed.
false: The application can be resumed immediately.
isRecoveryRequired(…) Return value type: Boolean
Checks whether a specific robot must be repositioned in a
paused application. The robot is transferred as a parameter
(type: Robot).
true: The robot must be repositioned for the application to be
resumed.
false: The application can be resumed immediately.
getRecoveryStrategy(…) Return value type: RecoveryStrategy
Requests the strategy being applied in order to return a specific
robot to the path. The robot is transferred as a parameter (type:
Robot).
PTPRecoveryStrategy: The robot is repositioned with a
PTP motion.
The robot is moved at 20% of the maximum possible axis ve-
locity and the effective program override.
No further strategies are available at this time.
The method returns null in the following cases:
No return strategy is required or available.
The application is not paused.
PTPRecoveryStrategy The PTPRecoveryStrategy class provides “get” methods which are
used to request the characteristics of the PTP motion. With these methods, it is possible
to evaluate whether the return strategy may be carried out in Automatic mode.
Method Description
getStartPosition() Return value type: JointPosition
Requests the start position of the PTP motion (= axis position
from which the robot can be repositioned)
The start position is the currently commanded setpoint position
of the robot and not the currently measured actual position.
getMotion() Return value type: PTP
Requests the PTP motion carried out on execution of the strat-
egy
Further information can be requested from the returned motion
object:
getDestination(): Target position of the PTP motion (= axis
position at which the robot left the path)
getMode(): Controller mode of the motion which was inter-
rupted
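A sketch of how the methods above could be evaluated; it assumes that an IRecovery object named recovery is available in the application and that robot is the LBR concerned:
if (recovery.isRecoveryRequired(robot)) {
   RecoveryStrategy strategy = recovery.getRecoveryStrategy(robot);
   if (strategy instanceof PTPRecoveryStrategy) {
      PTPRecoveryStrategy ptpStrategy = (PTPRecoveryStrategy) strategy;
      JointPosition start = ptpStrategy.getStartPosition();
      PTP motion = ptpStrategy.getMotion();
      // evaluate start and motion.getDestination() to decide whether
      // the return strategy may be carried out in Automatic mode
   }
}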
External controller The robot controller must inform the higher-level controller whether
the robot must be repositioned. The higher-level controller may only allow the return
strategy to be carried out if this can be done without risk. Otherwise, the robot
may only be manually repositioned.
The following system signals are available:
Output AutExt_AppReadyToStart
With this output, the robot controller communicates to the higher-level con-
troller whether or not the application may be resumed.
If isRecoveryRequired(…) supplies the value false (= no repositioning
required), the output can be set to TRUE.
If getRecoveryStrategy(…) supplies null (= no return strategy avail-
able), the output must be set to FALSE.
If the evaluation of the return strategy shows that it can be executed in
Automatic mode, the output can be set to TRUE.
If this is not the case, the output must be set to FALSE.
Input App_Start
The higher-level controller informs the robot controller via a rising edge
that the application should resume. (Precondition:
AutExt_AppReadyToStart is TRUE)
The higher-level controller must send the start signal App_Start twice:
1. Start signal for repositioning
2. Start signal for resuming the application
Motion commands that are communicated to the robot controller can fail for
various reasons, e.g.:
End point lies outside of a workspace
End point cannot be reached with the given axis configuration
The frame used is not present in the application data
As standard, a failed motion command results in termination of the application.
Handling routines can be defined in order to prevent the application from ter-
minating in case of error.
The following handling options are available depending on the error:
Failed synchronous motion commands are handled using a try-catch block
Failed asynchronous motion commands are handled using an event han-
dler
Syntax try {
// Code in which a runtime error can occur when executed
}
catch(Exception e){
// Code for treating the runtime error
}
< finally{
// Final treatment (optional)
}>
Explanation of the syntax
Element Description
try{…} The try block contains a code which can result in a runtime
error.
If an error occurs, the execution of the try block is termi-
nated and the catch block is executed.
catch(…) {…} The catch block contains the code for treating the runtime
error.
The catch block will only be executed if an error occurs in
the try block.
Exception e The error data type (here: Exception) can be used to define
the error type to be handled in the catch block. The error
type Exception is the superclass of most error data
types.
However, it is also possible to focus on more specific
errors. Information about errors which have occurred can
be requested using the parameter e.
In particular, the error data type CommandInvalidException
(package: com.kuka.roboticsAPI.executionModel) is impor-
tant for handling failed motion commands. It occurs, for
example, when the end point of the motion cannot be
reached.
finally {…} The finally block is optional.
Here it is possible to specify a final treatment to be exe-
cuted in all cases, whether or not an error occurs in the try
block.
Example A robot executes a motion under impedance control with very low stiffness.
For this reason, it is not guaranteed to reach the end position. It is then to move
relatively by 50 cm in the positive Z direction of the flange coordinate system.
If the robot is in an unfavorable position following the motion under impedance
control, the linear motion cannot be executed and a runtime error will occur. In
order to prevent the application from aborting in this case, the critical linear
motion is programmed in a try-catch block. If the motion planning fails, the ro-
bot should be moved to an auxiliary point before the application is resumed.
public class ErrorHandler extends RoboticsAPIApplication {
@Inject
private ITaskLogger logger;
@Inject
private LBR robot;
// ...
@Override
public void run() {
// ...
CartesianImpedanceControlMode softMode =
new CartesianImpedanceControlMode();
softMode.parametrize(CartDOF.ALL).setStiffness(10.0);
robot.move(ptp(getFrame("/Start"))
.setMode(softMode).setJointVelocityRel(0.3));
try{
logger.info("1: Try to execute linear motion");
robot.move(linRel(0.0, 0.0, 500.0)
.setJointVelocityRel(0.5));
}
catch(CommandInvalidException e){
logger.info("2: Motion not executable");
robot.move(ptp(getFrame("/AuxiliaryPoint"))
.setJointVelocityRel(0.5));
}
finally{
logger.info(
"3: Commands in finally block are executed");
}
}
}
Explanation of the syntax
Element Description
errorHandler Type: IErrorHandler
Name of the event handler responsible for handling failed
asynchronous motion commands
Input parameters of the handleError(…) method:
device Type: Device
The parameter can be used to access the robot for which
the failed motion command is commanded.
failedContainer Type: IMotionContainer
The parameter can be used to access the failed motion
command.
canceledContainers Type: List<IMotionContainer>
The parameter can be used to access a list of all deleted
motion commands. It contains all motion commands which
have already been sent to the real-time controller when the
method handleError(…) is called.
reaction Type: Enum of type ErrorHandlingAction
Return value of the handleError(…) method by means of
which the final reaction to the error is defined:
ErrorHandlingAction.EndApplication:
The application is terminated with an error.
ErrorHandlingAction.PauseMotion:
The motion execution is paused until the user resumes
the application via the smartPAD.
ErrorHandlingAction.Ignore:
The error is ignored and the application is resumed.
The handleError(…) method is ended with the return of the value
ErrorHandlingAction.Ignore.
public class ErrorHandler extends RoboticsAPIApplication {
// fields which need to be injected
@Inject
private ITaskLogger logger;
@Inject
private LBR robot;
@Override
public void initialize(){
   IErrorHandler errorHandler = new IErrorHandler() {
      @Override
      public ErrorHandlingAction handleError(Device device,
            IMotionContainer failedContainer,
            List<IMotionContainer> canceledContainers) {
         logger.warn("Motion command failed, error is ignored");
         return ErrorHandlingAction.Ignore;
      }
   };
   getApplicationControl()
      .registerMoveAsyncErrorHandler(errorHandler);
}
@Override
public void run(){
robot.move(ptpHome());
robot.move(ptp(getFrame("/PrePos")));
// ...
robot.moveAsync(ptp(getFrame("/P1")));
robot.moveAsync(ptp(getFrame("/P2")));
robot.moveAsync(lin(getFrame("/P3")));
robot.moveAsync(ptp(getFrame("/P4")));
robot.moveAsync(ptp(getFrame("/P5")));
robot.moveAsync(ptp(getFrame("/P6")));
robot.moveAsync(ptp(getFrame("/P7")));
robot.moveAsync(ptp(getFrame("/P8")));
robot.moveAsync(ptp(getFrame("/P9")));
// ...
robot.move(ptpHome());
}
}
To explain the system behavior, it is assumed that the linear motion to P3 can-
not be planned. This means that the method handleError(…) is called. In our
example, the robot is situated at end point P2 at this time.
If, for example, the motion commands to P4, P5, P6 are already in the real-
time controller at the same time, these motion commands will be deleted and
no longer executed.
Calling the method handleError(…) will block further motion commands from
being sent to the real-time controller. In this case, the application will be
stopped before the motion command to P7. If the handleError(…) method is
ended with the return of the value ErrorHandlingAction.Ignore, the ap-
plication is resumed. The robot then moves directly from its current position P2
to P7.
16 Background tasks
Background tasks are used in order to be able to perform tasks in the back-
ground, parallel to a running robot application, or to implement cyclical pro-
cesses that are to be run continuously in the background. Multiple background
tasks can run simultaneously and independently of the running robot applica-
tion.
Background tasks are used, in particular, to control and monitor peripheral de-
vices and to implement the corresponding higher-level logic. Examples:
Switching signal lamps
Monitoring and evaluating sensor information
This means that no higher-level controller, e.g. a PLC, is required for smaller
applications, as the robot controller can perform such tasks by itself.
In the case of outputs that are switched by a background task, the fol-
lowing points must be observed:
The outputs are switched, irrespective of whether a robot application
is currently being executed.
The outputs are also switched if the robot application is paused due to an
EMERGENCY STOP or missing enabling signal.
The outputs are also switched if a stop request from the safety controller
is active (this also applies if outputs are switched by a robot application).
Properties Background tasks, like robot applications, are implemented as Java classes.
They are similar in structure to robot applications: they have a run() method
that contains the commands to be executed.
Background tasks are an integral feature of the Sunrise project. They are cre-
ated in Sunrise.Workbench and transferred to the robot controller when the
project is synchronized.
(>>> 5.4.4 "Creating a new background application" Page 59)
There are 2 types of background task that differ in terms of their duration:
Cyclic background task
Executed cyclically. The cyclical behavior can be adapted by the program-
mer depending on the task to be performed.
Non-cyclic background task
Executed once.
Background tasks also differ in terms of their start type:
Manual
The task must be started manually via the smartPAD. (This function is not
yet supported.)
Automatic
The task is automatically started when the robot controller is booted and
stopped when it is shut down.
Synchronization behavior When synchronizing the Sunrise project, the associated background
tasks with Automatic start type exhibit the following behavior:
Tasks not yet present
Both cyclic and non-cyclic tasks are transferred to the controller and sub-
sequently started.
Tasks already present
If the task to be synchronized (cyclic or non-cyclic) is already present on
the controller, it will be terminated if it is still running. The synchronization
is then executed and the task automatically restarted.
Tasks no longer present
If a background task has been deleted from the associated project and
synchronization is carried out, the task is terminated before synchroniza-
tion on the controller. It is then no longer available after synchronization.
Runtime behavior After a non-cyclic task has been started, it is executed fully in accordance with
its programming. When it reaches the end of its run() method, it is terminated
and not restarted until the next synchronization or the next reboot of the con-
troller.
When started, a cyclic task is first instanced. The run() method of the task is
then repeatedly called on a regular basis. These background tasks are there-
fore permanently executed as long as the controller is running.
If an error which cannot be intercepted and rectified occurs in a task (cyclic or
non-cyclic), the task is automatically terminated.
Structure
Item Description
1 This line contains the name of the package in which the task is lo-
cated.
2 Import section
The section contains the imported classes which are required for
programming the task
3 Header of the task
The cyclic background task is a subclass of RoboticsAPICyclic-
BackgroundTask.
4 Declaration section
The data arrays of the task that are required for its execution are
declared here.
As an example, the controller is automatically integrated via
dependency injection when the task is created.
5 initialize() method
Initial values are assigned here to data arrays that are not integrat-
ed using dependency injection.
The initializeCyclic(…) method is available as standard. This meth-
od is used to define the cyclical behavior of the task.
(>>> "Initialization" Page 475)
Note: The method must not be deleted or renamed.
6 runCyclic() method
The code that is to be executed cyclically is programmed here.
Note: The method must not be deleted or renamed.
The following cyclical behavior is preset:
Period: 500 ms
Behavior if the defined period is exceeded: execution of runCyclic() continues.
These initial values can be changed by the programmer:
initializeCyclic(long initialDelay, long period, TimeUnit timeUnit,
CycleBehavior behavior);
Element Description
initialDelay Delay after which the cyclical background task is executed
for the first time after the start. All further cycles are exe-
cuted without a delay.
The time unit is defined with timeUnit.
period Period (= time between 2 calls of runCyclic())
The period is maintained even if the execution time of run-
Cyclic() is less than the defined period. The behavior in the
event of runCyclic() exceeding the period is defined by
behavior.
The time unit is defined with timeUnit.
timeUnit Time unit of initialDelay and period
The Enum TimeUnit is an integral part of the standard Java
library.
behavior Timeout behavior
The behavior of the background task if the period defined
with period is exceeded by the runtime of runCyclic() is
defined here.
CycleBehavior.BestEffort
runCyclic() is executed completely and then called
again.
CycleBehavior.Strict
Execution of the background task is canceled with an
error of type CycleExceededException.
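For orientation, a minimal cyclic background task might look like the following sketch. The class and package names are illustrative, and the import paths are assumed to follow the usual RoboticsAPI package structure.
package backgroundTask;

import java.util.concurrent.TimeUnit;

import com.kuka.roboticsAPI.applicationModel.tasks.CycleBehavior;
import com.kuka.roboticsAPI.applicationModel.tasks.RoboticsAPICyclicBackgroundTask;

public class HeartbeatTask extends RoboticsAPICyclicBackgroundTask {

    @Override
    public void initialize() {
        // First execution without delay, then every 500 ms; if a cycle
        // overruns, runCyclic() simply continues (BestEffort)
        initializeCyclic(0, 500, TimeUnit.MILLISECONDS, CycleBehavior.BestEffort);
    }

    @Override
    public void runCyclic() {
        // Code that is executed cyclically, e.g. reading inputs and
        // switching outputs
    }
}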
Example A robot is to assemble workpieces that it takes from a magazine. The maga-
zine can contain a maximum of 100 workpieces and is loaded manually. If the
remaining number of workpieces in the magazine falls below 20, this is sig-
naled to the robot controller via a digital input. An LED is then to flash every
500 ms to signal to the operator that the magazine needs filling. Another LED
is to flash if the force determined at the robot flange exceeds a limit of 150 N.
A cyclic background task is used for data evaluation and activation of the
LEDs. The background task is executed every 500 ms.
public class LEDTask extends RoboticsAPICyclicBackgroundTask {
@Inject
private LBR robot;
@Inject
private ProcessParametersIOGroup processParaIOs;
@Inject
private ProcessParametersLEDsIOGroup LED_IOs;
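// Note: the figure above shows only the declaration section of LEDTask.
// The following continuation is a sketch and is not part of the original
// figure: the I/O access methods (getMagazineLow(), getLED_MagazineLow(), ...)
// and the force query via getExternalForceTorque(...) are assumed names for
// illustration; the actual methods depend on the I/O configuration.
@Override
public void initialize() {
// The task is executed every 500 ms
initializeCyclic(0, 500, TimeUnit.MILLISECONDS, CycleBehavior.BestEffort);
}
@Override
public void runCyclic() {
// Magazine low: signaled to the robot controller via a digital input
boolean magazineLow = processParaIOs.getMagazineLow();
if (magazineLow) {
// Toggle the LED with every 500 ms cycle so that it flashes
LED_IOs.setLED_MagazineLow(!LED_IOs.getLED_MagazineLow());
} else {
LED_IOs.setLED_MagazineLow(false);
}
// Force determined at the robot flange; limit of 150 N from the example
double force = robot.getExternalForceTorque(robot.getFlange())
.getForce().length();
if (force > 150.0) {
LED_IOs.setLED_ForceExceeded(!LED_IOs.getLED_ForceExceeded());
} else {
LED_IOs.setLED_ForceExceeded(false);
}
}
}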
Structure
Item Description
1 This line contains the name of the package in which the task is lo-
cated.
2 Import section
The section contains the imported classes which are required for
programming the task
3 Header of the task
The non-cyclic background task is a subclass of RoboticsAPI-
BackgroundTask.
4 Declaration section
The data arrays of the task that are required for its execution are
declared here.
As an example, the controller is automatically integrated via
dependency injection when the task is created.
5 initialize() method
Initial values are assigned here to data arrays that are not integrat-
ed using dependency injection.
Note: The method must not be deleted or renamed.
6 run() method
The code that is to be executed once is programmed here. The
runtime is not limited.
Note: The method must not be deleted or renamed.
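For orientation, a minimal non-cyclic background task might look like the following sketch; the class and package names are illustrative and the import path is an assumption.
package backgroundTask;

import com.kuka.roboticsAPI.applicationModel.tasks.RoboticsAPIBackgroundTask;

public class StartupTask extends RoboticsAPIBackgroundTask {

    @Override
    public void initialize() {
        // Assign initial values to data fields that are not injected
    }

    @Override
    public void run() {
        // Code that is executed once after the task has been started;
        // the runtime is not limited
    }
}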
Description The mechanism described here can be used to exchange data between run-
ning tasks. One task can provide task functions (providing task) that can be
accessed by other tasks (requesting tasks).
Example: Accessing and processing information from the running robot appli-
cation in a background task.
It is not relevant for programming whether data are exchanged between a
background task and a robot application or between 2 background tasks. The
providing task may be either a robot application or a background task. For this
reason, background tasks and robot applications are grouped together as
tasks.
Overview The following steps are required in order for the providing task and the re-
questing task to be able to communicate with one another:
Step Description
1 Create an interface and declare the desired task functions.
(>>> 16.4.1 "Declaring task functions" Page 479)
2 Implement the interface in which the task functions are
declared.
The interface can be implemented directly by the providing
task or by a specially created class. The declared task func-
tions must be programmed in the implementing class.
(>>> 16.4.2 "Implementing task functions" Page 480)
3 Create the providing task.
The providing task must contain a parameterless public
method with the annotation @TaskFunctionProvider which
returns the implementation of the interface.
(>>> 16.4.3 "Creating the providing task" Page 481)
4 In the requesting task, use the getTaskFunction(…) method to
request the interface in which the task functions are declared.
The method is available in all task classes.
The ITaskFunctionMonitor interface can be used to check
whether the task functions are available.
(>>> 16.4.4 "Using task functions" Page 483)
Example The data exchange between tasks is described step by step in the sections be-
low, using the following example:
An assembly process is to be implemented using the robot application “As-
semblyApplication”. An LED is to flash during the assembly process. If the ro-
bot leaves the path during the application and has to be repositioned, a further
LED is to flash.
The LEDs are activated by the background task “LEDTask”. In this example,
the background task is the requesting task.
The robot application is the providing task. It must enable access to your Re-
covery interface, which is used to check whether repositioning of the robot is
required. Furthermore, it must also signal the start and end of the assembly
process.
Description The desired task functions must be declared in a specially created interface.
The interface may only declare those methods that are to be made
available to the requesting task. It is thus advisable not to use set
methods for setting fields in the interface. Instead, such methods can
be offered by the implementing class.
Line Description
1 … 16 Interface IApplicationInformationFunction
7 Method isAssemblyRunning()
Called by the requesting task to check whether the assembly
process is currently running.
14 Method isManualRepositioningRequired()
Called by the requesting task to check whether repositioning
of the robot is required.
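Based on the line descriptions above, the interface might look like the following sketch. The package name is illustrative; that the interface extends ITaskFunction, and the import path used for it, are assumptions (the section on the providing task refers to the interface derived from ITaskFunction).
package taskFunctions;

// Import path of ITaskFunction assumed
import com.kuka.roboticsAPI.applicationModel.tasks.ITaskFunction;

public interface IApplicationInformationFunction extends ITaskFunction {

    /**
     * Called by the requesting task to check whether the assembly
     * process is currently running.
     */
    boolean isAssemblyRunning();

    /**
     * Called by the requesting task to check whether repositioning
     * of the robot is required.
     */
    boolean isManualRepositioningRequired();
}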
Description A class must be made available that implements the interface and in which the
declared task functions are programmed. The providing task or a specially cre-
ated class can be used as the implementing class.
25
26 /**
27 * Called from application to give access to its
28 * recovery interface
29 * @param applicationRecoveryInterface Recovery
30 * interface of the application
31 */
32 public void setApplicationRecoveryInterface(
33 IRecovery applicationRecoveryInterface) {
34 _applicationRecoveryInterface =
35 applicationRecoveryInterface;
36 }
37 }
Line Description
1 … 37 Class ApplicationInformation
The task functions are programmed in the class.
3, 4 Declaration of the data fields
_assembly: saves the current status of the assembly pro-
cess
_applicationRecoveryInterface: refers to the Recovery in-
terface of the robot application
6…9 Method isAssemblyRunning()
Called by the requesting task to check whether the assembly
process is currently running.
17 … 19 Method setAssemblyRunning(…)
Called by the robot application when the assembly process is
started or ended.
21 … 2 Method isManualRepositioningRequired()
Called by the requesting task to check whether repositioning
of the robot is required.
32 … 36 Method setApplicationRecoveryInterface(…)
Called by the robot application for making its Recovery inter-
face available.
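Putting the line descriptions together, the implementing class might look like the following sketch. Only the field and method names listed above are taken from the original example; the IRecovery query used in isManualRepositioningRequired() is an assumption, and imports are omitted because the package paths depend on the RoboticsAPI version.
public class ApplicationInformation implements IApplicationInformationFunction {

    // Saves the current status of the assembly process
    private boolean _assembly = false;
    // Refers to the Recovery interface of the robot application
    private IRecovery _applicationRecoveryInterface;

    @Override
    public boolean isAssemblyRunning() {
        return _assembly;
    }

    // Called by the robot application when the assembly process is started or ended
    public void setAssemblyRunning(boolean assembly) {
        _assembly = assembly;
    }

    @Override
    public boolean isManualRepositioningRequired() {
        // Query via the Recovery interface of the application;
        // the IRecovery method used here is an assumption
        return _applicationRecoveryInterface != null
                && _applicationRecoveryInterface.isRecoveryRequired();
    }

    // Called from the application to give access to its Recovery interface
    public void setApplicationRecoveryInterface(IRecovery applicationRecoveryInterface) {
        _applicationRecoveryInterface = applicationRecoveryInterface;
    }
}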
Description A task can provide task functions of various interfaces. For each interface
whose task functions are provided by the task, a parameterless public method
with the annotation @TaskFunctionProvider must be inserted which returns
the implementation of the interface.
Syntax @TaskFunctionProvider
public Interface Method name() {
return Interface instance;
}
Explanation of the syntax
Element Description
Interface Interface whose task functions the task provides
Method name Name of the method that returns the implementation of the
interface (the name can be freely selected)
Interface instance Instance of the implementing class
If the providing task does not, itself, implement the interface derived
from ITaskFunction, it requires an instance of the implementing class.
It is advisable to create this instance as an array.
If the providing task implements the interface itself, transfer the in-
stance of the task for the Interface instance parameter:
return this;
Each interface may only be provided once. This means that there
must not be 2 tasks that return the same interface in their @Task-
FunctionProvider annotation.
Example The robot application contains a data array of type ApplicationInformation. Its
method setApplicationRecoveryInterface(…) provides the Recovery interface
of the robot application. Calling the method setAssemblyRunning(…) announces that the assembly process is being carried out.
public class AssemblyApplication extends RoboticsAPIApplication {
@Inject
private LBR robot;
private ApplicationInformation appInformation = new ApplicationInformation();
/**
* Implements the assembly process
*/
private void assembly() {
// ...
}
/**
* TaskFunctionProvider method that has to be
* implemented by the task
*/
@TaskFunctionProvider
public IApplicationInformationFunction getAppInfoFunction() {
return appInformation;
}
}
Enabling access The following steps are required in order to enable access to the task functions
of an interface in the requesting task:
1. Create the data array of the type of the interface.
private Interface Interface instance;
2. Request the interface with the getTaskFunction(…) method. The task
functions of the interface are saved in the data array just created.
Interface instance = getTaskFunction(Interface.class);
Explanation of the syntax:
Interface: Interface whose task functions the task wants to access
Interface instance: Instance of the interface in which the task functions are
declared
Example:
In the requesting background task “LEDTask”, access to the functions defined
by IApplicationInformationFunction is to be enabled. The interface instance re-
quired for this is created as a data array and generated in the initialize() meth-
od of the task:
public class LEDTask extends RoboticsAPICyclicBackgroundTask {
// ...
// ...
Checking availability The task functions of the providing task are only available when the providing task is being executed or is paused.
Method Description
isAvailable() Return value type: Boolean
Specifies whether the task functions of the providing task
are available (true = available).
await(time, unit) Return value type: Boolean
If the task functions are not available when the providing
task is called, the system waits a defined time for them to
become available (true = task functions available within the
defined wait time).
Parameters:
time (type: long): duration of maximum wait time. The
unit is defined by the parameter unit.
unit (type: TimeUnit): unit of time
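If the providing task may start later than the requesting task, await(…) can be used instead of isAvailable(). A minimal sketch, using the monitor and interface objects from the example below:
// Wait up to 10 seconds for the task functions to become available
if (appInfoMonitor.await(10, TimeUnit.SECONDS)) {
    boolean assemblyRunning = appInfoFunction.isAssemblyRunning();
    // ...
}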
Example:
In the method runCyclic() of the background task “LEDTask”, a check is to be
carried out to ascertain whether the assembly process is currently being exe-
cuted. For this, the interface IApplicationInformationFunction offers the meth-
od isAssemblyRunning().
The requesting background task “LEDTask” can only check whether the as-
sembly process is being executed if the robot application is running or paused.
For this reason, the availability of the function must be checked before isAs-
semblyRunning() is called:
public class LEDTask extends RoboticsAPICyclicBackgroundTask {
// ...
@Override
public void initialize() {
// ...
// Get Task Function Interface
appInfoFunction =
getTaskFunction(IApplicationInformationFunction.class);
/*
* Create ITaskFunctionMonitor for
* IApplicationInformationFunction
*/
appInfoMonitor = TaskFunctionMonitor.create(appInfoFunction);
// ...
}
// ...
}
Overall example The requesting task “LEDTask” is executed cyclically every 500 ms. It first
checks whether the required task functions of the robot application are avail-
able. If they are available, a check is carried out to ascertain whether the as-
sembly process is running and the corresponding LED is activated. The
system then checks whether repositioning of the robot is required. If this is the
case, a further LED is activated.
public class LEDTask extends RoboticsAPICyclicBackgroundTask {
@Inject
private ProcessParametersLEDsIOGroup LED_IOs;
private IApplicationInformationFunction appInfoFunction;
private ITaskFunctionMonitor appInfoMonitor;
@Override
public void initialize() {
// Get Task Function Interface
appInfoFunction =
getTaskFunction(IApplicationInformationFunction.class);
/*
* Create ITaskFunctionMonitor for
* IApplicationInformationFunction
*/
appInfoMonitor = TaskFunctionMonitor.create(appInfoFunction);
// Cyclical behavior: the task is executed every 500 ms
initializeCyclic(0, 500, TimeUnit.MILLISECONDS,
CycleBehavior.BestEffort);
}
@Override
public void runCyclic() {
// Check whether the task functions of the application are available
if (appInfoMonitor.isAvailable()) {
// Use task function to check whether the assembly process is running
if (appInfoFunction.isAssemblyRunning()) {
/*
* If the assembly process is running, the assembly LED changes
* its state with every execution of runCyclic()
*/
boolean currentStateAssemblyLED =
LED_IOs.getLED_Assembly();
LED_IOs.setLED_Assembly(!currentStateAssemblyLED);
} else{
LED_IOs.setLED_Assembly(false);
}
/*
* Use task function to check whether the application
* requires repositioning
*/
boolean recoveryRequired =
appInfoFunction.isManualRepositioningRequired();
if(recoveryRequired){
/*
*If recovery is required, the appropriate LED changes
* its state with every execution of runCyclic()
*/
boolean currentStateRecoveryLED =
LED_IOs.getLED_RecoveryRequired();
LED_IOs.setLED_RecoveryRequired(!currentStateRecoveryLED);
} else{
LED_IOs.setLED_RecoveryRequired(false);
}
} else{
// If application is not running, LEDs remain off
LED_IOs.setLED_Assembly(false);
LED_IOs.setLED_RecoveryRequired(false);
}
}
}
17 KUKA Sunrise.EnhancedVelocityController
17.1 Overview
KUKA Sunrise.EnhancedVelocityController (EVC) is an installable option for
the limitation of Cartesian velocities. EVC automatically adapts the robot ve-
locity so that safety-oriented and application-specific Cartesian velocity limits
are adhered to. These are:
Cartesian velocity limits that are active on the safety controller
Cartesian velocity monitoring functions can be combined in the safety con-
figuration with the “Brake” safety reaction.
(>>> 17.2 "“Brake” safety reaction" Page 487)
Mode-specific Cartesian velocity limitation of 250 mm/s that is active in T1
or CRR mode
Device-specific Cartesian velocity limitation that is set by the application
(>>> 17.3 "Cartesian velocity limitation via application" Page 489)
Aggregated Cartesian velocity limitation
If multiple Cartesian velocity limitations are active simultaneously, the ve-
locity is reduced to the lowest of these limits.
EVC limits Cartesian velocities in position-controlled spline motions:
Impedance-controlled motions and motions that are not spline motions,
e.g. manual guidance motions, are not compatible with EVC.
EVC takes safety-oriented velocity limits into account, thereby preventing the
robot from being stopped with a safety stop:
If a Cartesian velocity monitoring function is active, the velocity is automat-
ically reduced so that the monitored velocity limit is not exceeded.
If multiple Cartesian velocity monitoring functions are active simultaneous-
ly, the velocity is automatically reduced so that the lowest currently moni-
tored velocity limit is not exceeded.
The velocity is always reduced to 90% of the lowest current velocity limit in or-
der to maintain a buffer between the exact limit and the target velocity. If, for
example, only the mode-specific Cartesian velocity limitation of 250 mm/s is
active, EVC regulates the velocity to 225 mm/s.
Description The “Brake” safety reaction is available for safety functions of the PSM mech-
anism for monitoring the Cartesian velocity.
Brake can only be used as a reaction if the safety function (PSM row) contains
the AMF Cartesian velocity monitoring. The PSM row can contain further
AMFs. An exception is extended AMFs, such as the AMF Time delay. Extend-
ed AMFs cannot be used in conjunction with the “Brake” reaction.
Benefits The “Brake” safety reaction can be used to prevent the robot being stopped
with a safety stop if its velocity is higher than the configured limit when the ve-
locity monitoring is activated.
With very high accelerations, it is possible, in rare cases, that the ve-
locity limitation does not act quickly enough. This can result in the ro-
bot stopping with a safety stop 1. Possible remedy: Reduce the
acceleration for the motions in which this occurs.
Example 1 Cartesian velocity monitoring is activated with an input signal (LOW), e.g. a la-
ser scanner.
If the laser scanner detects a person and the robot is too fast, i.e. its velocity
is above the configured limit, the “Brake” reaction is triggered. The resulting
braking process is monitored and the velocity continues being reduced until it
is below the configured limit. The AMF Cartesian velocity monitoring is then no
longer violated and the “Brake” reaction is terminated. As long as the input sig-
nal has the LOW level, EVC keeps the velocity below this limit value.
Brake ramp monitoring
1 Monitoring time
v Velocity
t Time
t1 Moment in time at which the “Brake” reaction is triggered (PSM row is
violated)
t2 Moment in time at which the “Brake” reaction is stopped (AMF Carte-
sian velocity monitoring is no longer violated)
vR Robot velocity (blue curve)
vS Velocity limit of the AMF Cartesian velocity monitoring
v0 Velocity at time t1 at which the “Brake” reaction is triggered
v1 Start value of the monitoring ramp (green curve)
v1 = v0 + 250 mm/s
EVC can limit the Cartesian robot velocity via the application. Once set, this
device-specific Cartesian velocity limitation can be deactivated again via the
application. Additionally, information about all Cartesian velocity limitations
carried out by EVC can be requested from the robot.
Description The robot class contains the methods that can be used to set device-specific
Cartesian velocity monitoring in the application and then deactivate it again.
Explanation of the syntax
Element Description
robot Type: Robot
Instance of the robot used in the application
limit Type: int
Value > 0 that is set as the Cartesian velocity limit for the
robot (unit: mm/s)
@Override
public void initialize() {
// initialize your application here
}
@Override
public void run() {
// your application execution starts here
int integerValue = 69; //in mm/s
robot.move(ptpHome());
Syntax infoObject = robot.getCartesianVelocityLimitInfo();
Explanation of the syntax
Element Description
infoObject Type: CartesianVelocityLimitInfo
Variable for the information requested from the robot using
getCartesianVelocityLimitInfo()
robot Type: Robot
Instance of the robot used in the application
Overview The information stored in the containers can be read using the following meth-
ods:
Method Description
getDeviceVelocityLimit() Return value type: Integer
Returns the device-specific Cartesian velocity limitation that is
currently set by an application (unit: mm/s).
The return value -1 means that the device-specific Cartesian
velocity limitation is deactivated.
getOperationModeVelocityLimit() Return value type: Integer
Returns the mode-specific Cartesian velocity limitation (unit:
mm/s).
The return value -1 means that the Cartesian velocity is not cur-
rently limited by an operating mode.
getSafetyVelocityLimit() Return value type: Integer
Returns the Cartesian velocity limit that is active on the safety
controller (unit: mm/s).
The return value -1 means that there is currently no Cartesian
velocity limit active on the safety controller.
getAggregatedVelocityLimit() Return value type: Integer
Returns the minimum of all currently active and valid Cartesian
velocity limitations (unit: mm/s). This aggregated velocity value
is the actual limitation that regulates the robot velocity.
The return value -1 means that the Cartesian velocity is not cur-
rently limited. In other words, it is not currently limited by either
the operating mode or the application and there is no active
Cartesian velocity limit on the safety controller.
getVelocityLimitSources() Return value type: Set<CartVelocityLimitSourceType>
Returns an Enum data set with the sources from which the
value for the aggregated Cartesian velocity limitation was
formed.
The Enum CartVelocityLimitSourceType contains the following
values:
DEVICE
Array for device-specific Cartesian velocity limitation
OPERATIONMODE
Array for mode-specific Cartesian velocity limitation
SAFETY
Array for Cartesian velocity limit that is active on the safety
controller
@Override
public void initialize() {
// initialize your application here
}
@Override
public void run() {
// your application execution starts here
int integerValue = 69; //in mm/s
robot.move(ptpHome());
CartesianVelocityLimitInfo infoObject =
robot.getCartesianVelocityLimitInfo();
Integer deviceVelocityLimit = infoObject.getDeviceVelocityLimit();
Integer safetyVelocityLimit = infoObject.getSafetyVelocityLimit();
Integer operationModeVelocityLimit = infoObject
.getOperationModeVelocityLimit();
Integer aggregatedVelocityLimit = infoObject
.getAggregatedVelocityLimit();
Set<CartVelocityLimitSourceType> velocityLimitSources =
infoObject.getVelocityLimitSources();
18 KUKA Sunrise.StatusController
Description
KUKA Sunrise.StatusController is a programming interface that can be used
to signal various system states. It can be used in robot and background appli-
cations.
The status controller distinguishes between status group and status. Various
statuses are grouped together in a status group, e.g. the status group
SAFETY_STOP contains the statuses that belong to a safety stop.
Examples of statuses of the status group SAFETY_STOP:
E-STOP actuated on smartPAD
Safety configuration not activated
Safety stop 0 active
etc.
Functional principle Some statuses are automatically set by the station monitoring, e.g. whether a safety stop is active. Additionally, user-defined statuses can be set via the status monitor of a task. A status listener can then be used to respond to status changes, e.g. switching on of a red lamp in the case of an error state.
IStatusController
Interface with the methods of the status controller
Automatically set statuses and statuses set by the user on a status
monitor are signaled to the status controller.
If a status listener is registered on the status controller, the status lis-
tener is notified of status changes.
All currently active statuses and status groups can be requested via
the status controller.
IStatusMonitor
Interface for setting the status from robot and background applications
IStatusListener
Interface for responding to status changes
The following status groups are defined in the class DefaultStatusGroups. For
some of these groups, the statuses are automatically set if certain precondi-
tions are met. For other groups, the statuses must be generated and set by the
user.
Status groups whose statuses are only set automatically (cannot be
used by the user):
Status groups whose statuses are set automatically (can also be used by
the user):
Description In order to be able to define a new status, a status group is necessary. If the
new status matches one of the predefined status groups, this status group can
be used.
If the new status does not match any of the predefined status groups, new sta-
tus groups can be created. The designation of the status group must enable
clear identification of the group.
If new status groups are created and used, these cannot be pro-
cessed automatically by the supplied status handlers, e.g. the
flexFELLOW status handler. In all cases, user-defined status groups
must be handled by the user.
Explanation of the syntax
Element Description
statusGroup Status group for which the new status is being created
description Description of the new status (optional)
The description can be used in status listeners.
Explanation of the syntax
Element Description
id ID of the new status group
If several status groups exist with the same ID, these are
treated as one status group.
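For illustration, creating a user-defined status group and a status in it might look as follows. This is a sketch: the StatusGroup constructor taking a string ID is an assumption derived from the parameter description above, the Status constructor corresponds to the example further below, and imports are omitted because the package depends on the installed StatusController option.
// User-defined status group; the ID must identify the group clearly
StatusGroup materialFlow = new StatusGroup("MATERIAL_FLOW");

// New status in this group; the description is optional and can be
// evaluated in status listeners
Status lackOfParts = new Status(materialFlow, "palette empty");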
Description The IStatusController interface can be used, for example, for requesting all ac-
tive statuses and status groups or for registering to be notified of status chang-
es. An instance of IStatusController can be integrated into tasks by means of
dependency injection.
The methods for requesting statuses and status groups always return all re-
quested statuses and status groups, irrespective of the IStatusMonitor in-
stance on which they were set.
Method Description
getActiveStatusGroups() Return value type: List<StatusGroup>
Returns the list of currently active status groups.
getActiveStatuses() Return value type: List<Status>
Returns the list of currently set statuses.
getActiveStatuses(StatusGroup) Return value type: List<Status>
Returns the list of currently set statuses belonging to the trans-
ferred status group.
isSet(Status) Return value type: Boolean
Checks whether the transferred status is set.
true: Status is set.
false: Status is not set.
addStatusListener(IStatusListener, StatusGroup) Return value type: void
Registers the transferred status listener so that it is notified of
status changes.
Any number of status groups can be transferred after the
parameter IStatusListener. The listener is then only informed of
changes in the transferred status groups. If no status groups are
transferred, the listener is informed of all status changes.
removeStatusListener(IStatusListener) Return value type: void
Unregisters the transferred status listener so that it is no longer
notified of status changes.
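As a brief sketch (the helper method is illustrative and imports are omitted), the query methods can be used in a task as follows:
@Inject
private IStatusController statusController;

@Inject
private ITaskLogger logger;

private void logStatusOverview() {
    // Number of currently active status groups and statuses, irrespective
    // of the status monitor on which the statuses were set
    List<StatusGroup> activeGroups = statusController.getActiveStatusGroups();
    List<Status> activeStatuses = statusController.getActiveStatuses();
    logger.info(activeGroups.size() + " active status groups, "
            + activeStatuses.size() + " active statuses");
}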
18.1.4 Setting and deleting the status via the status monitor
Overview The interface IStatusMonitor provides the methods for setting and deleting a
status:
Method Description
clear(Status) Return value type: void
Deletes the transferred status.
The following exceptions can occur:
IllegalArgumentException: Status is null
StatusNotSetException: Status is not currently set.
StatusOutOfScopeException: Status is set, but by a different status
monitor.
set(Status) Return value type: void
Sets the transferred status.
Any number of statuses from any number of status groups can be set
simultaneously on a status monitor.
The following exceptions can occur:
IllegalArgumentException: Status is null
StatusAlreadySetException: Status is already set.
StatusOutOfScopeException: Status is already set by a different sta-
tus monitor.
Example The status lackOfParts is set by a status monitor to signal that there is a
lack of parts. The status is cleared again when new parts are available.
private Status lackOfParts = new Status(
DefaultStatusGroups.WARNING_CRITICAL, "palette empty");
@Inject
private IStatusMonitor statusMonitor;
//...
@Override
public void run() {
//...
// Signal the lack of parts
statusMonitor.set(lackOfParts);
//...
// New parts are available: clear the status again
statusMonitor.clear(lackOfParts);
//...
}
Description In order to be able to respond to status changes in a task, the interface ISta-
tusListener must be implemented in a cyclical background application.
The status listener itself must be registered on the status controller using the
addStatusListener(…) method. Any number of status groups can be trans-
ferred during registration of the status listener. The listener is then only in-
formed of changes in the transferred status groups. If no status groups are
transferred, the listener is informed of all status changes.
If a status of a subscribed status group is set or deleted, the onStatusSet(Sta-
tusEvent) or onStatusCleared(StatusEvent) method of the status listener is
called. The procedure for implementing these methods is illustrated in the ex-
ample.
Explanation of the syntax
Element Description
statusController Type: IStatusController
Status controller on which the listener is registered
statusListener Type: IStatusListener
Listener that is registered
statusGroup1 … statusGroup1+n Type: StatusGroup
Status groups to which the listener is to respond
If no status groups are specified, the listener responds to status changes in all groups.
Method Description
getStatusGroup() Returns the status group to which the set or
cleared status belongs.
getDate() Returns the time at which the status was set or
cleared.
hasChangedActiveStatusGroups() Specifies whether setting or clearing the status has altered the number of active status groups.
getStatusDescription() Description that was specified when creating the
status.
Can return null if the set or cleared status has
no description.
Line Description
3 The class BackgroundTask implements the IStatusListener in-
terface with implements IStatusListener.
5, 6 A status controller of type IStatusController is integrated into
the task by means of dependency injection.
7, 8 A logger object of type ITaskLogger is integrated into the task
by means of dependency injection.
The logger object can be used to display status information on
the smartHMI.
16 … 19 In the initialize() method, the status listener is registered on the
status controller.
The keyword this is used to register the BackgroundTask instance itself as the status listener.
The status groups SAFETY_STOP and ERROR_GENERAL
are transferred during registration. In this way, the listener is
only informed of status changes in these status groups.
27 … 32 The onStatusSet method is called if a status is set with one of
the subscribed status groups. The status group and the de-
scription of the set status are then logged.
34 … 39 The onStatusCleared method is called if a status is cleared
with one of the subscribed status groups. The status group
and the description of the cleared status are then logged.
41 … 45 In the dispose() method, the status listener is unregistered.
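The code figure belonging to this line table is not reproduced here. The following sketch reconstructs the background task from the descriptions above; the cycle settings and the details of the log output are assumptions, and imports are omitted.
public class BackgroundTask extends RoboticsAPICyclicBackgroundTask
        implements IStatusListener {

    @Inject
    private IStatusController statusController;

    @Inject
    private ITaskLogger logger;

    @Override
    public void initialize() {
        // Cyclical behavior of the background task (values assumed)
        initializeCyclic(0, 500, TimeUnit.MILLISECONDS, CycleBehavior.BestEffort);
        // Register this task as a status listener; it is then only informed
        // of status changes in the groups SAFETY_STOP and ERROR_GENERAL
        statusController.addStatusListener(this,
                DefaultStatusGroups.SAFETY_STOP,
                DefaultStatusGroups.ERROR_GENERAL);
    }

    @Override
    public void runCyclic() {
        // ...
    }

    @Override
    public void onStatusSet(StatusEvent event) {
        // Called if a status of a subscribed status group is set
        logger.info("Status set in group " + event.getStatusGroup()
                + ": " + event.getStatusDescription());
    }

    @Override
    public void onStatusCleared(StatusEvent event) {
        // Called if a status of a subscribed status group is cleared
        logger.info("Status cleared in group " + event.getStatusGroup()
                + ": " + event.getStatusDescription());
    }

    @Override
    public void dispose() {
        // Unregister the status listener when the task is terminated
        statusController.removeStatusListener(this);
    }
}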
The KUKA LBR iiwa can be operated with a number of different controllers.
For each control type, a separate class is provided by the RoboticsAPI in the
package com.kuka.roboticsAPI.motionModel.controlModeModel. The shared
superclass is AbstractMotionControlMode.
Description In robot applications, the controller to be used is set separately for every mo-
tion command. As standard, the following steps are required for this:
Procedure 1. Create the controller object of the desired controller data type.
2. Parameterize the controller object to define the control response.
3. Set the controller as the motion parameter for a motion command.
Description To be able to use a controller, a variable of the desired controller data type
must first be created and initialized. As standard, the controller object is gen-
erated using the standard constructor.
Explanation of the syntax
Element Description
Controller mode Data type of the controller. Subclass of AbstractMotionControlMode.
controlMode Name of controller object
The parameters that can be set depend on the type of the controller used. The
individual controller classes in the KUKA RoboticsAPI provide specific “set”
and “get” methods for each parameter.
(>>> 19.5.2 "Parameterization of the Cartesian impedance controller"
Page 505)
(>>> 19.6.3 "Parameterization of the impedance controller with overlaid force
oscillation" Page 513)
(>>> 19.8 "Axis-specific impedance controller" Page 523)
Description The controller object is transferred to a motion as a parameter using the com-
mand setMode(…). If no controller object is transferred as a parameter to a
motion, the motion is automatically executed with position control.
Motions which use the Cartesian impedance controller must not con-
tain any poses in the proximity of singularity positions.
Syntax movableObject.move(motion.setMode(controlMode));
Explanation of the syntax
Element Description
motion Type: Motion
Motion to be executed
controlMode Type: Subclass of AbstractMotionControlMode
Name of controller object
With position control, the motors are controlled in such a way that the current
position of the robot always matches the setpoint position specified by the con-
troller with just a minimal difference. The position controller is particularly suit-
able in cases where precise positioning is required.
The position controller is represented by the class PositionControlMode. The
data type has no configurable parameters for adapting the robot.
If the controller mode of a motion is not explicitly specified, then the position
controller is used.
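For illustration, the position controller can also be set explicitly for a motion (minimal sketch; the frame /P1 is illustrative):
PositionControlMode positionCtrlMode = new PositionControlMode();
robot.move(ptp(getApplicationData().getFrame("/P1")).setMode(positionCtrlMode));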
Behavior of the robot Under impedance control, the robot’s behavior is compliant. It is sensitive and can react to external influences such as obstacles or process forces. The ap-
plication of external forces can cause the robot to leave the planned path.
The underlying model is based on virtual springs and dampers, which are
stretched out due to the difference between the currently measured and the
specified position of the TCP. The characteristics of the springs are described
by stiffness values, and those of the dampers are described by damping val-
ues. These parameters can be set individually for every translational and ro-
tational dimension.
If the measured and specified robot positions correspond, the virtual springs
are slack. As the robot’s behavior is compliant, an external force or a motion
command results in a deviation between the setpoint and actual positions of
the robot. This results in a deflection of the virtual springs, leading to a force
in accordance with Hooke’s law.
The resultant force F can be calculated on the basis of Hooke’s law using the
set spring stiffness C and the deflection ∆x:
F = C · ∆x
Examples The force exerted at the contact point depends on the difference between the
setpoint position and the actual position and the set stiffness.
As shown in the figure (>>> Fig. 19-2 ), a large position difference and low
stiffness can result in the same force as a smaller position difference and
greater stiffness. If the force is increased by a motion in a contact situation, the
time required to reach this force differs if the Cartesian velocity is identical.
If higher stiffness values are used, a desired force can be reached earlier, as
only a small position difference is required. Since the setpoint position is
reached quickly, a jerk can be produced in this way.
Fig. 19-3: Force over time (high stiffness, small position difference)
In the case of a large position difference and low stiffness, the force is built up
more slowly. This can be used, for example, if the robot moves to the contact
point and the impact loads are to be reduced.
Fig. 19-4: Force over time (low stiffness, large position difference)
Under impedance control, the robot behaves like a spring. The characteristics of this spring are defined by various parameters, which together determine the behavior of the robot.
With a Cartesian impedance controller, forces can be overlaid for all Cartesian
degrees of freedom. Forces acting about an axis generate a torque. For this
reason, the overlaid torque and not the overlaid force is specified for the rota-
tional degrees of freedom. For the sake of simplification, the terms “force” and
“force oscillation” are taken to include the terms “torque” and “torque oscilla-
tion” for the rotational degrees of freedom in the following text.
The following controller properties can be defined individually for each Carte-
sian degree of freedom:
Stiffness
Damping
Force to be applied in addition to the spring
The following controller properties can be defined irrespective of the degree of
freedom:
Stiffness of the redundancy degree of freedom
Damping of the redundancy degree of freedom
Limitation of the maximum force on the TCP
Maximum Cartesian velocity
Maximum Cartesian path deviation
Description Some parameters of the Cartesian impedance controller can be defined indi-
vidually for each Cartesian degree of freedom.
During programming, the Cartesian degrees of freedom for which the control-
ler parameter is to apply are specified first. The parametrize(…) method of the
controller data types is used for this purpose. To define the degrees of free-
dom, one or more parameters of the type CartDOF are transferred to this
method.
After this, the “set” method of the desired controller parameter is called via the
dot operator. This controller parameter is set to the value specified as the input
parameter of the set method for all degrees of freedom specified in parame-
trize(…).
Syntax controlMode.parametrize(CartDOF.degreeOfFreedom_1
<, CartDOF.degreeOfFreedom_2,…>).setParameter(value);
Explanation of the syntax
Element Description
controlMode Type: CartesianImpedanceControlMode
Name of controller object
degreeOfFreedom_1, degreeOfFreedom_2, … Type: CartDOF
List of degrees of freedom to be described
setParameter(value) Method for setting a controller parameter
A separate method is available for each settable parameter
(value = value of the parameter).
CartesianImpedanceControlMode cartImpCtrlMode =
new CartesianImpedanceControlMode();
cartImpCtrlMode.parametrize(CartDOF.X,
CartDOF.Y).setStiffness(3000.0);
cartImpCtrlMode.parametrize(CartDOF.Z).setStiffness(1.0);
cartImpCtrlMode.parametrize(CartDOF.ROT).setStiffness(300.0);
cartImpCtrlMode.parametrize(CartDOF.ALL).setDamping(0.7);
robot.move(lin(getApplicationData().getFrame("/P1")).setCartVelocity(
800).setMode(cartImpCtrlMode));
Overview The following methods are available for the parameters of the Cartesian im-
pedance controller that are specific to the degrees of freedom:
Method Description
setStiffness(…) Spring stiffness (type: double)
The spring stiffness determines the extent to which the robot yields to an
external force and deviates from its planned path.
Translational degrees of freedom (unit: N/m):
0.0 … 5000.0
Default: 2000.0
Rotational degrees of freedom (unit: Nm/rad):
0.0 … 300.0
Default: 200.0
Note: If no spring stiffness is specified for a degree of freedom, the
default value is used for this degree of freedom.
setDamping(…) Spring damping (type: double)
The spring damping determines the extent to which the virtual springs
oscillate after deflection.
For all degrees of freedom (without unit: Lehr’s damping ratio):
0.1 … 1.0
Default: 0.7
Note: If no spring damping is specified for a degree of freedom, the
default value is used for this degree of freedom.
setAdditionalControlForce(…) Force applied in addition to the spring (type: double)
The additional force results in a Cartesian force at the TCP. This force
acts in addition to the forces resulting from the spring stiffness.
Translational degrees of freedom (unit: N):
Negative and positive values possible
Default: 0.0
Rotational degrees of freedom (unit: Nm):
Negative and positive values possible
Default: 0.0
Note: As standard, the maximum Cartesian force that can be applied is
limited. If required, this limit value can be increased with setMaxControl-
Force(…).
Note: If no additional force is specified for a degree of freedom, the
default value is used for this degree of freedom.
Note: The force is overlaid without a delay. If the force to be overlaid is
too great, this can result in overloading of the robot and cancelation of
the program. The class CartesianSineImpedanceControlMode has the
option of overlaying forces after a delay.
Some settings apply irrespective of the Cartesian degrees of freedom. The set
methods used to define these controller parameters belong to the class Car-
tesianImpedanceControlMode and are called directly on the controller object.
Overview The following methods are available for the parameters of the Cartesian im-
pedance controller that are independent of the degrees of freedom:
Method Description
setNullSpaceStiffness(…) Spring stiffness of the redundancy degree of freedom (type: double, unit: Nm/rad)
The spring stiffness determines the extent to which the robot yields to an
external force and deviates from its planned path.
≥ 0.0
Note: If no spring stiffness is specified for the redundancy degree of
freedom, a default value is used for this degree of freedom.
setNullSpaceDamping(…) Spring damping of the redundancy degree of freedom (type: double)
The spring damping determines the extent to which the virtual springs
oscillate after deflection.
0.3 … 1.0
Note: If no spring stiffness is specified for the redundancy degree of
freedom, a default value is used for this degree of freedom.
setMaxControlForce(…) Limitation of the maximum force on the TCP
The maximum force applied to the TCP by the virtual springs is limited.
The maximum force required to deflect the virtual spring is thus also
defined. Whether or not the motion is to be aborted if the maximum force
at the TCP is exceeded is also defined.
Syntax:
setMaxControlForce(maxForceX, maxForceY, maxForceZ,
maxTorqueA, maxTorqueB, maxTorqueC, addStopCondition)
Explanation of the syntax:
maxForceXΙYΙZ: Maximum force at the TCP in the corresponding Car-
tesian direction (type: double, unit: N)
≥ 0.0
Default: Value stored in the machine data; can be requested by
the controller object using the method getMaxControlForce().
maxTorqueAΙBΙC: Maximum torque at the TCP in the corresponding
rotational direction (type: double, unit: Nm)
≥ 0.0
Default: Value stored in the machine data; can be requested by
the controller object using the method getMaxControlForce().
addStopCondition: Cancelation of the motion if the maximum force at
the TCP is exceeded (type: boolean)
true: Motion is aborted.
false: Motion is not aborted.
Note: If the force limitation is only to be applied for individual degrees of
freedom, correspondingly high values must be assigned to those
degrees of freedom that are not to be limited.
Note: If no force limitation is defined, the default value from the machine
data is used.
setMaxCartesianVelocity(…) Maximum Cartesian velocity
The motion is aborted if the defined velocity limit is exceeded.
Syntax:
setMaxCartesianVelocity(maxVelocityX, maxVelocityY, max-
VelocityZ, maxVelocityA, maxVelocityB, maxVelocityC)
mode.setNullSpaceStiffness(10.0);
mode.setNullSpaceDamping(0.7);
Example 2 A robot is to move along a table plate in compliant mode. A Cartesian imped-
ance controller is parameterized for this. A high stiffness value is set for the Z
direction of the tool coordinate system in the TCP. An additional force of 20 N
is also to be applied. The motion is aborted if a force limit of 50 N in the Z di-
rection is exceeded. A low stiffness value is set in the XY plane. The Cartesian
deviation in the X and Y directions must not exceed 10 mm, however. Suitable
higher values are specified for all other parameters.
CartesianImpedanceControlMode mode = new
CartesianImpedanceControlMode();
mode.parametrize(CartDOF.Z).setStiffness(3000.0);
mode.parametrize(CartDOF.Z).setAdditionalControlForce(20.0);
mode.setMaxControlForce(100.0, 100.0, 50.0, 20.0, 20.0, 20.0, true);
mode.parametrize(CartDOF.X, CartDOF.Y).setStiffness(10.0);
mode.setMaxPathDeviation(10.0, 10.0, 50.0, 2.0, 2.0, 2.0);
Behavior of the robot In this form of impedance control, the overlaid force causes the robot to leave the planned path in a targeted way. The new path is thus determined by a wide
range of different parameters.
In addition to stiffness and damping, further parameters can be defined, e.g.
frequency and amplitude. The programmed velocity of the robot also plays a
significant role for the actual path.
By overlaying a simple force oscillation, the working point is diverted from the
planned path (= path without overlaid oscillations) and is instead moved from
the start point to the end point of the motion in a sinusoidal path.
Example The robot executes a relative motion in the Y direction of the tool coordinate
system in the TCP. A sinusoidal force oscillation in the X direction is overlaid.
The result is a wave-like path in the XY plane of the coordinate system.
The maximum deflection ∆x is the deviation from the original path in the posi-
tive and negative X directions. The maximum deflection is determined by the
stiffness and amplitude which are defined for the impedance controller in the
Cartesian X direction, e.g.:
Cartesian stiffness: C = 500 N/m
Amplitude: F = 5 N
The maximum deflection results from Hooke’s law:
∆x = F / C = 5 N / (500 N/m) = 0.01 m = 1 cm
The wavelength can be used to determine how many oscillations the robot is
to execute between the start point and end point of the motion. The wave-
length is determined by the frequency which is defined for the impedance con-
troller with overlaid force oscillation, as well as by the programmed robot
velocity.
Wavelength λ is calculated as follows:
λ = c / f = robot velocity / frequency
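For example: at a programmed robot velocity of c = 100 mm/s and a frequency of f = 2 Hz, the wavelength is λ = 100 mm/s / 2 Hz = 50 mm, i.e. the robot executes one complete oscillation per 50 mm of path.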
Example A sinusoidal force oscillation is overlaid in both the X and Y directions of the
tool coordinate system in the TCP. The maximum deflections ∆x and ∆y are
determined by the stiffness and amplitude, which are defined for the imped-
ance controller in the Cartesian X and Y directions.
In addition to the known parameters of the impedance controller, the phase
offset between the two oscillations plays a significant role in the path.
The form of the path is mainly determined by the ratio of the two frequencies and the phase offset between the two oscillations. The resulting curve is always axisymmetric and point-symmetric. The force amplitude and stiffness set for an oscillation direction determine its position amplitude. The ratio between the two position amplitudes determines the ratio of the width to the height of the curve.
Overview The following methods are available for the parameters of the Cartesian im-
pedance controller with overlaid force oscillation that are specific to the de-
grees of freedom:
Method Description
setAmplitude(…) Amplitude of the force oscillation (type: double)
Amplitude and stiffness determine the position amplitude.
Translational degrees of freedom (unit: N):
≥ 0.0
Default: 0.0
Rotational degrees of freedom (unit: Nm):
≥ 0.0
Default: 0.0
Note: If no amplitude is specified for a degree of freedom, the default
value is used for this degree of freedom.
setFrequency(…) Frequency of the force oscillation (type: double; unit: Hz)
Frequency and Cartesian velocity determine the wavelength of the force
oscillation.
0.0 … 15.0
Default: 0.0
Note: If no frequency is specified for a degree of freedom, the default
value is used for this degree of freedom.
setPhaseDeg(…) Phase offset of the force oscillation at the start of the force overlay (type:
double; unit: °)
≥ 0.0
Default: 0.0
Note: If no phase offset is specified for a degree of freedom, the default
value is used for this degree of freedom.
setBias(…) Constant force overlaid (type: double)
Using setBias(…), a constant force can be overlaid in addition to the
overlaid force oscillation. This force adds to the force resulting from the
spring stiffness and defined force oscillation.
If a constant force is overlaid without an additional force oscillation, this
results in a force characteristic which rises as a function of the rise time
defined with setRiseTime(…) and then remains constant. setRiseTime(…) belongs to the controller parameters that are independent of the degrees of freedom (>>> 19.6.3.1 "Controller parameters independent of the degrees of freedom" Page 514).
If a constant force is overlaid in addition to a force oscillation, the force
oscillation is offset in the defined direction.
Translational degrees of freedom (unit: N):
Negative and positive values possible
Default: 0.0
Rotational degrees of freedom (unit: Nm):
Negative and positive values possible
Default: 0.0
Note: As standard, the maximum Cartesian force that can be applied is
limited. If required, this limit value can be increased with setMaxControl-
Force(…).
Note: If no additional constant force is overlaid for a degree of freedom,
the default value is used for this degree of freedom.
setForceLimit(…) Force limitation of the force oscillation (type: double)
Defines the limit value that the overall force, i.e. the sum of the ampli-
tude of the force oscillation and additionally overlaid constant force,
must not exceed. If the overall force exceeds the limit value, the overlaid
force is reduced to the limit value.
Translational degrees of freedom (unit: N):
≥ 0.0
Default: Not limited
Rotational degrees of freedom (unit: Nm):
≥ 0.0
Default: Not limited
Note: If no force limit is specified for a degree of freedom, the default
value is used for this degree of freedom.
setPositionLimit(…) Maximum deflection due to the force oscillation (type: double)
If the maximum permissible deflection is exceeded, the force is deacti-
vated. The force is reactivated as soon as the robot is back in the per-
missible range.
Translational degrees of freedom (unit: mm):
≥ 0.0
Default: Not limited
Rotational degrees of freedom (unit: rad):
≥ 0.0
Default: Not limited
Note: If no maximum deflection is specified for a degree of freedom, the
default value is used for this degree of freedom.
Example During a joining process, an oscillation about the Z axis of the tool coordinate
system in the TCP is to be generated. The Cartesian impedance controller
with overlaid force oscillation is used for this. With a stiffness of 10 Nm/rad and
an amplitude of 15 Nm, the position amplitude is approx. 1.5 rad. The frequen-
cy is set to 5 Hz. In order to exert an additional pressing force in the direction
of motion, a constant force of 5 N is generated in the Z direction and super-
posed on the overlaid force oscillation about the Z axis.
CartesianSineImpedanceControlMode sineMode = new
CartesianSineImpedanceControlMode();
sineMode.parametrize(CartDOF.Z).setStiffness(4000.0);
sineMode.parametrize(CartDOF.Z).setBias(5.0);
sineMode.parametrize(CartDOF.A).setStiffness(10.0);
sineMode.parametrize(CartDOF.A).setAmplitude(15.0);
sineMode.parametrize(CartDOF.A).setFrequency(5.0);
tool.getFrame("/TCP").move(linRel(0.0, 0.0,
10.0).setCartVelocity(10.0).setMode(sineMode));
Some settings apply irrespective of the Cartesian degrees of freedom. The set methods used to define these controller parameters belong to the class CartesianSineImpedanceControlMode and are called directly on the controller object.
Overview The following methods are available for the parameters of the Cartesian im-
pedance controller with overlaid force oscillation that are independent of the
degrees of freedom:
Method Description
setTotalTime(…) Overall duration of the force oscillation (type: double; unit: s)
(>>> "Overall duration of the force oscillation" Page 517)
≥ 0.0
Default: Unlimited
setRiseTime(…) Rise time of the force oscillation (type: double; unit: s)
≥ 0.0
Default: 0.0
Note: If no rise time is specified for a degree of freedom, the default
value is used. This means that the amplitude rises abruptly to the
defined value without a transition. If the force to be overlaid is too great,
this can result in overloading of the robot and cancelation of the pro-
gram.
setHoldTime(…) Hold time of the force oscillation (type: double; unit: s)
≥ 0.0
Default: Unlimited
Note: If no hold time is specified for a degree of freedom, the default
value is used. This means that the overlaid force oscillation ends with
the corresponding motion.
setFallTime(…) Fall time of the force oscillation (type: double; unit: s)
≥ 0.0
Default: 0.0
Note: If no fall time is specified for a degree of freedom, the default
value is used. This means that the amplitude falls abruptly to zero with-
out a transition. If the drop in force is too great, this can result in over-
loading of the robot and cancelation of the program.
setStayActiveUntilPatternFinished(…) Response if the motion duration is exceeded (type: boolean)
If the force oscillation lasts longer than the motion, it is possible to define
whether the oscillation is terminated or continued after the end of the
motion.
true: Oscillation is continued after the end of the motion.
false: Oscillation is terminated at the end of the motion.
Default: false
Note: If the response when the motion duration is exceeded is not spec-
ified, the default value is used.
Overall duration of the force oscillation The overall duration is the sum of the rise time, hold time and fall time of the force oscillation:
Rise time
Time in which the amplitude of the force oscillation is built up.
Hold time
Time in which the force oscillation is executed with the defined amplitude.
Fall time
Time in which the amplitude of the force oscillation is reduced back to zero.
Rise time, hold time and fall time of the force oscillation can be defined indi-
vidually, or indirectly by defining the overall duration of the force oscillation.
If the overall duration is defined using setTotalTime(…), the rise time and fall
time are defined automatically.
Calculation:
Rise time = fall time = 0.5 · (1/frequency)
Of the frequencies defined for the force oscillation (relative to all degrees
of freedom), the frequency that results in the largest possible rise and fall
times is used for the calculation.
If exclusively constant forces are overlaid, the frequency of all degrees of
freedom is 0.0 Hz. Rise and fall time are set to 0.0 s.
If the calculated sum of rise time and fall time exceeds the defined overall
duration, the rise time and fall time are each set to 25% of the overall du-
ration and the hold time to 50%.
If the overall duration of the force oscillation is shorter than the duration of the
corresponding motion, the force oscillation ends before the end of the motion.
The response if the motion duration is exceeded is defined using setStayAc-
tiveUntilPatternFinished(…).
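A brief sketch of these settings, applied to a controller object such as sineMode from the example above:
// Limit the overlaid force oscillation to an overall duration of 6 s;
// rise time and fall time are then derived automatically as described above
sineMode.setTotalTime(6.0);

// Continue the oscillation even if the corresponding motion ends earlier
sineMode.setStayActiveUntilPatternFinished(true);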
19.7 Static methods for impedance controller with superposed force oscillation
Overview The Cartesian impedance controller with overlaid force oscillation can also be
configured via static methods of the class CartesianSineImpedanceControl-
Mode. This simplifies the programming, in particular of Lissajous curves, as
the user only has to specify a few parameters. The remaining parameters
which are important for the implementation are calculated and set automati-
cally. Default values are used for all other parameters. Additional settings are
made as described using the parametrize(…) function and the set methods of
CartesianSineImpedanceControlMode.
createDesiredForce(…): Static method for constant force
createSinePattern(…): Static method for simple force oscillations
createLissajousPattern(…): Static method for Lissajous curves
createSpiralPattern(…): Static method for spirals
Description The createDesiredForce(…) method overlays a constant force, which does not change over time, in one Cartesian direction.
Syntax controlMode = CartesianSineImpedanceControlMode.createDesiredForce(degreeOfFreedom, force, stiffness);
Explanation of the syntax
Element Description
controlMode Type: CartesianSineImpedanceControlMode
Name of the controller object
degreeOfFreedom Type: CartDOF
Degree of freedom for which the constant force is to be
overlaid.
force Type: double
Value of the overlaid constant force. Corresponds to the call of setBias(…) for the specified degree of freedom.
Translational degrees of freedom (unit: N):
≥ 0.0
Rotational degrees of freedom (unit: Nm):
≥ 0.0
stiffness Type: double
Stiffness value for the specified degree of freedom
Translational degrees of freedom (unit: N/m):
0.0 … 5000.0
Rotational degrees of freedom (unit: Nm/rad):
0.0 … 300.0
Syntax controlMode = CartesianSineImpedanceControlMode.createSinePattern(degreeOfFreedom, frequency, amplitude, stiffness);
Explanation of the syntax
Element Description
controlMode Type: CartesianSineImpedanceControlMode
Name of controller object
degreeOfFreedom Type: CartDOF
Degree of freedom for which the force oscillation is to be
overlaid.
frequency Type: double
Frequency of the oscillation (unit: Hz)
0.0 … 15.0
amplitude Type: double
Amplitude of the oscillation which is overlaid in the direc-
tion of the specified degree of freedom
Translational degrees of freedom (unit: N):
≥ 0.0
Rotational degrees of freedom (unit: Nm):
≥ 0.0
stiffness Type: double
Stiffness value for the specified degree of freedom
Translational degrees of freedom (unit: N/m):
0.0 … 5000.0
Rotational degrees of freedom (unit: Nm/rad):
0.0 … 300.0
CartesianSineImpedanceControlMode sineMode =
CartesianSineImpedanceControlMode.createSinePattern(CartDOF.X, 2.0,
50.0, 500.0);
robot.move(linRel(0.0, 150.0,
0.0).setCartVelocity(100).setMode(sineMode));
Explanation of the syntax
Element Description
controlMode Type: CartesianSineImpedanceControlMode
Name of controller object
plane Type: Enum of type CartPlane
Plane in which the Lissajous oscillation is to be overlaid
frequency Type: double
Frequency of the oscillation for the first degree of freedom
of the specified plane (unit: Hz)
0.0 … 15.0
The frequency for the second degree of freedom is calcu-
lated as follows:
frequency · 0.4
amplitude Type: double
Amplitude of the oscillation for both degrees of freedom of
the specified plane (unit: N)
≥ 0.0
stiffness Type: double
Stiffness values for both degrees of freedom of the speci-
fied plane (unit: N/m)
0.0 … 5000.0
CartesianSineImpedanceControlMode lissajousMode;
lissajousMode =
CartesianSineImpedanceControlMode.createLissajousPattern(CartPlane.XY,
10.0, 50.0, 500.0);
robot.move(linRel(0.0, 150.0,
0.0).setCartVelocity(100).setMode(lissajousMode));
Explanation of the syntax
Element Description
controlMode Type: CartesianSineImpedanceControlMode
Name of controller object
plane Type: Enum of type CartPlane
Plane in which the spiral-shaped oscillation is to be over-
laid
frequency Type: double
Frequency of the oscillation for both degrees of freedom of
the specified plane (unit: Hz)
0.0 … 15.0
amplitude Type: double
Amplitude of the oscillation for both degrees of freedom of
the specified plane (unit: N)
≥ 0.0
stiffness Type: double
Stiffness values for both degrees of freedom of the speci-
fied plane (unit: N/m)
0.0 … 5000.0
totalTime Type: double
Total time for the spiral-shaped oscillation. The time is
divided evenly between the upward and downward motion
of the oscillation (unit: s).
≥ 0.0
Example At the current position of the robot flange, a spiral-shaped force oscillation is
to be overlaid in the XY plane of the flange coordinate system. The force is to
rise helically up to a maximum value of 100 N. Once per second, the force
characteristic is to turn around the start point of the spiral (frequency of the
force oscillation: 1.0 Hz). The force spiral must rise and fall within 10 seconds.
CartesianSineImpedanceControlMode spiralMode;
spiralMode =
CartesianSineImpedanceControlMode.createSpiralPattern(CartPlane.XY,
1.0, 100, 500, 10);
robot.move(positionHold(spiralMode, 10, TimeUnit.SECONDS));
The number of turns is a function of the duration of one turn (tPeriod). The time
for a turn corresponds to the duration of an oscillation period, e.g.:
Frequency of the force oscillation: f = 1.0 Hz
Total time: t = 10 s
tPeriod = 1 / f = 1 / 1.0 Hz = 1 s
The number of turns is calculated as follows:
NumberTurns = Total time / tPeriod = 10 s / 1 s = 10
The maximum deflection results from Hooke’s law:
∆x = F / C = 100 N / (500 N/m) = 0.2 m = 20 cm
The following controller properties can be defined individually for each axis:
Stiffness
Damping
Overview
Method Description
setStiffness(…) Spring stiffness (type: double[]; unit: Nm/rad)
The axis-specific spring stiffness determines the degree of compliance
of an axis when force is applied.
≥ 0.0
Default: 1000
Note: The spring stiffness must be specified for every axis.
setDamping(…) Spring damping (type: double[]; without unit: Lehr’s damping ratio)
The axis-specific spring damping determines the extent to which the vir-
tual springs oscillate after deflection.
0.0 … 1.0
Default: 0.7
Note: The spring damping must be specified for every axis.
Method Description
setStiffnessForAllJoints(…) Spring stiffness (type: double; unit: Nm/rad)
A single value determines the degree of compliance of all axes when force
is applied.
≥ 0.0
setDampingForAllJoints(…) Spring damping (type: double; without unit: Lehr’s damping ratio)
A single value determines the extent to which the virtual springs in all axes
oscillate after deflection.
0.0 … 1.0
Explanation of the syntax
Element Description
jointImp Type: JointImpedanceControlMode
Name of the controller object
A1 … A7 Type: double; unit: Nm/rad
Axis-specific spring stiffnesses
The number of values is dependent on the axis selection
(here: 7 axes).
Example 1 7 axes are to be controlled using the axis-specific impedance controller. Initial
values for the axis-specific spring stiffnesses are defined in the constructor of
the controller. The stiffness for axis A4 is to be modified subsequently. The
spring damping is to be identical for all axes.
JointImpedanceControlMode jointImp
= new JointImpedanceControlMode(2000.0, 2000.0, 2000.0, 2000.0,
100.0, 100.0, 100.0);
...
jointImp.setStiffness(2000.0, 2000.0, 2000.0, 1500.0, 100.0, 100.0,
100.0);
jointImp.setDampingForAllJoints(0.5);
Example 2 7 axes are to be controlled using the axis-specific impedance controller. Initial
values for the axis-specific spring stiffnesses are defined in the constructor of
the controller. The spring stiffness and spring damping are subsequently to be
identical for all axes.
JointImpedanceControlMode jointImp
= new JointImpedanceControlMode(2000.0, 2000.0, 2000.0, 2000.0,
100.0, 100.0, 100.0);
...
jointImp.setStiffnessForAllJoints(100);
jointImp.setDampingForAllJoints(0.5);
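As a usage sketch (assuming the positionHold(…) command described in the following section), the configured controller object can then be activated, for example to hold the current position under axis-specific impedance control for 10 seconds:
robot.move(positionHold(jointImp, 10, TimeUnit.SECONDS));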
Description Using the motion command positionHold(…), the robot can hold its Cartesian
setpoint position over a set period of time and remain under servo control.
If the robot is operated in compliance control, it can remove itself from its set-
point position. Whether, how far and in which direction the robot moves from
the current Cartesian setpoint position (= position at the start of the command
positionHold(…)) depends on the set controller parameters and the resulting
forces. In addition, the compliant robot under servo control can be forced off
its setpoint position by external forces.
Explanation of the syntax
Element Description
controlMode Type: Subclass of AbstractMotionControlMode
Name of controller object
time Type: long
Indicates how long the specified controlMode is to be held.
A value ≥ 0 specifies the hold duration. A value < 0 indicates an unlimited hold time.
The time unit is defined with unit.
unit Type: Enum of type TimeUnit
Unit of the specified time
The Enum TimeUnit is an integral part of the standard Java
library.
Example The robot is to be held in its current position for 10 seconds. During this time,
the robot is switched to “soft” mode in the Cartesian X direction.
CartesianImpedanceControlMode controlMode = new
CartesianImpedanceControlMode();
controlMode.parametrize(CartDOF.X).setStiffness(1000.0);
controlMode.parametrize(CartDOF.ALL).setDamping(0.7);
robot.move(positionHold(controlMode, 10, TimeUnit.SECONDS));
20 Diagnosis
20.1 Field bus diagnosis
Description The general error state of the connected field buses can be displayed on the
smartHMI.
Description The status indicator in the I/O groups area of the navigation bar of the smartH-
MI displays the state of the configured I/O groups.
The lower indicator shows the collective state of all configured I/O groups.
The upper indicator shows the state of the selected I/O group.
Procedure In the navigation bar, select the desired I/O group from I/O groups.
The detail view of the I/O group opens. Any faulty inputs/outputs are indi-
cated.
Description A log of the events and changes in state of the system can be displayed on the
smartHMI.
Overview
Item Description
1 Refresh button
Refreshes the displayed log entries. As standard, the most recent
entry is shown at the top of the list after refreshing. If a time filter is
active, the oldest entry is shown at the top of the list.
2 List of log entries
(>>> "Log event" Page 528)
3 Filter settings button
Opens the Filter settings window in which the log entries can be
filtered according to various criteria.
4 Filter settings display
The currently active filters are displayed here.
Log event The log entries contain various information pertaining to each log event.
Item Description
1 Log level of the event
(>>> "Log level" Page 529)
2 Date and time of the log event (system time of the robot controller)
3 Source of the log event (robot or station)
4 Button to maximize/minimize the detail view
The button is only available if more than 2 symptoms are present.
5 Symptoms of the log event (detail view)
As standard, up to 2 symptoms are displayed per event.
6 Category or brief description of the log event
Log level The following icons display the log level of an event:
Icon Description
Error
Critical event which results in a system error state
Warning
Critical event which can result in an error
Information
Non-critical event or information pertaining to the change in
state
Procedure 1. Touch the Filter settings button. The Filter settings window opens.
2. Select the desired filters with the appropriate buttons.
3. Touch the Filter settings button or an area outside the window.
The Filter settings window is closed and the selected filters are activated.
The filters are reset when the Log view is closed. When the view is
re-opened, the default settings are reactivated.
Description
Item Description
1 Filter Source(s)
The log entries can be filtered according to the sources that
caused the log event.
Station: All log entries are displayed which affect the station
and the inputs/outputs of field buses.
Robot: Only those log entries are displayed which affect the
robot selected in the navigation bar, here an LBR iiwa 7 R800.
Default for log at Station level: Both sources are selected.
Default for log at Robot level: The source is the robot selected in
the navigation bar.
2 Filter Timespan
A time filter can be activated to display only the log entries of a
specific timespan.
Default: All (no time filter active)
3 Filter Level
The log entries can be filtered according to their log level.
Default: Info, warning, error (no filter active for log level)
Item Description
1 Time stamp
Time at which the error occurred
2 Level
Log level of the message. Errors have the log level Error.
3 Error message
4 Information when application is terminated, e.g. following a real-
time error
5 Error type
Errors are defined as Java classes. The name of the class and the
corresponding package are displayed. The error message follows
(see item 3).
6 Stack trace
The method calls which led to the error are displayed in ascending
order. The methods are specified with their full identifiers. In addi-
tion, the number of the program line in which the error occurred is
displayed.
The stack trace can be used to determine the program position at
which the method which ultimately caused the error was called.
Example, read from the bottom to the top:
Origin of the error: Method run() of the application Inexecut-
ableMotion.java, line 37
In line 37 of the application, the method move(…) of the robot
class was called. In the source code of the class robot.java, the
error occurred in line 612 when the method move(…) of the
class PhysicalObject was called.
...
The actual error occurred in line 220 in the source code of the
class ExecutionContainer.java when the method validate(…)
was called.
Often, an error is the result of a chain of preceding errors. In this case, the en-
tire error chain is displayed in descending order.
Item Description
1 Consequential error
The last element in the error chain is displayed here. In the exam-
ple, this is an error of type RuntimeException which occurred dur-
ing execution of the method run() in line 38 of the application
EmbeddedExceptionApplication.java.
2 Causative error
The display of the causative error is always initiated as follows:
Caused by: Error type
In the example, the causative error is of type Exception and
occurred when the method calculateValue(…) of the class Utils
was called. The entire error chain is thus displayed up to the
actual cause of error.
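For illustration only, a simplified sketch of how such a chain typically arises in application code: a caught exception is passed on as the cause of a new exception (the variable input is a hypothetical placeholder):
try {
    Utils.calculateValue(input);
} catch (Exception e) {
    // The caught exception is attached as the cause and later appears
    // in the log display under "Caused by:".
    throw new RuntimeException("Calculation of value failed", e);
}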
Procedure Select > KUKA_Sunrise_Cabinet_1 > Virus scanner at the Station lev-
el. The Virus scanner view opens.
Messages from the virus scanner can also be displayed using the
Log tile.
If the robot can no longer be moved due to a virus infection, the fol-
lowing options are available:
Reinstall the System Software on the robot controller.
If the robot can still not be moved, create the diagnosis package KRCDi-
ag and contact KUKA Service.
For error analysis, KUKA Customer Support requires diagnostic data from the
robot controller.
For this purpose, a ZIP file called KRCDiag is created, which can be archived
on the robot controller under D:\DiagnosisPackages or on a USB stick con-
nected to the robot controller. The diagnosis package KRCDiag contains the
data which KUKA Customer Support requires to analyze an error. These in-
clude information about the system resources, machine data and much more.
Sunrise.Workbench can also be used to access the diagnostic information.
For this purpose, either an existing diagnosis package is loaded from the robot
controller or a new package is created.
Description With this procedure, the diagnosis package KRCDiag can be created and ar-
chived on the robot controller under D:\DiagnosisPackages or on a USB stick.
Procedure 1. For archiving to a USB stick: Plug the USB stick into the robot controller
and wait until the LED on the USB stick remains permanently lit.
2. In the main menu, select Diagnosis > Create diagnosis package and se-
lect the desired file location.
Hard disk
USB stick
The diagnostic information is compiled. Progress is displayed in a window.
Once the operation has been completed, this is also indicated in the win-
dow. The window is then automatically hidden again.
Description This procedure uses keys on the smartPAD instead of menu items. It can thus
also be used if the smartHMI is not available.
The KRCDiag diagnosis package is created and archived on the robot con-
troller under D:\DiagnosisPackages.
Procedure 1. Right-click on the project in the Package Explorer and select Sunrise >
Create diagnosis package from the context menu. The wizard for creat-
ing the diagnosis package opens.
2. Select Browse... and navigate to the directory in which the diagnosis
package KRCDiag is to be created. If necessary, create a folder for the
diagnosis package by clicking on Create new folder. Click on OK to con-
firm.
3. Click on Next >. The diagnosis package is created in the specified folder.
4. To navigate to the folder in which the diagnosis package was created, e.g.
to send it directly by e-mail, click on Open target folder in Windows Ex-
plorer.
5. Click on Finish. The wizard is closed.
Procedure 1. Right-click on the project in the Package Explorer and select Sunrise >
Create diagnosis package from the context menu. The wizard for creat-
ing the diagnosis package opens.
2. Select Browse... and navigate to the directory in which the diagnosis
package KRCDiag is to be copied. If necessary, create a folder for the di-
agnosis package by clicking on Create new folder. Click on OK to con-
firm.
3. Activate the radio button Load existing diagnosis packages from con-
troller and select the desired diagnosis packages.
4. Click on Next >. The diagnosis package is copied into the specified folder.
If the folder already contains a diagnosis package of the same name, a
user dialog is displayed. The copying operation can be canceled.
5. To navigate to the folder into which the diagnosis package was copied,
e.g. to send it directly by e-mail, click on Open target folder in Windows
Explorer.
6. Click on Finish. The wizard is closed.
21 Remote debugging
Remote debugging is used for the discovery and diagnosis of errors in pro-
grams.
Remote debugging is carried out using Sunrise.Workbench for applications
and background tasks running on the controller.
Since remote debugging is largely identical for applications and background
tasks, the term “task” is used generically below.
Step Description
1 Starting a debugging session
When starting a debugging session, a remote connection is
established between Sunrise.Workbench and the robot con-
troller. The project in the workspace of Sunrise.Workbench
and the active project on the robot controller are automatically
checked for consistency and synchronization is requested if
required.
(>>> 21.1.2 "Starting the debugging session" Page 539)
2 Performing remote debugging of the task
The programmer uses break points to define the positions in
the program code at which execution of the task is to be inter-
rupted during remote debugging.
If remote debugging is to be carried out for an application that
has not yet been started, the application must be started man-
ually via the smartPAD once the remote connection has been
established.
Once task execution has been stopped at a break point, fur-
ther program execution can be controlled by Sunrise.Work-
bench by executing the source code of the task step by step.
On completion of a step, task execution is automatically
stopped.
(>>> 21.3.2 "Break points" Page 543)
3 Using debugging functions
While task execution is interrupted, debugger functions, such
as the observation and modification of variable values, can be
used. Adaptation of the source code is also possible.
(>>> 21.3.6 "Variables view" Page 557)
(>>> 21.3.7 "Monitoring processes" Page 561)
(>>> 21.3.8 "Modifying source code" Page 564)
4 Ending a debugging session
When ending a debugging session, the remote connection to
the controller is disconnected. Execution of the running task
can now no longer be influenced by Sunrise.Workbench. If
modifications have been made to the code, project synchroni-
zation is offered.
(>>> 21.1.3 "Ending the debugging session" Page 539)
Description Remote debugging is used to detect and diagnose errors in programs and
tasks.
Tools that support this process are called debuggers. The remote debugger
integrated into Sunrise.Workbench is based on the standard Java and Eclipse
debugger.
In the case of remote debugging of programs, the debugger is run on a differ-
ent computer than the program that is to be checked. In the case of remote
debugging of tasks, the Sunrise.Workbench debugger is used; the task itself
is executed on the controller.
During remote debugging, a connection is established between Sunrise.Work-
bench and the robot controller. A debugging session is started in this way. Dur-
ing remote debugging, the execution of tasks running on the controller can be
monitored via Sunrise.Workbench and it is possible to influence program exe-
cution. Errors can be diagnosed and the source code can be optimized.
The Debugging perspective contains the most important views for remote de-
bugging.
All safety functions configured for the project are also active during re-
mote debugging.
Description In order to end the debugging session correctly, the remote connection be-
tween Sunrise.Workbench and the robot controller must be disconnected. If
modifications have been made to the code during remote debugging, synchro-
nization of the project is offered.
Overview Debugging can be performed for all tasks running on the controller. In order to
debug an application, it may be necessary to start the application via the
smartPAD.
As soon as the first active break point is reached after the remote connection
has been established, execution of the corresponding task can be controlled
via Sunrise.Workbench. Various functions are available for this. The selected
function determines the command line up to which the task is continued.
If task execution is paused during debugging, additional functions are avail-
able and changes can be made to the source code:
Available functions:
(>>> 21.3.5 "Overview of the toolbar in the “Debugging” view" Page 551)
Additional functions of the debugger:
(>>> 21.3.6 "Variables view" Page 557)
(>>> 21.3.7 "Monitoring processes" Page 561)
Information about modification of the source code during debugging:
(>>> 21.3.8 "Modifying source code" Page 564)
If the application for which debugging is being carried out does not
contain any active break points, it is executed completely without
stopping and then terminated.
Procedure 1. On reaching an active break point, the application is stopped by the de-
bugger. Program execution can now be influenced by Sunrise.Work-
bench.
2. At the break point, pressing the corresponding button in the toolbar of the
Debugging view or using the corresponding keyboard shortcut defines
the step at which the application is to be resumed.
3. The application is resumed until the command line defined by selecting the
function is reached. If a code section to be executed contains motion com-
mands, this has a special effect on the sequence.
4. In order to continue the application on reaching a synchronous motion
command or to execute an asynchronous motion command, the following
actions must additionally be carried out on the smartPAD in accordance
with the operating mode:
T1, T2:
Press and hold down the enabling switch.
Press and hold down the Start key.
AUT:
Description Debugging can also be carried out for background tasks. If a background task
contains active break points, execution of the background task is stopped at
these points during a debugging session.
Debugging of background tasks is essentially carried out in the same way as
debugging of applications. Background tasks do not have to be started sepa-
rately. Furthermore, background tasks should not contain motion commands.
Debugging of background tasks is thus not affected by the selected operating
mode.
Procedure 1. On reaching an active break point, the background task is stopped by the
debugger. Program execution can now be influenced by Sunrise.Work-
bench.
2. At the break point, pressing the corresponding button in the toolbar of the
Debugging view or using the corresponding keyboard shortcut defines
the step at which the application is to be resumed.
3. The background task is resumed until the command line defined by select-
ing the function is reached.
4. Once the program section has been executed, the background task is
stopped.
Exception: With Resume, the background task is continued until the next
break point or the end of a non-cyclical background task is reached.
5. Debugging functions, such as the observation of variables or the changing
of values, can be used between the individual steps.
Item Description
1 Debugging view
Displays the Java processes running on the controller.
2 Debugging toolbar
Program execution during remote debugging is controlled by
means of the buttons.
3 Variables view
If task execution is paused during remote debugging, the vari-
ables valid at the current position of the command pointer are dis-
played together with their current values. Modification of values is
possible.
4 Break points view
Break points are displayed and managed here.
5 Editor area
During remote debugging, the source code currently being execut-
ed can be displayed here. If task execution is paused, the current
command line is highlighted. Modification of the source code is
possible.
Overview The use of break points is a major component of remote debugging. The pro-
grammer uses break points in the source code to define specific points in the
program at which the program is to be stopped during remote debugging.
Break points are created and managed in Sunrise.Workbench. Break points
only pause the task during a debugging session. They are not taken into con-
sideration during normal program execution.
The creation, deletion, activation and deactivation of break points and the
modification of their properties are possible before and during remote debug-
ging.
Depending on the position in the code at which the break point is used, a dis-
tinction is made between different types of break point.
Line break point
The line break point is the most commonly used break point. The line
break point is placed next to a command line. Program execution is
stopped when the break point is reached. The command line next to it is
not executed until remote debugging is resumed.
Monitoring point
A monitoring point is placed next to the declaration of a field. Program ex-
ecution is stopped before read and/or write access to the field.
Method break point
A method break point is placed next to the header of a method. Program ex-
ecution is stopped before the method is entered and/or left.
Exception break point
An exception break point stops program execution when an error occurs.
Exception break points are displayed and created in the Break points
view.
In order to define more precisely the response on reaching the break point,
certain properties can be parameterized for each break point. Different set-
tings are possible, depending on the type of break point.
Item Description
1 Editor bar
Break points are displayed next to the corresponding command
line in the bar with a gray background at the left-hand edge of the
editor. Break points can be added to the editor bar, deleted, acti-
vated or deactivated.
2 Monitoring point (in this case for the field “robot”)
Break point inserted next to the declaration of a field
Indicated by means of a pair of glasses and/or a pencil
3 Line break point (in this case for the command
robot.move(ptpHome());)
Break point inserted next to a command line
Indicated by a blue circle
4 Method break point (in this case for the method mainTask())
Break point inserted next to the header of a method
Indicated by a blue circle with arrow
Description If a break point is not to be deleted completely, but merely ignored temporarily
during remote debugging, deactivation of the break point is possible. It re-
mains available with all its properties and can be reactivated again if required.
Description The properties of a break point define the conditions for stopping a task when
the break point is reached. The settings are dependent on the type of break
point.
3. Right-click on the icon of the break point and select Breakpoint proper-
ties from the context menu. The Properties dialog opens.
4. Select Breakpoint properties. Edit the properties of the break point.
5. Confirm with OK. The dialog is closed.
The view contains a list of the break points of all classes in the workspace of
Sunrise.Workbench. The view offers the following functions:
Display of all break points
Activation, deactivation and deletion of break points
Modification of break point properties
Addition of exception break points
The functionalities offered by the buttons in the toolbar include the following:
Button Description
Remove selected break points
Deletes the break points selected in the break point list.
Remove all break points
Deletes all break points in the list.
Go to file for break point
The class containing the break point selected in the list is
opened in the editor area in the foreground and the corre-
sponding command line is selected.
Skip all break points
If this button is active, all break points are suppressed and do
not cause the execution of the corresponding task to be
stopped.
Add break point for Java exception condition
Opens the dialog for adding an exception break point.
Description The selection of Suspend thread in the properties of a break point must not
be changed.
Item Description
1 Position of the command pointer (blue arrow)
The command pointer indicates the next command to be executed.
The current position of the command pointer in the source code is
indicated by a blue arrow.
2 Next command line to be executed
The next command line to be executed is highlighted in color.
Description The Debug view contains the toolbar and a list of all Java processes running
on the controller. These processes are referred to as threads. The task for
which debugging is carried out is one of the threads running on the controller.
In the Debug view, the corresponding stack trace is displayed beneath a
thread. The stack trace contains the current method calls of a thread and is
used for tracking program execution.
Item Description
1 Toolbar
Program execution during remote debugging is controlled by
means of the buttons.
2 Task thread
Thread of the executed task. The designation contains the name
of the executed tasks (here application.ExampleApplication). The
corresponding stack trace is located beneath the thread.
3 Stack trace
The stack trace of the task thread contains the methods that are
relevant for execution of the task. The called methods are specified
with their identifiers. In this way, the user can identify the relevant
methods.
The methods are specified in the order in which they are called.
Example In a robot application, the method assembly() is called in the method run() in
order to assemble a component. The method assembly() then calls the meth-
od checking() to check whether the assembly process has been successfully
completed:
public void run(){
// ...
assembly();
// ...
}
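A minimal sketch of the assembly() method assumed by this example (the actual assembly motions are omitted):
private void assembly(){
    // ... assembly motions ...
    checking();
    // ...
}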
If the method run() is selected in the stack trace of the task thread, the current
position of the command pointer in the method run() is displayed:
The filled white arrow icon does not indicate the call of assembly() here, but
the progress of the task in the method run().
Button Name/description
Resume
Key: F8
Execution of a task is continued until the next break point or
the end of the task is reached.
(>>> 21.3.5.1 "Continuing execution (Resume)" Page 552)
Step in
Key: F5
If the current command line contains an individual instruction,
it is executed.
If the current command line is a method call, the command
pointer jumps to the start of the called method.
(>>> 21.3.5.2 "Jump into the method (Step in)" Page 553)
Step over
Key: F6
The current command line is executed completely. If the line
contains a method call, the method is executed completely.
(>>> 21.3.5.3 "Executing a method completely (Step over)"
Page 553)
Step back
Key: F7
The method currently being executed is executed through to
the end. Task execution then stops in the calling method.
(>>> 21.3.5.4 "Terminating the executed method (Step back)"
Page 554)
Back to frame
No key assigned
This function can be used to jump to a point in the source code
that has already been executed.
(>>> 21.3.5.5 "Executing code sections again (Back to
frame)" Page 555)
Pause
No key assigned
Pauses execution.
(>>> 21.3.5.7 "Pausing debugging (Pause)" Page 557)
--- Execution to line (only available as a keyboard shortcut)
Key combination: Ctrl+R
Task execution is resumed until the command pointer reaches
a command line defined by the user.
(>>> 21.3.5.6 "Defining the code section to be executed (Ex-
ecution to line)" Page 556)
The Resume button is used to continue execution of a task until the next break
point or the end of the task is reached.
Description If the current command line contains a method call, the command pointer
jumps to the start of the called method when Step in is used.
The source code of the called method is only displayed if the source
code of this method is available. If the source code is not available,
the warning Source not found is displayed.
Execution can be resumed.
The user has no way of viewing the command currently being executed.
In this case, Step back takes the user back to source code that can be
displayed.
(>>> 21.3.5.4 "Terminating the executed method (Step back)"
Page 554)
Use of Step over is recommended for jumping into a motion command
(robot.move(…)).
(>>> 21.3.5.3 "Executing a method completely (Step over)" Page 553)
If the current command line does not contain a method call but an individual
instruction, the command line is executed and the command pointer jumps to
the next command line.
Example The application was interrupted before the call of the pickupWorkpiece() meth-
od. Step in causes the command pointer to jump to the start of the method:
Description Step over executes the current command line and the command pointer
jumps to the next program line.
If the command line contains a method call, the method is executed complete-
ly as long as it does not contain a break point.
Formatting Since the source code is executed step by step during remote debugging, the
formatting of the source text influences the number of steps required for com-
plete execution of the commands when using Step over.
Formatting example: Object of type CartesianImpedanceControlMode
With the following formatting, 3 steps are required when using Step over:
CartesianImpedanceControlMode mode =
new CartesianImpedanceControlMode();
mode.parametrize(CartDOF.Z).setStiffness(500);
If the code is divided by further line breaks, the code section is completely ex-
ecuted after a total of 4 steps with the following formatting when using Step
over:
CartesianImpedanceControlMode mode =
new CartesianImpedanceControlMode();
mode.parametrize(CartDOF.Z)
.setStiffness(500);
Description Step back causes the method in which the command pointer is currently lo-
cated to be executed completely. The command pointer returns to the calling
method and jumps to the following command line. Program execution is
paused.
Example The command pointer is located inside the method pickupWorkpiece() that
was called by the method run(). With Step back, the method pickupWork-
piece() is executed completely and execution of the application is stopped be-
fore the next command line in the method run():
Description Back to frame can be used to run program sections that have already been
executed again. As standard, the command pointer jumps to the start of the
method that is currently being executed. Program execution is then paused.
In the Debugging view, it is possible to return to each call level of the task us-
ing the stack trace. To do so, the desired method is selected in the stack trace.
Back to frame causes the command pointer to jump to the start of this meth-
od.
Once the command pointer has been placed at a previously executed position
in the code by means of Back to frame, the following code can be executed
(again).
If the run() method is first selected in the stack trace of the task, Back to frame
causes the command pointer to jump to the start of the run() method:
Description With Execution to line, the program is resumed until the command pointer
reaches a command line defined by the user. Execution to line is not avail-
able in the Debugging view.
Procedure 1. Left-click into the line to which the task is to be executed. The line is high-
lighted with a blue background.
2. Task execution is resumed as far as the selected line or a preceding break
point by means of the keyboard shortcut Ctrl+R.
Alternatively, the function can be selected from the context menu Execution
to line after right-clicking into the desired command line.
The request for pausing task execution at the selected command line
is only valid once. If execution is stopped before the command line is
reached, and then resumed with Resume, execution is not stopped
when the command line is reached.
If the Pause function is used, the user must ensure that the corre-
sponding task thread is selected in the Debugging view. The func-
tioning of the controller may otherwise be adversely affected to such
an extent that a reboot of the controller is required.
Motion commands that have already been sent to the controller are not
paused by the Pause function, but processed in the controller and executed.
When pausing, as when reaching a break point, the current command line is
displayed in the editor area. If the corresponding source code is not available
when using Pause, the warning Source not found is displayed in the editor
area.
Item Description
1 Table of available variables
The table contains the currently available fields and local vari-
ables and their values. Only those variables that are available at
the position of the command pointer in the selected method in the
stack trace of the Debugging view are displayed.
The Name column contains the variable name. Variables with a
complex type are displayed hierarchically. Variables with complex
data types can be expanded and their fields displayed using the
icon to the left of the name.
The current value of the variable is displayed in the Value column.
In the case of variables with complex data types, the result of the
call of the toString() method is displayed as standard. The values
of primitive data types and string values can be modified directly in
the table.
2 Detailed information
This area contains detailed information about the variable selected
in the table. The variable value is displayed for primitive data types
and strings. In the case of complex data types, the result of the call
of the toString() method is displayed as standard.
Description Irrespective of their visibility, variables and their values can be displayed and
modified in the Variables view.
Item Description
1 Instance
The variable this refers to the instance of the class whose method
has been selected in the stack trace and in whose source code the
command pointer is currently displayed. During remote debugging
of a task, the robot that is being used can be accessed, e.g. via the
instance of the class. Here, this is the application for which remote
debugging is being carried out.
2 Representation of complex data types
Variables with a complex data type (here the class CartesianSin-
eImpedanceControlMode) are displayed in a hierarchical struc-
ture. Expanding the structure displays the fields of the referenced
object. Fields of primitive data types and strings are at the bottom
level.
3 Changes of values
The values of primitive data types and string values can be modi-
fied directly in the table. Once a value has been modified, the vari-
able is highlighted in yellow in the table.
Procedure As standard, only those variables that are available at the position of the com-
mand pointer in the selected method in the stack trace of the Debugging view
are displayed:
1. In order to display variables that are available in a different method, select
the method in the stack trace of the Debugging view.
New values can be assigned to variables with complex data types in the dialog
Change object value:
1. Right-click on the desired variable in the table and select Change value...
from the context menu. The Change object value dialog opens.
2. Enter the corresponding instructions in the editor box.
If task execution is paused during remote debugging, the Java editor has ad-
vanced context help for variables. The advanced context help is then available
for all variables that are available at the position of the command pointer in the
selected method in the stack trace.
To display the context help, the mouse pointer is moved over the desired vari-
able in the source code. A window opens, displaying information about the
variable (data type, name, current value).
Complex data types are displayed in a hierarchical structure, as in the Vari-
ables view. Expanding the structure displays the fields of the referenced ob-
ject. Elementary data types and strings are located at the bottom hierarchy
level.
Item Description
1 Variable (source code)
Variable in the source code for which the advanced context help is
displayed.
2 Variable (context help)
Advanced context help for the variable. The designation and value
are displayed. In the case of complex data types, the data type is
also specified.
Variables with a complex type are displayed hierarchically in a tree
structure.
3 Details
Details of the selected component are displayed here. In the case
of variables with primitive data types and strings, the correspond-
ing value is displayed; in the case of variables with a complex data
type, the result of the call of toString() is displayed as standard.
Description During remote debugging, data can also be monitored that are not available
as variables. These include, for example, the current position of the robot.
Monitoring expressions can be formulated in Sunrise.Workbench. The moni-
toring expressions are managed in the Expressions view and evaluated each
time task execution is stopped during a debugging session. Both individual ex-
pressions and more complex instruction sequences can be entered. Correct
syntax must be observed.
Configured monitoring expressions are not deleted after the end of the debug-
ging session and are thus also taken into consideration in subsequent debug-
ging sessions.
Overview
Item Description
1 Table of created monitoring expressions
The Name column contains the source code of the monitoring ex-
pression. If available, the return value of the expression is specified
under Value.
2 Line for new expression
New expressions can be entered in the first unoccupied line of the
table.
3 Details
Detailed information about the selected expression is displayed in
this area. For complex data types, the result of calling toString() on the
return value of the monitoring expression is displayed as standard. For
variables of primitive data types and strings, the corresponding value is
displayed.
4 Evaluation error
If an expression cannot be evaluated, an error message is dis-
played in the Value column.
Procedure 1. Left-click into the first blank line (indicated by a green + symbol) in the
Name column.
2. Enter the monitoring expression in the Name column and confirm with the
Enter key. The monitoring expression is added.
If a debugging session is active and task execution has been stopped, the
expression is evaluated immediately.
Procedure Right-click in the line with the monitoring expression that is to be deleted.
Select the entry Delete from the context menu.
Example During remote debugging of a task, the current Cartesian position of the tool
TCP is to be displayed after every execution step. A monitoring expression is
formulated for this.
The identifier of the robot field of the application is robot (data type: LBR).
The gripper is represented by the gripper field (data type:
com.kuka.roboticsAPI.geometricModel.Tool). The following command call is
thus required for
requesting the current position of the gripper TCP:
robot.getCurrentCartesianPosition(gripper.getDefaultMotionFrame());
The following modifications to the source code may lead to complications and
should thus not be made during an active debugging session:
Addition of new methods or fields
Modification of the designation of a method or field
Modification of the data type of a field
Modification of the return type of a method
Modification of the number of transfer parameters of a method
Modification of the data type of transfer parameters of a method
The following must be taken into consideration if, during debugging of a task,
modifications are made in the source code of this task or in the source code of
the classes used in it:
If modifications are made to the source code in a method that is currently
located in the stack trace of the task thread, the command pointer jumps
to the start of this method after saving the change.
22 Appendix
From Version 1.8 onwards, KUKA Sunrise.OS contains new features that af-
fect the upward compatibility of projects created using an earlier software ver-
sion (< 1.8).
Task functions in the RoboticsAPI
Some task functions have been renamed or are now used differently.
The migration of projects that use these task functions can thus lead to
compiler errors. The programming must be adapted.
(>>> 22.1.1 "Modified task functions – adapting the programming"
Page 567)
I/O configuration
The current version of WorkVisual generates a changed folder structure
when exporting the I/O configuration in Sunrise.Workbench (the folder
generatedFiles now contains the folder IOConfiguration).
If a project is synchronized that still has the old folder structure, the I/O
configuration is not transferred and no I/Os are available on the robot con-
troller.
In order to generate the new folder, the I/O configuration of the project
must be opened in WorkVisual and exported again in Sunrise.Workbench.
Precondition: The option package supplied with the new software (KOP
file Sunrise) is installed in WorkVisual. Only then is the new folder gener-
ated on exporting.
If, following the export, the folder generatedFiles contains the folder IO-
Configuration, the project can be synchronized on the robot controller.
The modified task functions and the adaptations required in the tasks are de-
scribed here in order to be able to continue using tasks created with a software
version < 1.8.
ITaskFunction If using the interface ITaskFunction (>>> 16.4 "Data exchange between
tasks" Page 478):
The interface ITaskFunction has been dispensed with. The following referenc-
es in the interface in which the task functions are declared must therefore be
deleted:
Delete the following addition in the header of the interface:
extends ITaskFunction
Delete the following import:
import com.kuka.roboticsAPI.applicationModel.tasks.ITaskFunction;
The interface ITaskFunctionProvider and the @ProvidedFunctions annotation
have been replaced by the @TaskFunctionProvider annotation. For this rea-
son, the following changes are required in the task that provides the task func-
tions (providing task):
Delete the following annotation: @ProvidedFunctions(…)
Delete the following addition in the header of the task:
implements ITaskFunctionProvider
Delete the method createTaskFunctions() and the corresponding Map in-
stance:
public Map<Class<? extends ITaskFunction>, ITaskFunction>
        createTaskFunctions(){
    ...
}
For each interface whose task functions the task provides, insert a param-
eterless public method with the annotation @TaskFunctionProvider that
returns an implementation of the interface:
@TaskFunctionProvider
public Interface Method name() {
    return Interface instance;
}
Interface: Interface whose task functions the task provides
Method name: Name of the method that returns the implementation of
the interface (the name can be freely selected)
Interface instance: Instance of the implementing class
If the providing task implements the interface itself, transfer the in-
stance of the task for the parameter Interface instance:
return this;
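For illustration, a sketch with a hypothetical interface name (IGripperFunctions): if the providing task implements this interface itself, the inserted method could look as follows:
@TaskFunctionProvider
public IGripperFunctions getGripperFunctions(){
    // The providing task implements the interface itself,
    // so its own instance is returned.
    return this;
}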
23 KUKA Service
Availability KUKA Customer Support is available in many countries. Please do not hesi-
tate to contact us if you have any questions.
Index
Symbols
“Brake” safety reaction 487
“Ready for motion”, checking 392

Numbers
2006/42/EC 46
2014/30/EU 46
3-point method 125
95/16/EC 46

A
ABC 2-point method 122
ABC world method 124
Accessories 23, 27
Activating, safety configuration 243
Activation delay, for safety function 270
Actual position, axis-specific 106
Actual position, Cartesian 107
addCartesianForce(…) 439
addCartesianTorque(…) 439
addCommandedCartesianPositionXYZ(…) 439
addCommandedJointPosition(…) 439
addControllerListener(…) 393, 397
addCurrentCartesianPositionXYZ(…) 440
addCurrentJointPosition(…) 439
addDoubleUserKey(…) 446
addExternalJointTorque(…) 439
addInternalJointTorque(…) 439
addUserKey(…) 446
Administrator 169
Allow muting via input 309
AMF 20
ANSI/RIA R.15.06-2012 46
API 20
App_Enable 202, 210
App_Start 202, 466
Appendix 567
Application data (view) 52
Application mode 88
Application override 87, 103, 104, 400
Application tool 91
Application, pausing 456
Applied norms and directives 46
Approximate positioning 322
Approximate positioning point 322
areAllAxesGMSReferenced() 397
areAllAxesPositionReferenced() 397
areDataValid() 139
Asynchronous motion execution 352
attachTo(…) 369, 370
AUT 29
AutExt_Active 203
AutExt_AppReadyToStart 203, 466
Auto-complete 336
Automatic 29
Automatic mode 44
Automatic mode (AMF) 221, 222, 245
Auxiliary point 314, 354
awaitFileAvailable(…) 443
Axis limit 265
Axis range 29, 265
Axis range monitoring (AMF) 221, 224, 265
Axis torque condition 404
Axis torque monitoring 271
Axis torque monitoring (AMF) 221, 224, 271
Axis torques, requesting 382
Axis velocity monitoring (AMF) 221, 223, 251
Axis-specific impedance controller 501, 523
Axis-specific monitoring spaces, defining 265
Axis-specific position, requesting 387

B
Background application, new 59
Background application, starting 105
Background application, stopping 105
Background tasks 473
Backup Manager 111
Backup manager 184
Backup Manager, configuration 180
Base coordinate system 86, 125
Base for jogging 152
Base-related TCP force component (AMF) 221, 224, 277, 305
Base, calibration 125
Blocking wait 436
BooleanIOCondition 403
Brake defect 40
Brake ramp monitoring, Brake 489
Brake test 131
Brake test application, template 133
Brake test, evaluation 143
Brake test, performing 147
Brake test, programming interface 137
Brake test, requesting results 144
Brake test, results (display) 148
Brake test, start of execution 142
Brake test, starting position 137
Brake, defective 132, 133
BrakeState (enum) 145
BrakeTest (class) 137, 141
BrakeTestResult (class) 144
Braking distance 29
Break conditions for motions 424
Break conditions, evaluating 425
Break point, conditional 547
Break point, view 546
Break points 543
breakWhen(…) 426
breakWhen() 424
Bus I/Os, mapping 196

C
Calibration 119
Calibration, base 125
L
Labeling 38
Language 73, 80
Language package, installing 186
Language selection (button) 73
Liability 27
Licenses 21
LIN 354
LIN REL 355
LIN, motion type 314
Linear motion 354, 355
Lissajous oscillation, overlaying 520
Load data 161
Load data, entering 162
Log entries, filtering 529
Log, displaying 527
Log, view 528
Loops, nesting 463
Low Voltage Directive 28

M
Machinery Directive 28, 46
Main menu key 68
Main menu, calling 79
Maintenance 44
Manipulator 23, 27, 30, 32
Manual guidance mode 362
Manual guidance support 175, 178
Manual guidance, axis limitation 366
Manual guidance, motion type 321
Manual guidance, programming 362
Manual guidance, velocity limitation 367
Manual mode 43
Manual override 87, 103, 104, 400
Mapping, inputs/outputs 198
Mass of the heaviest workpiece 310
Mastering 118
Mastering state, requesting 392
Mastering, deleting 119
Media flange Touch 246, 247
Menu bar 52
Message programming 453
Message window 101
Methods, extracting 338
Mode selection 37
Monitoring 403, 431
Monitoring of processes 403
Monitoring processes 431
Monitoring spaces 259
Monitoring, physical safeguards 34
Motion enable (AMF) 221, 222, 245
Motion execution, pausing 456
Motion programming, basic principles 313
Motion types 313

N
Navigation bar 73
Network settings, adapting 175
New frame, creating 95, 152
New Java class (button) 54
New Java package (button) 54
New Sunrise application (button) 54
Non-cyclic background task 477
Non-safety-oriented functions 37
Normal force 405
NotificationType, Enum 434
Null space motion 93

O
Object management 156
Object templates (view) 52
Object templates, copying 168
ObserverManager 433, 436
Old project, loading 183
onIsReadyToMoveChanged(…) 393
onKeyEvent(…), IUserKeyListener 447
onSafetyStateChanged(…) 397
onTriggerFired(…) 429
Open-source 21
Operating hours meter 111
Operating mode, changing 83
Operating time 111
Operation, KUKA smartPAD 67
Operation, KUKA Sunrise.Workbench 51
Operator 29, 31, 81, 169
Operator safety 33, 34
Operators 403
Option package, installing 185
Option package, removing from robot controller 187
Option package, uninstalling 186
Option packages 184
Options 23, 27
Orientation control 361
Orientation control, LIN, CIRC, SPL 324
Output, change 108
Overload 40
Override 91, 103
Override (button) 73, 87
Override, changing and requesting 399
Overview of the robot system 23
Overview, motion parameters 360, 364
Overview, project synchronization 171
Overview, servo controllers 501
Overview, Sunrise project 151
Overview, Sunrise.RolesRights 169

P
Package Explorer (view) 52