
Robotics System Toolbox™ Support Package for Universal Robots UR Series Manipulators User's Guide

R2024b
How to Contact MathWorks

Latest news: www.mathworks.com

Sales and services: www.mathworks.com/sales_and_services

User community: www.mathworks.com/matlabcentral

Technical support: www.mathworks.com/support/contact_us

Phone: 508-647-7000

The MathWorks, Inc.


1 Apple Hill Drive
Natick, MA 01760-2098
Robotics System Toolbox™ Support Package for Kinova® Gen3 Manipulators User's Guide
© COPYRIGHT 2020–2024 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used or copied
only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form
without prior written consent from The MathWorks, Inc.
FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through
the federal government of the United States. By accepting delivery of the Program or Documentation, the government
hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer
software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014.
Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement, shall pertain
to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and
Documentation by the federal government (or other entity acquiring for or through the federal government) and shall
supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is
inconsistent in any respect with federal procurement law, the government agrees to return the Program and
Documentation, unused, to The MathWorks, Inc.
Trademarks
MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See
www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand names may be
trademarks or registered trademarks of their respective holders.
Patents
MathWorks products are protected by one or more U.S. patents. Please see www.mathworks.com/patents for
more information.
Revision History
September 2020 Online only New for Version 20.2.0 (R2020b)
March 2021 Online only Revised for Version 21.1.0 (R2021a)
September 2021 Online only Revised for Version 21.2.0 (R2021b)
March 2022 Online only Revised for Version 22.1.0 (R2022a)
September 2022 Online only Revised for Version 22.2.0 (R2022b)
March 2023 Online only Revised for Version 23.1.0 (R2023a)
September 2023 Online only Revised for Version 23.2.0 (R2023b)
March 2024 Online only Revised for Version 24.1.0 (R2024a)
September 2024 Online only Revised for Version 24.2.0 (R2024b)
Contents

1 Setup and Configuration
Install Support for Manipulator Hardware . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Operating System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Install, Update, or Uninstall Support Package . . . . . . . . . . . . . . . . . . . . . . 1-2
Hardware Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3

Set Up Network Adapter and Basic Network Ping . . . . . . . . . . . . . . . . . . . 1-4


Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-6

Set Up IMAQ Adapter for matlab_kortex Driver . . . . . . . . . . . . . . . . . . . . . 1-8


Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8

Get Latest URDF Robot Model for Kinova Gen3 Robot . . . . . . . . . . . . . . . 1-9

Add Custom ROS Message Definitions to MATLAB . . . . . . . . . . . . . . . . . 1-10


Generate Custom ROS Message Definitions . . . . . . . . . . . . . . . . . . . . . . 1-10

Select Interface for Connecting to Kinova Gen3 Robot . . . . . . . . . . . . . . 1-12


Use matlab_kortex API and MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
Use ros_kortex ROS packages, MATLAB, ROS, and ROS Toolbox . . . . . . . 1-12

Install ROS Packages and Dependencies for ROS . . . . . . . . . . . . . . . . . . 1-15


Install Dependencies for Vision Module of Kinova Gen3 Robot . . . . . . . . 1-15

Install ROS Packages and Dependencies for ROS on External Computer . . . . 1-17
Install Dependencies for Vision Module of Kinova Gen3 Robot . . . . . . . . 1-18
Configure MATLAB for Custom ROS Message Definitions for Kinova Gen3 . . . 1-18

Default Set of Messages for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-20

2 Get Started
Verify Connection to Kinova Gen 3 Robot . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2

Download MATLAB MEX APIs for Kinova Gen 3 . . . . . . . . . . . . . . . . . . . . . 2-3

Test Basic Connection and Move the Wrist Joint . . . . . . . . . . . . . . . . . . . . 2-4
Create Robot APIs Wrapper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Display Joint Angles of the Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Move the Wrist Joint of the Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Disconnect from the Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5

Configure MATLAB to Acquire Images from Vision Module . . . . . . . . . . . 2-6

Clear Existing Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8

Troubleshooting Manipulator Support Package . . . . . . . . . . . . . . . . . . . . . 2-9


Troubleshooting Network Connection with Kinova Gen3 Robot . . . . . . . . . 2-9
Troubleshooting Image Acquisition with Kinova Gen3 Robot . . . . . . . . . . . 2-9
Limitations of Precomputed Trajectory . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9

Getting Started with ROS Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11


Products Required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11

Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to
Control the Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-12

Connect to ROS Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13

Read Joint Angles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-14

Manipulate Individual Joint Angle Using MATLAB . . . . . . . . . . . . . . . . . . 2-15

Control Cartesian Pose Using MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-16

Send Precomputed Trajectory to the Robot . . . . . . . . . . . . . . . . . . . . . . . 2-17

Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to
Acquire Image Data from Vision Sensor . . . . . . . . . . . . . . . . . . . . . . . . 2-19
Connect to the ROS Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-19

Track Pre-Computed Trajectory of Kinova Gen3 Robot End-Effector Using Inverse Kinematics and KINOVA KORTEX MATLAB API . . . . . . . . . . . . . 2-20

Connect to Kinova Gen3 Robot and Manipulate the Arm Using MATLAB . . . . 2-29

Generate a Trajectory Using a Set of Waypoints for KINOVA Gen3 Robot End-Effector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37

Control KINOVA Gen3 Robotic Manipulator Using KINOVA KORTEX System Object and Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-46

Control Cartesian Position of KINOVA Gen3 Robot End-Effector Using Inverse Kinematics Block and KINOVA KORTEX System Object . . . . . . . . 2-49

Generate Colorized Point Cloud Using Vision Module of Kinova Gen3 Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-54

Read Current Joint Angles from KINOVA Gen3 Robot Arm . . . . . . . . . . . 2-61

Control Individual Joint Angle of KINOVA Gen3 Robot . . . . . . . . . . . . . . 2-63

Send KINOVA Gen3 Robot Arm to Desired Cartesian Pose . . . . . . . . . . . 2-66

Send Precomputed Trajectory to KINOVA Gen3 Robot Arm . . . . . . . . . . 2-69

Simulate a Detect and Pick Algorithm Using OpenCV Interface and Rigid
Body Tree Robot Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-73

Read Current Joint Angle of KINOVA Gen3 and Publish to ROS Network . . . . 2-84

Detect and Pick Object Using KINOVA Gen3 Robot Arm with Joint Angle
Control and Trajectory Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-87

Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo, OpenCV, and MATLAB . . . . . . . . . . . . . . . . . . . . . . 2-99

3 Run on Target Hardware
Configure NVIDIA Jetson Board for Object Detection and Motion
Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2

Object Detection and Motion Planning Application with Onboard Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
Object Detection Using OpenCV . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4

Detect Position and Orientation of a Rectangular Object Using OpenCV . . . . 3-8
Verify Object Detection with NVIDIA Jetson Board . . . . . . . . . . . . . . . . . 3-11
Troubleshooting Object Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12

Joint Angle Control Versus Trajectory Control . . . . . . . . . . . . . . . . . . . . . 3-13


Joint Angle Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
Trajectory Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
Compare the Control Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13

1 Setup and Configuration

• “Install Support for Manipulator Hardware” on page 1-2


• “Set Up Network Adapter and Basic Network Ping” on page 1-4
• “Set Up IMAQ Adapter for matlab_kortex Driver” on page 1-8
• “Get Latest URDF Robot Model for Kinova Gen3 Robot” on page 1-9
• “Add Custom ROS Message Definitions to MATLAB” on page 1-10
• “Select Interface for Connecting to Kinova Gen3 Robot” on page 1-12
• “Install ROS Packages and Dependencies for ROS” on page 1-15
• “Install ROS Packages and Dependencies for ROS on External Computer” on page 1-17
• “Default Set of Messages for Windows” on page 1-20

Install Support for Manipulator Hardware


Add support for Kinova Gen3 robot by installing the Robotics System Toolbox Support Package for
Kinova Gen3 Manipulators.

After you install the support package, you can refer to the examples that show you how to manipulate
the Kinova Gen3 robot.

Operating System Requirements


Robotics System Toolbox Support Package for Kinova Gen3 Manipulators can be installed on both
Windows® and Linux® operating systems. The supported versions are:

• Windows 10
• Linux: Ubuntu® 18.04 LTS, Ubuntu 16.04 LTS

Install, Update, or Uninstall Support Package


Install Support Package

1 On the MATLAB® Home tab, in the Environment section, select Add-Ons > Get Hardware
Support Packages.

2 In the Add-On Explorer window, click the support package and then click Install.

Update Support Package

On the MATLAB Home tab, in the Resources section, select Help > Check for Updates.

Uninstall Support Package

1 On the MATLAB Home tab, in the Environment section, click Add-Ons > Manage Add-Ons.


2 In the Add-On Manager window, find and click the support package, and then click Uninstall.

Hardware Setup
Hardware boards and devices supported by MathWorks® require additional configuration and setup
steps to connect to MATLAB and Simulink®. Each support package provides a hardware setup
process that guides you through registering, configuring, and connecting to your hardware board.

If the support package is already installed, you can start the hardware setup by opening the Add-On
Manager.

In the Add-On Manager, start the hardware setup process by clicking the Setup button .

After starting, the Hardware Setup window provides instructions for configuring the support package
to work with your hardware.

Follow the instructions on each page of the Hardware Setup window. When the hardware setup
process completes, you can open the examples to get familiar with the product and its features.


Set Up Network Adapter and Basic Network Ping

Note This section explains the task to be completed as part of the step—Configure Network
Adapter over Ethernet—of the Hardware Setup process (using the Hardware Setup screens). Do
not perform this as a standalone task.

Windows
Follow these steps if you have installed Robotics System Toolbox Support Package for Kinova Gen3
Manipulators on Windows:

1 Connect the Ethernet cable from the host computer to the robot base.
2 On the Windows Start menu, search for Network status and open the Network status setting.
Click Change adapter options.

3 In the Network Connections window, locate the Ethernet adapter connected to the robot,
right-click it, and select Properties from the context menu.


4 In the Ethernet Properties dialog box, edit the Internet Protocol Version 4 (TCP/IPv4)
Properties to set the values for the parameters as shown in this figure:

5 Click OK. In the Ethernet Properties dialog box, click OK to save the settings.
6 Perform a test ping using the Windows command prompt to the robot and ensure that the ping is
successful.

ping 192.168.1.10


7 Continue with the Hardware Setup screens.

Linux
Follow these steps if you have installed Robotics System Toolbox Support Package for Kinova Gen3
Manipulators on Linux:

1 Connect the Ethernet cable from the host computer to the robot base.
2 In Linux, go to Network Connections.

3 Identify the Ethernet adapter which is connected to the robot and select it.

4 Select the Wired connection and click Edit to modify the adapter setting.
5 Navigate to IPv4 Settings, change Method to Manual, and add the IP Address, Netmask and
Gateway as per the following figure:


6 Perform a test ping using the terminal to the robot and ensure that the ping is successful.

ping 192.168.1.10
7 Continue with the Hardware Setup screens.


Set Up IMAQ Adapter for matlab_kortex Driver

First, download the required adaptor from the Kinova Artifactory:

1 Go to this GitHub page to download the Vision Module MATLAB Image Acquisition Toolbox
Adaptor. Refer to the Installation Instructions section on the page and click the download link
displayed.

The ZIP file from Kinova Artifactory is downloaded to your computer.


2 Extract the contents of the ZIP file.

Windows
After you download and extract the contents of the ZIP file, follow these steps to set up the IMAQ
adaptor if you have installed Robotics System Toolbox Support Package for Manipulators on Windows:

1 Run kinova_vision_imaq_1.0.0.exe (available in the downloaded
matlab_vision_imaq_1.0.0\windows\installer folder).
2 In the Installation Options, select full from the drop-down list and click Next.
3 Specify the installation folder and click Install.

After the successful installation, provide the location of the adaptor library file
kinova_vision_imaq.dll in the Hardware Setup screen.

Linux
After you download and extract the contents of the ZIP file, follow these steps to set up the IMAQ
adaptor if you have installed Robotics System Toolbox Support Package for Manipulators on Linux:

1 Install GStreamer and Glib by entering these commands at the Linux command prompt:

sudo apt install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstrea

sudo apt install libglib2.0-0


2 After successful installation, continue with the Hardware Setup screens.


Get Latest URDF Robot Model for Kinova Gen3 Robot


By default, the loadrobot function uses Version 10 of the Unified Robot Description Format (URDF)
file to construct the rigid body tree model for Kinova Gen3 robot. The source URDF information of a
particular rigid body tree model can be accessed using the following set of commands in MATLAB:

[gen3Model, robotData]=loadrobot('kinovaGen3')
URDFSource = robotData.Source

An updated version of the URDF file is available for the Gen3 robot. Use the additional version
argument to use Version 12, associated with release 2.2.1 for the Gen3 robot.

[gen3Model, robotData]=loadrobot('kinovaGen3','Version',2);
URDFSource = robotData.Source

Any other URDF version can be imported into MATLAB using the importrobot function. For example,
to import Version 12 of the URDF robot model of the Kinova Gen3 robot for further algorithm development:

1 Download the gen3.xacro file corresponding to your version of the Gen3 robot from Kinova’s
ROS Kortex Github repository.

• The latest repository can be found here: https://github.com/kinovarobotics/ros_kortex


• As of July 2021, the newest version is Version 12, associated with release 2.2.1. This is
available in this repository. By default, the loadrobot function uses Version 10, associated
with release 1.1.4, which has broader joint limits and slightly different inertial terms than the
newer version.
2 Convert the downloaded XACRO file to URDF by using the following command. This step requires
a valid installation of ROS.

rosrun xacro xacro --inorder -o gen3.urdf gen3.urdf.xacro


3 Use the generated URDF file to construct the rigid body tree model in MATLAB:

robot = importrobot('gen3.urdf')


Add Custom ROS Message Definitions to MATLAB

A set of standard messages is included in the default ROS message catalog of ROS Toolbox.
However, you can create your own custom ROS messages and use them in MATLAB and Simulink
with ROS networks to transmit information. For this, the rosgenmsg function is used. For more
information, refer to ROS Custom Message Support and rosgenmsg.

The kortex_driver ROS package from Kinova Robotics contains several custom ROS messages and
services. To control the Gen3 robot using ROS, these message definitions need to be added
to the MATLAB ROS message catalog. One option is to use rosgenmsg. As mentioned in the
ROS System Requirements, to generate custom messages for ROS, you must build the ROS packages,
which requires Python®, CMake, and a C++ compiler for your platform.

To make the process of adding custom ROS message definitions to MATLAB easier, the definitions for
the Kinova Gen3 robot are included in the support package, and the hardware setup process includes a
step to register these pre-generated message definitions with MATLAB. This workflow is ideal for
use cases where the custom messages and services from the kortex_driver package are not modified.

Note Generating custom ROS message definitions using rosgenmsg on Windows OS has a limitation
on the maximum number of custom messages. Hence, only limited message and service definitions are
included in the support package. These definitions are sufficient to perform the hardware setup
successfully and to run all the examples included in the support package (for details, see “Default Set
of Messages for Windows” on page 1-20).

If you are planning to use any custom message that is not part of the default messages for
Windows, then refer to the following instructions to generate custom message definitions for the
additional ROS messages.

The custom ROS message definitions have to be regenerated in these cases:

• The ROS messages from Kinova Robotics are modified and are used to communicate with and
control the robot.
• A new ROS package that depends on the ROS packages supplied by Kinova Robotics is being
created to communicate with and control the robot.
• A new message definition for Windows needs to be added to the existing set of definitions supplied
with the support package.

Generate Custom ROS Message Definitions


To generate custom ROS message definitions:

1 Navigate to the custom messages folder by executing the following command in the MATLAB
command window.

fullfile(codertarget.robotmanipulator.internal.getSpPkgRootDir,'resources','ROSMsgs')
2 Based on the operating system platform, there should be a folder with a name starting with
custommsg in the resources folder. Remove the custommsg folder from the MATLAB path and save
the current path by executing the savepath command in the MATLAB command window.


3 Restart MATLAB.
4 Verify that the output of the MATLAB command rosmsg list does not contain any ROS messages
from the kortex_driver group.
5 Refer to ROS Custom Message Support to learn more about the requirements for generating
custom ROS message definitions using rosgenmsg. Also refer to ROS System Requirements to
learn more about the prerequisite software.
6 As mentioned in ROS Custom Message Support, rosgenmsg expects custom message files
(.msg files) under the folder msg and custom service files (.srv files) under the folder srv.
However, the default folder structure of the ROS package kortex_driver contains several
subfolders inside the msg and srv folders. Hence, before proceeding further, manually remove all
the subfolders inside the msg and srv folders.
7 Generating custom ROS message definitions using rosgenmsg on Windows OS has a limitation on
the maximum number of custom messages. Hence, limit the number of custom messages and
services to the minimum required. If you want to add just a few ROS messages to the set of custom
message and service definitions, ensure that you first copy the .msg and .srv files for the default
message set and then the additional files for the ROS messages that you want to add.
8 Navigate to the folder that contains the custom ROS package and execute rosgenmsg in the
MATLAB command window (see the sketch after this list).
9 Wait for the process to complete. Once it is done, execute the displayed commands to modify the
MATLAB path, clear classes, and rehash the toolbox cache.
10 Check the output of the MATLAB command rosmsg list for the messages from the kortex_driver
group (or whatever your custom ROS package is named).
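The overall flow condenses into this minimal sketch. It assumes the folder layout described in step 6;
the path 'C:\kortex_custom' is a hypothetical location for the folder containing the package with its
msg and srv folders.

% A minimal sketch of the rosgenmsg workflow (the path is a placeholder)
folderPath = 'C:\kortex_custom';   % folder containing the kortex_driver package
rosgenmsg(folderPath)
% rosgenmsg prints follow-up commands: addpath/savepath the generated
% folder, run "clear classes" and "rehash toolboxcache", then verify
% the new definitions with "rosmsg list".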


Select Interface for Connecting to Kinova Gen3 Robot

Use matlab_kortex API and MATLAB


This interface uses matlab_kortex API (provided by Kinova Robotics) and MATLAB to communicate
with the robot.

This interface is ideal for use-cases where the intended end goal involves standalone manipulation of
the Gen3 robot only. The hardware support package lets you acquire various sensor data from the
robot manipulator, control the robot manipulator, connect with the robot hardware to test and
validate your algorithms, prototype robot manipulator algorithms, and simulate robot manipulators
using rigid body tree models from Robotics System Toolbox™.

This interface is supported on Windows and Linux operating systems and requires a valid installation
of Robotics System Toolbox. If you are planning to use vision sensors of the robot then a valid
installation of Image Acquisition Toolbox is also required. To know more about the capabilities and
features of this interface, watch the video Developing a Pick-And-Place Application in MATLAB and
Simulink and refer the examples.

Use ros_kortex ROS packages, MATLAB, ROS, and ROS Toolbox


This interface uses ros_kortex ROS packages (provided by Kinova Robotics), MATLAB, ROS and ROS
Toolbox to communicate with the robot.

In many robotics use cases, the environment consists of multiple robots and sensors, and
standalone manipulation is not common. In such scenarios, ROS is widely used due to its inherent
advantages of hardware abstraction, device drivers, libraries, visualizers, message passing, package
management, and more. This interface is ideal for use cases where the manipulator is part of an
existing ROS network. With this interface, you can use Robotics System Toolbox™ features to
simulate and generate trajectories for the Kinova Gen3 robot, and then control the robot using
various ROS services and actions through ROS Toolbox.


This feature is supported on Windows and Linux operating systems and requires a valid installation
of Robotics System Toolbox and ROS Toolbox. However, the automatic hardware setup via the hardware
setup screens is available only for Ubuntu 18.04 and 16.04 operating systems.

Special Considerations for Linux OS

• Automatic hardware setup screens guide you to configure the host computer to communicate with
the robot via ROS if MATLAB and ROS are installed on the same host computer.

• If MATLAB and ROS are installed on different machines, refer to “Install ROS Packages
and Dependencies for ROS on External Computer” on page 1-17 to configure the host computer
and the ROS computer manually.


Special Considerations for Windows OS

• You can use the Linux VM provided by MathWorks on a host computer with Windows OS and
MATLAB to control the robot using ROS. Refer to “Install ROS Packages and Dependencies for
ROS on External Computer” on page 1-17 to configure the host computer and the VM manually.
• If the robot is connected to a separate machine running Linux with ROS and needs to be controlled
using MATLAB installed on a Windows host computer, refer to “Install ROS Packages and
Dependencies for ROS on External Computer” on page 1-17 to configure the host computer and
the ROS computer manually.


Install ROS Packages and Dependencies for ROS

Note This section explains the steps to install ROS packages and dependencies if MATLAB and ROS
are installed on the same computer, which is connected to Kinova Gen3 robot. If the ROS and
MATLAB are installed on different computers, see “Install ROS Packages and Dependencies for ROS
on External Computer” on page 1-17.

Kinova Robotics provides a set of external ROS packages to communicate with and control the Gen3
robot. More information on the individual packages can be found on this GitHub page. Execute the
following steps sequentially to clone the required packages and install the required dependencies.

1 Install python3 and python3-pip by executing the following command in the Linux terminal.

sudo apt install python3 python3-pip


2 Install conan using pip and set up a default profile.

sudo python3 -m pip install conan==1.59
conan config set general.revisions_enabled=1
conan profile new default --detect > /dev/null
conan profile update settings.compiler.libcxx=libstdc++11 default
3 Create a new workspace with src folder.

mkdir -p kinova_ws/src
cd kinova_ws
4 Download the ros_kortex ROS packages (version 2.2.1) from the GitHub archive of Kinova Robotics.
5 Extract the contents of the ZIP file to the src folder. The src folder should contain
ros_kortex-2.2.1 folder.
6 Install dependencies from the custom ROS packages by executing the following command in the
Linux terminal.

rosdep install --from-paths src --ignore-src -y


7 Build the workspace using the catkin tool. This will take a few minutes to complete.

catkin_make
8 Once the build is successful, continue with the hardware setup screens if you are not planning to
use the vision module. If you are planning to use the vision module, execute the setup
instructions mentioned in the following section.

Install Dependencies for Vision Module of Kinova Gen3 Robot


Kinova Robotics provides a set of external ROS packages to acquire various image data from the vision
module of the Gen3 robot. For more information on the ROS packages, refer to this GitHub page. Execute
the following steps sequentially to clone the required packages and install the required dependencies.

1 Install GStreamer and associated packages by executing the following command in the Linux
terminal.

sudo apt install gstreamer1.0-tools gstreamer1.0-libav libgstreamer1.0-dev libgstreamer-plugi


2 Install the rgbd_launch ROS package.


sudo apt-get install ros-kinetic-rgbd-launch


3 Navigate to the src folder that was created in Step 3 of the earlier setup process. Clone
the required ROS packages in the src folder.

git clone https://github.com/Kinovarobotics/ros_kortex_vision.git


4 Build the workspace using the catkin tool. This will take a few minutes to complete.

catkin_make
catkin_make install


Install ROS Packages and Dependencies for ROS on External Computer

Note This section describes the steps to be performed if ROS is installed on a computer other than
the one with MATLAB installed. If ROS and MATLAB are installed on the same computer, see
“Install ROS Packages and Dependencies for ROS” on page 1-15.

Kinova Robotics provides a set of external ROS packages to communicate with and control the Gen3
robot. For more information on the individual packages, refer to this GitHub page. Execute the following
steps sequentially to clone the required packages and install the required dependencies on the computer
that is connected to the robot and has ROS installed.
1 Verify the OS version by executing the following command in the Linux terminal.
lsb_release -d

The support package has been tested on Ubuntu 18.04 with ROS Melodic and Ubuntu 16.04 with
ROS Kinetic.
2 Verify the ROS version by executing the following command in the Linux terminal.
rosversion -d

If the ROS version is detected as <unknown>, add the ROS environment variables in your
default bash shell launch script. Instructions are available under Section 1.5 of
this webpage. If you are using ROS Melodic, see this webpage instead.
3 Install python3 and python3-pip by executing the following command in the Linux terminal.
sudo apt install python3 python3-pip
4 Install conan using pip and setup a default profile.
sudo python3 -m pip install conan
conan config set general.revisions_enabled=1
conan profile new default --detect > /dev/null
conan profile update settings.compiler.libcxx=libstdc++11 default
5 Create a new workspace with src folder.
mkdir -p kinova_ws/src
cd kinova_ws
6 Download the ros_kortex ROS packages (version 2.2.1) from the GitHub archive of Kinova Robotics.

If you only have access to the Jetson™ board via an SSH terminal, you can use the wget command to
download the ZIP archive and then the unzip command to extract the contents into the src folder.
7 Extract the contents of the ZIP file to the src folder. The src folder should contain
ros_kortex-2.2.1 folder.
8 Navigate to the workspace folder that you have created in step 5 (for example, kinova_ws).
Install dependencies from the custom ROS packages by executing the following command in the
Linux terminal.


rosdep install --from-paths src --ignore-src -y


9 Build the workspace using the catkin tool. Additionally, if you are configuring an NVIDIA® Jetson
compute board, append --cmake-args -DCONAN_TARGET_PLATFORM=jetson to the below
command:
catkin_make

This will take a few minutes to complete.


10 Ensure the connection with the robot over Wi-Fi® or Ethernet. If you are using the Ethernet port
to connect with the robot, then configure your network adaptor to communicate with the robot.
Refer to “Set Up Network Adapter and Basic Network Ping” on page 1-4 for more details.

If you are using the Vision module, perform the setup instructions in the following section;
otherwise, go to “Configure MATLAB for Custom ROS Message Definitions for Kinova Gen3” on page 1-18.

Install Dependencies for Vision Module of Kinova Gen3 Robot


Kinova Robotics provides a set of external ROS packages to acquire various image data from the vision
module of the Gen3 robot. For more information on the ROS packages, refer to this GitHub page. Execute
the following steps sequentially to clone the required packages and install the required dependencies.
1 Install GStreamer and associated packages by executing the following command in the Linux
terminal.
sudo apt install gstreamer1.0-tools gstreamer1.0-libav libgstreamer1.0-dev libgstreamer-plugi
2 Install rgbd launch ROS package.

Note In the below command, you need to specify the correct ROS version, that is melodic or
kinetic, according to your ROS installation version.

sudo apt-get install ros-kinetic-rgbd-launch


3 Navigate to the src folder that was created in Step 5 of the earlier setup process. Clone
the required ROS packages in the src folder.
git clone https://github.com/Kinovarobotics/ros_kortex_vision.git
4 Build the workspace using catkin tool. This will take a few minutes to complete.
catkin_make
catkin_make install

Configure MATLAB for Custom ROS Message Definitions for Kinova Gen3
This section explains the steps to be performed on the computer which has MATLAB installed.

The ROS package kortex_driver supplied by Kinova Robotics contains custom ROS messages that
are not part of the default message catalog of MATLAB. Hence, the message definitions of these
custom messages must be added to MATLAB to communicate with the robot using ROS.

Pre-generated custom message definitions are included with this support package. The pre-
configured message definitions are generated from the ROS package kortex_driver, as supplied by


Kinova Robotics. If you want to modify the message definitions or create your own ROS package,
which is dependent on kortex_driver, then refer to “Add Custom ROS Message Definitions to
MATLAB” on page 1-10 for more details on the workflow.

1 Execute the following function to add the custom ROS message definitions to the message catalog.

Status = codertarget.manipROSInterface.addCustomROSmsg('kinova');

If custom ROS message definitions are added to the ROS message catalog, the function returns true.
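As a quick check (a minimal sketch; the exact listing depends on your installation), you can confirm
that the kortex_driver definitions are now visible in the message catalog:

% List registered message types and look for the kortex_driver group
msgList = rosmsg('list');
disp(any(contains(msgList,'kortex_driver')))  % expected: 1 (true)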

The setup is complete. Continue with the hardware setup screens and refer to the “Read Current
Joint Angles from KINOVA Gen3 Robot Arm” on page 2-61 example to acquire joint angles from the
robot.

See Also
“Add Custom ROS Message Definitions to MATLAB” on page 1-10

Related Examples
• “Read Current Joint Angles from KINOVA Gen3 Robot Arm” on page 2-61


Default Set of Messages for Windows


The Robotics System Toolbox Support Package for Manipulators provides these custom messages and
custom services for ROS connectivity. If you need additional messages, add custom ROS message
definitions to MATLAB (for details, see “Add Custom ROS Message Definitions to MATLAB” on page 1-
10).

• Custom Messages

1 ActuatorFeedback
2 BaseCyclic_Feedback
3 BaseFeedback
4 CartesianSpeed
5 CartesianTrajectoryConstraint
6 CartesianTrajectoryConstraint_type
7 ConstrainedJointAngles
8 ConstrainedPose
9 Empty
10 GripperCyclic_Feedback
11 GripperCyclic_MessageId
12 InterconnectCyclic_Feedback
13 InterconnectCyclic_Feedback_tool_feedback
14 InterconnectCyclic_MessageId
15 JointAngle
16 JointAngles
17 JointTrajectoryConstraint
18 MotorFeedback
19 Pose

• Custom Services

1 PlayCartesianTrajectory
2 PlayJointTrajectory

2 Get Started

• “Verify Connection to Kinova Gen 3 Robot” on page 2-2


• “Download MATLAB MEX APIs for Kinova Gen 3” on page 2-3
• “Test Basic Connection and Move the Wrist Joint” on page 2-4
• “Configure MATLAB to Acquire Images from Vision Module” on page 2-6
• “Clear Existing Faults” on page 2-8
• “Troubleshooting Manipulator Support Package” on page 2-9
• “Getting Started with ROS Connectivity” on page 2-11
• “Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Control the Robot”
on page 2-12
• “Connect to ROS Network” on page 2-13
• “Read Joint Angles” on page 2-14
• “Manipulate Individual Joint Angle Using MATLAB” on page 2-15
• “Control Cartesian Pose Using MATLAB” on page 2-16
• “Send Precomputed Trajectory to the Robot” on page 2-17
• “Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Acquire Image Data from
Vision Sensor” on page 2-19
• “Track Pre-Computed Trajectory of Kinova Gen3 Robot End-Effector Using Inverse Kinematics and
KINOVA KORTEX MATLAB API” on page 2-20
• “Connect to Kinova Gen3 Robot and Manipulate the Arm Using MATLAB ” on page 2-29
• “Generate a Trajectory Using a Set of Waypoints for KINOVA Gen3 Robot End-Effector ”
on page 2-37
• “Control KINOVA Gen3 Robotic Manipulator Using KINOVA KORTEX System Object and Simulink”
on page 2-46
• “Control Cartesian Position of KINOVA Gen3 Robot End-Effector Using Inverse Kinematics Block
and KINOVA KORTEX System Object” on page 2-49
• “Generate Colorized Point Cloud Using Vision Module of Kinova Gen3 Robot” on page 2-54
• “Read Current Joint Angles from KINOVA Gen3 Robot Arm” on page 2-61
• “Control Individual Joint Angle of KINOVA Gen3 Robot” on page 2-63
• “Send KINOVA Gen3 Robot Arm to Desired Cartesian Pose” on page 2-66
• “Send Precomputed Trajectory to KINOVA Gen3 Robot Arm” on page 2-69
• “Simulate a Detect and Pick Algorithm Using OpenCV Interface and Rigid Body Tree Robot
Model ” on page 2-73
• “Read Current Joint Angle of KINOVA Gen3 and Publish to ROS Network” on page 2-84
• “Detect and Pick Object Using KINOVA Gen3 Robot Arm with Joint Angle Control and Trajectory
Control” on page 2-87
• “Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo, OpenCV,
and MATLAB” on page 2-99

Verify Connection to Kinova Gen 3 Robot


Perform these steps to verify the connection from the host computer to the Kinova Gen3 robot using
MATLAB:

1 Ensure that the robot is connected to the host computer and the network adapter is configured.
For details, see “Set Up Network Adapter and Basic Network Ping” on page 1-4. Identify the IP
Address of the robot.
2 Verify the network connection with the robot.

To do this, use the system command at the MATLAB prompt to test network connectivity with
the robot (assume that the IP address is '192.168.1.10'):

IP = '192.168.1.10';
system(['ping ' IP],'-echo');

If the network adapter is configured correctly, you should receive a reply from the robot in
response to the ping command. If you do not receive a reply, refer to “Set Up Network Adapter
and Basic Network Ping” on page 1-4 or “Troubleshooting Manipulator Support Package” on
page 2-9.
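If you prefer a programmatic check (a small sketch using the same system call), you can inspect the
exit status instead of reading the echoed output:

% Exit status 0 indicates that the ping succeeded
[status,~] = system(['ping ' IP]);
if status ~= 0
    warning('Robot not reachable; check the network adapter settings.');
end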


Download MATLAB MEX APIs for Kinova Gen 3


Kinova Robotics provides the MATLAB APIs under the GitHub® repository matlab_kortex to connect
to and control the Gen3 robot with MATLAB and Simulink. It also includes an adapter for Image
Acquisition Toolbox™. This adapter can be used to acquire RGB and depth images from the vision
module of the Gen3 robot.

The required functions to connect with the manipulator can be downloaded from the Kinova
Artifactory. The download page does not load properly in the Internet Explorer® or Microsoft Edge®
web browsers.

The examples provided in this release of the support package are developed and tested with
the KORTEX™ API version 2.2.1 and with manipulator firmware version 2.2.1. For detailed
instructions on changing the robot firmware, refer to the user guide for the Gen3 robot.

Once the download is complete, extract the ZIP file to a preferred location and add the extracted
folder and its subfolders to the MATLAB path. For more information on how to add a folder to the
MATLAB search path, refer to addpath.
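For example, assuming the archive was extracted to C:\kortex_api (a hypothetical location), the
following adds it to the search path:

% Add the extracted MATLAB API folder and all subfolders to the path;
% 'C:\kortex_api' is a placeholder for your actual extraction location.
addpath(genpath('C:\kortex_api'));
savepath;  % optional: persist the path across MATLAB sessions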

The MATLAB APIs provided by Kinova Robotics include precompiled MEX files. However, you can
build them again (for build instructions, refer to this GitHub page of Kinova Robotics). The package
includes a MEX interface function, kortexApiMexInterface, which serves as the mexFunction (C)
entry point for C/C++ MEX functions. It also includes the MATLAB system object, kortex.


Test Basic Connection and Move the Wrist Joint


After you perform the initial configuration activities described in “Set Up Network Adapter and
Basic Network Ping” on page 1-4 for the Kinova Gen3, you can test the basic connection with the
robot and try to move the wrist joint of the robot.

Create Robot APIs Wrapper


To create a wrapper for the MATLAB APIs for use with the Gen3 robot:

1 Import enums and structures.

Simulink.importExternalCTypes(which('kortex_wrapper_data.h'));
2 Create an API instance to connect to the robot

gen3 = kortex;

This creates an object of class kortex with default properties for IP address, user name and
password.
3 Modify the IP address of the robot.

gen3.ip_address = '192.168.1.10';


4 Modify the user name and password.

gen3.user = 'admin';
gen3.password = 'admin';
5 Connect to the robot

isOk = gen3.CreateRobotApisWrapper;

On successful connection, the function returns true.

Display Joint Angles of the Robot


[isOk, baseFeedback, jointsFeedback, interconnectFeedback] = gen3.SendRefreshFeedback;
disp(jointsFeedback.position);

Move the Wrist Joint of the Robot


You can use the SendJointAngles command to move the robot to a particular position defined by a
set of joint angles. Because the Kinova Gen 3 robot has seven actuators, the input to the function is a
vector of size 1-by-7. The valid range of joint angle is 0 to 360 degrees.

In the previous step, you read the current joint angles from the robot. The following command specifies
that the wrist joint be moved by 5 degrees.

jointCmd = jointsFeedback.position + [0,0,0,0,0,0,5];

Specify constraints for the robot motion (if you specify 0, the default values are used):

constraintType = int32(0);
speed = 0;
duration = 0;


Send high-level command to the robot to move the wrist joint:

isOk = gen3.SendJointAngles(jointCmd, constraintType, speed, duration);

Disconnect from the Robot


isOk = gen3.DestroyRobotApisWrapper;


Configure MATLAB to Acquire Images from Vision Module

After you perform the initial configuration activities described in “Set Up IMAQ Adapter for
matlab_kortex Driver” on page 1-8, you can configure MATLAB to acquire images from the vision module
of the Kinova Gen3 robot.

1 Register the adaptor library with MATLAB

In the MATLAB command window, use the imaqregister command to register the adaptor from
KINOVA Robotics with Image Acquisition Toolbox. This step needs to be performed only once.

• For the Windows operating system, the library file is 'kinova_vision_imaq.dll' and it is located
in the installation folder. For example, if the installation folder is 'C:\Program Files\Kinova
\Vision Imaq', then the command to register the adaptor is:

imaqregister('C:\Program Files\Kinova\Vision Imaq\kinova_vision_imaq.dll');


• For Linux operating system, the library file is 'libkinova_vision_imaq.so' and it is available
inside the downloaded folder.
2 Verify the registration

a Reset the image acquisition hardware

imaqreset;
b Get information about the available image acquisition hardware

imaqhwinfo;

If the adaptor is registered correctly, 'kinova_vision_imaq' is present in the list of installed
adaptors. If the adaptor is not listed under the installed adaptors, restart MATLAB
and continue with Step 3.
3 Acquire image from cameras

• Windows OS

a Create a video input object for RGB camera

vid1 = videoinput('kinova_vision_imaq', 1, 'RGB24');


b Set frames per trigger and get information about currently selected video source object

vid1.FramesPerTrigger = 1;
src1 = getselectedsource(vid1);
c Change device properties

src1.CloseConnectionOnStop = 'Enabled';
src1.Ipv4Address = '192.168.1.10';
d Preview the video object

preview(vid1);
e Close the preview and delete the video object

closepreview(vid1);
delete(vid1);


f Display the colorized depth video

vid2 = videoinput('kinova_vision_imaq', 2, 'MONO16');


vid2.FramesPerTrigger = 1;
src2 = getselectedsource(vid2);
src2.CloseConnectionOnStop = 'Enabled';
src2.Ipv4Address = '192.168.1.10';
im = preview(vid2);
ax = im.Parent;
im.CDataMapping = 'scaled';
colormap(ax, hot);
ax.CLimMode = 'auto';
ax.CLim = [0 20];
closepreview(vid2);
delete(vid2);
• Linux OS

• The only difference in the workflow for the Linux operating system is the name of the adaptor. For
Linux, the adaptor name is libkinova_vision_imaq instead of kinova_vision_imaq
for Windows, as shown in the snippet after this list.
• Perform the remaining steps, which are the same as mentioned under the Windows operating
system.
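For instance, the color-camera object on Linux would be created as follows (same arguments as in the
Windows example; only the adaptor name changes):

% Linux: use the libkinova_vision_imaq adaptor name
vid1 = videoinput('libkinova_vision_imaq', 1, 'RGB24');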


Clear Existing Faults


You need to clear faults that occur while you try to manipulate the Kinova Gen3 robot using Robotics
System Toolbox Support Package for Manipulators.

Perform any of these actions to clear existing faults:

• If you are using a gamepad or joystick connected to the Kinova Gen3 robot, press the Left
Bumper button to clear any existing faults of the robot.
• You can open a browser and enter the IP address of the robot in the address bar to open the
KINOVA® KORTEX™ Web App. Log in to the robot and navigate to the error symbol at the top
right corner. Click the Error symbol and then click Clear All.


Troubleshooting Manipulator Support Package


Troubleshooting Network Connection with Kinova Gen3 Robot
Perform any of these actions to troubleshoot network connection issues with the Kinova Gen3 robot:

• Test the connection with the robot using KINOVA® KORTEX™ Web App.
• Ensure that the robot is powered on.
• Test the setup by controlling the robot with a game pad.
• Check the IP address of the robot.
• If you are connecting using Ethernet, ensure that the Ethernet cable is properly connected to your
computer and the robot.
• Ensure that the proper network adapter is selected and configured to communicate with the
robot.
• Clear any existing faults using game pad or KINOVA® KORTEX™ Web App. For more information,
refer to “Clear Existing Faults” on page 2-8.

Troubleshooting Image Acquisition with Kinova Gen3 Robot

Perform any of these actions to troubleshoot image acquisition issues with the Kinova Gen3 robot:

• Verify that the adaptor kinova_vision_imaq (for Windows) or libkinova_vision_imaq (for
Linux) is registered correctly with the Image Acquisition Toolbox. To get the list of installed
adaptors, run imaqhwinfo in the MATLAB command window (see the snippet after this list). If the
specified adaptor is not listed in the installed adaptors, refer to “Set Up IMAQ Adapter for
matlab_kortex Driver” on page 1-8 for detailed instructions to register the adaptor.
• Test the connection with the robot using KINOVA® KORTEX™ Web App. You can also verify the
camera streams using the app.
• Verify the image acquisition using Image Acquisition Tool:
1 Open the Image Acquisition Tool by navigating to the APPS tab. The tool is available under
the section Image Processing and Computer Vision.
2 Select the appropriate device, that is, RGB24 (color device) or MONO16 (depth device), from
the list of available devices on the left panel.
3 Go to the Device properties and enter the IP address of the robot.
4 Click Start preview and observe the camera stream.
• Sometimes the connection between the vision module and the acquisition adaptor does not close,
and you might therefore see an error stating that the device may already be in use. This issue is
more frequent on Linux; more information can be found on this GitHub page of Kinova
Robotics.

To resolve this issue, wait for at least 30 seconds before initiating the next image acquisition. If
the error persists, then power-cycle the robot.
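A quick programmatic check of the registered adaptors (a minimal sketch):

% Display the installed image acquisition adaptors; the list should
% include kinova_vision_imaq (Windows) or libkinova_vision_imaq (Linux)
info = imaqhwinfo;
disp(info.InstalledAdaptors)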

Limitations of Precomputed Trajectory


Consider these limitations while sending precomputed trajectories to manipulate the Kinova Gen3 robot:


• The Gen3 robot accepts precomputed trajectories with certain restrictions on the angular velocity
and acceleration of each joint, as well as on the time stamp of each input.
• The initial time stamp of each trajectory should be exactly zero seconds, and the current position of
the robot should exactly match the starting point of the trajectory.
• Successive trajectory points should have a time increment of exactly 1 millisecond, and the overall
trajectory should not be more than 30 seconds long, that is, 30,001 trajectory points (see the
sketch after this list).
• Joint positions for all actuators must be within the specified joint limits throughout the entire
trajectory.
• Similarly, angular velocity and acceleration for each joint should be within the specified limits
throughout the entire trajectory.
• For more information on the limits of individual joint angle, velocity and acceleration, refer to the
user guide of Kinova Gen3 robot.
• For more information on the limitations of SendPreComputedTrajectory, refer to this GitHub page.
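A minimal sketch of checking a candidate trajectory's time vector against these timing restrictions
(the variable names are illustrative):

% t is a column vector of trajectory time stamps in seconds
t = (0:0.001:30)';                        % example: 1 ms steps, 30 s span
assert(t(1) == 0);                        % must start at exactly 0 s
assert(all(abs(diff(t) - 0.001) < 1e-9)); % points must be 1 ms apart
assert(numel(t) <= 30001);                % must not exceed 30 s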


Getting Started with ROS Connectivity

Products Required
• Robotics System Toolbox
• ROS Toolbox

The robot can be connected to MATLAB using different methods depending on the communication
interface provided by the robot manufacturer. For the Kinova Gen3 robot arm, Kinova provides a
MATLAB MEX API and a ROS API that use the Ethernet port or Wi-Fi to communicate with the robot.
For more information on the MATLAB MEX API, see “Download MATLAB MEX APIs for Kinova Gen 3” on
page 2-3. To configure the host computer for the ROS API, see “Install ROS Packages and Dependencies
for ROS” on page 1-15 or “Install ROS Packages and Dependencies for ROS on External Computer”
on page 1-17.

This example uses the ROS API provided by KINOVA Robotics to connect the Gen3 robot to a ROS network
and to perform various common operations using ROS and MATLAB. For more information on the
ros_kortex ROS packages, see the GitHub page of KINOVA Robotics.

Ensure that the robot hardware is set up completely and connected to the host computer before
proceeding further. For more information, see “Verify Connection to Kinova Gen 3 Robot” on page 2-2.

The topic is divided into six parts:

1 “Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Control the Robot” on
page 2-12
2 “Connect to ROS Network” on page 2-13
3 “Read Joint Angles” on page 2-14
4 “Manipulate Individual Joint Angle Using MATLAB” on page 2-15
5 “Control Cartesian Pose Using MATLAB” on page 2-16
6 “Send Precomputed Trajectory to the Robot” on page 2-17


Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Control the Robot
Open a terminal from the catkin workspace containing the ros_kortex ROS packages on the host
computer or VM where ROS is installed. Execute the following command in the terminal to launch the
required ROS nodes. Replace the IP address of the robot and the gripper type based on the actual
hardware setup. For more information on the roslaunch command, see the GitHub page of KINOVA
Robotics.

roslaunch kortex_driver kortex_driver.launch ip_address:=192.168.0.106 gripper:=robotiq_2f_85 start_rviz:=false

If the hardware setup and the IP address are correct, the message 'The Kortex driver has been
initialized correctly!' is displayed in the terminal window.


Connect to ROS Network


Set the MATLAB and ROS IP addresses in the environment variables. For more information on
environment variables, see Connect to a ROS Network. Execute these commands sequentially in
MATLAB to connect to the existing ROS network.

matlabIP = '192.168.0.103'; % IP address of MATLAB host PC
rosIP = '192.168.0.103';    % IP address of ROS enabled machine
setenv('ROS_IP',matlabIP);
setenv('ROS_MASTER_URI',['http://' rosIP ':11311']);

Initialize MATLAB global node and connect to the existing ROS network.

rosinit;


Read Joint Angles


This section helps you connect the KINOVA® Gen3 7-DoF Ultralightweight Robot arm to the ROS
network and read joint angles using MATLAB.

Execute the following commands sequentially in MATLAB to read joint angles from the robot. As
explained on the KINOVA GitHub page, the first part of the ROS topic, 'my_gen3', might differ
based on the robot name set during the roslaunch command.

Create a subscriber for the ROS topic /my_gen3/joint_states and receive the latest message.

jSub = rossubscriber('/my_gen3/joint_states');
jMsg = receive(jSub);

Extract the joint angles from the received data.

angles = int16(jMsg.Position(1:7)'.*180/pi);
disp(angles);


Manipulate Individual Joint Angle Using MATLAB


This section helps you manipulate an individual joint angle of the KINOVA® Gen3 7-DoF
Ultralightweight Robot arm using MATLAB. Execute these commands sequentially in MATLAB to
manipulate individual joint angles of the robot. As explained on the KINOVA GitHub page, the first part
of the ROS topic, 'my_gen3', might differ based on the robot name set during the roslaunch
command.

Create a service client with an empty message to control the individual joint angles.

[jointSrv,jointRequest] = rossvcclient('/my_gen3/base/play_joint_trajectory');

Set the desired joint angles for Kinova Gen3 robot.

jAngs = [0 15 180 -130 0 55 90]'*pi/180;
jAngs(jAngs < 0) = jAngs(jAngs < 0) + 2*pi;

Populate the blank message with the desired joint angle values.

for idx = 1:7
    jAngMsgs(idx) = rosmessage('kortex_driver/JointAngle');
    jAngMsgs(idx).JointIdentifier = idx;
    jAngMsgs(idx).Value = rad2deg(jAngs(idx));
end
jointRequest.Input.JointAngles.JointAngles_ = jAngMsgs;

Use the default type and value for the constraints. This sets the constraint type to an angular velocity
constraint and uses the default limit for the maximum angular velocity.

jointRequest.Input.Constraint.Type = 0;  % Speed constraint
jointRequest.Input.Constraint.Value = 0; % Max angular speed in deg/s

Call the service to move the Robot.

call(jointSrv,jointRequest);

Wait for the robot to stop moving.
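One way to wait programmatically (a hedged sketch; it assumes the /my_gen3/joint_states topic
publishes joint velocities) is to poll until the reported velocities are near zero:

% Poll joint velocities until the arm has effectively stopped
vSub = rossubscriber('/my_gen3/joint_states');
jMsg = receive(vSub);
while max(abs(jMsg.Velocity(1:7))) > 0.01   % rad/s threshold (illustrative)
    pause(0.1);
    jMsg = receive(vSub);
end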


Control Cartesian Pose Using MATLAB


This section helps you manipulate the KINOVA® Gen3 7-DoF Ultralightweight Robot arm to
achieve a desired Cartesian pose using MATLAB. Execute the following commands sequentially in
MATLAB to manipulate the robot. As explained on the KINOVA GitHub page, the first part of the ROS
topic, 'my_gen3', might differ based on the robot name set during the roslaunch command.

Read the current Cartesian pose of the robot.

baseSub = rossubscriber('/my_gen3/base_feedback');
baseMsg = baseSub.LatestMessage;

Set the offsets from the current Cartesian pose. The first three values correspond to the offsets in the
X, Y, and Z positions respectively, in meters. The last three values correspond to the offsets in the
rotation angles about the X, Y, and Z axes respectively, in degrees.

offsets = [0.1 0.1 0.1 0 0 0];

Create a service client and an empty ROS message for the ROS service
/my_gen3/base/play_cartesian_trajectory.

[cartTrajSrv,cartTrajMsg] = rossvcclient('/my_gen3/base/play_cartesian_trajectory');

Set the desired Cartesian pose using the existing pose and offsets.

cartTrajMsg.Input.TargetPose.X = baseMsg.Base.ToolPoseX + offsets(1);
cartTrajMsg.Input.TargetPose.Y = baseMsg.Base.ToolPoseY + offsets(2);
cartTrajMsg.Input.TargetPose.Z = baseMsg.Base.ToolPoseZ + offsets(3);
cartTrajMsg.Input.TargetPose.ThetaX = baseMsg.Base.ToolPoseThetaX + offsets(4);
cartTrajMsg.Input.TargetPose.ThetaY = baseMsg.Base.ToolPoseThetaY + offsets(5);
cartTrajMsg.Input.TargetPose.ThetaZ = baseMsg.Base.ToolPoseThetaZ + offsets(6);

Set the velocity constraints on the translation speed and rotational speed.

speedConstraint = rosmessage('kortex_driver/CartesianSpeed');
speedConstraint.Translation = 0.5; % m/s
speedConstraint.Orientation = 45; % deg/s
cartTrajMsg.Input.Constraint.OneofType.Speed = speedConstraint;

Call the service to move the robot.

call(cartTrajSrv,cartTrajMsg);

Send Precomputed Trajectory to the Robot


This section helps you send commands to the robot to follow a precomputed joint trajectory. Ensure
that the robot is at the home position before executing the following commands.

The expected motion of the robot is the same as shown in the example “Generate a Trajectory Using a
Set of Waypoints for KINOVA Gen3 Robot End-Effector” on page 2-37.

Load the robot model and set the data format.


gen3 = loadrobot('kinovaGen3');
gen3.DataFormat = 'column';

Read the current state of the robot.


jSub = rossubscriber('/my_gen3/joint_states');
jMsg = receive(jSub);
jAngs = jMsg.Position(1:7);

Display the robot in the current configuration.


show(gen3,jAngs,'Frames','off');
T = getTransform(gen3,jAngs,'EndEffector_Link');

Define the waypoint offsets and the time instances at which to reach each waypoint.
waypointTimes = [0 7 14 21 28];
waypointOffsets = [0 0 0 0 0 0;
-0.1 0.2 0.2 pi/4 0 0;
-0.2 0 0.1 0 0 pi/4;
-0.1 -0.2 0.2 -pi/4 0 0;
0 0 0 0 0 0]; % [X Y Z yaw pitch roll]

startPose = [tform2trvec(T) tform2eul(T,'ZYX')];
taskWaypoints = startPose + waypointOffsets;
jointWaypoints = zeros(7,numel(waypointTimes));

Calculate the joint configuration for each waypoint using inverse kinematics.
ik = inverseKinematics('RigidBodyTree',gen3);
for idx = 1:numel(waypointTimes)
ikGoal = trvec2tform(taskWaypoints(idx,1:3)) * eul2tform(taskWaypoints(idx,4:6),'ZYX');
ikSoln = ik('EndEffector_Link',ikGoal,[1 1 1 1 1 1],jAngs);
jointWaypoints(:,idx) = ikSoln(1:7);
end

Plot the robot at calculated waypoints.


figure('Visible','on')
show(gen3,jAngs,'Frames','off');
hold on;
plot3(taskWaypoints(:,1),taskWaypoints(:,2),taskWaypoints(:,3),'ro:','LineWidth',2)
for idx = 1:numel(waypointTimes)
show(gen3,jointWaypoints(:,idx),'Frames','off','PreservePlot',false);
pause(2)
end

Package the trajectory and send to the robot.


sampleTime = 0.001; % Do not modify
numSamples = waypointTimes(end)/sampleTime + 1;
[q,qd,qdd] = trapveltraj(jointWaypoints,numSamples, ...
    'EndTime',repmat(diff(waypointTimes),[7 1]));
trajTimes = linspace(0,waypointTimes(end),numSamples);

[trajAct,trajGoal] = rosactionclient( ...
    '/my_gen3/gen3_joint_trajectory_controller/follow_joint_trajectory');

jointNames = {'joint_1','joint_2','joint_3','joint_4','joint_5','joint_6','joint_7'};
packageJointTrajectory(trajGoal,jointNames,q,qd,qdd,trajTimes)
sendGoal(trajAct,trajGoal);

After the robot finishes executing the trajectory, shut down the MATLAB global node.

rosshutdown;

function packageJointTrajectory(msg,jointNames,q,qd,qdd,trajTimes)

% Initialize values
N = numel(trajTimes);
numJoints = size(q,1);
zeroVals = zeros(numJoints,1);

% Assign joint names to ROS message
msg.Trajectory.JointNames = jointNames;

% Assign constraints
for idx = 1:numJoints
msg.GoalTolerance(idx) = rosmessage('control_msgs/JointTolerance');
msg.GoalTolerance(idx).Name = jointNames{idx};
msg.GoalTolerance(idx).Position = 0;
msg.GoalTolerance(idx).Velocity = 0.1;
msg.GoalTolerance(idx).Acceleration = 0.1;
end

% Loop through waypoints and fill in their data
trajPts(N) = rosmessage('trajectory_msgs/JointTrajectoryPoint');
for idx = 1:N
m = rosmessage('trajectory_msgs/JointTrajectoryPoint');
m.TimeFromStart = rosduration(trajTimes(idx));
m.Positions = q(:,idx);
m.Velocities = qd(:,idx);
m.Accelerations = qdd(:,idx);
m.Effort = zeroVals;
trajPts(idx) = m;
if mod(idx,round(N/10))==0
disp(['Packing waypoint ' num2str(idx) ' of ' num2str(N) '...']);
end
end
msg.Trajectory.Points = trajPts;
end

Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Acquire Image Data from Vision Sensor

Open the terminal from the catkin workspace containing the ros_kortex ROS packages on the host
computer or VM where ROS is installed. Execute the following command in the terminal to launch the
required ROS nodes. Replace the IP address of the robot based on your actual hardware setup.
For more information on the roslaunch command, see the GitHub page of KINOVA Robotics.

roslaunch kinova_vision kinova_vision.launch device:=192.168.0.106

If the hardware setup and the provided IP address are correct, the message 'The stream has started'
displays on the terminal window for the color and depth sensors.

Connect to the ROS Network


Set the MATLAB and ROS IP addresses in the environment variables. For more information on
environment variables, see Connect to a ROS Network. Execute these commands sequentially in
MATLAB to connect to the existing ROS network.

matlabIP = '192.168.0.103'; % IP address of MATLAB host PC
rosIP = '192.168.0.103'; % IP address of ROS enabled machine
setenv('ROS_IP',matlabIP);
setenv('ROS_MASTER_URI',['http://' rosIP ':11311']);

Initialize MATLAB global node and connect to the existing ROS network.

rosinit;

Create a subscriber for RGB and depth images.

rgbImgSub = rossubscriber('/camera/color/image_rect_color/compressed');
depthImgSub = rossubscriber('/camera/depth/image');

Read the RGB and depth images and plot them in a single figure using subplots.

subplot(2,1,1)
rgbImg = readImage(rgbImgSub.LatestMessage);
imshow(rgbImg)
title('Color Image')
subplot(2,1,2)
depthImg = readImage(depthImgSub.LatestMessage);
imshow(depthImg,[0 4])
title('Depth Image')
drawnow;

Shut down the MATLAB global node.

rosshutdown;

Track Pre-Computed Trajectory of Kinova Gen3 Robot End-Effector Using Inverse Kinematics and KINOVA KORTEX MATLAB API

This example shows how to solve inverse kinematics problems using Rigid Body Trees and move the
Kinova Gen3 7-DoF Ultralightweight Robot arm to follow the desired trajectory.

Clear all output in the live script by right-clicking anywhere in the live script and then clicking ‘Clear
All Output’, or navigate to VIEW > Clear all Output.

Required Products

• Robotics System Toolbox™
• Simulink®

Create the Robot Model

Create a rigid body tree for the Gen3 robot, set the home configuration, and calculate the
transformation at the home configuration:

gen3 = loadrobot("kinovaGen3");
gen3.DataFormat = 'column';
q_home = [0 15 180 -130 0 55 90]'*pi/180;
eeName = 'EndEffector_Link';
T_home = getTransform(gen3, q_home, eeName);

Visualize the robot at home configuration.

show(gen3,q_home);
axis auto;
view([60,10]);


Create Inverse Kinematics Solver and Set Parameters

The inverseKinematics System object™ creates an inverse kinematic (IK) solver to calculate joint
configurations for a desired end-effector pose based on a specified rigid body tree model.
ik = inverseKinematics('RigidBodyTree', gen3);

Disable random restarts for the inverse kinematics solver. The AllowRandomRestart parameter
indicates whether random restarts are allowed. Random restarts are triggered when the algorithm
approaches a solution that does not satisfy the constraints; a randomly generated initial guess is then
used.
ik.SolverParameters.AllowRandomRestart = false;

Specify weight priorities for pose tolerances, as a six-element vector. The first three elements
correspond to the weights on the error in orientation for the desired pose. The last three elements
correspond to the weights on the error in xyz position for the desired pose.
weights = [1, 1, 1, 1, 1, 1];

Since the desired trajectory starts near the home position, define the initial guess for the solver
as the home position of the robot.


q_init = q_home;

Define Waypoints from the Desired Trajectory

This section helps you create a circular trajectory for the end effector to track. Keep the
orientation of the end effector constant for the whole range of motion. Define the center and radius of
the circular trajectory.

center = [0.5 0 0.4]; %[x y z]
radius = 0.1;

Define the total time and time step, and generate waypoints for the circular trajectory accordingly.

dt = 0.25;
t = (0:dt:10)';
theta = t*(2*pi/t(end))-(pi/2);
points = center + radius*[0*ones(size(theta)) cos(theta) sin(theta)];

Plot the waypoints along with the robot at home configuration.

hold on;
plot3(points(:,1),points(:,2),points(:,3),'-*g', 'LineWidth', 1.5);
xlabel('x');
ylabel('y');
zlabel('z');
axis auto;
view([60,10]);
grid('minor');


Solve the Inverse Kinematics for Each Waypoint

The inverse kinematics solver finds a joint configuration that achieves the specified end-effector pose.
Specify an initial guess for the configuration and the desired weights on the tolerances for the six
components of pose. The inverse kinematics solver returns information about its convergence, and it
is recommended to check the status of the solution.

This section helps you find a joint configuration for a fixed end-effector orientation and a variable
position defined by the waypoints calculated in the previous sections. The current solution is used as
the initial guess for the next waypoint to ensure a smooth and continuous trajectory. Uncomment the
display command to see the status of each inverse kinematics iteration.

numJoints = size(q_home,1);
numWaypoints = size(points,1);
qs = zeros(numWaypoints,numJoints);
for i = 1:numWaypoints
T_des = T_home;
T_des(1:3,4) = points(i,:)';
[q_sol, q_info] = ik(eeName, T_des, weights, q_init);


% Display status of ik result
%disp(q_info.Status);

% Store the configuration
qs(i,:) = q_sol(1:numJoints);

% Start from prior solution
q_init = q_sol;
end
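Optionally, check the solver convergence programmatically instead of displaying it. This is a minimal sketch, using the Status field of the info structure returned by the inverseKinematics solver ('success' or 'best available').

% Illustrative sketch: warn if the last waypoint did not fully converge
if ~strcmp(q_info.Status,'success')
warning('IK returned status ''%s'' for the final waypoint.',q_info.Status);
end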

Visualize the Animation of the Solution

figure; set(gcf,'Visible','on');
ax = show(gen3,qs(1,:)');
ax.CameraPositionMode='auto';
hold on;

% Plot waypoints
plot3(points(:,1),points(:,2),points(:,3),'-g','LineWidth',2);
axis auto;
view([60,10]);
grid('minor');
hold on;

title('Simulated Movement of the Robot');


% Animate
framesPerSecond = 30;
r = robotics.Rate(framesPerSecond);
for i = 1:numWaypoints
show(gen3, qs(i,:)','PreservePlot',false);
drawnow;
waitfor(r);
end


Send the Trajectory to the Hardware

Expected Motion of the robot (Assuming you start from the retract position)


Press ‘y’ and then press the Enter key if you want to send commands to the Kinova Gen3 robot to track
the calculated trajectory, or press Enter alone to finish the example.
prompt = 'Do you want to send same trajectory to the hardware? y/n [n]: ';
str = input(prompt,'s');

if isempty(str)
str = 'n';
end

if str == 'n'
clear;
return;
end

Command Kinova Gen3 Robot to Track the Pre-Computed Trajectory

As explained in the example “Connect to Kinova Gen3 Robot and Manipulate the Arm Using MATLAB”
on page 2-29, the MATLAB API for the Kinova Gen3 robot supports multiple modes to command the
robot. The high-level commands to reach a desired joint configuration or Cartesian pose cannot be
used to follow a smooth trajectory because the robot always reaches the destination before processing
the next command. Hence, the end-effector velocity between successive commands drops to zero,
which results in a choppy motion. Pre-computed joint trajectories can instead be used to command the
robot to track a set of waypoints while ensuring smooth motion.

A valid pre-computed joint trajectory is a set of timestamps, angular positions, angular velocities, and
angular accelerations for each joint at each waypoint. You must adhere to certain constraints while
calculating the trajectory. For more information, see SendPreComputedTrajectory and Precomputed
Joint Trajectories.

Calculate the joint velocity and acceleration at each waypoint using numerical differentiation.

qs_deg = qs*180/pi;
vel = diff(qs_deg)/dt;
vel(1,:) = 0;
vel(end+1,:) = 0;
acc = diff(vel)/dt;
acc(1,:) = 0;
acc(end+1,:) = 0;

Interpolate the joint positions, velocities, and accelerations to ensure a 0.001-second time step between
two consecutive trajectory points.

timestamp = 0:0.001:t(end);
qs_deg = interp1(t,qs_deg,timestamp);
vel = interp1(t,vel,timestamp);
acc = interp1(t,acc,timestamp);
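Optionally, verify these constraints before sending the trajectory. This is a minimal, illustrative sanity check based on the 1 ms time step and at-rest boundary conditions described above; it is not part of the Kinova API.

% Illustrative sketch: confirm a uniform 1 ms step and zero boundary velocity
assert(all(abs(diff(timestamp) - 0.001) < 1e-9),'Time step must be 1 ms');
assert(all(vel(1,:) == 0) && all(vel(end,:) == 0),'Trajectory must start and end at rest');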

Connect to the Robot

Simulink.importExternalCTypes(which('kortex_wrapper_data.h'));
gen3Kinova = kortex();
gen3Kinova.ip_address = '192.168.1.10';

isOk = gen3Kinova.CreateRobotApisWrapper();
if isOk
disp('You are connected to the robot!');
else
error('Failed to establish a valid connection!');
end

Visualize the Actual Movement of the Robot

title('Actual Movement of the Robot');
[~,~, actuatorFb, ~] = gen3Kinova.SendRefreshFeedback();
show(gen3, ((actuatorFb.position)*pi/180)','PreservePlot',false);
drawnow;

Send Robot to Starting Point of the Trajectory

Note that the valid range of inputs for SendJointAngles is 0 to 360 degrees, while the computed angles
are in the range of -180 to 180 degrees. Hence, the joint angles must be wrapped to the 0 to 360 degree
range.

jointCmd = wrapTo360(qs_deg(1,:));
constraintType = int32(0);
speed = 0;
duration = 0;

isOk = gen3Kinova.SendJointAngles(jointCmd, constraintType, speed, duration);

if isOk
disp('success');
else
disp('SendJointAngles cmd error');
return;
end

Check if the robot has reached the starting position

while 1
[isOk,~, actuatorFb, ~] = gen3Kinova.SendRefreshFeedback();
show(gen3, ((actuatorFb.position)*pi/180)','PreservePlot',false);
drawnow;
if isOk
if max(abs(wrapTo360(qs_deg(1,:))-actuatorFb.position)) < 0.1
disp('Starting point reached.')
break;
end
else
error('SendRefreshFeedback error')
end
end

Send Pre-Computed Trajectory

isOk = gen3Kinova.SendPreComputedTrajectory(qs_deg.', vel.', acc.', timestamp, size(timestamp,2));
if isOk
disp('SendPreComputedTrajectory success');
else
disp('SendPreComputedTrajectory command error');
end

Check if the robot has reached the end position

while 1
[isOk,~, actuatorFb, ~] = gen3Kinova.SendRefreshFeedback();
show(gen3, ((actuatorFb.position)*pi/180)','PreservePlot',false);
drawnow;
if isOk
if max(abs(wrapTo360(qs_deg(end,:))-actuatorFb.position)) < 0.1
disp('End Point reached.')
break;
end
else
error('SendRefreshFeedback error')
end
end

Disconnect from the Robot

Use this command to disconnect from the Kinova Gen3 robot.

isOk = gen3Kinova.DestroyRobotApisWrapper();

Clear workspace

clear;

Connect to Kinova Gen3 Robot and Manipulate the Arm Using MATLAB

This example demonstrates how to connect the Kinova Gen3 7-DoF Ultralightweight Robot arm to
MATLAB® and Simulink®. It also helps you get feedback from the robot and send control commands.

Note: Execute the script section by section as mentioned in this example. Also, keep the E-
STOP button close to you to stop any unnecessary movement of the robot.

Depending on the communication interface provided by the robot manufacturer, the robot can be
connected to MATLAB using different methods. For the Kinova® Gen3 robot arm, Kinova provides a
C++ API that uses the Ethernet port or Wi-Fi® to communicate with the robot. For more details, see
“Download MATLAB MEX APIs for Kinova Gen 3” on page 2-3.

You can utilize the MEX wrapper provided by Kinova Robotics or use the system object to control the
robot. Using the system object allows you to utilize the same architecture for the Simulink
implementation. In the following sections, the system object is used in the MATLAB script to
communicate with the robot. For more information on the ‘kortex’ system object, see the GitHub page
of Kinova Robotics.

Before proceeding further, ensure that the robot hardware is set up completely and connected to the
host computer. For more details, see Verify Connection to KINOVA Gen 3 Robot.

Clear all output in the live script by right-clicking anywhere in the live script and then clicking ‘Clear
All Output’, or navigate to VIEW > Clear all Output.

Required Products

• Robotics System Toolbox™
• Simulink
• Robotics System Toolbox Support Package for KINOVA Gen3 Manipulators

Create an API Instance to Connect to the Robot

Create an API instance using your robot's IP address to establish a connection with the robot. You can
create multiple instances that connect to the same robot or to multiple robots.
Simulink.importExternalCTypes is used to import certain bus definitions from the external
header file. This has to be done explicitly only in MATLAB scripts. For Simulink models, the same
command can be added in the Model Callbacks.

Simulink.importExternalCTypes(which('kortex_wrapper_data.h'));
gen3Kinova = kortex();
gen3Kinova.ip_address = '192.168.1.10';
gen3Kinova.user = 'admin';
gen3Kinova.password = 'admin';

Connect to the Robot

Use the created API instance to connect to the robot. Each MATLAB script must include a function call
to CreateRobotApisWrapper at the beginning and DestroyRobotApisWrapper at the end of the
script. Connect to the robot using the CreateRobotApisWrapper command; if the connection is
successful, the return flag isOk is true.
isOk = gen3Kinova.CreateRobotApisWrapper();
if isOk
disp('You are connected to the robot!');
else
error('Failed to establish a valid connection!');
end

You are connected to the robot!

If the connection fails, see “Troubleshooting Manipulator Support Package” on page 2-9.

Get Sensor Data from the Robot

The Kinova Gen3 robot contains multiple sensor suites across the base, actuators, and end effector.
SendRefreshFeedback provides the data from these sensors in three structures corresponding to the
base, the actuators, and the gripper (interconnect).

If the robot and MATLAB are configured correctly, the actuator feedback consists of numerically
consistent nonzero values. Uncomment the display commands to see the values.

Note: Kinova API returns all angular values in degrees.


[isOk,baseFb, actuatorFb, interconnectFb] = gen3Kinova.SendRefreshFeedback();

if isOk
disp('Base feedback');
disp(baseFb);
disp('Actuator feedback');
disp(actuatorFb);
disp('Gripper feedback');
disp(interconnectFb);
else
error('Failed to acquire sensor data.');
end

Control Command Modes

The MATLAB API for Kinova Gen3 robot supports these modes for sending commands to the robot:

• High-Level commands: Use this mode to send the robot to a desired configuration, move the
gripper, or to send a Cartesian command. The robot will internally interpolate to reach the desired
configuration and there is no self-collision checking.
• Prescribed trajectories: Use this method if you need the robot to follow a smooth trajectory
(example: motion planning frameworks). You need to send an array of joint positions, velocities,
and accelerations. For more information, see SendPreComputedTrajectory and the example,
“Track Pre-Computed Trajectory of Kinova Gen3 Robot End-Effector Using Inverse Kinematics and
KINOVA KORTEX MATLAB API” on page 2-20.

High-Level Commands

Send Joint Angles

Expected Motion of the robot (Assuming you are starting from the retract position)


This section helps you send the robot to the home configuration. The SendJointAngles command can be
used to move the robot to a particular position defined by a set of joint angles. Since the Kinova Gen3
robot has seven actuators, the input to the function is a 1-by-7 vector. The valid range of joint angles is
0 to 360 degrees.

jointCmd = [0 15 180 230 0 55 90];

Use the default speed and duration constraints.

constraintType = int32(0);
speed = 0;
duration = 0;

Send the joint command to the robot.

isOk = gen3Kinova.SendJointAngles(jointCmd, constraintType, speed, duration);

if isOk
disp('Command sent to the robot. Wait for the robot to stop moving.');
else
disp('Command error.');
end

Command sent to the robot. Wait for the robot to stop moving.

Move only the end effector of the robot.

Expected Motion of the robot


jointCmd = [0 15 180 230 0 55 180];
constraintType = int32(0);
speed = 0;
duration = 0;

isOk = gen3Kinova.SendJointAngles(jointCmd, constraintType, speed, duration);

if isOk
disp('Command sent to the robot. Wait for the robot to stop moving.');
else
disp('Command error.');
end

Command sent to the robot. Wait for the robot to stop moving.

Open and Close the Gripper

If your robot arm is equipped with a Robotiq 2F-85 gripper, execute the following code to close and
open the gripper using velocity commands. Here, toolCommand specifies the control mode to use; for
velocity control, set the value to 2. toolCmd sets the speed of opening or closing the gripper. For more
information on SendToolCommand, see the GitHub page of Kinova Robotics.
toolCommand = int32(2); % Velocity control mode
toolDuration = 0;
toolCmd = -1; % Close the gripper with full speed
isOk = gen3Kinova.SendToolCommand(toolCommand, toolDuration, toolCmd);
if isOk
disp('Command sent to the gripper. Wait for the gripper to close.')
else
error('Command Error.');
end

Command sent to the gripper. Wait for the gripper to close.

Open the gripper using velocity command.

% Open gripper again in full speed
toolCmd = 1;
isOk = gen3Kinova.SendToolCommand(toolCommand, toolDuration, toolCmd);
if isOk
disp('Command sent to the gripper. Wait for the gripper to open.')
else
error('Command Error.');
end

Command sent to the gripper. Wait for the gripper to open.

Half close the gripper using position commands.

toolCommand = int32(3); % position mode
toolDuration = 0;
toolCmd = 0.5;
isOk = gen3Kinova.SendToolCommand(toolCommand, toolDuration, toolCmd);
if isOk
disp('Command sent to the gripper. Wait for the gripper to stop moving.')
else
error('Command Error.');
end

Command sent to the gripper. Wait for the gripper to stop moving.

Open the gripper using position commands.

toolCmd = 0;
isOk = gen3Kinova.SendToolCommand(toolCommand, toolDuration, toolCmd);
if isOk
disp('Command sent to the gripper. Wait for the gripper to stop moving.')
else
error('Command Error.');
end

Command sent to the gripper. Wait for the gripper to stop moving.

Send Cartesian Pose Command

Expected Motion of the robot


You can also send Cartesian commands to the robot using the function SendCartesianPose. The robot
solves the inverse kinematics internally and calculates the joint angles required to achieve the
position.

The orientation is represented with Euler angles and the convention used is Tait-Bryan, extrinsic
ZYX.
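If you need to reproduce this convention in MATLAB, note that an extrinsic ZYX (fixed-axis) rotation with angles [ThetaX ThetaY ThetaZ] is equivalent to an intrinsic XYZ rotation with the same angles. A minimal sketch, using illustrative angle values:

% Illustrative sketch: rotation matrix for ThetaX/ThetaY/ThetaZ in degrees
thetaXYZ = [90 0 90]; % [ThetaX ThetaY ThetaZ], example values
R = eul2rotm(deg2rad(thetaXYZ),'XYZ'); % extrinsic ZYX == intrinsic XYZ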

Send Cartesian pose command to the robot without any constraints.


cartCmd = [0.6, 0.2, 0.2, 90, 0, 90];
constraintType = int32(0);
speeds = [0, 0];
duration = 0;

isOk = gen3Kinova.SendCartesianPose(cartCmd, constraintType, speeds, duration);

if isOk
disp('Command sent to the robot. Wait for robot to finish motion and stop');
else
error('Command Error.');
end

Command sent to the robot. Wait for robot to finish motion and stop

Change pose of the robot.

Expected Motion of the robot

cartCmd = [0.4, 0.2, 0.2, 90, 0, 45];
isOk = gen3Kinova.SendCartesianPose(cartCmd, constraintType, speeds, duration);
if isOk
disp('Command sent to the robot. Wait for robot to finish motion and stop');
else
error('Command Error.');
end

Command sent to the robot. Wait for robot to finish motion and stop


Send Robot Back to Retract Configuration

Expected Motion of the robot

jointCmd = [360 340 180 214 0 310 90];
constraintType = int32(0);
speed = 0;
duration = 0;

isOk = gen3Kinova.SendJointAngles(jointCmd, constraintType, speed, duration);

if isOk
disp('Command sent to the robot. Wait for robot to finish motion and stop');
else
error('Command Error.');
end

Command sent to the robot. Wait for robot to finish motion and stop

Disconnect from the Robot

Use this command to disconnect from the Kinova Gen3 robot.

isOk = gen3Kinova.DestroyRobotApisWrapper();

Clear workspace

clear;

Generate a Trajectory Using a Set of Waypoints for KINOVA Gen3 Robot End-Effector

This example shows how to generate and interpolate trajectories from a set of waypoints. This
example covers common ways of generating trajectories for robot arms such as cubic polynomials,
quintic polynomials, and trapezoidal trajectories.

Before proceeding further, see the example “Track Pre-Computed Trajectory of Kinova Gen3 Robot
End-Effector Using Inverse Kinematics and KINOVA KORTEX MATLAB API” on page 2-20. It provides
information on inverse kinematics for robotic manipulators and on sending commands to the Kinova®
Gen3 robot to track pre-computed trajectories.

Clear all output in the live script by right-clicking anywhere in the live script and then clicking ‘Clear
All Output’, or navigate to VIEW > Clear all Output.

Create the Robot Model

Create a rigid body tree for the Gen3 robot, set the home configuration, and calculate the
transformation at the home configuration:

gen3 = loadrobot("kinovaGen3");
gen3.DataFormat = 'column';
q_home = [0 15 180 -130 0 55 90]'*pi/180;
eeName = 'EndEffector_Link';
T_home = getTransform(gen3, q_home, eeName);
T_home(1:4,4) = [0;0;0;1];

Visualize the Robot at Home Configuration

show(gen3,q_home);
axis auto;
view([60,10]);


Define a Set of Waypoints Based on the Tool Position


toolPositionHome = [0.455 0.001 0.434];
waypoints = toolPositionHome' + ...
[0 0 0 ; -0.1 0.2 0.2 ; -0.2 0 0.1 ; -0.1 -0.2 0.2 ; 0 0 0]';

Define orientation.

The orientation is represented in Euler angles, and the convention used is Tait-Bryan, extrinsic
ZYX.
orientations = [pi/2 0 pi/2;
(pi/2)+(pi/4) 0 pi/2;
pi/2 0 pi;
(pi/2)-(pi/4) 0 pi/2;
pi/2 0 pi/2]';

Define the array of waypoint times.

waypointTimes = 0:7:28;
ts = 0.25;
trajTimes = 0:ts:waypointTimes(end);


Define boundary conditions for velocity and acceleration.

waypointVels = 0.1 *[ 0 1 0;
-1 0 0;
0 -1 0;
1 0 0;
0 1 0]';

waypointAccels = zeros(size(waypointVels));
waypointAccelTimes = diff(waypointTimes)/4;

Create Inverse Kinematics Solver and Set Parameters

ik = inverseKinematics('RigidBodyTree',gen3);
ikWeights = [1 1 1 1 1 1];
ikInitGuess = q_home';
ikInitGuess(ikInitGuess > pi) = ikInitGuess(ikInitGuess > pi) - 2*pi;
ikInitGuess(ikInitGuess < -pi) = ikInitGuess(ikInitGuess < -pi) + 2*pi;

Set Plot and Display Waypoints

plotMode = 1; % 0 = None, 1 = Trajectory, 2 = Coordinate Frames
show(gen3,q_home,'Frames','off','PreservePlot',false);
hold on
if plotMode == 1
hTraj = plot3(waypoints(1,1),waypoints(2,1),waypoints(3,1),'b.-');
end
plot3(waypoints(1,:),waypoints(2,:),waypoints(3,:),'ro','LineWidth',2);
axis auto;
view([30 15]);


Solve the Inverse Kinematics for Each Waypoint

Set includeOrientation to false if you do not want to define a different orientation at each waypoint.
When includeOrientation is set to false, a default orientation is used to compute the trajectory.

includeOrientation = true;
numWaypoints = size(waypoints,2);
numJoints = numel(gen3.homeConfiguration);
jointWaypoints = zeros(numJoints,numWaypoints);
for idx = 1:numWaypoints
if includeOrientation
tgtPose = trvec2tform(waypoints(:,idx)') * eul2tform(orientations(:,idx)');
else
tgtPose = trvec2tform(waypoints(:,idx)') * eul2tform([pi/2 0 pi/2]); %#ok<UNRCH>
end
[config,info] = ik(eeName,tgtPose,ikWeights',ikInitGuess');
jointWaypoints(:,idx) = config';
end


Generate a Trajectory in Joint Space using Interpolation

Select method of interpolation from trapezoidal velocity profiles, third-order polynomial, fifth-order
polynomial or B-spline polynomial by modifying the variable trajType.

trajType = 'trap';
switch trajType
case 'trap'
[q,qd,qdd] = trapveltraj(jointWaypoints,numel(trajTimes), ...
'AccelTime',repmat(waypointAccelTimes,[numJoints 1]), ...
'EndTime',repmat(diff(waypointTimes),[numJoints 1]));

case 'cubic'
[q,qd,qdd] = cubicpolytraj(jointWaypoints,waypointTimes,trajTimes, ...
'VelocityBoundaryCondition',zeros(numJoints,numWaypoints));

case 'quintic'
[q,qd,qdd] = quinticpolytraj(jointWaypoints,waypointTimes,trajTimes, ...
'VelocityBoundaryCondition',zeros(numJoints,numWaypoints), ...
'AccelerationBoundaryCondition',zeros(numJoints,numWaypoints));

case 'bspline'
ctrlpoints = jointWaypoints; % Can adapt this as needed
[q,qd,qdd] = bsplinepolytraj(ctrlpoints,waypointTimes([1 end]),trajTimes);
% Remove the first velocity sample
qd(:,1) = zeros(7,1);
otherwise
error('Invalid trajectory type! Use ''trap'', ''cubic'', ''quintic'', or ''bspline''');
end

Visualize the Solution

for idx = 1:numel(trajTimes)
config = q(:,idx)';

% Find Cartesian points for visualization
eeTform = getTransform(gen3,config',eeName);
if plotMode == 1
eePos = tform2trvec(eeTform);
set(hTraj,'xdata',[hTraj.XData eePos(1)], ...
'ydata',[hTraj.YData eePos(2)], ...
'zdata',[hTraj.ZData eePos(3)]);
elseif plotMode == 2
plotTransforms(tform2trvec(eeTform),tform2quat(eeTform),'FrameSize',0.05);
end

% Show the robot
show(gen3,config','Frames','off','PreservePlot',false);
title(['Trajectory at t = ' num2str(trajTimes(idx))])
drawnow

end


Send the Trajectory to the Hardware

Expected motion of the robot (Assuming you are starting from the retract position)


Press ‘y’ and then press Enter if you want to command the Kinova Gen3 robot to track the calculated
trajectory, or press Enter alone to finish the example.

prompt = 'Do you want to send same trajectory to the hardware? y/n [n]: ';
str = input(prompt,'s');

if isempty(str)
str = 'n';
end

if str == 'n'
clear;
return;
end

Command Kinova Gen3 robot to track the pre-computed trajectory

q = q*180/pi;
qd = qd*180/pi;
qdd = qdd*180/pi;

timestamp = (0:0.001:waypointTimes(end))';
qs_deg = interp1(trajTimes',q',timestamp);
vel = interp1(trajTimes',qd',timestamp);
acc = interp1(trajTimes',qdd',timestamp);

Connect to the robot


Simulink.importExternalCTypes(which('kortex_wrapper_data.h'));
gen3Kinova = kortex();
gen3Kinova.ip_address = '192.168.1.10';
gen3Kinova.user = 'admin';
gen3Kinova.password = 'admin';

isOk = gen3Kinova.CreateRobotApisWrapper();
if isOk
disp('You are connected to the robot!');
else
error('Failed to establish a valid connection!');
end

Visualize the actual movement of the robot


title('Actual Movement of the Robot');
[~,~, actuatorFb, ~] = gen3Kinova.SendRefreshFeedback();
show(gen3, ((actuatorFb.position)*pi/180)','Frames','off','PreservePlot',false);
drawnow;

Send Robot to Starting Point of the Trajectory


jointCmd = wrapTo360(qs_deg(1,:));
constraintType = int32(0);
speed = 0;
duration = 0;

isOk = gen3Kinova.SendJointAngles(jointCmd, constraintType, speed, duration);

if isOk
disp('success');
else
disp('SendJointAngles cmd error');
return;
end

Check if the robot has reached the starting position


while 1
[isOk,~, actuatorFb, ~] = gen3Kinova.SendRefreshFeedback();
show(gen3, ((actuatorFb.position)*pi/180)','Frames','off','PreservePlot',false);
drawnow;
if isOk
if max(abs(wrapTo360(qs_deg(1,:))-actuatorFb.position)) < 0.1
disp('Starting point reached.')
break;
end
else
error('SendRefreshFeedback error')
end
end

Send Pre-Computed Trajectory


isOk = gen3Kinova.SendPreComputedTrajectory(qs_deg.', vel.', acc.', timestamp.', size(timestamp,1));
if isOk
disp('SendPreComputedTrajectory success');
else
disp('SendPreComputedTrajectory cmd error');
end


Check if the robot has reached the end position

status = 1;
while status
[isOk,~, actuatorFb, ~] = gen3Kinova.SendRefreshFeedback();
show(gen3, ((actuatorFb.position)*pi/180)','Frames','off','PreservePlot',false);
drawnow;
if isOk
[~,status] = gen3Kinova.GetMovementStatus();
else
error('SendRefreshFeedback error');
end
end

disp('Reached the final waypoint.');

Disconnect from the Robot

Use this command to disconnect from the Kinova Gen3 robot.

isOk = gen3Kinova.DestroyRobotApisWrapper();

Clear Workspace.

clear;

Control KINOVA Gen3 Robotic Manipulator Using KINOVA KORTEX System Object and Simulink

This example shows how to connect to the KINOVA® Gen3 7-DoF Ultralightweight Robot arm with
Simulink®. It includes blocks to get feedback from the robot and send control commands.

Introduction

Robotics System Toolbox™ Support Package for KINOVA Gen3 Manipulators enables you to control
manipulators using MATLAB® and Simulink. This support package utilizes APIs provided by robot
manufacturers to acquire various sensor data, simulate robot models, and control the robot. Prototype
algorithms and perform simulations of these robots using rigid body tree models from Robotics
System Toolbox or Simscape™ Multibody™ robot models. Connect with the robot hardware to test
and validate your algorithms.

In this example, KORTEX™ system object from KINOVA is used to connect and control the Gen3
robot.

Prerequisites

• If you are new to Simulink, watch the Simulink Quick Start.
• Perform the initial setup of the support package using the Hardware Setup screens.
• Refer to “Connect to Kinova Gen3 Robot and Manipulate the Arm Using MATLAB” on page 2-29
for information on the communication interface between MATLAB and the robot.

Model

Open the manipdemo_connecttogen3 model.


The general structure of the model is shown in the figure above. The model consists primarily of two
system objects: Trajectory Feeder and KORTEX Main. The KORTEX Main system is associated with the
kortex system object provided by KINOVA Robotics. The input and output structure of the system
object is predefined; for more information on the interface, see the kortex System object.

The KORTEX Main system object receives a command to process and various setpoints from the
Trajectory Feeder system. The robot-specific parameters need to be set before running the Simulink
model.

Trajectory Feeder System Object

The KORTEX Main system object receives a command to process and various setpoints from the
Trajectory Feeder system. The Trajectory Feeder system waits for the robot to stop moving and then
sends the next set of commands. The expected robot motion is similar to the one mentioned in
“Connect to Kinova Gen3 Robot and Manipulate the Arm Using MATLAB” on page 2-29.

The Trajectory Feeder system object consists of a set of waypoints that the robot will follow. Refer to
the setupImpl method of the system object to modify the waypoints.

Run the Model

Note: Ensure that the E-STOP is close to you to stop the movement of the robot, as the robot moves
according to the specified list of waypoints.

After configuring the robot-specific parameters, click SIMULATE on the Simulink toolbar and then
click Run. The robot starts moving based on the set of waypoints until all the waypoints are executed.

Control Cartesian Position of KINOVA Gen3 Robot End-Effector Using Inverse Kinematics Block and KINOVA KORTEX System Object

This example shows how to solve inverse kinematics problems using Rigid Body Trees and how to
control the KINOVA Gen3 7-DoF Ultralightweight Robot Arm to follow the desired trajectory.

Introduction

Robotics System Toolbox™ Support Package for KINOVA® Gen3 Manipulators enables you to control
manipulators using MATLAB® and Simulink®. This support package utilizes APIs provided by robot
manufacturers to acquire various sensor data, simulate robot models, and to control the robot.
Prototype algorithms and perform simulations of these robots using rigid body tree models from
Robotics System Toolbox or Simscape™ Multibody™ robot models. Connect with the robot hardware
to test and validate your algorithms.

In this example, the InverseKinematics System object™ from Robotics System Toolbox is used to
calculate the actuator joint angles for the specified Cartesian pose. The InverseKinematics System
object creates an inverse kinematic (IK) solver to calculate joint configurations for a desired end-
effector pose based on a specified rigid body tree model.

Prerequisites

• If you are new to Simulink, watch the Simulink Quick Start.
• Perform the initial setup of the support package using the Hardware Setup screens.
• Refer to “Connect to Kinova Gen3 Robot and Manipulate the Arm Using MATLAB” on page 2-29
for information on the communication interface between MATLAB and the robot.
• Refer to “Control KINOVA Gen3 Robotic Manipulator Using KINOVA KORTEX System Object and
Simulink” on page 2-46 for more information on the KORTEX™ system object from KINOVA
Robotics.

Model

Open the manipdemo_ikgen3 model.


Inverse Kinematics

The inverseKinematics System object creates an inverse kinematic (IK) solver to calculate joint
configurations for a desired end-effector pose based on a specified rigid body tree model. Create a
rigid body tree model for your robot using the rigidBodyTree class. This model defines all the joint
constraints that the solver enforces. If a solution is possible, the joint limits specified in the robot
model are obeyed.


Setpoint Control

The Setpoint Control subsystem takes the desired Cartesian position and pose of the end effector of
the robot and applies a coordinate transformation to generate the desired pose for the Inverse
Kinematics block. In this example, the desired orientation of the end effector is fixed, and the
Cartesian position is controlled via three input sliders. The subsystem also contains the Simulation 3D
Actor block from Simulink 3D Animation™ to locate objects in the Simulink 3D world.

Input Sliders


Robot Visualization and Command Actual Robot

The Simulation 3D Robot block from Robotics System Toolbox™ enables visualization of RigidBodyTree
objects in the Simulink 3D world. The Command Actual Robot subsystem uses the kortex system object
from KINOVA Robotics to connect to and control the Gen3 robot.

Run the Model

Note: The robot will move according to the specified input, so keep the E-STOP close to you.
Ensure that the 'Move Robot' toggle switch is in the off position before running the model. This
prevents sudden and unintended motion of the robot.

After configuring the robot-specific parameters in the kortex system object properties, navigate to the
SIMULATE tab and press Run. The simulated robot moves to the position specified by the sliders, and
any change in the slider positions is reflected in the end-effector position of the simulated robot.


After validating the joint configuration generated by the Inverse Kinematics block, change the 'Move
Robot' toggle switch to the on position. This sends the joint angles generated by the Inverse
Kinematics block to the robot.

Run the Model without Simulink 3D Animation

In the absence of the Simulation 3D Robot and Simulation 3D Actor blocks, this model can still be used
to control the Gen3 robot. Comment out the Simulation 3D Robot block and the Simulation 3D Actor
block in the 'Setpoint Control' subsystem before running the model without a valid installation of
Simulink 3D Animation. However, validate the joint configuration generated by the Inverse Kinematics
block using Display or Scope blocks before sending it to the actual robot.


Generate Colorized Point Cloud Using Vision Module of Kinova Gen3 Robot

This example shows how to connect to the cameras of the Kinova Gen3 7-DoF Ultralightweight Robot
and get the RGB and depth streams. The acquired images are used to create colorized point clouds of
the environment.

Kinova® Robotics provides an Image Acquisition Toolbox™ adaptor to access the camera streams.
Ensure that you have completed the installation instructions for the Image Acquisition Toolbox
adaptor before proceeding with this example (for details, see “Configure MATLAB to Acquire Images
from Vision Module” on page 2-6).

Clear all output in the live script by right-clicking anywhere in the live script and then clicking ‘Clear
All Output’, or navigate to VIEW > Clear all Output.

Products required

• Image Acquisition Toolbox
• Computer Vision Toolbox™
• Robotics System Toolbox™
• Simulink®

Connect to the Robot and Load Camera Calibration Parameters

To start working with the RGB and depth streams of the camera, you need to know the intrinsic and
extrinsic parameters of the camera. The intrinsic parameters represent the optical center and focal
length of the camera. The extrinsic parameters represent the location of the camera in the 3-D scene.

You can get initial values from the device manufacturer, but it is recommended to run a calibration
routine to get the exact values for your device. In this example, you use the values returned by the
manipulator to generate the colorized point cloud from the RGB and depth images.

Create an API Instance to Connect to the Robot


Simulink.importExternalCTypes(which('kortex_wrapper_data.h'));
gen3Kinova = kortex();
gen3Kinova.ip_address = '192.168.1.10';
gen3Kinova.user = 'admin';
gen3Kinova.password = 'admin';

isOk = gen3Kinova.CreateRobotApisWrapper();
if isOk
disp('You are connected to the robot!');
else
error('Failed to establish a valid connection!');
end

You are connected to the robot!

Get Intrinsic Parameters for RGB Camera


[isOk, RGB_intrinsic] = gen3Kinova.GetIntrinsicParameters(1);
if ~isOk
error('Failed to acquire Intrinsic Parameters for RGB sensor.');
end

Get Intrinsic Parameters for Depth Camera


[isOk, Depth_intrinsic] = gen3Kinova.GetIntrinsicParameters(2);
if ~isOk
error('Failed to acquire Intrinsic Parameters for depth sensor.');
end
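The intrinsic values acquired above parameterize the standard pinhole model used later in this example by the pcfromgen3 helper function: a depth pixel (u,v) with depth Z maps to the 3-D point X = (u - ppx)*Z/fx, Y = (v - ppy)*Z/fy in the depth-camera frame. A minimal sketch with illustrative pixel values:

% Illustrative sketch: back-project one depth pixel to 3-D (camera frame)
u = 320; v = 240; Z = 0.5; % example pixel coordinates and depth in meters
X = (u - Depth_intrinsic.principal_point_x)*Z/Depth_intrinsic.focal_length_x;
Y = (v - Depth_intrinsic.principal_point_y)*Z/Depth_intrinsic.focal_length_y;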

Get Extrinsic Parameters


[isOk, Extrinsic] = gen3Kinova.GetExtrinsicParameters();
if ~isOk
error('Failed to acquire Extrinsic Parameters.');
end

Disconnect from the Robot


isOk = gen3Kinova.DestroyRobotApisWrapper();

d_scalingFactor = 1000; % scaling factor [mm]

d_R = [Extrinsic.rotation.row1.column1, Extrinsic.rotation.row1.column2, Extrinsic.rotation.row1.column3;
       Extrinsic.rotation.row2.column1, Extrinsic.rotation.row2.column2, Extrinsic.rotation.row2.column3;
       Extrinsic.rotation.row3.column1, Extrinsic.rotation.row3.column2, Extrinsic.rotation.row3.column3];
d_t = [Extrinsic.translation.x Extrinsic.translation.y Extrinsic.translation.z];
T = eye(4);
T(1:3,1:3) = d_R;
T(1:3,4) = d_t';
T = T';
d_extrinsics = T;

Connect to the Color Device

To access the RGB channel, create a video input object. A video input object represents the
connection between MATLAB® and a specific image acquisition device.
vidRGB = videoinput('kinova_vision_imaq', 1, 'RGB24');
vidRGB.FramesPerTrigger = 1;

Search all the video source objects associated with the video input object and return the video source
object.
srcRGB = getselectedsource(vidRGB);

Modify the video source objects associated with the video input object.
srcRGB.Ipv4Address = gen3Kinova.ip_address;
srcRGB.CloseConnectionOnStop = 'Enabled';

Connect to the Depth Device

Follow the same steps to connect to the depth stream of the RGB-D camera.
vidDepth = videoinput('kinova_vision_imaq', 2, 'MONO16');
vidDepth.FramesPerTrigger = 1;
srcDepth = getselectedsource(vidDepth);

srcDepth.Ipv4Address = gen3Kinova.ip_address;
srcDepth.CloseConnectionOnStop = 'Enabled';


Get Snapshots from the RGB and Depth Streams

After connecting to the hardware, acquire images from the RGB and depth cameras. If you have
already stored images on your computer, load them into the depthData and rgbData variables
respectively.

depthData = getsnapshot(vidDepth);
rgbData = getsnapshot(vidRGB);

Display Acquired Images

figure;
imshow(rgbData);

figure;
h = imshow(depthData,hot);
ax = h.Parent;
h.CDataMapping = 'scaled';
ax.CLimMode = 'auto';


Create a Colorized Point Cloud

Using the intrinsic parameters of both cameras and the extrinsic parameters, you can convert the
RGB and depth streams to a colorized point cloud.

[pcRaw, pcPnts, pcRGB] = pcfromgen3(depthData, rgbData, ...
    Depth_intrinsic.focal_length_x, Depth_intrinsic.focal_length_y, ...
    Depth_intrinsic.principal_point_x, Depth_intrinsic.principal_point_y, ...
    d_scalingFactor, d_extrinsics, ...
    RGB_intrinsic.focal_length_x, RGB_intrinsic.focal_length_y, ...
    RGB_intrinsic.principal_point_x, RGB_intrinsic.principal_point_y);

Visualize Colorized Point Cloud

pc = pointCloud(pcPnts, 'Color', pcRGB);

figure;
ax2 = pcshow(pc);
axis(ax2, [-0.3, 0.3, -0.3, 0.3, 0.3, 0.6]);
view(ax2, [1,-60]);
xlabel('x');ylabel('y');zlabel('z');


Close the Preview and Delete Video Input Objects

if exist('vidRGB', 'var')
if isvalid(vidRGB)
closepreview(vidRGB);
delete(vidRGB);
end
end

if exist('vidDepth', 'var')
if isvalid(vidDepth)
closepreview(vidDepth);
delete(vidDepth);
end
end

Clear Workspace

clear;


Function Definition

function [pcRaw, pcPnts, pcRGB] = pcfromgen3(depthData, rgbData,...
    d_fx, d_fy, d_ppx, d_ppy, d_scalingFactor, d_extrinsics,...
    rgb_fx, rgb_fy, rgb_ppx, rgb_ppy)
% fx and fy: describe the focal length of the camera, as a multiple of pixel width and height
% ppx and ppy: describe the pixel coordinates of the principal point (center of projection)
% The center of projection is not necessarily the center of the image

% Preallocate
depthData= double(depthData);
depthData(depthData == 0) = nan;
[dh, dw] = size(depthData);

% Convert to point cloud by applying depth intrinsics
% (For each pixel the depth camera can be projected to the 3D space)
pcRaw = zeros(dh,dw,3);
pcRaw(:,:,3) = depthData/d_scalingFactor;
[x,y] = meshgrid((1:dw),(1:dh));
pcRaw(:,:,1) = (x-d_ppx) .* pcRaw(:,:,3)/d_fx;
pcRaw(:,:,2) = (y-d_ppy) .* pcRaw(:,:,3)/d_fy;

% Transform the points to RGB frame by using rgb-depth extrinsics
R = d_extrinsics(1:3, 1:3);
t = d_extrinsics(4, 1:3);
pcPnts = reshape(pcRaw,[],3) * R;
%pcPnts = reshape(pcRaw,[],3) * R';
pcPnts(:,1) = pcPnts(:,1) + t(1);
pcPnts(:,2) = pcPnts(:,2) + t(2);
pcPnts(:,3) = pcPnts(:,3) + t(3);
pcPnts = reshape(pcPnts, size(pcRaw));

% Create RGB associated with point cloud (registration)
pcRGB = zeros(dh,dw,3,'uint8');

% Reproject each 3D point on the color image: Get indices into the RGB frame
xind = (pcPnts(:,:,1) * rgb_fx)./ pcPnts(:,:,3) + rgb_ppx;
yind = (pcPnts(:,:,2) * rgb_fy)./ pcPnts(:,:,3) + rgb_ppy;

% Remove indices that are not visible to the RGB camera
[rgbh, rgbw, ~] = size(rgbData);
validInd = ~(xind < 1 | xind > rgbw | yind < 1 | yind > rgbh | isnan(xind) | isnan(yind));

% Simple approximation by rounding:
xvalInd = round(xind(validInd));
yvalInd = round(yind(validInd));

rgbLinearInd = sub2ind([rgbh rgbw], yvalInd(:), xvalInd(:));

rgbDataTemp = uint8(rgbData(:,:,1));
color_temp = zeros(dh,dw,'uint8');
color_temp(validInd) = rgbDataTemp(rgbLinearInd);
pcRGB(:,:,1) = color_temp;

rgbDataTemp = uint8(rgbData(:,:,2));
color_temp = zeros(dh,dw,'uint8');
color_temp(validInd) = rgbDataTemp(rgbLinearInd);


pcRGB(:,:,2) = color_temp;

rgbDataTemp = uint8(rgbData(:,:,3));
color_temp = zeros(dh,dw,'uint8');
color_temp(validInd) = rgbDataTemp(rgbLinearInd);
pcRGB(:,:,3) = color_temp;
end

Read Current Joint Angles from KINOVA Gen3 Robot Arm

This example shows you how to connect to the KINOVA Gen3 7-DoF Ultralightweight Robot arm with
Simulink® using Robot Operating System (ROS). This includes a ROS message Subscribe block to get
feedback from the robot.

Products Required

• MATLAB®
• Simulink
• Robotics System Toolbox™
• ROS Toolbox™

Hardware Required

KINOVA® Gen3 Robot

Introduction

Robotics System Toolbox Support Package for Manipulators enables you to control manipulators
using MATLAB and Simulink. This support package utilizes ROS packages provided by robot
manufacturers to acquire various sensor data, simulate robot models, and control the robot. You can
prototype algorithms and perform simulations of these robots using rigid body tree models from
Robotics System Toolbox™ or Simscape™ Multibody™ robot models. This support package also
allows you to connect with the robot hardware to test and validate your algorithms.

In this example, the sensor_msgs/JointState ROS message is used to get the current joint
configuration of the Kinova® Gen3 robot.

Prerequisites

• If you are new to Simulink, watch the Simulink Quick Start.
• Perform the initial setup of the support package using the Hardware Setup screens and select the
ROS interface in step 2.
• Refer to “Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Control the
Robot” on page 2-12 for information on the communication interface between MATLAB and the
robot. Execute the steps to initiate required ROS nodes.
• Connect to the ROS network.

Model

Open the manipdemo_getAnglesROS model.


The model consists of a ROS Subscribe block which is configured for the /my_gen3/joint_states
ROS topic. As explained on the GitHub page of Kinova Robotics, the first part of the ROS topic,
'my_gen3', might differ based on the robot name set during the roslaunch command. The joint
angles and gripper position are extracted from the received message and then displayed using Display
blocks.

Run the Model

After launching the required ROS nodes and connecting to the ROS network, click Simulate on the
Simulink toolbar and then click Run. The Display blocks in Simulink display the current joint angles
and gripper position of the robot.

Control Individual Joint Angle of KINOVA Gen3 Robot

This example shows you how to connect to the KINOVA Gen3 7-DoF Ultralightweight Robot arm with
Simulink® using Robot Operating System (ROS). This includes ROS message Subscribe and Publish
blocks to get feedback from the robot and then send commands to the robot to reach a particular
joint configuration.

Products Required

• MATLAB®
• Simulink
• Robotics System Toolbox™
• ROS Toolbox

Hardware Required

KINOVA® Gen3 Robot

Introduction

Robotics System Toolbox Support Package for KINOVA Gen3 Manipulators enables you to control
manipulators using MATLAB and Simulink. This support package utilizes ROS packages provided by
robot manufacturers to acquire various sensor data, simulate robot models, and control the robot.
You can prototype algorithms and perform simulations of these robots using rigid body tree models
from Robotics System Toolbox or Simscape™ Multibody™ robot models. This support package also
allows you to connect with the robot hardware to test and validate your algorithms.

In this example, the sensor_msgs/JointState ROS message is used to get the current joint
configuration of the Kinova Gen3 robot.

Prerequisites

• If you are new to Simulink, watch the Simulink Quick Start.
• Perform the initial setup of the support package using the Hardware Setup screens and select the
ROS interface in step 2.
• Refer to “Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Control the
Robot” on page 2-12 for information on the communication interface between MATLAB and the
robot. Execute the steps to initiate required ROS nodes.
• Connect to the ROS network.

Model

Open the manipdemo_setJointAnglesROS model.


The model consists of a ROS Subscribe block which is configured for the /my_gen3/joint_states
ROS topic. As explained on the GitHub page of Kinova Robotics, the first part of the ROS topic,
'my_gen3', might differ based on the robot name set during the roslaunch command. The current
joint angles are extracted from the received message and then converted to degrees. Further, a user-
specified offset for each joint angle is added to the current value and given to the Send Joint
Trajectory subsystem.

Send Joint Trajectory

This subsystem consists of three different blocks. The Blank ROS Message block, first from the left, is
configured to create a blank message for the kortex_driver/PlayJointTrajectoryRequest ROS
service request. The MATLAB Function block packJointTraj assigns various values, such as the
desired joint configuration, type of constraint, and maximum velocity, to the blank message fields. The
Call Service block is configured for the /my_gen3/base/play_joint_trajectory service and sends
commands to the robot to reach the desired joint configuration.
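For reference, the logic inside such a packing function is analogous to the MATLAB commands shown in “Manipulate Individual Joint Angle Using MATLAB” earlier in this chapter. This is a minimal, illustrative sketch, not the shipped packJointTraj; jointRequest is assumed to be a blank kortex_driver/PlayJointTrajectoryRequest message and desiredAnglesDeg a 1-by-7 vector of target angles in degrees.

% Illustrative sketch of the packing logic (assumed variable names)
for idx = 1:7
jAngMsgs(idx) = rosmessage('kortex_driver/JointAngle');
jAngMsgs(idx).JointIdentifier = idx;
jAngMsgs(idx).Value = desiredAnglesDeg(idx);
end
jointRequest.Input.JointAngles.JointAngles_ = jAngMsgs;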

Run the Model

Note: Ensure that the E-STOP is close to you to stop the movement of the robot, as the robot moves
according to the specified offset.

After launching the required ROS nodes and connecting to the ROS network, click Simulate on the
Simulink toolbar and then click Run. The robot does not move from the current position at the start
because there is no change in the joint angle offsets. Change the offset values and apply the change.
This triggers the Send Joint Trajectory subsystem and the robot moves to the desired joint angle
configuration.


Send KINOVA Gen3 Robot Arm to Desired Cartesian Pose

This example shows you how to connect to the KINOVA Gen3 7-DoF Ultralightweight Robot arm with
Simulink® using Robot Operating System (ROS). This includes ROS message Subscribe and Publish
blocks to get feedback from the robot and then send commands to the robot to reach a particular
Cartesian configuration.

Products Required

• MATLAB®
• Simulink
• Robotics System Toolbox™
• ROS Toolbox

Hardware Required

KINOVA® Gen3 Robot

Introduction

Robotics System Toolbox Support Package for KINOVA Gen3 Manipulators enables you to control
manipulators using MATLAB and Simulink. This support package utilizes ROS packages provided by
robot manufacturers to acquire various sensor data, simulate robot models, and control the robot.
You can prototype algorithms and perform simulations of these robots using rigid body tree models
from Robotics System Toolbox or Simscape™ Multibody™ robot models. This support package also
allows you to connect with the robot hardware to test and validate your algorithms.

In this example, the /my_gen3/base_feedback ROS message is used to get the current Cartesian
configuration of the KINOVA Gen3 robot. Then the /my_gen3/base/play_cartesian_trajectory
custom ROS service is used to send commands to the robot to reach a desired Cartesian configuration.

Prerequisites

• If you are new to Simulink, watch the Simulink Quick Start.
• Perform the initial setup of the support package using the Hardware Setup screens and select the
ROS interface in step 2.
• Refer to “Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Control the
Robot” on page 2-12 for information on the communication interface between MATLAB and the
robot. Execute the steps to initiate required ROS nodes.
• Connect to the ROS network.

Model

Open the manipdemo_setCartesianROS model.


The model consists of a ROS Subscribe block which is configured for the /my_gen3/base_feedback
ROS topic. As explained on the GitHub page of Kinova Robotics, the first part of the ROS topic,
'my_gen3', might differ based on the robot name set during the roslaunch command. Information about
the current Cartesian pose is extracted from the received message and shown on various Display blocks.
You can use these values as a reference while providing the Cartesian setpoint to the robot. The
user-specified Cartesian setpoint is then given to the Send Desired Pose subsystem.
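
You can read the same feedback message once from MATLAB. The field names in this sketch
(Base.ToolPoseX and so on) follow the kortex_driver message definition but are an assumption here;
inspect the received message to confirm them:

fbSub = rossubscriber('/my_gen3/base_feedback');
fbMsg = receive(fbSub,10);    % wait up to 10 s for a feedback message
toolXYZ = [fbMsg.Base.ToolPoseX fbMsg.Base.ToolPoseY fbMsg.Base.ToolPoseZ]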

Send Desired Pose

This subsystem consists of three different blocks. The Blank ROS message block, first from the left, is
configured to create a blank message for the kortex_driver/PlayCartesianTrajectoryRequest
ROS service request. The MATLAB function block packCartesianTraj assigns various values, such as the
desired Cartesian configuration, type of constraint, and maximum velocity, to the blank message fields.


The Call Service block is configured for the /my_gen3/base/play_cartesian_trajectory service
and sends commands to the robot to reach the desired Cartesian configuration.

Run the Model

Note: Ensure that the E-STOP is close to you so that you can stop the movement of the robot, as the
robot moves according to the specified Cartesian setpoint.

After launching the required ROS nodes and connecting to the ROS network, click Simulate on the
Simulink toolbar and then click Run. The robot does not move at the start of the simulation because the
'Move Robot' manual switch is initially connected to a ground block, so the Send Desired Pose subsystem
is not triggered. Now change the desired Cartesian pose and then change the state of the 'Move Robot'
manual switch. This triggers the Send Desired Pose subsystem and the robot moves to the desired
Cartesian configuration.


Send Precomputed Trajectory to KINOVA Gen3 Robot Arm

This example shows you how to connect to the KINOVA Gen3 7-DoF Ultralightweight Robot arm with
Simulink® using Robot Operating System (ROS). This includes ROS message Subscribe and Publish
blocks to get feedback from the robot and then send commands to the robot to follow a precomputed
joint trajectory.

Products Required

• MATLAB®
• Simulink
• Robotics System Toolbox™
• ROS Toolbox

Hardware Required

KINOVA® Gen3 Robot

Introduction

Robotics System Toolbox Support Package for KINOVA Gen3 Manipulators enables you to control
manipulators using MATLAB and Simulink. This support package utilizes ROS packages provided by
robot manufacturers to acquire various sensor data, simulate robot models, and control the robot.
You can prototype algorithms and perform simulations of these robots using rigid body tree models
from Robotics System Toolbox or Simscape™ Multibody™ robot models. This support package also
allows you to connect with the robot hardware to test and validate your algorithms.

In this example, the /my_gen3/joint_states ROS message is used to get the current joint
configuration of the KINOVA Gen3 robot. Then the follow joint trajectory custom ROS action is used to
send commands to the robot to follow a desired precomputed joint trajectory. For more information on
the advantages of precomputed trajectories and their limitations, see “Track Pre-Computed Trajectory
of Kinova Gen3 Robot End-Effector Using Inverse Kinematics and KINOVA KORTEX MATLAB API” on page 2-20.

Prerequisites

• If you are new to Simulink, watch the Simulink Quick Start.


• Perform the initial setup of the support package using the Hardware Setup screens and select ROS
interface in step 2.
• Refer to “Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Control the
Robot” on page 2-12 for information on the communication interface between MATLAB and the
robot. Execute the steps to initiate required ROS nodes.
• Connect to the ROS network.

Model

Open the manipdemo_sendTrajectoryROS model.


The model consists of a ROS Subscribe block which is configured for the /my_gen3/joint_states
ROS topic. As explained on the GitHub page of Kinova Robotics, the first part of the ROS topic,
'my_gen3', might differ based on the robot name set during the roslaunch command. The current joint
configuration and Cartesian offset waypoints, along with the time to achieve each waypoint, are fed to
the Package and Send Trajectory subsystem. This subsystem calculates the joint configuration for each
waypoint using the Inverse Kinematics block and then calculates a smooth trajectory between the
waypoints using trapezoidal velocity profile interpolation. This precomputed joint trajectory command
is then given to the robot using the
/my_gen3/gen3_joint_trajectory_controller/follow_joint_trajectory/goal ROS action.
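
The trapezoidal interpolation performed by the subsystem can be sketched with the trapveltraj
function from Robotics System Toolbox. The two waypoints below are illustrative, not the ones used by
the model:

jointWaypoints = [zeros(7,1) 0.3*ones(7,1)];     % two 7-DoF joint configurations
[q,qd,qdd] = trapveltraj(jointWaypoints,100);    % 100 interpolated samples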


Send Robot to Home

The waypoint offsets are considered from the home position. Hence, the model consists of a subsystem
and a push button which send the robot to the home position from its current position. The structure of
this subsystem is similar to the example “Send KINOVA Gen3 Robot Arm to Desired Cartesian Pose”
on page 2-66.

Package and Send Trajectory

This subsystem consists of three primary blocks related to ROS and an auxiliary subsystem to calculate
the joint configuration for each waypoint. The Blank ROS message block is configured to create a
blank message for the control_msgs/FollowJointTrajectoryActionGoal ROS action goal. The
MATLAB function block packageTrajectory calculates the interpolated joint angle trajectory and assigns
various values to the blank message fields. Finally, the Publish block is configured for the
/my_gen3/gen3_joint_trajectory_controller/follow_joint_trajectory/goal ROS action and
sends commands to the robot to follow the precomputed joint angle trajectory.
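
A minimal MATLAB sketch of the same publish step follows; the trajectory fields still have to be
populated, as the packageTrajectory function does in the model:

trajGoalPub = rospublisher(['/my_gen3/gen3_joint_trajectory_controller/' ...
    'follow_joint_trajectory/goal'],'control_msgs/FollowJointTrajectoryActionGoal');
trajGoalMsg = rosmessage(trajGoalPub);    % blank action goal message
% ... populate trajGoalMsg.Goal.Trajectory (joint names, positions,
% velocities, accelerations, and time from start) here ...
send(trajGoalPub,trajGoalMsg);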

Waypoint Inverse Kinematics

This subsystem consists of auxiliary algorithm to calculate the joint configuration for each waypoint.
Primarily it uses certain frame transformations and Inverse Kinematics block to calculate joint angle
configuration for each Cartesian waypoint. For more information on Inverse Kinematics, see “Control
Cartesian Position of KINOVA Gen3 Robot End-Effector Using Inverse Kinematics Block and KINOVA
KORTEX System Object” on page 2-49.

Run the Model

Note: Ensure that the E-STOP is close to you so that you can stop the movement of the robot, as the
robot moves according to the specified waypoints.

After launching the required ROS nodes and connecting to the ROS network, click Simulate on the
Simulink toolbar and then click Run. The robot does not move when the simulation starts. Press the Send
To Home button to command the robot to move from its current position to the home position. Wait
for the robot to stop moving completely and then press Send Trajectory. This triggers the Package
and Send Trajectory subsystem, and a smooth interpolated trajectory is calculated based on the
desired waypoints. The trajectory calculation process is computationally intensive and takes a few
minutes to complete. After the trajectory is calculated, it is provided to the robot and the robot starts
moving. The precomputed trajectory has to meet certain requirements in order to be accepted by the
robot. For more information on the limitations and requirements of precomputed trajectories, see the
GitHub page of Kinova Robotics. The trajectory followed by the robot with the default waypoint offsets
is similar to the one in the “Generate a Trajectory Using a Set of Waypoints for KINOVA Gen3 Robot
End-Effector” on page 2-37 example.


Simulate a Detect and Pick Algorithm Using OpenCV Interface and Rigid Body Tree Robot Model

This example shows how to simulate an end-to-end rigid body tree workflow for a KINOVA® Gen3
robot to detect a rectangular object using OpenCV Interface, and then pick and move the object with
inverse kinematics.

This example uses the algorithm created using Computer Vision Toolbox™ Interface for
OpenCV in MATLAB to detect the object from the image (either simulated image or a captured
image), and calculates the position and orientation data. Robotics System Toolbox™ is used to model,
simulate, plan motion, and visualize the manipulator.

Note: Execute the script section by section as mentioned in this example. Do not skip any section, as
each section depends on variables defined in earlier sections.

Required Products

• MATLAB®
• Robotics System Toolbox™
• Robotics System Toolbox™ Support Package for Manipulators
• Computer Vision Toolbox™ (required only if you want to use the detectObjectOCV MEX function
for object detection, provided with this example)
• Image Processing Toolbox™ (required as a dependency of Computer Vision Toolbox)
• Computer Vision Toolbox Interface for OpenCV in MATLAB (required only if you want to modify
the detectObjectOCV MEX function, provided with this example)

Create a Rigid Body Tree Model and Define Initial Pose

Create a rigid body tree model for the Kinova Gen3 robot, define the initial pose, and get the
end-effector transformation.

The initial pose is defined such that the wrist-mounted camera on the Gen3 robot points exactly
downwards, with the camera axis perpendicular to the surface of the bench (to be created in the next
step).

% Load the robot model from a .mat file as a rigidBodyTree object


load('exampleHelperKINOVAGen3GripperCollManip.mat');
gen3 = robot;
gen3.DataFormat = 'column';

% Define the initial pose by specifying the joint values (in radians) for each joint
q_home = [0 0.35 pi -1.20 0 -pi/2 pi/2]';

% Get the end effector transform


endEffectorTransf = getTransform(gen3,q_home,'EndEffector_Link');

Create Simulation Environment with a Bench and a Box

Create a simulation environment with a bench (to which the base of the manipulator is fixed) and a box
placed on the bench.


To create the collision geometry of both objects for visualization, you can use the collisionBox
function. To set the desired pose of the collision geometry, use a homogeneous transformation matrix.

Note: The object must be placed within the field of view limit of the camera for successful object
detection. For a given camera scan position, the camera field of view limits are [0.365, 0.635] and
[-0.239, 0.243] respectively for the X and Y axis. The object height must be within the appropriate
limit ([0.03 0.1]) as per the gripper geometry.
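
You can check a candidate object pose against these limits before running the rest of the script. This
is a small sketch using the limits quoted above and the object values defined below:

fovX = [0.365 0.635]; fovY = [-0.239 0.243];   % field of view limits (m)
hLim = [0.03 0.1];                             % allowed object height (m)
objXY = [0.45 0.15]; objH = 0.05;              % values used in this example
inView = objXY(1)>=fovX(1) && objXY(1)<=fovX(2) && ...
         objXY(2)>=fovY(1) && objXY(2)<=fovY(2);
okHeight = objH>=hLim(1) && objH<=hLim(2);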
% Create bench mesh (X_length, Y_length, Z_length)
bench = collisionBox(1.3, 1.3, 0.05);

% Compute the homogeneous transformation matrix using the translation vector at the [x y z] position of the bench


TBench = trvec2tform([0.35 0 -0.025]);
bench.Pose = TBench;

% Height of the box in meter.


boxHeight = 0.05;
% Position of the box in the base reference system [x y z]
boxPosition = [0.45 0.15 boxHeight/2]';
% Orientation of the box in degrees
boxOrientation = -20;
% Box mesh (X_length, Y_length, Z_length)
box = collisionBox(0.075, 0.17, boxHeight);
Rz = [cosd(boxOrientation) -sind(boxOrientation) 0; sind(boxOrientation) cosd(boxOrientation) 0; 0 0 1];

% Create 3x3 rotation matrix


transf = rotm2tform(Rz);
% Homogeneous transformation(4x4) for the box
transf(:,4) = [boxPosition; 1];
box.Pose = transf;
% Create a Box structure to be used in the next section to find the pose transformation for visualization
Box.mesh = box;

% Create environment array of all collision objects


environmentWorld = {bench, box};

Visualize Robot in Scan Position with the Simulation Environment

Visualize the robot model and the environment objects. Set the color of the bench and the box
objects.
% Close the previous simulation figure window if it is present while running this script again
close(findobj('type','figure','name','Simulation of Gen3 to Detect and Pick Object'))

% Current figure handle


hFig = figure('name', 'Simulation of Gen3 to Detect and Pick Object');
% Current axes handle
ax = gca;

show(gen3,q_home,'PreservePlot',false,'FastUpdate',true,'Frames',"off");
set(gcf,'Visible','on');
hold on

% Initialize array for graphics objects


patchObj = gobjects(1,2);
for i = 1:2
[~,patchObj(i)] = show(environmentWorld{i});
end


patchObj(1).FaceColor = [0.89 0.68 0.37];


patchObj(2).FaceColor = [0.04 0.321 0.262];
axis([-0.8 1 -1 1 -0.8 1])
set(ax,'units','inch')
set(gca,'position',[0.5 0 5 5])

Find Camera Transformation with Reference to the Base of Kinova Gen3 Robot

In the Kinova Gen3 robot, the camera is located at an offset of 0.041 m in the y-direction in the local
frame of the end effector.

% Camera offset in positive y-direction


cameraOffset = 0.041;
% Compute the actual camera transform
actualCameraTransf = endEffectorTransf * trvec2tform([0,cameraOffset,0]);
% Absolute depth from the base of the robot
cameraZ = actualCameraTransf(3,4);
% Subtract the height of the object to get the depth from the camera to the top surface of the object
zDistance = cameraZ - boxHeight ;

However, if your robot is situated at some height from the reference plane, then consider that value
in the zDistance parameter.

Note: If you are using a different object instead of the one provided in this simulation, update the
boxHeight parameter accordingly. This value is used to calculate the location of the object in
subsequent sections.


Define Camera Parameters

Note: The parameters defined here are based on the vision module provided by Kinova for the Gen3
robot. If you are using any other camera module, update the camera parameters accordingly.

% Horizontal fov (field of view) in radians


cameraParams.hfov = 1.211269;
% Vertical fov in radians
cameraParams.vfov = 0.7428;

Acquire the Image for Object Detection

Select the type of image, which is the input for the OpenCV object detection function.

Select Simulated Image to use the simulated image, or select Camera Image if you want to use
an actual image captured by the vision module camera for the simulation.

To use the Kinova vision module to capture an actual image for the simulation, you need to complete
steps 1 and 2 explained in “Configure MATLAB to Acquire Images from Vision Module” on page 2-6.

% Define the Kinova Gen3 robot IP using the "gen3KinovaIP" variable.


gen3KinovaIP = '192.168.1.10';

selectImageType = "Simulated Image"; % drop-down control in the live script: "Simulated Image" or "Camera Image"
rgbImg = exampleHelperAcquireImage(environmentWorld,actualCameraTransf,selectImageType,cameraParams,gen3KinovaIP);

Note: If you want to use an existing image saved on your computer, load the image (using the
imread function) into the "rgbImg" variable after running this section. Also, update the camera
parameters accordingly and select the image type as Simulated Image in the drop-down menu, so
that you can proceed without a hardware connection.

Use OpenCV Object Detection Algorithm and Compute Object Location in a Global or Base
Frame of Kinova Gen3 Robot

Select Use detectObjectOCV MEX Function from the below drop-down menu to use the MEX
function generated from the OpenCV file (detectObjectOCV.cpp) to detect the rectangular object
and extract the position and orientation parameters in terms of pixel and degrees, respectively.

Note: To use the detectObjectOCV MEX function, Computer Vision Toolbox is a required product.
Ensure that this toolbox is installed on your system. To check the list of installed products, use the
"ver" command in the MATLAB command window.

If Computer Vision Toolbox is not installed on your system, select Skip Object Detection to skip
the OpenCV object detection and avoid issues related to the MEX file dependency.

selectObjectDetectionWorkflow = "OpenCVWorkflowEnabled" % value from the live script drop-down control

selectObjectDetectionWorkflow =
"OpenCVWorkflowEnabled"

The overall algorithm to detect an object's position and orientation is similar to the one explained in
“Detect Position and Orientation of a Rectangular Object Using OpenCV” on page 3-8.

Note: If you are using an actual image from the vision module or a captured custom image for the
simulation, you may need to change the threshold parameters of the OpenCV object detection logic.
Use the following command in the MATLAB command window to open the detectObjectOCV.cpp file
and make the necessary changes.

>> open detectObjectOCV.cpp

If you have made changes in the OpenCV source file (detectObjectOCV.cpp), then follow these
steps to generate an updated MEX file that reflects the changes (for more information regarding
these steps, see Computer Vision Toolbox™ Interface for OpenCV in MATLAB):

1 Install Computer Vision Toolbox Interface for OpenCV in MATLAB.


2 Select a compiler version using this command:

>> mex -setup

3 Create the MEX-file from the source file:

>> mexOpenCV detectObjectOCV.cpp

The following section of code calculates the absolute position of the object in the global frame of reference.

if (strcmp(selectObjectDetectionWorkflow,"OpenCVWorkflowEnabled"))
% Add path to the MEX function location and use it to compute the
% location and orientation of the object
dirPath = codertarget.robotmanipulator.internal.getSpPkgRootDir;
dirFullPath = fullfile(dirPath,'resources','opencv_mex');
addpath(dirFullPath);
result = detectObjectOCV(rgbImg);

% Convert the result into double data type for further calculations
result = double(result);

% Calculating horizontal and vertical focal length


hfocalLength = (size(rgbImg,2)/2)/tan(cameraParams.hfov/2); %horizontal focal length
vfocalLength = (size(rgbImg,1)/2)/tan(cameraParams.vfov/2); %vertical focal length

% Computed center point of the box object


centerBoxwrtCenterPixel = [round(size(rgbImg,1)/2)-result(1), round(size(rgbImg,2)/2)-result(2)];
worldCenterBoxwrtCenterPixel(1) = (zDistance/vfocalLength)*centerBoxwrtCenterPixel(1); % in meters
worldCenterBoxwrtCenterPixel(2) = (zDistance/hfocalLength)*centerBoxwrtCenterPixel(2); % in meters
actualpartXY = actualCameraTransf(1:2,4)' + worldCenterBoxwrtCenterPixel;
boxCenterPoint = [actualpartXY(1), actualpartXY(2), boxHeight/2];
else
% Skipping the OpenCV MEX function and taking the predefined
% object location and orientation for the further computation
result = [boxPosition(1), boxPosition(2), boxOrientation];
boxCenterPoint = boxPosition';
end

Note: If you are using the actual image of a different object, change the boxHeight parameter
accordingly in the "boxCenterPoint" definition.

Update Simulation Environment Based on the Computed Object Location

Update the simulation environment to display the object in the simulation world as per the input
image and the computed object location.


% Update box location and orientation in simulation figure window


% Position of the box in the base reference system [x y z]
boxPosition = [boxCenterPoint(1) boxCenterPoint(2) boxHeight/2]';
% Orientation of the box in degrees
boxOrientation = result(3);
Rz = [cosd(boxOrientation) -sind(boxOrientation) 0; sind(boxOrientation) cosd(boxOrientation) 0; 0 0 1];
% Create 3x3 rotation matrix
transf = rotm2tform(Rz);

% Homogeneous transformation(4x4) for the box


transf(:,4) = [boxPosition; 1];
% Updating pose
box.Pose = transf;

% Note: If you want to update the object dimension as per the actual
% object, then update it here by changing box.X, box.Y, box.Z parameter
% values.

% Delete previous box patch object


delete(patchObj(2));

% Create new patch object with updated box parameters


[~,patchObj(2)] = show(box);
patchObj(2).FaceColor = [0.04 0.321 0.262];


Motion Planning to Pick and Move the Object with Inverse Kinematics

This step involves the following tasks:

• Task-1: Reach 15cm above the object


• Task-2: Reach near to the object and grab it in the gripper
• Task-3: After picking the object, reach 15cm above the surface of the bench

% Define the sample time and trajectory time array (For simulation
% purpose, we are using the same trajectory time for all the three tasks)

waypointTimes = [0;4];
% Sample time
ts = 0.2;
% Creating a trajectory time array
trajTimes = waypointTimes(1):ts:waypointTimes(end);

Task-1: Reach 15cm above the object

In this task, compute the waypoints, orientations and the initial guess for inverse kinematics.

pickedPartTransf = getTransform(gen3,q_home,'pickedPart');
% Define reach to height (+Z direction) for Task-1 and Task-3
reachToHeight = 0.15;
% Configure the waypoint to reach 0.15m above the object for the first task
goalPointForTask1 = boxCenterPoint' + [0;0;reachToHeight];
waypointForTask1 = horzcat([pickedPartTransf(1,4); pickedPartTransf(2,4); pickedPartTransf(3,4)], ...
    goalPointForTask1);

% Orientation at grasp pose


orientationAtGraspPose = result(3);
orientations = [-pi/2 pi 0; wrapToPi(deg2rad(orientationAtGraspPose)) pi 0]';

ikInitGuess = q_home';

Generate a trajectory for Task-1 using the above parameters.

computedTrajForTask1 = exampleHelperGenerateTrajectory(gen3,waypointForTask1,orientations,trajTimes,ikInitGuess);

Visualize the trajectory for Task-1

disp('---* Reaching 15cm above the object*---');

---* Reaching 15cm above the object*---

r = rateControl(1/ts);
for idx = 1:numel(trajTimes)
config = computedTrajForTask1.position(:,idx)';
show(gen3,config','PreservePlot',false,'FastUpdate',true,'Frames',"off");
waitfor(r);
end


Task-2: Reach near to the object and grab it in the gripper

Compute the waypoints, orientations, and initial guess for inverse kinematics.

pickedPartTransf = getTransform(gen3,config','pickedPart');
waypointForTask2 = horzcat([pickedPartTransf(1,4); pickedPartTransf(2,4); pickedPartTransf(3,4)], boxCenterPoint');
orientations = [deg2rad(orientationAtGraspPose) pi 0; deg2rad(orientationAtGraspPose) pi 0]';
ikInitGuess = config;

Generate a trajectory for Task-2 using the above parameters.

computedTrajForTask2 = exampleHelperGenerateTrajectory(gen3,waypointForTask2,orientations,trajTimes,ikInitGuess);

Visualize the trajectory for Task-2

disp('---* Reaching near to the object*---');

---* Reaching near to the object*---

r = rateControl(1/ts);
for idx = 1:numel(trajTimes)
config = computedTrajForTask2.position(:,idx)';
show(gen3,config','PreservePlot',false,'FastUpdate',true,'Frames',"off");
waitfor(r);
end

Attach the object to the robot's gripper to grab it.


% Temporary pose for creating plot handle for the object


tempPose = [0,0,0];
correctPose = Box.mesh.Pose;
Box.mesh.Pose = trvec2tform(tempPose);
[~, Box.plot] = show(Box.mesh);
Box.plot.LineStyle = 'none';
Box.plotHandle = hgtransform;

% Attach the plot handle to the parent


Box.plot.Parent = Box.plotHandle;
Box.mesh.Pose = correctPose;
Box.plotHandle.Matrix = Box.mesh.Pose;
Box.plot.FaceColor = [0.04 0.321 0.262];

partBody = getBody(gen3,'pickedPart');
addCollision(partBody,"box", [Box.mesh.X, Box.mesh.Y, Box.mesh.Z]);

CurrentRobotJConfig = config';
% Compute current transformation matrix for pickedPart for setting up the pose of the object mesh
CurrentRobotTaskConfig = getTransform(gen3,CurrentRobotJConfig,'pickedPart');
patchObj(2).Visible = 'off';
show(gen3,CurrentRobotJConfig,'PreservePlot',false,'FastUpdate',true,'Frames',"off");
Box.mesh.Pose = CurrentRobotTaskConfig;
Box.plotHandle.Matrix = Box.mesh.Pose;

disp('---*Object is attached to the robot gripper*---');


---*Object is attached to the robot gripper*---

Task-3: Reach 15cm above the surface of the bench after grabbing the object

Compute waypoints, orientations and the initial guess for inverse kinematics for Task-3.

pickedPartTransf = getTransform(gen3,config','pickedPart');
goalPointForTask3 = [pickedPartTransf(1,4);pickedPartTransf(2,4);reachToHeight];
waypointForTask3 = horzcat([pickedPartTransf(1,4); pickedPartTransf(2,4); pickedPartTransf(3,4)], ...
    goalPointForTask3);

orientations = [deg2rad(orientationAtGraspPose) pi 0; deg2rad(orientationAtGraspPose) pi 0]';

Generate a trajectory for Task-3 using the above parameters.

ikInitGuess = config;
computedTrajForTask_3 = exampleHelperGenerateTrajectory(gen3,waypointForTask3,orientations,trajTimes,ikInitGuess);

Visualize the trajectory for Task-3.

disp('---*Reaching 15cm above the object after grabbing it in gripper*---');

---*Reaching 15cm above the object after grabbing it in gripper*---

r = rateControl(1/ts);
for idx = 1:numel(trajTimes)
config = computedTrajForTask_3.position(:,idx)';
show(gen3,config','PreservePlot',false,'FastUpdate',true,'Frames',"off");
CurrentRobotTaskConfig = getTransform(gen3,config','pickedPart'); % Compute the current transform of the picked part
Box.mesh.Pose = CurrentRobotTaskConfig;
Box.plotHandle.Matrix = Box.mesh.Pose;
waitfor(r);
end


disp('--*All simulation tasks of picking and moving the object are completed*--');

--*All simulation tasks of picking and moving the object are completed*--

Clear the variables that are no longer needed from the workspace.

clear partBody patchObj Bench CurrentRobotTaskConfig robot transf vfocalLength hfocalLength orientations ...
    orientationAtGraspPose i idx Rz tempPose TBench worldCenterBoxwrtCenterPixel actualpartXY ...
    reachToHeight r ikInitGuess pickedPartTransf config correctPose CurrentRobotJConfig ans ax bench ...
    cameraZ environmentWorld goalPointForTask1 goalPointForTask3 actualCameraTransf boxHeight box ...
    endEffectorTransf centerBoxwrtCenterPixel zDistance boxCopy dirPath path


Read Current Joint Angle of KINOVA Gen3 and Publish to ROS Network

This example shows how to deploy robotics algorithms developed in Simulink as a standalone ROS
node to NVIDIA Jetson compute boards. The example includes a ROS Subscribe block to get feedback
from the robot and relay the same information over a new ROS topic using a ROS Publish block.

Required Products

• MATLAB®
• Simulink®
• Robotics System Toolbox™
• ROS Toolbox
• Simulink Coder

Required Hardware

• KINOVA® Gen3 Robot


• NVIDIA Jetson Nano, NVIDIA Jetson TX2, or NVIDIA Jetson Xavier

Introduction

Robotics System Toolbox Support Package for Manipulators enables you to control manipulators
using MATLAB and Simulink. The support package utilizes ROS packages provided by robot
manufacturers to acquire various sensor data, simulate robot models, and control the robot.

You can further utilize the standalone ROS node generation feature from the ROS Toolbox to convert
robotics algorithms into a standalone ROS node. Such a node can be automatically deployed to
various compute boards using Simulink. The deployment of robotics algorithms eliminates the
requirement of constant communication between the host PC (with MATLAB installed) and the robot.
This is beneficial in many application workflows. For more information on the deployment workflow,
refer to Object Detection and Motion Planning Application with Onboard Deployment.

In this example, the sensor_msgs/JointState ROS message is used to get the current joint
configuration of the Kinova Gen3 robot. One of the joint angles is then extracted from the received
data and published via a new ROS topic with a std_msgs/Float32 message.

Prerequisites

• If you are new to Simulink, watch the Simulink Quick Start.

• Perform the Setup and Configuration of the Jetson compute board as per instructions mentioned in
“Configure NVIDIA Jetson Board for Object Detection and Motion Planning” on page 3-2.

• Refer to Read Current Joint Angles from KINOVA Gen3 Robot Arm to know more about the ROS
interface for the Kinova Gen3 robot.

Connect to KINOVA Gen3 Robot and Initiate Required ROS Nodes to Control the Robot

Open a terminal from the catkin workspace containing the ros_kortex ROS packages on the Jetson
board. Execute 'source devel/setup.bash' and the following command in the terminal to launch the
required ROS nodes.


roslaunch kortex_driver kortex_driver.launch ip_address:=192.168.0.106 gripper:=robotiq_2f_85 start_rviz:=false

Also, replace the IP address of the robot and gripper based on the actual hardware setup. For more
information on the roslaunch command, see GitHub page of KINOVA Robotics.

If the hardware setup and the IP address are correct, a message 'The Kortex driver has been
initialized correctly!' displays on the terminal window.

Connect to the ROS Master Running on Jetson Board from MATLAB

Configure ROS network parameter for connectivity and deployment as per setup instructions
mentioned under the section Configure the Connection to the ROS Device in Generate a Standalone
ROS Node from Simulink.

Note: The workspace folder must be the same in which you have downloaded the ROS packages from
Kinova Robotics.

Because you already started the ROS core in the earlier step, you need not start the ROS core on the
rosdevice using runCore command from MATLAB.

Model

Open the manipdemo_relayJointAnglesROS model.

The model consists of a ROS Subscribe block, which is configured for the /my_gen3/joint_states ROS
topic. As explained on the GitHub page of Kinova Robotics, the first part of the ROS topic, my_gen3,
might be different based on the robot name that is set during the roslaunch command. The joint
angle positions are extracted from the received message, and then the value of the first joint angle is
passed to the subsystem block.


The subsystem creates a blank message of type std_msgs/Float32 and populates its data with the first
joint angle. The populated message is then published on the /relay_joint_states topic.
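
For reference, the publish step of this subsystem can be reproduced from the MATLAB command window.
This is a minimal sketch, assuming a ROS connection to the Jetson board and using a placeholder value
for the joint angle:

relayPub = rospublisher('/relay_joint_states','std_msgs/Float32');
relayMsg = rosmessage(relayPub);
relayMsg.Data = single(0.5);    % the first joint angle value would go here
send(relayPub,relayMsg);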

Run and Verify the Model Using MATLAB Connectivity

In the previous step, you already launched the required ROS nodes from the Jetson board to establish
the connection with the robot and compute board. Click Simulate on the Simulink toolbar and then
click Run. This establishes a connection between MATLAB and the ROS master running on the Jetson
board.

To verify the intended behavior of the Simulink model, type rostopic list in the MATLAB command
window. The ROS topic /relay_joint_states should be listed in the output.

Echo /relay_joint_states by executing the rostopic echo /relay_joint_states command. This
prints a stream of float values in the MATLAB command window. Move Joint 1 of the robot using the
joystick or the Kinova web app, and verify that the value changes. Once verified, stop the simulation.

Run and Verify the Model using Standalone ROS deployment

Navigate to the ROS tab and press Build & Deploy. This generates the code from the model, transfers
it to the specified workspace on the Jetson board, and builds it.

You can use the isNodeRunning, runNode, and stopNode features of rosdevice to interact with the
deployed ROS node. Repeat the same verification steps as mentioned in the earlier section to verify
the intended behavior of the autogenerated ROS node.


Detect and Pick Object Using KINOVA Gen3 Robot Arm with Joint Angle Control and Trajectory Control

This example shows how to deploy object detection and motion planning workflows developed in
Simulink® as a standalone ROS node to NVIDIA® Jetson® compute boards. The KINOVA® Gen3
robot receives key vision sensor parameters, the current robot state, and the position and orientation
of the object in the captured image over a ROS network. The gathered data is then used to localize the
object in the world frame, and the robot is commanded to reach the object and pick it.

This example includes two models: one specific to joint angle control and the other specific to
trajectory control.

For a comparison between the two control strategies, see “Joint Angle Control Versus Trajectory
Control” on page 3-13.

Required Products

• MATLAB®
• Simulink
• Robotics System Toolbox™
• ROS Toolbox
• Simulink Coder™
• Stateflow®

Required Hardware

• KINOVA Gen3 Robot


• One of these boards: NVIDIA Jetson Nano™, NVIDIA Jetson TX2, NVIDIA Jetson AGX Xavier™
• Rectangular object (to be picked up by robot gripper)

Introduction

Robotics System Toolbox Support Package for Manipulators enables you to control manipulators
using MATLAB and Simulink. The support package utilizes ROS packages provided by robot
manufacturers to acquire various sensor data, simulate robot models, and control the robot.

You can further utilize the “Generate a Standalone ROS Node from Simulink” (ROS Toolbox) feature
from the ROS Toolbox to convert robotics algorithms into a standalone ROS node. Such a node can be
automatically deployed to various compute boards using Simulink. The deployment of robotics
algorithms eliminates the requirement of constant communication between host PC (with MATLAB
installed) and the robot. This is beneficial in many application workflows. For more information on the
deployment workflow, refer to “Object Detection and Motion Planning Application with Onboard
Deployment” on page 3-4.

In this example, several ROS Subscribe blocks are used to gather key robot and camera sensor
parameters. These parameters along with the current joint configuration of the robot, position of the
camera sensor in world frame, and the object position and orientation relative to the image, are used
to localize the object in world frame. Once the object position and orientation are known in world
frame, various functionalities offered by Robotics System Toolbox are used to command the robot.


This example demonstrates a workflow to design and validate complex robotics applications that
require integration of various open-source utilities with MATLAB, and to control the robot using
ROS drivers. The example has been tested with the KINOVA Gen3 robot with the vision module and a
Robotiq 2F-85 gripper. However, the overall design of the example is modular and can be easily
modified for different manipulators that can be controlled using a ROS driver.

Prerequisites

• If you are new to Simulink, watch the Simulink Quick Start.

• Understand the algorithm to detect position and orientation, as explained in “Detect Position and
Orientation of a Rectangular Object Using OpenCV” on page 3-8.

• Perform the Setup and Configuration of the Jetson compute board as per instructions mentioned in
“Configure NVIDIA Jetson Board for Object Detection and Motion Planning” on page 3-2.

• Refer to the “Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo,
OpenCV, and MATLAB” on page 2-99 example for more information on the object localization
algorithm.

Connect to KINOVA Gen3 Robot and Initiate Required ROS Nodes to Control the Robot

Open a terminal from the catkin workspace containing the ros_kortex ROS packages on the Jetson
board. Execute the 'source devel/setup.bash' command and then the following commands in the terminal
to launch the required ROS nodes.

Also, replace the IP address of the robot and gripper based on the actual hardware setup. For more
information on the roslaunch command, see GitHub page of KINOVA Robotics.


1. Launch the kortex_driver node.

$ roslaunch kortex_driver kortex_driver.launch ip_address:=<ipaddress> gripper:=robotiq_2f_85 start_rviz:=false

If the hardware setup and the IP address are correct, a message 'The Kortex driver has been
initialized correctly!' displays on the terminal window.

2. Open a new terminal window at the same folder location and execute 'source devel/setup.bash'.
Launch the kortex_vision node by executing the following command.

$ roslaunch kinova_vision kinova_vision_color_only.launch device:=<ipaddress>

3. Open a new terminal window at the same folder location and execute 'source devel/setup.bash'.
Launch the detect_object node by executing the following command.

$ rosrun opencv_node detect_object

This launches two image windows: one is the output of Canny edge detection, and the other is the
original image with an overlay of a red rectangle. In an ideal setting, the red rectangle should match
the boundary of the object. If the object detection is not as desired, refer to “Troubleshooting Object
Detection” on page 3-12 for more details.

Connect to ROS Master Running on Jetson Board from MATLAB

Configure ROS network parameter for connectivity and deployment as per setup instructions
mentioned under the section Configure the Connection to the ROS Device in “Generate a Standalone
ROS Node from Simulink” (ROS Toolbox).

Note: The workspace folder must be the same in which you have downloaded the ROS packages from
Kinova Robotics.

Because you already started the ROS core in the earlier step, you need not start the ROS core on the
rosdevice using runCore command from MATLAB.

Model for Joint Angle Control

Open the manipdemo_DetectAndPick_Joint model.


The model consists of a state machine, which performs object localization and motion planning for a
robotic manipulator.

SendToJointConfiguration Subsystem

This subsystem commands the robot to move to a particular joint angle configuration. It also
commands the gripper to open, and uses ROS interface for Kinova Gen3 robot. For more information
refer “Control Individual Joint Angle of KINOVA Gen3 Robot” on page 2-63. It also monitors if the
robot has moved to it's commanded position. If you are using a different manipulator which can be
controlled by ROS driver, then modify this subsystem accordingly.


getGraspPose Subsystem

This subsystem calculates the robot joint configuration to reach the object. To compute the desired
joint configuration, it uses vision sensor properties such as image height, image width, and focal
length; the current joint configuration of the robot; and the relative position and orientation of the
object. The algorithm to calculate the desired joint configuration is similar to the one described in
“Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo, OpenCV, and
MATLAB” on page 2-99.

Move to Position 1


Once the desired joint configuration has been calculated, the robot is commanded to move to the first
position, which is about 15 cm directly above the object. Because the rotation of the object is
considered while calculating the desired joint configuration, the gripper should align with the
orientation of the object when the robot reaches Position 1.

Move to Position 2

Once the robot reaches the first position, it is commanded to move to the second position. At the
end of the robot motion, the object should be inside the grasping region of the robot.


Close the Gripper

At this stage, the robot gripper is at the desired position to pick the object. The subsystem sends the
command to close the gripper. The desired closing position and effort might change with the size and
material of the object.


Move Back to Position 1 and Stop the Simulation

Once the object is firmly picked up by the gripper, the robot is commanded to retract to the first
position. Once the robot motion is complete, the execution of the model stops.

Modify Offsets

The image localization and path planning algorithm considers certain offsets while calculating the
desired grasping pose. The default values are assigned to these offsets in the model callbacks (see
“Types of Callbacks” (Simulink)). The description of each offset is as follows:

• objHeight: Height of the object

• heightOffset: Offset between Robot base and the base of the object. If the object is lying on the
same surface as the robot base, then the offset will be zero.

• graspPoseZOffset: Z offset for first position. This will decide the height of the robot gripper from
the object at first position.


• graspPoseOffset: Final correction in the calculation of grasp pose. Use this offset to remove any
modeling errors and to set the robot gripper height above the surface at second position.

• cameraOffset: Offset between robot body and camera sensor. Fixed for Kinova Gen3 robot.

Ensure that you change these values accordingly before proceeding further. You can start with
estimated values and then fine-tune them by observing the robot's behavior.
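
For reference, the offsets could be defined in the model callback like this; the values are placeholders
to tune for your setup, and only cameraOffset is fixed for the Gen3 robot:

objHeight = 0.05;          % height of the object (m)
heightOffset = 0;          % object base at the same level as the robot base
graspPoseZOffset = 0.15;   % gripper height above the object at the first position (m)
graspPoseOffset = 0.005;   % final correction of the grasp pose (m), placeholder
cameraOffset = 0.041;      % camera offset, fixed for the Kinova Gen3 robot (m)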

Run and Verify the Model Using MATLAB Connectivity

In the previous step, you already launched the required ROS nodes from the Jetson board to establish
the connection with the robot and compute board. Place the object in the field of view of the vision
sensor. Click Simulate on the Simulink toolbar and then click Run. This establishes a connection
between MATLAB and the ROS master running on the Jetson board.

Observe the robot motion and confirm that the robot is picking the object as intended.

Run and Verify the Model Using Standalone ROS Deployment

Navigate to the ROS tab and click Build & Deploy. This generates the code from the model, transfers
it to the specified workspace on the Jetson board, and builds it.

You can use isNodeRunning (ROS Toolbox), runNode (ROS Toolbox) and stopNode (ROS Toolbox)
feature of rosdevice to interact with the deployed ROS node. Repeat the same verification steps as
mentioned in the earlier section to verify the intended behavior of the autogenerated ROS node.

Model for Trajectory Control

In the previous model, the desired joint angle configuration is sent to the robot. However, you do not
have any explicit control over the intermediate trajectory between the current joint configuration and
the desired joint configuration. For accurate control of the end-effector position, you can control the
robot by sending precomputed joint position, velocity, and acceleration commands.

To know more about this control strategy, open the manipdemo_DetectAndPick_Traj model.


The model consists of a state machine, which performs object localization and motion planning for a
robotic manipulator. The overall state machine is similar to the state machine described in the model
for joint angle control. For example, the subsystem to move to position 2 looks like this:


Package and Send Trajectory

The Package and Send Trajectory subsystem in Move_Robot_Position2 consists of three primary
blocks related to ROS and an auxiliary subsystem to calculate the joint configuration for each
waypoint. The Blank ROS message block is configured to create a blank message for the
control_msgs/FollowJointTrajectoryActionGoal ROS action goal. The MATLAB function block
packageTrajectory calculates the interpolated joint angle trajectory and assigns various values to the
blank message fields. Finally, the Publish block is configured for the
/my_gen3/gen3_joint_trajectory_controller/follow_joint_trajectory/goal ROS action and
sends commands to the robot to follow the precomputed joint angle trajectory.


You can verify the entire model for trajectory control using the same steps as mentioned in Run and
Verify the Model Using MATLAB Connectivity and Run and Verify the Model Using Standalone
ROS Deployment (for the manipdemo_DetectAndPick_Joint model in the previous sections).


Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo, OpenCV, and MATLAB

This example shows how to design and validate complex application workflows for a robotic
manipulator like the Kinova Gen3 7-DoF Ultralightweight Robot using Gazebo® physics simulator,
OpenCV, Robotics System Toolbox™ and ROS Toolbox.

Overview

This example uses functionalities offered by OpenCV to detect the location and orientation of an
object from a video stream. This information is then passed to the motion planning algorithm using
the ROS network. The motion planning algorithm uses functionalities offered by Robotics System
Toolbox to calculate the location of the object in the world frame and commands the simulated robot
to move and pick the object.

In this example, the Gazebo physics simulator is used with a simulated robot and an object. All the
necessary files required for the ROS setup are included in the virtual machine (VM) available for
download.

Clear all output in the live script by right-clicking anywhere in the live script and then clicking
'Clear All Output', or navigate to VIEW > Clear all Output.

Note: Execute the script section by section as mentioned in this example.


Required Products

• Robotics System Toolbox


• Robotics System Toolbox Support Package for KINOVA® Gen3 Manipulators
• ROS Toolbox

This example builds on key concepts from the following related examples:

• Get Started with Gazebo and a Simulated TurtleBot (ROS Toolbox)


• Pick-and-Place Workflow in Gazebo Using ROS (Robotics System Toolbox)

Robot Simulation and Control in Gazebo

Start a ROS-based simulator for a KINOVA Gen3 robot and configure the MATLAB® connection with
the robot simulator.

This example uses a virtual machine (VM) that is available for download. To download and install it,
use the link available under the Download Virtual Machine section in Get Started with Gazebo
and a Simulated TurtleBot (ROS Toolbox). After the installation is complete, follow the steps below
(ignore the steps after the first step, Download and install the ROS Virtual Machine, mentioned in
Get Started with Gazebo and a Simulated TurtleBot).

1 After installing the ROS Virtual Machine, launch the Ubuntu® virtual machine desktop.
2 On the Ubuntu desktop, click the Gazebo Recycling World - Object Detection icon to start the
Gazebo world built for this example.
3 Specify the IP address and port number of the ROS master in Gazebo so that MATLAB can
communicate with the robot simulator. For this example, the ROS master in Gazebo uses the IP
address of 172.19.98.176 displayed on the Desktop. Adjust the rosIP variable based on your VM.
4 Start the ROS 1 network using rosinit.

rosIP = '172.19.98.83'; % IP address of ROS-enabled machine


rosinit(rosIP,11311); % Initialize ROS connection

Initializing global node /matlab_global_node_65505 with NodeURI http://172.19.98.140:62441/

The simulated world consists of a Kinova Gen3 robot mounted on a table and a rectangular object.
The robot also has a simulated camera sensor attached, which transmits a video stream over a ROS
network.

To simulate and control the robot arm in Gazebo, the VM contains the ros_kortex ROS package, which
is provided by KINOVA. The packages use ros_control to control the joints to desired joint positions.


Object Detection ROS Node

First, unpause the Gazebo simulation engine. This will start the video stream from the simulated
camera sensor mounted on the robot.

physicsClient = rossvcclient('gazebo/unpause_physics');
physicsResp = call(physicsClient,'Timeout',3);


The ROS node with the object detection logic is placed at the /home/user/catkin_ws/src/opencv_node
location in the virtual machine. The detect_object.cpp file contains the algorithm to subscribe to the
video feed and use OpenCV to detect the rectangular object. The position and orientation of the
detected object are then transmitted to the existing ROS network via different ROS topics.

The overall algorithm to detect an object's position and orientation is similar to the one explained in
“Detect Position and Orientation of a Rectangular Object Using OpenCV” on page 3-8. The necessary
modifications are done in the algorithm to enable the interface with ROS.

Move the Robot to Initial Position

In the initial robot pose, the object is not entirely visible in the field of view of the camera sensor. The
following commands move the simulated robot to a predefined position such that a large area of the
table is captured.

%% Move to scan position


% Open the gripper
CommandActivateGripperROSGazebo('off');

Gripper open...

[trajPub,trajCmd] = rospublisher('/my_gen3/gen3_joint_trajectory_controller/command');

jointWaypoints = [0 0 180 266 0 272 90]*pi/180; % Scan position

jointWaypointTimes = 5;

reachJointConfiguration(trajPub,trajCmd,jointWaypoints,jointWaypointTimes);

Moving to Position
Position Reached

Once the robot reaches the scan position, the detected object can be seen in the two image windows
launched by the object detection node. The algorithm superimposes a red rectangle over the detected
object so that you can verify that the correct object has been identified. You can move the slider in the
Binary Image window and observe the effect. This slider is associated with the threshold value of the
Canny edge detector algorithm.


Interact with Object Using Gazebo

To change the position of the object on the table, select the object first and press t. This enables
Translation Mode in Gazebo, and an axes marker is displayed on the object. Interact with this marker
to change the position of the object.

Similarly, to change the orientation of the object, press r. This enables Rotation Mode in Gazebo,
and an axes marker is displayed on the object. Interact with this marker to change the orientation
of the object.

Use the Reset Model Poses option to reset the robot and object positions. Do not use the Reset World
option, as it results in loss of communication between the ROS master and MATLAB.


Calculate the Location of Object in World Frame

The object detection node provides the location of the object with reference to the captured image.
The position coordinates and rotation of the object with respect to the horizontal axis are published to
three different ROS topics. The following section of code utilizes the object position information,
properties of the simulated camera, and robot's current pose to localize the object in the global world
frame.

%% Load Rigid Body Tree Model of Kinova Gen3 with Gripper


load('exampleHelperKINOVAGen3GripperROSGazeboManip.mat');
gen3 = robot;

%% Retrieve Camera properties


jSub = rossubscriber('/my_gen3/camera/camera_info');
jMsg = receive(jSub,10);

imageWidth = single(jMsg.Width);
imageHeight = single(jMsg.Height);
focalLength = single(jMsg.K(1));

%% End Effector
endEffectorFrame = "gripper";

%% Read current Joint Angles


jSub = rossubscriber('/my_gen3/joint_states');
jMsg = receive(jSub,10);
CurrentRobotJConfig = wrapToPi(jMsg.Position(2:8)');

%% Get transformation to end effector


cameraTransf = getTransform(gen3, CurrentRobotJConfig, 'EndEffector_Link');
cameraZ = cameraTransf(3,4);
zDistance = cameraZ - 0.05590 + 0.10372; % subtract the height of the object and add the height of the stool

Read the object's position and orientation data.

%% Get center and rotation of the object


tmpsub = rossubscriber('/opencv/centerx');
tmp = receive(tmpsub,10);
result(1) = tmp.Data;

tmpsub = rossubscriber('/opencv/centery');
tmp = receive(tmpsub,10);
result(2) = tmp.Data;

tmpsub = rossubscriber('/opencv/angle');
tmp = receive(tmpsub,10);
result(3) = tmp.Data;

Calculate the absolute position of the object in the world frame

centerPixel = [imageHeight/2, imageWidth/2];


centerBox = [imageHeight-result(1) imageWidth-result(2)];
centerBoxwrtCenterPixel = centerBox - centerPixel; % in pixels
worldCenterBoxwrtCenterPixel = (zDistance/focalLength)*centerBoxwrtCenterPixel; % in meters
actualCameraTransf = cameraTransf * trvec2tform([0, 0.041, 0.0]);
actualpartXY = actualCameraTransf(1:2,4)' + worldCenterBoxwrtCenterPixel;
part_centerPoint = [actualpartXY(1),actualpartXY(2),zDistance];


Calculate the Desired Grasping Pose

The following section of code calculates the desired grasping pose to reach the object. Once we have
the desired grasping pose, the joint angle configuration for the desired pose is computed using
inverse kinematics. The desired grasping pose also considers the rotation of the object to align the
gripper with the object.

ik = inverseKinematics('RigidBodyTree',gen3);
ik.SolverParameters.AllowRandomRestart = 0;

% Calculate Final Pose to grasp the object


GraspPose = trvec2tform(part_centerPoint + [-0.008 0 -zDistance-0.055])*eul2tform([deg2rad(result(3)) pi 0]);

Move to Position 1

The robot is first commanded to move 15 cm directly above the object. In the process, the gripper is
also aligned with the object.

% Calculate the first position: GraspPose offset by -0.15 m along the Z axis


taskFinal = double(GraspPose*trvec2tform([0,0,-0.15]));

jointWaypoints1 = ik(endEffectorFrame,taskFinal,[1 1 1 1 1 1],CurrentRobotJConfig);


jointWaypointTimes = 5;

%% Go to Position 1
reachJointConfiguration(trajPub,trajCmd,jointWaypoints1,jointWaypointTimes);

Moving to Position
Position Reached

Move to Position 2

Now the robot is commanded to reach the grasping pose. At the end of this motion, the object will be
inside the grasping range of the gripper.

% Read current Joint Angles


jSub = rossubscriber('/my_gen3/joint_states');
jMsg = receive(jSub,10);
CurrentRobotJConfig = wrapToPi(jMsg.Position(2:8)');

jointWaypoints2 = ik(endEffectorFrame,double(GraspPose),[1 1 1 1 1 1],CurrentRobotJConfig);

jointWaypointTimes = 3;
reachJointConfiguration(trajPub,trajCmd,jointWaypoints2,jointWaypointTimes);

Moving to Position
Position Reached

pause(1);

Grab the Object Using Gripper

%% Close the gripper


CommandActivateGripperROSGazebo('on');

Gripper closed...


pause(3);

Move Back to Position 1

jointWaypointTimes = 5;
reachJointConfiguration(trajPub,trajCmd,jointWaypoints1,jointWaypointTimes);

Moving to Position
Position Reached

Shutdown the MATLAB Global Node and Clear Workspace

rosshutdown;

Shutting down global node /matlab_global_node_65505 with NodeURI http://172.19.98.140:62441/

clear;

Helper Function to Control Joint Angles of the Robot

The following function utilizes the ros_kortex ROS package from Kinova to control the robot.

function reachJointConfiguration(trajPub,trajCmd,jointWaypoints,jointWaypointTimes)

trajCmd.JointNames = {'joint_1','joint_2','joint_3','joint_4', ...


'joint_5','joint_6','joint_7'};
jointWaypoints = wrapToPi(jointWaypoints);
trajCmd.Points = rosmessage('trajectory_msgs/JointTrajectoryPoint');
trajCmd.Points.TimeFromStart = rosduration(jointWaypointTimes);
trajCmd.Points.Positions = jointWaypoints;
trajCmd.Points.Velocities = zeros(size(jointWaypoints));
trajCmd.Points.Accelerations = zeros(size(jointWaypoints));
trajCmd.Points.Effort = zeros(size(jointWaypoints));

send(trajPub,trajCmd);
pause(1);
disp('Moving to Position');

jSub = rossubscriber('/my_gen3/joint_states');
err = 1;
while ( err > 1e-2)
jMsg = receive(jSub);
err = max(abs(jMsg.Velocity(2:8)));
end
disp('Position Reached');
end

Helper Function to Activate the Gripper

The following function utilizes the ros_kortex ROS package from Kinova to activate the gripper by
sending a ROS action command.

function CommandActivateGripperROSGazebo(state)
% This function is for internal use and may be removed in a future release.
if strcmp(state,'on') == 1
% Activate gripper
[gripAct,gripGoal] = rosactionclient('/my_gen3/custom_gripper_controller/gripper_cmd');
gripperCommand = rosmessage('control_msgs/GripperCommand');
gripperCommand.Position = 0.1; % 0.1 fully closed, 0 fully open


gripperCommand.MaxEffort = 500;
gripGoal.Command = gripperCommand;
pause(1);

% Send command
sendGoal(gripAct,gripGoal);
disp('Gripper closed...');
else
% Deactivate gripper
[gripAct,gripGoal] = rosactionclient('/my_gen3/custom_gripper_controller/gripper_cmd');
gripperCommand = rosmessage('control_msgs/GripperCommand');
gripperCommand.Position = 0.0; % 0.1 fully closed, 0 fully open
gripperCommand.MaxEffort = 500;
gripGoal.Command = gripperCommand;
pause(1);

% Send command
sendGoal(gripAct,gripGoal);
disp('Gripper open...');
end

pause(2);
end

3 Run on Target Hardware

• “Configure NVIDIA Jetson Board for Object Detection and Motion Planning” on page 3-2
• “Object Detection and Motion Planning Application with Onboard Deployment” on page 3-4
• “Detect Position and Orientation of a Rectangular Object Using OpenCV” on page 3-8
• “Joint Angle Control Versus Trajectory Control” on page 3-13

Configure NVIDIA Jetson Board for Object Detection and Motion Planning

Note The workflow mentioned in this topic is tested on NVIDIA Jetson Nano™, NVIDIA Jetson TX2, and NVIDIA Jetson AGX Xavier™ compute boards running Ubuntu 18.04 with ROS Melodic and OpenCV 3.2.

This topic is part of the overall workflow described in “Object Detection and Motion Planning
Application with Onboard Deployment” on page 3-4.

To configure the NVIDIA Jetson board for object detection and motion planning:

1 Ensure that you configure the NVIDIA Jetson compute board according to the instructions provided
on the respective web pages: NVIDIA Jetson Nano, NVIDIA Jetson TX2, or NVIDIA Jetson AGX
Xavier.
2 Perform Ubuntu Install of ROS Melodic on the Jetson board.
3 Install the ROS packages and dependencies for the Kinova Gen3 robot on the Jetson board, as
described in “Install ROS Packages and Dependencies for ROS on External Computer” on page 1-17.
Because you are using the vision module to detect the object, the ROS packages for the vision
module also need to be installed.
4 Check the libopencv-dev version by executing the following command in the terminal:

$ apt list -a libopencv-dev


5 If the installed version is greater than 4.0, then install a version lower than 4.0. For example,
execute the following command in the terminal to install libopencv-dev version 3.2.0:

$ sudo apt install libopencv-dev=3.2.0+dfsg-4ubuntu0.1


6 Execute Step 4 again and ensure that the installed version has changed according to Step 5.
7 The necessary source files for the OpenCV node, which is responsible for detecting the object, are
included in the support package. Execute the following command in the MATLAB Command Window
and navigate to the folder location specified by the command output.

>> fullfile(codertarget.robotmanipulator.internal.getSpPkgRootDir,'resources')
8 The resources folder contains a ZIP file, opencv_node.zip. Extract the contents of the ZIP file
and copy the opencv_node folder to the src folder of the catkin workspace folder that you
created in Step 3.

If you only have access to the Jetson board via an SSH terminal, you can use the wget command to
download the ZIP archive and then use the unzip command to extract the contents into the src folder.
You can host opencv_node.zip on any file-hosting platform and generate a sharable link, as sketched below.
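
For example, assuming the catkin workspace is at ~/catkin_ws and <sharable-link> is a placeholder
for your generated link, the commands might look like this:

$ wget -O opencv_node.zip <sharable-link>
$ unzip opencv_node.zip -d ~/catkin_ws/src
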
9 Execute this command to build the OpenCV node.

catkin_make --cmake-args -DCONAN_TARGET_PLATFORM=jetson

Once the build is complete, execute the command 'source devel/setup.bash' from the catkin
workspace folder that you created in Step 3.


See Also
“Object Detection and Motion Planning Application with Onboard Deployment” on page 3-4


Object Detection and Motion Planning Application with Onboard Deployment
With Robotics System Toolbox Support Package for Manipulators, you can also eliminate the need for
a continuous communication link between the robot and a generic host computer (with MATLAB). This
functionality is helpful for applications where the manipulator robot is mounted on a mobile
platform or robot. Additionally, generic host computers can be less efficient than custom compute
boards for certain tasks such as perception, path planning, and Artificial Intelligence (AI).

Based on the control interface provided by the robot manufacturer, you can choose between a
MATLAB only workflow or a MATLAB and ROS workflow for the design, simulation, development, and
verification.

The code generation and deployment require ROS Toolbox to be installed along with Robotics System
Toolbox Support Package for Manipulators.

Object Detection Using OpenCV


You can create the core logic for object detection using OpenCV. Based on the preferred workflow,
that is, MATLAB only or MATLAB and ROS, the core logic can be reused once you achieve
satisfactory performance from the algorithm.

If the MATLAB only workflow is preferred, then Computer Vision Toolbox Interface for OpenCV in
MATLAB can be used to convert the OpenCV logic into a MATLAB MEX function. This helps you to
call the MEX function for object detection directly from the MATLAB script.
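
As a sketch, assuming the detection logic lives in a C++ source file named detectObject.cpp (a
hypothetical file name), the support package provides the mexOpenCV utility to build it into a MEX
function:

mexOpenCV detectObject.cpp
% The resulting detectObject MEX function can then be called directly from
% a MATLAB script, for example: [position,orientation] = detectObject(img);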

If the MATLAB and ROS workflow is preferred, then the cv_bridge ROS package can be used to convert
the OpenCV logic into a standalone ROS node. The node can subscribe to the video feed from the
camera sensor and publish the position and orientation of the detected object. This enables you to
focus on the motion planning and localization algorithm while directly subscribing to the ROS topics
published by the object detection node, as sketched below.
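
A minimal MATLAB sketch of such a subscription follows; the topic name /opencv_node/object_pose
and the geometry_msgs/Pose2D message type are assumptions for illustration, not the actual
interface of the shipped node.

poseSub = rossubscriber('/opencv_node/object_pose','geometry_msgs/Pose2D');
poseMsg = receive(poseSub,10);          % wait up to 10 s for a detection
objPosition = [poseMsg.X poseMsg.Y];    % object location reported by the node
objOrientation = poseMsg.Theta;         % rotation with respect to the horizontal axis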

In the two workflows described in this section, the tight integration with OpenCV represents the use
of any external functionality or tool. Many applications require integrating various functionalities
offered by MATLAB with external software packages, and these workflows demonstrate a design
process that involves such use cases.

A typical algorithm that you can use to detect position and orientation of a rectangular object is
explained in “Detect Position and Orientation of a Rectangular Object Using OpenCV” on page 3-8.

MATLAB Only Workflow

For the design process using the MATLAB only workflow, you start the development by utilizing
functionalities offered by Robotics System Toolbox. The rigid body tree robot model can be used
to simulate a robot at a particular joint configuration. Further, motion planning features can be
utilized to create a trajectory to pick the object.
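
For instance, a minimal sketch of simulating the Gen3 robot at a particular joint configuration
using the rigid body tree model (the configuration shown is just the home configuration):

robot = loadrobot("kinovaGen3","DataFormat","row");
config = homeConfiguration(robot);  % or any joint configuration of interest
show(robot,config);                 % visualize the robot at that configuration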

As mentioned in the earlier section, you can reuse the object detection algorithm developed in
OpenCV and convert it to a MEX function using Computer Vision Toolbox Interface for OpenCV in
MATLAB. Finally, robot visualization features can be used to observe and validate the robot motion.
For more details on this workflow, refer to the example, “Simulate a Detect and Pick Algorithm Using
OpenCV Interface and Rigid Body Tree Robot Model” on page 2-73.

Once the desired performance is achieved and validated using simulation, various control interfaces
provided by the robot manufacturers can be leveraged to control the robot. KINOVA Robotics
provides the MATLAB APIs to control their Gen3 robot. For more information on how to proceed from
simulation to controlling the actual hardware, refer to “Track Pre-Computed Trajectory of Kinova
Gen3 Robot End-Effector Using Inverse Kinematics and KINOVA KORTEX MATLAB API” on page 2-20.


MATLAB and ROS Workflow

For the design process using the MATLAB and ROS workflow, you start the development by utilizing
functionalities offered by Robotics System Toolbox, ROS Toolbox, and the Gazebo physics simulator.
The simulated world, which consists of an object and the robot, can be created in Gazebo. A simulated
vision sensor can be mounted on the robot, and the video feed from the camera sensor can be
published over a ROS network.
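
A minimal sketch of connecting to such a ROS network from MATLAB and reading one camera frame
(the host IP address and the camera topic name are placeholders):

rosinit('<gazebo-host-ip>',11311);
imgSub = rossubscriber('/camera/color/image_raw');  % assumed camera topic
imgMsg = receive(imgSub,10);   % wait up to 10 s for a frame
img = readImage(imgMsg);       % convert the ROS image message to a MATLAB image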

As stated in the previous section, the cv_bridge ROS package can be used to convert the OpenCV logic
into a standalone ROS node. The motion planning algorithm is developed using Robotics System
Toolbox features, and ROS Toolbox functionalities are used to communicate with the ROS master. For
more details, refer to the example, “Design and Validate Object Detection and Motion Planning
Algorithms Using Gazebo, OpenCV, and MATLAB” on page 2-99.

Once the desired performance is achieved and validated using simulation, ROS driver interfaces
provided by the robot manufacturers can be leveraged to control the robot. KINOVA Robotics
provides ROS driver packages to control their Gen3 robot via ROS.

ROS Toolbox allows you to generate a standalone ROS node from a Simulink model, which can be
directly deployed to various boards. “Read Current Joint Angle of KINOVA Gen3 and Publish to ROS
Network” on page 2-84 provides a basic example of such a workflow. For more information on how to
proceed from simulation to controlling the actual hardware and deployment, refer to “Detect and Pick
Object Using KINOVA Gen3 Robot Arm with Joint Angle Control and Trajectory Control” on page 2-87
example.


See Also

Related Examples
• “Simulate a Detect and Pick Algorithm Using OpenCV Interface and Rigid Body Tree Robot
Model” on page 2-73
• “Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo, OpenCV,
and MATLAB” on page 2-99
• “Detect and Pick Object Using KINOVA Gen3 Robot Arm with Joint Angle Control and Trajectory
Control” on page 2-87


Detect Position and Orientation of a Rectangular Object Using OpenCV
This topic explains the algorithm to detect the position and orientation of a rectangular object
using OpenCV.

This topic is part of the overall workflow described in “Object Detection and Motion Planning
Application with Onboard Deployment” on page 3-4.

The figure shows the original image captured by the vision module of a Kinova Gen3 Robot. The
image consists of a rectangular object lying on a uniform background. The image also shows a small
portion of the gripper.

1 To find the position and orientation of a rectangular object, convert the image to a grayscale
image for efficient object detection. You can also apply a median blur to the image to smooth any
salt-and-pepper noise. The figure below shows the grayscale image derived from the original
image.

2 Detect the edges of the rectangular object. For this, you can use Canny edge detection. The output
image of the Canny edge detector is a binary black-and-white image. The threshold of the Canny
edge detector can be adjusted such that only the periphery of the object is visible in the binary image.


3 Once you have the binary image, detect contours and approximate these contours by shapes with
fewer vertices.
4 Because you have prior information about the shape of the object, you can further compute a
rectangle with minimum area that completely encapsulates the simplified shape (derived by
approximating the original contours). The minimum-area rectangle contains the object’s location
and its rotation with respect to the horizontal axis.
5 To avoid any false detection, you can check the area of the rectangle. If the area of the rectangle
falls inside a predefined range, the detected rectangle can be accepted as an approximation of
the original object, and the position and orientation information can be passed to the motion
planning algorithm.

In the figure below, you can see the minimum-area rectangle with red boundaries superimposed
on the original image. The rectangle closely matches the rectangular object.


The key OpenCV functions used in this workflow are the following (a MATLAB sketch of an equivalent pipeline appears after this list):

• cv::cvtColor
• cv::medianBlur
• cv::Canny
• cv::findContours
• cv::approxPolyDP
• cv::minAreaRect
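
For reference, here is a minimal MATLAB sketch of a comparable pipeline using Image Processing
Toolbox functions. The function choices, the area thresholds, and the file name scene.png are
assumptions for illustration; the shipped node implements this logic in C++ with the OpenCV
functions listed above.

img = imread('scene.png');             % hypothetical captured frame
gray = rgb2gray(img);                  % comparable to cv::cvtColor
gray = medfilt2(gray,[5 5]);           % comparable to cv::medianBlur
bw = edge(gray,'canny');               % comparable to cv::Canny
bw = imfill(bw,'holes');               % fill the closed object boundary
stats = regionprops(bw,'Area','Centroid','Orientation');
% Accept only blobs whose area lies in a predefined range (values assumed)
stats = stats([stats.Area] > 2000 & [stats.Area] < 20000);
if ~isempty(stats)
    position = stats(1).Centroid;        % object location in pixels
    orientation = stats(1).Orientation;  % degrees from the horizontal axis
end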

Verify Object Detection with NVIDIA Jetson Board


After you create the algorithm for object detection, you need to verify it using the NVIDIA Jetson
board:

1 Perform the setup and configuration of the Jetson compute board as mentioned in “Configure
NVIDIA Jetson Board for Object Detection and Motion Planning” on page 3-2.
2 The intended motion of the robot is described in the “Detect and Pick Object Using KINOVA Gen3
Robot Arm with Joint Angle Control and Trajectory Control” on page 2-87 example. The robot
first moves to a scan position where the vision module looks straight down. At this point, the
algorithm waits for the position and orientation data of the object from the object detection node.
Hence, the node should be able to detect the object when the robot is at the scan position.

Manually move the robot near the scan position using the joystick or web app. The joint configuration
for the scan position is [0 0 180 266 0 272 90] degrees.


3 Open a new terminal window at the folder location where you downloaded the ROS packages
from KINOVA, and execute 'source devel/setup.bash'. Launch the kinova_vision node by
executing the following command:

$ roslaunch kinova_vision kinova_vision_color_only.launch device:=<ipaddress>

Replace the <ipaddress> field in the command with the actual IP address of the robot.
4 Open a new terminal window at the same folder location, and execute 'source devel/setup.bash'.
Launch the detect_object node by executing the following command:

$ rosrun opencv_node detect_object

This launches two image windows. One is the output of the Canny edge detection, and the other is
the original image with a red rectangle overlay. In an ideal setting, the red rectangle must
match the boundary of the object.

Troubleshooting Object Detection


If you find any issues with the object detection, try the following steps:

• If the red rectangle does not match the object, adjust the slider on the Canny edge detection image
such that only the object boundary is visible.
• If the red rectangle is not visible at all, or it is not on the actual object, modify the minimum and
maximum area thresholds. Navigate to the /opencv_node/src folder and open the
detect_object.cpp file. Modify the AreaThreshMin and AreaThreshMax values accordingly.
Rebuild and relaunch the node to validate the object detection.
• If you are not sure about the threshold values, uncomment the line that prints the area of all
detected contours, rebuild the node, and then launch the detect_object node again. This prints
the area values of all detected contours on the terminal. You can note the area of the desired
object and adjust the area threshold values accordingly. Rebuild and relaunch the node to validate the
object detection.


Joint Angle Control Versus Trajectory Control

Joint Angle Control


In joint angle control, the robot internally computes the control inputs required to achieve the
desired joint configuration. This control strategy enables the robot to achieve the desired joint
configuration in a short time.

However, if you trace the end-effector position during the movement, the trace is not exactly as
intended. Commanding only the desired joint angle configuration has certain inherent issues, such as
deviation from the desired motion. For example, if the desired motion from one position to another
is only along the z-axis, the actual motion is not exactly restricted to the z-axis. This is
because you are not explicitly controlling the end-effector position in joint angle control. Such
variance can be undesirable when the manipulator is operating in a confined space or in an
environment with many obstacles.

Trajectory Control
In trajectory control, you explicitly define the intermediate path for the robot, and the end
effector tightly follows the desired path. Because you are not commanding the robot to utilize the
maximum allowed joint velocity and acceleration, the entire motion in trajectory control takes more
time to complete compared to joint angle control.

For trajectory control, you typically use trapezoidal velocity profiles, computed using trapveltraj,
to reach the desired joint configuration. Finally, the data points are interpolated to generate a
pre-computed trajectory profile that includes joint position, velocity, and acceleration commands.
A minimal sketch follows.
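
The following sketch pre-computes such a profile between two joint configurations; the waypoint
values and the 5-second end time are illustrative assumptions, not values taken from the examples
above.

q0 = deg2rad([0 0 180 266 0 272 90])';    % scan position (rad)
qf = deg2rad([10 30 160 250 5 260 90])';  % assumed goal position (rad)
numSamples = 100;
[q,qd,qdd] = trapveltraj([q0 qf],numSamples,'EndTime',5);
% q, qd, and qdd are 7-by-100 arrays of joint position, velocity, and
% acceleration samples that form the pre-computed trajectory command.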

Compare the Control Strategies


The figure below shows the time history of all actuators as the robot moves from a scan position
to pick an object and retracts to the first position, which helps you compare the joint angle and
trajectory control strategies. This data is based on the example, “Detect and Pick Object Using
KINOVA Gen3 Robot Arm with Joint Angle Control and Trajectory Control” on page 2-87.


Here, in the first case, the robot is controlled using a joint angle configuration command to pick
the object. In the second case (trajectory control), the object is placed at the same location and
the robot is controlled using a pre-computed trajectory command. It is evident that all actuators
follow similar trajectories in both cases. However, in the second case, the robot takes more time
to complete the task.

The similarities are clearer when the time history is plotted using normalized time for both
cases.


From the time history of the filtered joint velocities, the prime reasons for the difference in time
are these:

• When the robot is controlled using a joint angle configuration command, the internal motion
planning algorithm utilizes the maximum allowed acceleration to achieve the desired joint velocity.
• When the robot is controlled using a pre-computed trajectory, the allowed acceleration and velocity
limits are conservative. If the pre-computed trajectory violates these limits, then the robot rejects
such trajectories.

Hence, while generating the trapezoidal velocity profiles, a conservative time estimate is used to
complete the motion. The trapezoidal shape of the joint velocities in the example validates that the
robot is able to follow the commanded joint angle and velocity profiles.

