Robot Manipulator User Guide
R2024b

1 Setup and Configuration

Install Support for Manipulator Hardware
After you install the support package, you can refer to the examples that show you how to manipulate
the Kinova Gen3 robot.
• Windows 10
• Linux: Ubuntu® 18.04 LTS, Ubuntu 16.04 LTS
1 On the MATLAB® Home tab, in the Environment section, select Add-Ons > Get Hardware
Support Packages.
2 In the Add-On Explorer window, click the support package and then click Install.
On the MATLAB Home tab, in the Resources section, select Help > Check for Updates.
1 On the MATLAB Home tab, in the Environment section, click Add-Ons > Manage Add-Ons.
2 In the Add-On Manager window, find and click the support package, and then click Uninstall.
Hardware Setup
Hardware boards and devices supported by MathWorks® require additional configuration and setup
steps to connect to MATLAB and Simulink®. Each support package provides a hardware setup
process that guides you through registering, configuring, and connecting to your hardware board.
If the support package is already installed, you can start the hardware setup by opening the Add-On
Manager.
In the Add-On Manager, start the hardware setup process by clicking the Setup button.
After starting, the Hardware Setup window provides instructions for configuring the support package
to work with your hardware.
Follow the instructions on each page of the Hardware Setup window. When the hardware setup
process completes, you can open the examples to get familiar with the product and its features.
1-3
1 Setup and Configuration
Note This section explains the task to be completed as part of the step—Configure Network Adapter over Ethernet—of the Hardware Setup process (using the Hardware Setup screens). Do not perform this as a standalone task.
Windows
Follow these steps if you have installed Robotics System Toolbox Support Package for Kinova Gen3
Manipulators on Windows:
1 Connect the Ethernet cable from the host computer to the robot base.
2 On the Windows Start menu, search for Network status and open the Network status setting.
Click Change adapter options.
3 In the Network Connections window, right-click the Ethernet adapter connected to the robot and select Properties from the context menu.
4 In the Ethernet Properties dialog box, edit the Internet Protocol Version 4 (TCP/IPv4) Properties to assign a static IP address on the same subnet as the robot (for example, IP address 192.168.1.11 with subnet mask 255.255.255.0).
5 Click OK. In the Ethernet Properties dialog box, click OK to save the settings.
6 Perform a test ping using the Windows command prompt to the robot and ensure that the ping is
successful.
ping 192.168.1.10
Linux
Follow these steps if you have installed Robotics System Toolbox Support Package for Kinova Gen3
Manipulators on Linux:
1 Connect the Ethernet cable from the host computer to the robot base.
2 In Linux, go to Network Connections.
3 Identify the Ethernet adapter which is connected to the robot and select it.
4 Select the Wired connection and click Edit to modify the adapter setting.
5 Navigate to IPv4 Settings, change Method to Manual, and add the IP Address, Netmask, and Gateway (for example, IP address 192.168.1.11, netmask 255.255.255.0, and gateway 192.168.1.1).
6 Perform a test ping using the terminal to the robot and ensure that the ping is successful.
ping 192.168.1.10
7 Continue with the Hardware Setup screens.
You need to first download the required adaptor from Kinova:
1 Go to this GitHub page to download the Vision Module MATLAB Image Acquisition Toolbox Adaptor. Refer to the Installation Instructions section on the page and click the download link displayed.
Windows
After you download and extract the contents of the ZIP file, follow these steps to set up the IMAQ adapter if you have installed Robotics System Toolbox Support Package for Manipulators on Windows:
After the successful installation, provide the location of the adaptor library file
kinova_vision_imaq.dll in the Hardware Setup screen.
Linux
After you download and extract the contents of the ZIP file, follow these steps to set up the IMAQ adapter if you have installed Robotics System Toolbox Support Package for Manipulators on Linux:
1 Install GStreamer and Glib by entering these commands at the Linux command prompt:
Get Latest URDF Robot Model for Kinova Gen3 Robot
[gen3Model, robotData]=loadrobot('kinovaGen3')
URDFSource = robotData.Source
An updated version of the URDF file is available for the Gen3 robot. Use the additional Version argument to load URDF version 12, associated with release 2.2.1 of the Gen3 robot.
[gen3Model, robotData]=loadrobot('kinovaGen3','Version',2);
URDFSource = robotData.Source
You can import any other URDF version into MATLAB using the importrobot function. For example, to import Version 12 of the URDF robot model of the Kinova Gen3 robot for further algorithm development:
1 Download the gen3.xacro file corresponding to your version of the Gen3 robot from Kinova's ROS Kortex GitHub repository.
2 Convert the .xacro file to a URDF file (for example, using the ROS xacro tool: rosrun xacro xacro gen3.xacro > gen3.urdf), and then import it:
robot = importrobot('gen3.urdf')
Add Custom ROS Message Definitions to MATLAB
A set of standard messages is included in the default ROS message catalog of ROS Toolbox. However, you can create your own custom ROS messages and use them in MATLAB and Simulink with ROS networks to transmit information. For this, use the rosgenmsg function. For more information, refer to ROS Custom Message Support and rosgenmsg.
The kortex_driver ROS package from Kinova Robotics contains several custom ROS messages and services. To control the Gen3 robot using ROS, these message definitions need to be added to the MATLAB ROS message catalog. One option is to use rosgenmsg. As mentioned in the ROS System Requirements for generating custom messages, you must build the ROS packages. This requires Python®, CMake, and a C++ compiler for your platform.
To simplify adding custom ROS message definitions to MATLAB, the definitions for the Kinova Gen3 robot are included in the support package, and the hardware setup process includes a step to register these pre-generated message definitions with MATLAB. This workflow is ideal for use cases where the custom messages and services from the kortex_driver package are not modified.
Note Generating custom ROS message definitions using rosgenmsg on Windows OS has a limitation on the maximum number of custom messages. Hence, only a limited set of message and service definitions is included in the support package. These definitions are sufficient to perform the hardware setup successfully and to run all the examples included in the support package (for details, see “Default Set of Messages for Windows” on page 1-20).
If you plan to use any custom message that is not part of the default messages for Windows, follow these instructions to generate custom message definitions for the additional ROS messages. Generate custom message definitions in these cases:
• The ROS messages from Kinova Robotics are modified and used to communicate with and control the robot.
• A new ROS package that depends on the ROS packages supplied by Kinova Robotics is being created to communicate with and control the robot.
• A new message definition for Windows needs to be added to the existing set of definitions supplied with the support package.
1 Navigate to the custom messages folder by executing the following command in the MATLAB command window.
cd(fullfile(codertarget.robotmanipulator.internal.getSpPkgRootDir,'resources','ROSMsgs'))
2 Based on the operating system platform, there should be a folder with a name starting with custommsg in the resources folder. Remove the custommsg folder from the MATLAB path, and save the current path by executing the savepath command in the MATLAB command window.
7 Generating custom ROS message definitions using rosgenmsg on Windows OS has a limitation on the maximum number of custom messages. Hence, limit the number of custom messages and services to the minimum required. If you want to add just a few ROS messages to the set of custom message and service definitions, ensure that you first copy the .msg and .srv files for the default message set and then the additional files for the ROS messages that you want to add.
8 Navigate to the folder that contains the custom ROS package and execute rosgenmsg in the MATLAB command window (see the sketch after this list).
9 Wait for the process to complete, and once it is done, execute the mentioned commands to modify the MATLAB path, clear classes, and rehash the toolbox cache.
10 Check the output of the MATLAB command rosmsg list for messages from the kortex_driver group (or the name of your custom ROS package).
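A minimal sketch of steps 8 and 10, assuming a hypothetical folder C:\work\customROSMessages that contains your custom ROS package:

rosgenmsg('C:\work\customROSMessages')
% After following the printed path-update instructions:
list = rosmsg('list');
any(contains(list,'kortex_driver'))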
Select Interface for Connecting to Kinova Gen3 Robot
This interface is ideal for use-cases where the intended end goal involves standalone manipulation of
the Gen3 robot only. The hardware support package lets you acquire various sensor data from the
robot manipulator, control the robot manipulator, connect with the robot hardware to test and
validate your algorithms, prototype robot manipulator algorithms, and simulate robot manipulators
using rigid body tree models from Robotics System Toolbox™.
This interface is supported on Windows and Linux operating systems and requires a valid installation of Robotics System Toolbox. If you plan to use the vision sensors of the robot, a valid installation of Image Acquisition Toolbox is also required. To know more about the capabilities and features of this interface, watch the video Developing a Pick-And-Place Application in MATLAB and Simulink and refer to the examples.
In many robotics use cases, the environment consists of multiple robots and sensors, and standalone manipulation is not common. In such scenarios, ROS is widely used due to its inherent advantages of hardware abstraction, device drivers, libraries, visualizers, message passing, package management, and more. This interface is ideal for use cases where the manipulator is part of an existing ROS network. With this interface, you can use Robotics System Toolbox™ features to simulate and generate trajectories for the Kinova Gen3 robot, and then control the robot using various ROS services and actions through ROS Toolbox.
This feature is supported on Windows and Linux operating systems and requires a valid installation of Robotics System Toolbox and ROS Toolbox. However, the automatic hardware setup via the hardware setup screens is only available for Ubuntu 18.04 and 16.04 operating systems.
• Automatic hardware setup screens guide you through configuring the host computer to communicate with the robot via ROS if MATLAB and ROS are installed on the same host computer.
• If MATLAB and ROS are installed on different machines, refer to “Install ROS Packages and Dependencies for ROS on External Computer” on page 1-17 to configure the host computer and the ROS computer manually.
• You can use the Linux VM provided by MathWorks on a host computer with Windows OS and MATLAB to control the robot using ROS. Refer to “Install ROS Packages and Dependencies for ROS on External Computer” on page 1-17 to configure the host computer and the VM manually.
• If the robot is connected to a separate Linux machine with ROS and needs to be controlled using MATLAB installed on a Windows host computer, refer to “Install ROS Packages and Dependencies for ROS on External Computer” on page 1-17 to configure the host computer and the ROS computer manually.
Install ROS Packages and Dependencies for ROS
Note This section explains the steps to install ROS packages and dependencies if MATLAB and ROS are installed on the same computer, which is connected to the Kinova Gen3 robot. If ROS and MATLAB are installed on different computers, see “Install ROS Packages and Dependencies for ROS on External Computer” on page 1-17.
Kinova Robotics provides a set of external ROS packages to communicate with and control the Gen3 robot. More information on the individual packages can be found on this GitHub page. Execute the following steps sequentially to clone the required packages and install the required dependencies.
1 Install python3 and python3-pip by executing the following command in the Linux terminal.
sudo apt install python3 python3-pip
2 Install conan using pip and set up a default profile.
sudo python3 -m pip install conan
conan config set general.revisions_enabled=1
conan profile new default --detect > /dev/null
conan profile update settings.compiler.libcxx=libstdc++11 default
3 Create a new workspace with an src folder.
mkdir -p kinova_ws/src
cd kinova_ws
4 Download the ros_kortex ROS packages (version 2.2.1) from the GitHub archive of Kinova Robotics.
5 Extract the contents of the ZIP file to the src folder. The src folder should contain
ros_kortex-2.2.1 folder.
6 Install dependencies from the custom ROS packages by executing the following command in the Linux terminal (the rosdep invocation here is the standard one from the ros_kortex instructions).
rosdep install --from-paths src --ignore-src -y
7 Build the workspace by executing the following command in the Linux terminal.
catkin_make
8 Once the build is successful, continue with the hardware setup screens if you are not planning to
use the vision module. If you are planning to use the vision module, execute the setup
instructions mentioned in the following section.
1 Install GStreamer and associated packages by executing the following command in the Linux
terminal.
catkin_make
catkin_make install
Install ROS Packages and Dependencies for ROS on External Computer
Note This section describes the steps to perform if ROS is installed on a computer other than the one with MATLAB installed. If ROS and MATLAB are installed on the same computer, see “Install ROS Packages and Dependencies for ROS” on page 1-15.
Kinova Robotics provides a set of external ROS packages to communicate with and control the Gen3 robot. For more information on the individual packages, refer to this GitHub page. Execute the following steps sequentially to clone the required packages and install the required dependencies on the computer that is connected to the robot and has ROS installed.
1 Verify the OS version by executing the following command in the Linux terminal.
lsb_release -d
The support package has been tested on Ubuntu 18.04 with ROS Melodic and Ubuntu 16.04 with
ROS Kinetic.
2 Verify the ROS version by executing the following command in the Linux terminal.
rosversion -d
If the ROS version is detected as <unknown>, add the ROS environment variables in your default bash shell launch script. Instructions are available under Section 1.5 of this webpage. If you are using ROS Melodic, see this webpage instead.
3 Install python3 and python3-pip by executing the following command in the Linux terminal.
sudo apt install python3 python3-pip
4 Install conan using pip and set up a default profile.
sudo python3 -m pip install conan
conan config set general.revisions_enabled=1
conan profile new default --detect > /dev/null
conan profile update settings.compiler.libcxx=libstdc++11 default
5 Create a new workspace with an src folder.
mkdir -p kinova_ws/src
cd kinova_ws
6 Download the ros_kortex ROS packages (version 2.2.1) from the GitHub archive of Kinova Robotics. If you only have access to the Jetson™ board via an SSH terminal, you can use the wget command to download the ZIP archive and then use the unzip command to extract the contents into the src folder.
7 Extract the contents of the ZIP file to the src folder. The src folder should contain
ros_kortex-2.2.1 folder.
8 Navigate to the workspace folder that you have created in step 5 (for example, kinova_ws). Install dependencies from the custom ROS packages by executing the following command in the Linux terminal (the rosdep invocation here is the standard one from the ros_kortex instructions).
rosdep install --from-paths src --ignore-src -y
If you are using the Vision module, perform the setup instructions mentioned in the following section; otherwise, go to “Configure MATLAB for Custom ROS Message Definitions for Kinova Gen3” on page 1-18.
Note In the command below, specify the correct ROS version, that is, melodic or kinetic, according to your ROS installation version.
Configure MATLAB for Custom ROS Message Definitions for Kinova Gen3
The ROS package kortex_driver supplied by Kinova Robotics contains custom ROS messages that are not part of the default message catalog of MATLAB. Hence, the message definitions of these custom messages must be added to MATLAB to communicate with the robot using ROS.
Pre-generated custom message definitions are included with this support package. The pre-
configured message definitions are generated from the ROS package kortex_driver, as supplied by
Kinova Robotics. If you want to modify the message definitions or create your own ROS package,
which is dependent on kortex_driver, then refer to “Add Custom ROS Message Definitions to
MATLAB” on page 1-10 for more details on the workflow.
1 Execute the following function to add the custom ROS message definitions to the message catalog.
Status = codertarget.manipROSInterface.addCustomROSmsg('kinova');
If custom ROS message definitions are added to the ROS message catalog, the function returns true.
The setup is complete. Continue with the hardware setup screens and refer to the “Read Current
Joint Angles from KINOVA Gen3 Robot Arm” on page 2-61 example to acquire joint angles from the
robot.
See Also
“Add Custom ROS Message Definitions to MATLAB” on page 1-10
Related Examples
• “Read Current Joint Angles from KINOVA Gen3 Robot Arm” on page 2-61
Default Set of Messages for Windows
• Custom Messages
1 ActuatorFeedback
2 BaseCyclic_Feedback
3 BaseFeedback
4 CartesianSpeed
5 CartesianTrajectoryConstraint
6 CartesianTrajectoryConstraint_type
7 ConstrainedJointAngles
8 ConstrainedPose
9 Empty
10 GripperCyclic_Feedback
11 GripperCyclic_MessageId
12 InterconnectCyclic_Feedback
13 InterconnectCyclic_Feedback_tool_feedback
14 InterconnectCyclic_MessageId
15 JointAngle
16 JointAngles
17 JointTrajectoryConstraint
18 MotorFeedback
19 Pose
• Custom Services
1 PlayCartesianTrajectory
2 PlayJointTrajectory
2 Get Started

Verify Connection to Kinova Gen 3 Robot
1 Ensure that the robot is connected to the host computer and the network adapter is configured.
For details, see “Set Up Network Adapter and Basic Network Ping” on page 1-4. Identify the IP
Address of the robot.
2 Verify the network connection with the robot.
To do this, use the system command at the MATLAB prompt to test network connectivity with
the robot (assume that the IP address is '192.168.1.10'):
IP = '192.168.1.10';
system(['ping ' IP],'-echo');
If the network adapter is configured correctly, you should receive a reply from the robot in
response to the ping command. If you do not receive a reply, refer to “Set Up Network Adapter
and Basic Network Ping” on page 1-4 or “Troubleshooting Manipulator Support Package” on
page 2-9.
Download MATLAB MEX APIs for Kinova Gen 3
The required functions to connect with the manipulator can be downloaded from the Kinova
Artifactory. The download page does not load properly in the Internet Explorer® or Microsoft Edge®
web browsers.
The examples provided in this release of the support package are developed and tested with the KORTEX™ API version 2.2.1 and with the manipulator firmware version 2.2.1. For detailed instructions on changing the robot firmware, refer to the user guide for the Gen3 robot.
Once the download is complete, extract the ZIP file to a preferred location and add the extracted folder and subfolders to the MATLAB path. For more information on how to add a folder to the MATLAB search path, refer to addpath.
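For example, assuming the ZIP file was extracted to a hypothetical folder C:\work\kinova-matlab-api:

addpath(genpath('C:\work\kinova-matlab-api'));
savepath;   % optional: keep the folders on the path across MATLAB sessions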
The MATLAB APIs provided by Kinova Robotics include pre-compiled MEX files. However, you can build them again (for build instructions, refer to this GitHub page of Kinova Robotics). The package includes a MEX interface function, kortexApiMexInterface, which serves as the mexFunction entry point for the C/C++ MEX functions. It also includes the MATLAB system object, kortex.
Test Basic Connection and Move the Wrist Joint
1 Import the bus definitions from the external header file.
Simulink.importExternalCTypes(which('kortex_wrapper_data.h'));
2 Create an API instance to connect to the robot
gen3 = kortex;
This creates an object of class kortex with default properties for IP address, user name and
password.
3 Modify the IP address of the robot (for example):
gen3.ip_address = '192.168.1.10';
4 Connect to the robot.
isOk = gen3.CreateRobotApisWrapper;
In the previous step, you read the current joint angles from the robot. The following command specifies that the wrist joint of the actuator be moved by 5 degrees (see the sketch after the constraint definitions below).
Specify constraints for the robot motion (if you specify 0, the default values are used):
constraintType = int32(0);
speed = 0;
duration = 0;
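A sketch of the complete read-and-move sequence described above; the SendJointAngles argument order is an assumption based on the kortex system object:

[isOk, ~, actuatorFb, ~] = gen3.SendRefreshFeedback();   % current joint angles, in degrees
jointCmd = actuatorFb.position;
jointCmd(7) = jointCmd(7) + 5;                           % offset the wrist joint by 5 degrees
isOk = gen3.SendJointAngles(wrapTo360(jointCmd), constraintType, speed, duration);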
Configure MATLAB to Acquire Images from Vision Module
After you perform the initial configuration activities as described in “Set Up IMAQ Adapter for matlab_kortex Driver” on page 1-8, you can configure MATLAB to acquire images from the vision module of the Kinova Gen3 robot.
In the MATLAB command window, use the imaqregister command to register the adaptor from
KINOVA Robotics with Image Acquisition Toolbox. This step needs to be performed only once.
• For the Windows operating system, the library file is 'kinova_vision_imaq.dll' and it is located in the installation folder. For example, if the installation folder is 'C:\Program Files\Kinova\Vision Imaq', then the command to register the adaptor is:
imaqregister('C:\Program Files\Kinova\Vision Imaq\kinova_vision_imaq.dll');
a Reset the image acquisition settings:
imaqreset;
b Get information about the available image acquisition hardware
imaqhwinfo;
• Windows OS
a Create a video input object:
vid1 = videoinput('kinova_vision_imaq', 1, 'RGB24');
vid1.FramesPerTrigger = 1;
b Search the video source objects associated with the video input object and return the video source object:
src1 = getselectedsource(vid1);
c Change device properties
src1.CloseConnectionOnStop = 'Enabled';
src1.Ipv4Address = '192.168.1.10';
d Preview the video object
preview(vid1);
e Close the preview and delete the video object
closepreview(vid1);
delete(vid1);
• The only difference in the workflow for the Linux operating system is the name of the adaptor. For Linux, the adaptor name is libkinova_vision_imaq instead of kinova_vision_imaq for Windows.
• Perform the remaining steps, which are the same as those mentioned under the Windows operating system.
Clear Existing Faults
• If you are using a gamepad or joystick connected to the Kinova Gen3 robot, press the Left Bumper button to clear any existing faults of the robot.
• You can open a browser and enter the IP address of the robot in the address bar to open the KINOVA® KORTEX™ Web App. Log in to the robot and navigate to the error symbol at the top right corner. Click the Error symbol and then click Clear All.
Troubleshooting Manipulator Support Package
• Test the connection with the robot using KINOVA® KORTEX™ Web App.
• Ensure that the robot is powered on.
• Test the setup by controlling the robot with a game pad.
• Check the IP address of the robot.
• If you are connecting using Ethernet, ensure that the Ethernet cable is properly connected to your
computer and the robot.
• Ensure that the proper network adapter is selected and configured to communicate with the
robot.
• Clear any existing faults using the gamepad or the KINOVA® KORTEX™ Web App. For more information, refer to “Clear Existing Faults” on page 2-8.
Perform any of these actions to troubleshoot image acquisition issues with the Kinova Gen3 robot:
To resolve this issue, wait for at least 30 seconds before initiating the next image acquisition. If
the error persists, then power-cycle the robot.
Precomputed Joint Trajectories
• The Gen3 robot accepts the precomputed trajectories with certain restrictions to angular velocity
and acceleration of each joint as well as on the time stamp of each input.
• The initial timestamp of each trajectory should be exactly zero seconds, and the current position of the robot should exactly match the starting point of the trajectory.
• Successive trajectory points should have a time increment of exactly 1 millisecond, and the overall trajectory should not be more than 30 seconds long, that is, 30,001 trajectory points.
• Joint positions for all actuators must be within the specified joint limits throughout the entire
trajectory.
• Similarly, angular velocity and acceleration for each joint should be within the specified limits
throughout the entire trajectory.
• For more information on the limits of individual joint angle, velocity and acceleration, refer to the
user guide of Kinova Gen3 robot.
• For more information on the limitations of SendPreComputedTrajectory, refer to this GitHub page.
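A minimal MATLAB sketch that checks the timing constraints listed above before a trajectory is sent (the variable name timestamp is an assumption matching the examples later in this guide):

% timestamp: vector of trajectory sample times in seconds
assert(timestamp(1) == 0, 'Trajectory must start at exactly 0 s.');
assert(all(abs(diff(timestamp) - 0.001) < 1e-9), 'Samples must be exactly 1 ms apart.');
assert(numel(timestamp) <= 30001, 'Trajectory must not exceed 30 s (30,001 points).');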
Getting Started with ROS Connectivity
Products Required
• Robotics System Toolbox
• ROS Toolbox
The robot can be connected to MATLAB using different methods depending on the communication interface provided by the robot manufacturer. For the Kinova Gen3 robot arm, Kinova provides a MATLAB MEX API and a ROS API that use the Ethernet port or Wi-Fi to communicate with the robot.
For more information on MATLAB MEX API, see “Download MATLAB MEX APIs for Kinova Gen 3” on
page 2-3. To configure the host computer for ROS API, see “Install ROS Packages and Dependencies
for ROS” on page 1-15 or “Install ROS Packages and Dependencies for ROS on External Computer”
on page 1-17.
This example uses the ROS API provided by KINOVA Robotics to connect the Gen3 robot to a ROS network and to perform various common operations using ROS and MATLAB. For more information on the ros_kortex ROS packages, see the GitHub page of KINOVA Robotics.
Ensure that the robot hardware is set up completely and connected to the host computer before proceeding further. For more information, see “Verify Connection to Kinova Gen 3 Robot” on page 2-2.
1 “Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Control the Robot” on
page 2-12
2 “Connect to ROS Network” on page 2-13
3 “Read Joint Angles” on page 2-14
4 “Manipulate Individual Joint Angle Using MATLAB” on page 2-15
5 “Control Cartesian Pose Using MATLAB” on page 2-16
6 “Send Precomputed Trajectory to the Robot” on page 2-17
Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Control the Robot
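The required ROS nodes are typically started from the Linux terminal with a roslaunch command; a sketch assuming the ros_kortex 2.2.1 defaults (adjust the launch arguments for your robot's IP address and name):

roslaunch kortex_driver kortex_driver.launch ip_address:=192.168.1.10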
If the hardware setup and the IP address are correct, a message 'The Kortex driver has been
initialized correctly!' displays on the terminal window.
Connect to ROS Network
Initialize MATLAB global node and connect to the existing ROS network.
rosinit;
Execute the following commands in MATLAB sequentially to read joint angles from the robot. As explained on the KINOVA GitHub page, the first part of the ROS topic, 'my_gen3', might be different based on the robot name set during the roslaunch command.
Create a subscriber for the ROS topic /my_gen3/joint_states and receive the latest message.
jSub = rossubscriber('/my_gen3/joint_states');
jMsg = receive(jSub);
angles = int16(jMsg.Position(1:7)'.*180/pi);
disp(angles);
Manipulate Individual Joint Angle Using MATLAB
Create a service client with an empty message to control the individual joint angles.
[jointSrv,jointRequest] = rossvcclient('/my_gen3/base/play_joint_trajectory');
Populate the blank request message with the desired joint angle values (see the sketch below). Use the default type and value for the constraints. This sets the constraint type to an angular velocity constraint and uses the default limit for the maximum angular velocity.
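A sketch of populating the request; the field names are assumptions based on the kortex_driver message definitions (verify them with rosmsg show kortex_driver/ConstrainedJointAngles):

jointCmd = wrapTo360(jMsg.Position(1:7)'*180/pi);   % reuse the angles read earlier
jointCmd(7) = jointCmd(7) + 10;                     % for example, offset the wrist joint
for idx = 1:7
    jAngle = rosmessage('kortex_driver/JointAngle');
    jAngle.JointIdentifier = idx - 1;
    jAngle.Value = jointCmd(idx);
    jointRequest.Input.JointAngles.JointAngles_(idx) = jAngle;
end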
call(jointSrv,jointRequest);
Control Cartesian Pose Using MATLAB
baseSub = rossubscriber('/my_gen3/base_feedback');
baseMsg = baseSub.LatestMessage;
Set the offset from the current Cartesian pose. The first three values correspond to the offsets in the X, Y, and Z positions, respectively, in meters. The last three values correspond to the offsets in the rotation angles about the X, Y, and Z axes, respectively, in degrees.
Create a service client and an empty ROS message for the ROS service /my_gen3/base/
play_cartesian_trajectory.
[cartTrajSrv,cartTrajMsg] = rossvcclient('/my_gen3/base/play_cartesian_trajectory');
Set the desired Cartesian pose using the existing pose and offsets.
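A sketch of the two steps above; the poseOffset values and the ToolPose field names are assumptions (the field names follow the kortex_driver/BaseCyclic_Feedback message):

poseOffset = [0.05 0 0.05 0 0 0];   % [X Y Z] in meters, [ThetaX ThetaY ThetaZ] in degrees
cartTrajMsg.Input.TargetPose.X = baseMsg.Base.ToolPoseX + poseOffset(1);
cartTrajMsg.Input.TargetPose.Y = baseMsg.Base.ToolPoseY + poseOffset(2);
cartTrajMsg.Input.TargetPose.Z = baseMsg.Base.ToolPoseZ + poseOffset(3);
cartTrajMsg.Input.TargetPose.ThetaX = baseMsg.Base.ToolPoseThetaX + poseOffset(4);
cartTrajMsg.Input.TargetPose.ThetaY = baseMsg.Base.ToolPoseThetaY + poseOffset(5);
cartTrajMsg.Input.TargetPose.ThetaZ = baseMsg.Base.ToolPoseThetaZ + poseOffset(6);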
Set the velocity constraints on the translation speed and rotational speed.
speedConstraint = rosmessage('kortex_driver/CartesianSpeed');
speedConstraint.Translation = 0.5; % m/s
speedConstraint.Orientation = 45; % deg/s
cartTrajMsg.Input.Constraint.OneofType.Speed = speedConstraint;
call(cartTrajSrv,cartTrajMsg);
Send Precomputed Trajectory to the Robot
The expected motion of the robot is the same as shown in the example “Generate a Trajectory Using a Set of Waypoints for KINOVA Gen3 Robot End-Effector” on page 2-37.
Define the waypoint offsets and the time instances at which to reach each waypoint.
waypointTimes = [0 7 14 21 28];
waypointOffsets = [0 0 0 0 0 0;
-0.1 0.2 0.2 pi/4 0 0;
-0.2 0 0.1 0 0 pi/4;
-0.1 -0.2 0.2 -pi/4 0 0;
0 0 0 0 0 0]; % [X Y Z yaw pitch roll]
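The loop below also needs the task-space waypoints and a seed joint configuration; a sketch under assumed values (in practice, jAngs and the current pose come from the robot's feedback):

jAngs = [0 15 180 -130 0 55 90]'*pi/180;           % assumed current joint configuration (rad)
currentPose = [0.45 0 0.43 0 0 0];                 % assumed current [X Y Z yaw pitch roll]
taskWaypoints = currentPose + waypointOffsets;     % one row per waypoint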
Calculate the joint configuration for each waypoint using inverse kinematics.
ik = inverseKinematics('RigidBodyTree',gen3);
for idx = 1:numel(waypointTimes)
ikGoal = trvec2tform(taskWaypoints(idx,1:3)) * eul2tform(taskWaypoints(idx,4:6),'ZYX');
ikSoln = ik('EndEffector_Link',ikGoal,[1 1 1 1 1 1],jAngs);
jointWaypoints(:,idx) = ikSoln(1:7);
end
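Sending the trajectory requires a ROS action client and a goal message; a sketch, assuming the follow-joint-trajectory action exposed by ros_kortex (the action name varies with the robot name and driver version):

[trajAct,trajGoal] = rosactionclient('/my_gen3/gen3_joint_trajectory_controller/follow_joint_trajectory');
waitForServer(trajAct);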
jointNames = {'joint_1','joint_2','joint_3','joint_4','joint_5','joint_6','joint_7'};
packageJointTrajectory(trajGoal,jointNames,q,qd,qdd,trajTimes)
sendGoal(trajAct,trajGoal);
function packageJointTrajectory(msg,jointNames,q,qd,qdd,trajTimes)
% Initialize values
N = numel(trajTimes);
numJoints = size(q,1);
zeroVals = zeros(numJoints,1); %#ok<NASGU>
% Assign the joint names and trajectory points. The points loop is a
% reconstruction based on the standard control_msgs/FollowJointTrajectory
% goal message; verify the field names against your generated messages.
msg.Trajectory.JointNames = jointNames;
for idx = 1:N
    pt = rosmessage('trajectory_msgs/JointTrajectoryPoint');
    pt.Positions = q(:,idx);
    pt.Velocities = qd(:,idx);
    pt.Accelerations = qdd(:,idx);
    pt.TimeFromStart = rosduration(trajTimes(idx));
    msg.Trajectory.Points(idx) = pt;
end
% Assign constraints
for idx = 1:numJoints
    msg.GoalTolerance(idx) = rosmessage('control_msgs/JointTolerance');
    msg.GoalTolerance(idx).Name = jointNames{idx};
    msg.GoalTolerance(idx).Position = 0;
    msg.GoalTolerance(idx).Velocity = 0.1;
    msg.GoalTolerance(idx).Acceleration = 0.1;
end
end
Connect to the Kinova Gen3 Robot and Initiate Required ROS Nodes to Acquire Image Data from Vision Sensor
If the hardware setup and provided IP address are correct, a message 'The stream has started'
displays on the terminal window for color and depth sensors.
Initialize MATLAB global node and connect to the existing ROS network.
rosinit;
rgbImgSub = rossubscriber('/camera/color/image_rect_color/compressed');
depthImgSub = rossubscriber('/camera/depth/image');
Read RGB and depth images and plot them in a single subplot.
subplot(2,1,1)
rgbImg = readImage(rgbImgSub.LatestMessage);
imshow(rgbImg)
title('Color Image')
subplot(2,1,2)
depthImg = readImage(depthImgSub.LatestMessage);
imshow(depthImg,[0 4])
title('Depth Image')
drawnow;
rosshutdown;
Track Pre-Computed Trajectory of Kinova Gen3 Robot End-Effector Using Inverse Kinematics and KINOVA KORTEX MATLAB API
This example shows how to solve inverse kinematics problems using Rigid Body Trees and move the
Kinova Gen3 7-DoF Ultralightweight Robot arm to follow the desired trajectory.
Clear all output in the live script by right-clicking anywhere in the live script and then clicking 'Clear All Output', or navigate to VIEW > Clear All Output.
Required Products
Create a Rigid Body Tree for Gen3 robot, set the home configuration and calculate the transformation
at the home configuration:
gen3 = loadrobot("kinovaGen3");
gen3.DataFormat = 'column';
q_home = [0 15 180 -130 0 55 90]'*pi/180;
eeName = 'EndEffector_Link';
T_home = getTransform(gen3, q_home, eeName);
show(gen3,q_home);
axis auto;
view([60,10]);
The inverseKinematics System object™ creates an inverse kinematic (IK) solver to calculate joint
configurations for a desired end-effector pose based on a specified rigid body tree model.
ik = inverseKinematics('RigidBodyTree', gen3);
Disable random restarts for the inverse kinematics solver. The AllowRandomRestart parameter indicates whether random restarts are allowed. Random restarts are triggered when the algorithm approaches a solution that does not satisfy the constraints; a randomly generated initial guess is then used.
ik.SolverParameters.AllowRandomRestart = false;
Specify weight priorities for pose tolerances, as a six-element vector. The first three elements
correspond to the weights on the error in orientation for the desired pose. The last three elements
correspond to the weights on the error in xyz position for the desired pose.
weights = [1, 1, 1, 1, 1, 1];
Since the desired trajectory is selected to start near the home position, define the initial guess for the solver as the home position of the robot.
q_init = q_home;
This section helps you create a circular trajectory for the end effector to track. Keep the orientation of the end effector constant for the whole range of motion. Define the center and radius of the circular trajectory.
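The center and radius values below are assumptions chosen to keep the circle within the arm's workspace:

center = [0.5 0 0.4];   % circle center [X Y Z] in meters (assumed)
radius = 0.1;           % circle radius in meters (assumed)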
Define total time and time step, and based on that generate waypoints for the circular trajectory.
dt = 0.25;
t = (0:dt:10)';
theta = t*(2*pi/t(end))-(pi/2);
points = center + radius*[0*ones(size(theta)) cos(theta) sin(theta)];
hold on;
plot3(points(:,1),points(:,2),points(:,3),'-*g', 'LineWidth', 1.5);
xlabel('x');
ylabel('y');
zlabel('z');
axis auto;
view([60,10]);
grid('minor');
The inverse kinematics solver finds a joint configuration that achieves the specified end-effector pose.
Specify an initial guess for the configuration and the desired weights on the tolerances for the six
components of pose. The inverse kinematics solver returns information about its convergence and it
is recommended to check the status of the solution.
This section helps you find a joint configuration for a fixed end-effector orientation and a variable position defined by the waypoints calculated in the previous sections. The current solution is used as the initial guess for the next waypoint to ensure a smooth and continuous trajectory. Uncomment the display command to see the status of each inverse kinematics iteration.
numJoints = size(q_home,1);
numWaypoints = size(points,1);
qs = zeros(numWaypoints,numJoints);
for i = 1:numWaypoints
T_des = T_home;
T_des(1:3,4) = points(i,:)';
[q_sol, q_info] = ik(eeName, T_des, weights, q_init);
% Store the solution and use it as the initial guess for the next waypoint
qs(i,:) = q_sol(1:numJoints)';
q_init = q_sol;
end
figure; set(gcf,'Visible','on');
ax = show(gen3,qs(1,:)');
ax.CameraPositionMode='auto';
hold on;
% Plot waypoints
plot3(points(:,1),points(:,2),points(:,3),'-g','LineWidth',2);
axis auto;
view([60,10]);
grid('minor');
hold on;
Expected Motion of the robot (Assuming you start from the retract position)
Press 'y' and then press the 'Enter' key on the keyboard if you want to send commands to the Kinova Gen3 robot to track the calculated trajectory, or press 'Enter' to finish the example.
prompt = 'Do you want to send same trajectory to the hardware? y/n [n]: ';
str = input(prompt,'s');
if isempty(str)
str = 'n';
end
if str == 'n'
clear;
return;
end
As explained in the example “Connect to Kinova Gen3 Robot and Manipulate the Arm Using MATLAB” on page 2-29, the MATLAB API for the Kinova Gen3 robot supports multiple modes to command the robot. The high-level commands to reach a desired joint configuration or Cartesian pose cannot be used to follow a smooth trajectory because the robot always reaches the destination before processing the next command. Hence, the end-effector velocity between successive commands reaches zero, which results in choppy motion. Pre-computed joint trajectories can be used to command the robot to track a set of waypoints, ensuring smooth motion.
A valid pre-computed joint trajectory is a set of timestamp, angular position, angular velocity, and
angular acceleration for each joint at each waypoint. You must adhere to certain constraints while
calculating the trajectory. For more information, see SendPreComputedTrajectory and Precomputed
Joint Trajectories.
Calculate joint velocity and acceleration at each waypoint using numerical differentiation.
qs_deg = qs*180/pi;
vel = diff(qs_deg)/dt;
vel(1,:) = 0;
vel(end+1,:) = 0;
acc = diff(vel)/dt;
acc(1,:) = 0;
acc(end+1,:) = 0;
Interpolate the joint position, velocity, and acceleration to ensure a 0.001-second time step between two trajectory points.
timestamp = 0:0.001:t(end);
qs_deg = interp1(t,qs_deg,timestamp);
vel = interp1(t,vel,timestamp);
acc = interp1(t,acc,timestamp);
Simulink.importExternalCTypes(which('kortex_wrapper_data.h'));
gen3Kinova = kortex();
gen3Kinova.ip_address = '192.168.1.10';
isOk = gen3Kinova.CreateRobotApisWrapper();
if isOk
disp('You are connected to the robot!');
else
error('Failed to establish a valid connection!');
end
Note that the valid range of input for SendJointAngles is 0 to 360 degrees, while the computed angles are in a range of -180 to 180 degrees. Hence, the joint angles must be wrapped to 0 to 360 degrees.
jointCmd = wrapTo360(qs_deg(1,:));
constraintType = int32(0);
speed = 0;
duration = 0;
% Reconstructed call; the argument order is assumed from the kortex system object
isOk = gen3Kinova.SendJointAngles(jointCmd, constraintType, speed, duration);
if ~isOk
    disp('SendJointAngles command error.');
    return;
end
while 1
[isOk,~, actuatorFb, ~] = gen3Kinova.SendRefreshFeedback();
show(gen3, ((actuatorFb.position)*pi/180)','PreservePlot',false);
drawnow;
if isOk
if max(abs(wrapTo360(qs_deg(1,:))-actuatorFb.position)) < 0.1
disp('Starting point reached.')
break;
end
else
error('SendRefreshFeedback error')
end
end
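At this point, send the interpolated trajectory to the robot. A sketch of the call; the argument order is an assumption based on the kortex system object:

isOk = gen3Kinova.SendPreComputedTrajectory(qs_deg.', vel.', acc.', timestamp, size(qs_deg,1));
if ~isOk
    error('SendPreComputedTrajectory command error.');
end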
while 1
[isOk,~, actuatorFb, ~] = gen3Kinova.SendRefreshFeedback();
show(gen3, ((actuatorFb.position)*pi/180)','PreservePlot',false);
drawnow;
if isOk
if max(abs(wrapTo360(qs_deg(end,:))-actuatorFb.position)) < 0.1
disp('End Point reached.')
break;
end
else
error('SendRefreshFeedback error')
end
end
isOk = gen3Kinova.DestroyRobotApisWrapper();
Clear workspace
clear;
Connect to Kinova Gen3 Robot and Manipulate the Arm Using MATLAB
This example demonstrates how to connect the Kinova Gen3 7-DoF Ultralightweight Robot arm to
MATLAB® and Simulink®. This also helps you to get feedback from the robot and to send control
commands.
Note: Execute the script section by section as mentioned in this example. Also, keep the E-
STOP button close to you to stop any unnecessary movement of the robot.
Depending on the communication interface provided by the robot manufacturer, the robot can be connected to MATLAB using different methods. For the Kinova® Gen3 robot arm, Kinova provides a C++ API that uses the Ethernet port or Wi-Fi® to communicate with the robot. For more details, see “Download MATLAB MEX APIs for Kinova Gen 3” on page 2-3.
You can utilize the MEX wrapper provided by Kinova Robotics or use the system object to control the robot. Using the system object allows you to utilize the same architecture for the Simulink implementation. In the following sections, the system object is used in the MATLAB script to communicate with the robot. For more information on the kortex system object, see the GitHub page of Kinova Robotics.
Before proceeding further, ensure that the robot hardware is set up completely and connected to the host computer. For more details, see Verify Connection to KINOVA Gen 3 Robot.
Clear all output in the live script by right-clicking anywhere in the live script and then clicking 'Clear All Output', or navigate to VIEW > Clear All Output.
Required Products
Create an API instance using your robot's IP address to establish connection with the robot. You can
create multiple instances that connect to the same robot or to multiple robots.
Simulink.importExternalCTypes is used to import certain bus definitions from the external
header file. This has to be done explicitly only in the MATLAB scripts. For Simulink models, the same
command can be added in the Model Callbacks.
Simulink.importExternalCTypes(which('kortex_wrapper_data.h'));
gen3Kinova = kortex();
gen3Kinova.ip_address = '192.168.1.10';
gen3Kinova.user = 'admin';
gen3Kinova.password = 'admin';
Use the created API instance to connect to the robot. While connecting, each MATLAB script must
include a function call to CreateRobotApisWrapper at the beginning and
DestroyRobotApisWrapper at the end of the script. Connect with the robot using
the CreateRobotApisWrapper command, and if the connection is successful, the return flag isOk
should be true.
isOk = gen3Kinova.CreateRobotApisWrapper();
if isOk
disp('You are connected to the robot!');
else
error('Failed to establish a valid connection!');
end
If the connection fails, see “Troubleshooting Manipulator Support Package” on page 2-9.
The Kinova Gen3 robot contains multiple sensor suites across the base, actuator, and end-effector.
SendRefreshFeedback provides the data from these sensors in three structures corresponding to the
base, the actuators, and the gripper (interconnect).
If the robot and MATLAB are configured correctly, the actuator feedback consists of numerically consistent nonzero values. Uncomment the display commands to see the values.
[isOk, baseFb, actuatorFb, interconnectFb] = gen3Kinova.SendRefreshFeedback();
if isOk
    % disp('Base feedback');
    % disp(baseFb);
    % disp('Actuator feedback');
    % disp(actuatorFb);
    % disp('Gripper feedback');
    % disp(interconnectFb);
else
error('Failed to acquire sensor data.');
end
The MATLAB API for Kinova Gen3 robot supports these modes for sending commands to the robot:
• High-Level commands: Use this mode to send the robot to a desired configuration, move the
gripper, or to send a Cartesian command. The robot will internally interpolate to reach the desired
configuration and there is no self-collision checking.
• Prescribed trajectories: Use this method if you need the robot to follow a smooth trajectory
(example: motion planning frameworks). You need to send an array of joint positions, velocities,
and accelerations. For more information, see SendPreComputedTrajectory and the example,
“Track Pre-Computed Trajectory of Kinova Gen3 Robot End-Effector Using Inverse Kinematics and
KINOVA KORTEX MATLAB API” on page 2-20.
High-Level Commands
Expected Motion of the robot (Assuming you are starting from the retract position)
This section helps you send the robot to the home configuration. The SendJointAngles command can be used to move the robot to a particular position defined by a set of joint angles. Since the Kinova Gen3 robot has seven actuators, the input to the function is a vector of size 1-by-7. The valid range of each joint angle is 0 to 360 degrees.
jointCmd = [0 15 180 230 0 55 90];
constraintType = int32(0);
speed = 0;
duration = 0;
% Reconstructed call; the argument order is assumed from the kortex system object
isOk = gen3Kinova.SendJointAngles(jointCmd, constraintType, speed, duration);
if isOk
disp('Command sent to the robot. Wait for the robot to stop moving.');
else
disp('Command error.');
end
Command sent to the robot. Wait for the robot to stop moving.
Command sent to the robot. Wait for the robot to stop moving.
If your robot arm is equipped with a Robotiq 2F-85 gripper, execute the following code to close and open the gripper using a velocity command. Here, toolCommand depicts the type of control mode used; for velocity control, the value should be set to 2. Also, toolCmd sets the speed of opening or closing the gripper. For more information on SendToolCommand, see the GitHub page from Kinova Robotics.
toolCommand = int32(2); % Velocity control mode
toolDuration = 0;
toolCmd = -1; % Close the gripper with full speed
isOk = gen3Kinova.SendToolCommand(toolCommand, toolDuration, toolCmd);
if isOk
disp('Command sent to the gripper. Wait for the gripper to close.')
else
error('Command Error.');
end
Command sent to the gripper. Wait for the gripper to close.
toolCmd = 0;
isOk = gen3Kinova.SendToolCommand(toolCommand, toolDuration, toolCmd);
if isOk
disp('Command sent to the gripper. Wait for the gripper to stop moving.')
else
error('Command Error.');
end
Command sent to the gripper. Wait for the gripper to stop moving.
You can also send Cartesian commands to the robot using the function SendCartesianPose. The robot
solves the inverse kinematics internally and calculates the joint angles required to achieve the
position.
The orientation is represented with Euler angles and the convention used is Tait-Bryan, extrinsic
ZYX.
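A sketch of such a command; the cartCmd values and the argument layout (pose vector followed by the same constraint arguments as SendJointAngles) are assumptions based on the kortex system object:

cartCmd = [0.4 0.2 0.35 90 0 90];   % [X Y Z] in meters, [ThetaX ThetaY ThetaZ] in degrees
constraintType = int32(0);
speed = 0;
duration = 0;
isOk = gen3Kinova.SendCartesianPose(cartCmd, constraintType, speed, duration);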
if isOk
disp('Command sent to the robot. Wait for robot to finish motion and stop');
else
error('Command Error.');
end
Command sent to the robot. Wait for robot to finish motion and stop
Command sent to the robot. Wait for robot to finish motion and stop
Command sent to the robot. Wait for robot to finish motion and stop
isOk = gen3Kinova.DestroyRobotApisWrapper();
Clear workspace
clear;
Generate a Trajectory Using a Set of Waypoints for KINOVA Gen3 Robot End-Effector
This example shows how to generate and interpolate trajectories from a set of waypoints. This
example covers common ways of generating trajectories for robot arms such as cubic polynomials,
quintic polynomials, and trapezoidal trajectories.
Before proceeding further, see the example “Track Pre-Computed Trajectory of Kinova Gen3 Robot End-Effector Using Inverse Kinematics and KINOVA KORTEX MATLAB API” on page 2-20. It provides information on inverse kinematics for robotic manipulators and on sending commands to the Kinova® Gen3 robot to track pre-computed trajectories.
Clear all output in the live script by right-clicking anywhere in the live script and then clicking 'Clear All Output', or navigate to VIEW > Clear All Output.
Create a Rigid Body Tree for Gen3 robot, set the home configuration and calculate the transformation
at the home configuration:
gen3 = loadrobot("kinovaGen3");
gen3.DataFormat = 'column';
q_home = [0 15 180 -130 0 55 90]'*pi/180;
eeName = 'EndEffector_Link';
T_home = getTransform(gen3, q_home, eeName);
T_home(1:4,4) = [0;0;0;1];
show(gen3,q_home);
axis auto;
view([60,10]);
Define orientation.
The orientation is represented in Euler angles and the convention used is the Tait-Bryan, extrinsic
ZYX.
orientations = [pi/2 0 pi/2;
(pi/2)+(pi/4) 0 pi/2;
pi/2 0 pi;
(pi/2)-(pi/4) 0 pi/2;
pi/2 0 pi/2]';
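The waypoint positions and timing are defined next; the following values are assumptions (columns are [X;Y;Z] waypoints in meters), with waypointTimes mirroring the values used elsewhere in this chapter:

waypointTimes = [0 7 14 21 28];
waypoints = [0.4  0.3  0.2  0.3  0.4;
             0    0.2  0   -0.2  0;
             0.3  0.5  0.4  0.5  0.3];
trajTimes = 0:0.025:waypointTimes(end);   % assumed trajectory sample times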
waypointVels = 0.1 *[ 0 1 0;
-1 0 0;
0 -1 0;
1 0 0;
0 1 0]';
waypointAccels = zeros(size(waypointVels));
waypointAccelTimes = diff(waypointTimes)/4;
ik = inverseKinematics('RigidBodyTree',gen3);
ikWeights = [1 1 1 1 1 1];
ikInitGuess = q_home';
ikInitGuess(ikInitGuess > pi) = ikInitGuess(ikInitGuess > pi) - 2*pi;
ikInitGuess(ikInitGuess < -pi) = ikInitGuess(ikInitGuess < -pi) + 2*pi;
Set includeOrientation to false if you do not want to define a different orientation at each waypoint. When includeOrientation is set to false, a default orientation is used to compute the trajectory.
includeOrientation = true;
numWaypoints = size(waypoints,2);
numJoints = numel(gen3.homeConfiguration);
jointWaypoints = zeros(numJoints,numWaypoints);
for idx = 1:numWaypoints
if includeOrientation
tgtPose = trvec2tform(waypoints(:,idx)') * eul2tform(orientations(:,idx)');
else
tgtPose = trvec2tform(waypoints(:,idx)') * eul2tform([pi/2 0 pi/2]); %#ok<UNRCH>
end
[config,info] = ik(eeName,tgtPose,ikWeights',ikInitGuess');
jointWaypoints(:,idx) = config';
end
Select method of interpolation from trapezoidal velocity profiles, third-order polynomial, fifth-order
polynomial or B-spline polynomial by modifying the variable trajType.
trajType = 'trap';
switch trajType
case 'trap'
[q,qd,qdd] = trapveltraj(jointWaypoints,numel(trajTimes), ...
'AccelTime',repmat(waypointAccelTimes,[numJoints 1]), ...
'EndTime',repmat(diff(waypointTimes),[numJoints 1]));
case 'cubic'
[q,qd,qdd] = cubicpolytraj(jointWaypoints,waypointTimes,trajTimes, ...
'VelocityBoundaryCondition',zeros(numJoints,numWaypoints));
case 'quintic'
[q,qd,qdd] = quinticpolytraj(jointWaypoints,waypointTimes,trajTimes, ...
'VelocityBoundaryCondition',zeros(numJoints,numWaypoints), ...
'AccelerationBoundaryCondition',zeros(numJoints,numWaypoints));
case 'bspline'
ctrlpoints = jointWaypoints; % Can adapt this as needed
[q,qd,qdd] = bsplinepolytraj(ctrlpoints,waypointTimes([1 end]),trajTimes);
% Remove the first velocity sample
qd(:,1) = zeros (7,1);
otherwise
error('Invalid trajectory type! Use ''trap'', ''cubic'', ''quintic'', or ''bspline''');
end
% Animate the trajectory; a sketch assuming the rigid body tree visualization used earlier
for idx = 1:numel(trajTimes)
    config = q(:,idx)';
    show(gen3, config', 'Frames', 'off', 'PreservePlot', false);
    drawnow;
end
Expected motion of the robot (Assuming you are starting from the retract position)
Press 'y' and press 'Enter' if you want to command the Kinova Gen3 robot to track the calculated trajectory, or press 'Enter' to finish the example.
prompt = 'Do you want to send same trajectory to the hardware? y/n [n]: ';
str = input(prompt,'s');
if isempty(str)
str = 'n';
end
if str == 'n'
clear;
return;
end
q = q*180/pi;
qd = qd*180/pi;
qdd = qdd*180/pi;
timestamp = (0:0.001:waypointTimes(end))';
qs_deg = interp1(trajTimes',q',timestamp);
vel = interp1(trajTimes',qd',timestamp);
acc = interp1(trajTimes',qdd',timestamp);
Simulink.importExternalCTypes(which('kortex_wrapper_data.h'));
gen3Kinova = kortex();
gen3Kinova.ip_address = '192.168.1.10';
gen3Kinova.user = 'admin';
gen3Kinova.password = 'admin';
isOk = gen3Kinova.CreateRobotApisWrapper();
if isOk
disp('You are connected to the robot!');
else
error('Failed to establish a valid connection!');
end
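Send the interpolated trajectory before polling the movement status; a sketch, with the argument order assumed from the kortex system object:

isOk = gen3Kinova.SendPreComputedTrajectory(qs_deg.', vel.', acc.', timestamp.', size(qs_deg,1));
if ~isOk
    error('SendPreComputedTrajectory command error.');
end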
status = 1;
%% Check if the robot has reached the end position
while status
[isOk,~, actuatorFb, ~] = gen3Kinova.SendRefreshFeedback();
show(gen3, ((actuatorFb.position)*pi/180)','Frames','off','PreservePlot',false);
drawnow;
if isOk
[~,status] = gen3Kinova.GetMovementStatus();
else
error('SendRefreshFeedback error');
end
end
isOk = gen3Kinova.DestroyRobotApisWrapper();
Clear Workspace.
clear;
Control KINOVA Gen3 Robotic Manipulator Using KINOVA KORTEX System Object and Simulink
This example shows how to connect to the KINOVA® Gen3 7-DoF Ultralightweight Robot arm with
Simulink®. It includes blocks to get feedback from the robot and send control commands.
Introduction
Robotics System Toolbox™ Support Package for KINOVA Gen3 Manipulators enables you to control manipulators using MATLAB® and Simulink. This support package utilizes APIs provided by robot manufacturers to acquire various sensor data, simulate robot models, and control the robot. Prototype algorithms and perform simulations of these robots using rigid body tree models from Robotics System Toolbox or Simscape™ Multibody™ robot models. Connect with the robot hardware to test and validate your algorithms.
In this example, KORTEX™ system object from KINOVA is used to connect and control the Gen3
robot.
Prerequisites
Model
The general structure of the model is shown in the figure above. The model consists primarily of two system objects: Trajectory Feeder and KORTEX Main. The KORTEX Main system is associated with the kortex system object provided by KINOVA Robotics. The input and output structure of the system object is pre-defined; for more information on the interface, see the kortex system object.
The KORTEX Main system object receives a command to process and various setpoints from the Trajectory Feeder system. The robot-specific parameters need to be set before running the Simulink model. The Trajectory Feeder system waits for the robot to stop moving and then sends the next set of commands. The expected robot motion is similar to the one mentioned in “Connect to Kinova Gen3 Robot and Manipulate the Arm Using MATLAB” on page 2-29.
The Trajectory Feeder system object consists of a set of waypoints that the robot follows. Refer to the setupImpl method of the system object to modify the waypoints.
Note: Ensure that the E-STOP is close to you to stop the movement of robot, as the robot moves
according to the specified list of waypoints.
After configuring the robot-specific parameters, click SIMULATE on the Simulink toolbar and then click Run. The robot starts moving based on the set of waypoints, and all the waypoints are executed.
Control Cartesian Position of KINOVA Gen3 Robot End-Effector Using Inverse Kinematics Block and KINOVA KORTEX System Object
This example shows how to solve inverse kinematics problems using Rigid Body Trees and how to
control the KINOVA Gen3 7-DoF Ultralightweight Robot Arm to follow the desired trajectory.
Introduction
Robotics System Toolbox™ Support Package for KINOVA® Gen3 Manipulators enables you to control manipulators using MATLAB® and Simulink®. This support package utilizes APIs provided by robot manufacturers to acquire various sensor data, simulate robot models, and control the robot. Prototype algorithms and perform simulations of these robots using rigid body tree models from Robotics System Toolbox or Simscape™ Multibody™ robot models. Connect with the robot hardware to test and validate your algorithms.
In this example, the InverseKinematics System object™ from Robotics System Toolbox is used to
calculate the actuator joint angles for the specified Cartesian pose. The InverseKinematics System
object creates an inverse kinematic (IK) solver to calculate joint configurations for a desired end-
effector pose based on a specified rigid body tree model.
Prerequisites
Model
Inverse Kinematics
The inverseKinematics System object creates an inverse kinematic (IK) solver to calculate joint
configurations for a desired end-effector pose based on a specified rigid body tree model. Create a
rigid body tree model for your robot using the rigidBodyTree class. This model defines all the joint
constraints that the solver enforces. If a solution is possible, the joint limits specified in the robot
model are obeyed.
Setpoint Control
The Setpoint Control subsystem takes the desired Cartesian position and pose of the end-effector of the robot and applies a coordinate transformation to generate the desired pose for the Inverse Kinematics block. In this example, the desired orientation of the end-effector is fixed, and the Cartesian position is controlled via three input sliders. The subsystem also contains the Simulation 3D Actor block from Simulink 3D Animation™ to locate the object in the Simulink 3D world.
Input Sliders
The Simulation 3D Robot block from Robotics System Toolbox™ enables visualization of RigidBodyTree objects in the Simulink 3D world. The subsystem commanding the actual robot uses the kortex system object from KINOVA Robotics to connect to and control the Gen3 robot.
Note: The robot will move according to the specified input, so please have the E-STOP close to you. Ensure that the 'Move Robot' toggle switch is in the off position before running the model. This prevents sudden and unintended motion of the robot.
After configuring the robot-specific parameters in the kortex system object properties, navigate to the SIMULATE tab and press Run. The simulated robot moves to the position specified by the sliders, and any change in a slider position is reflected in the end-effector position of the simulated robot.
After validating the joint configuration generated by the Inverse Kinematics block, change the 'Move Robot' toggle switch to the on position. This sends the joint angles generated by the Inverse Kinematics block to the robot.
You can still use this model to control the Gen3 robot without the Simulation 3D Robot and Simulation 3D Actor blocks. Comment out the Simulation 3D Robot and Simulation 3D Actor blocks in the 'Setpoint Control' subsystem before running the model without a valid installation of Simulink 3D Animation. However, validate the joint configuration generated by the Inverse Kinematics block using Display or Scope blocks before sending it to the actual robot.
Generate Colorized Point Cloud Using Vision Module of Kinova Gen3 Robot
This example shows how to connect to the cameras of the Kinova Gen3 7-DoF Ultralightweight Robot and get the RGB and depth streams. The acquired images are used to create colorized point clouds of the environment.
Kinova® Robotics provides an Image Acquisition Toolbox™ adaptor to access the camera streams. Ensure that you have completed the installation instructions for the Image Acquisition Toolbox adaptor before proceeding with this example (for details, see “Configure MATLAB to Acquire Images from Vision Module” on page 2-6).
Clear all output in the live script by right-clicking anywhere in the live script and then clicking 'Clear All Output', or navigate to VIEW > Clear All Output.
Products Required
To start working with the RGB and depth streams of the camera, you need to know the intrinsic and
extrinsic parameters of the camera. The intrinsic parameters represent the optical center and focal
length of the camera. The extrinsic parameters represent the location of the camera in the 3-D scene.
You can get initial values from the device manufacturer, but it is recommended to run a calibration
routine to get the exact values of your device. In this example, you can use the values returned by the
manipulator to generate the colorized point cloud from the RGB and Depth image.
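As a hedged illustration, the intrinsic parameters can be packaged into a cameraIntrinsics object from Computer Vision Toolbox; the numeric values below are placeholders, not the calibrated values of any specific device:

% Placeholder focal length [fx fy] and principal point [cx cy] in pixels
focalLength    = [1297.67 1298.63];
principalPoint = [620.91 238.28];
imageSize      = [720 1280];              % [rows cols]
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);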
% Create an instance of the KORTEX API wrapper and set the robot IP address
% (the class name and the default IP address 192.168.1.10 are assumptions
% based on the KINOVA KORTEX MATLAB API)
gen3Kinova = kortex();
gen3Kinova.ip_address = '192.168.1.10';

isOk = gen3Kinova.CreateRobotApisWrapper();
if isOk
    disp('You are connected to the robot!');
else
    error('Failed to establish a valid connection!');
end
To access the RGB channel, create a video input object. A video input object represents the
connection between MATLAB® and a specific image acquisition device.
vidRGB = videoinput('kinova_vision_imaq', 1, 'RGB24');
vidRGB.FramesPerTrigger = 1;
Search all the video source objects associated with the video input object and return the video source
object.
srcRGB = getselectedsource(vidRGB);
Modify the video source objects associated with the video input object.
srcRGB.Ipv4Address = gen3Kinova.ip_address;
srcRGB.CloseConnectionOnStop = 'Enabled';
Follow the same steps to connect to the depth stream of the RGB-D camera.
vidDepth = videoinput('kinova_vision_imaq', 2, 'MONO16');
vidDepth.FramesPerTrigger = 1;
srcDepth = getselectedsource(vidDepth);
srcDepth.Ipv4Address = gen3Kinova.ip_address;
srcDepth.CloseConnectionOnStop = 'Enabled';
After connecting to the hardware, acquire images from the RGB and depth cameras. If you have already stored images on your computer, load them into the depthData and rgbData variables, respectively.
depthData = getsnapshot(vidDepth);
rgbData = getsnapshot(vidRGB);
figure;
imshow(rgbData);
figure;
h = imshow(depthData,hot);
ax = h.Parent;
h.CDataMapping = 'scaled';
ax.CLimMode = 'auto';
Using the intrinsic parameters of both cameras and the extrinsic parameters, you can convert the
RGB and depth streams to a colorized point cloud.
% Construct the point cloud object from the XYZ points and colors computed
% by the helper function (see the Function Definition section below)
pc = pointCloud(pcPnts,'Color',pcRGB);
figure;
ax2 = pcshow(pc);
axis(ax2, [-0.3, 0.3, -0.3, 0.3, 0.3, 0.6]);
view(ax2, [1,-60]);
xlabel('x');ylabel('y');zlabel('z');
if exist('vidRGB', 'var')
    if isvalid(vidRGB)
        closepreview(vidRGB);
        delete(vidRGB);
    end
end
if exist('vidDepth', 'var')
    if isvalid(vidDepth)
        closepreview(vidDepth);
        delete(vidDepth);
    end
end
Clear Workspace
clear;
Function Definition
% Convert the depth image to double and mark invalid (zero) readings as NaN
depthData = double(depthData);
depthData(depthData == 0) = nan;
[dh, dw] = size(depthData);
% Reproject each 3-D point onto the color image: compute pixel indices into
% the RGB frame from the RGB camera intrinsics (focal lengths rgb_fx, rgb_fy
% and principal point rgb_ppx, rgb_ppy)
xind = (pcPnts(:,:,1) * rgb_fx) ./ pcPnts(:,:,3) + rgb_ppx;
yind = (pcPnts(:,:,2) * rgb_fy) ./ pcPnts(:,:,3) + rgb_ppy;
% Sample each color channel at the reprojected pixel locations; validInd
% holds the valid depth pixels and rgbLinearInd their linear indices into
% the RGB image
rgbDataTemp = uint8(rgbData(:,:,1));
color_temp = zeros(dh,dw,'uint8');
color_temp(validInd) = rgbDataTemp(rgbLinearInd);
pcRGB(:,:,1) = color_temp;
rgbDataTemp = uint8(rgbData(:,:,2));
color_temp = zeros(dh,dw,'uint8');
color_temp(validInd) = rgbDataTemp(rgbLinearInd);
pcRGB(:,:,2) = color_temp;
rgbDataTemp = uint8(rgbData(:,:,3));
color_temp = zeros(dh,dw,'uint8');
color_temp(validInd) = rgbDataTemp(rgbLinearInd);
pcRGB(:,:,3) = color_temp;
end
Read Current Joint Angles from KINOVA Gen3 Robot Arm
This example shows you how to connect to the KINOVA Gen3 7-DoF Ultralightweight Robot arm with Simulink® using the Robot Operating System (ROS). The example includes a ROS Subscribe block to get feedback from the robot.
Products Required
• MATLAB®
• Simulink
• Robotics System Toolbox™
• ROS Toolbox™
Hardware Required
Introduction
Robotics System Toolbox Support Package for Manipulators enables you to control manipulators
using MATLAB and Simulink. This support package utilizes ROS packages provided by robot
manufacturers to acquire various sensor data, simulate robot models, and control the robot. You can
prototype algorithms and perform simulations of these robots using rigid body tree models from
Robotics System Toolbox™ or Simscape™ Multibody™ robot models. This support package also
allows you to connect with the robot hardware to test and validate your algorithms.
In this example, the sensor_msgs/JointState ROS message is used to get the current joint configuration of the Kinova® Gen3 robot.
Prerequisites
Model
The model consists of a ROS Subscribe block, which is configured for the /my_gen3/joint_states ROS topic. As explained on the GitHub page of Kinova Robotics, the first part of the ROS topic, 'my_gen3', might differ based on the robot name set during the roslaunch command. The joint angles and gripper position are extracted from the received message and then displayed using Display blocks.
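For reference, a minimal MATLAB sketch of the same read-back from the command line (the topic name assumes the default 'my_gen3' robot name, and the first seven Position entries are assumed to be the arm joints):

jointSub = rossubscriber('/my_gen3/joint_states','sensor_msgs/JointState');
jointMsg = receive(jointSub,10);                 % wait up to 10 s for a message
jointAnglesDeg = rad2deg(jointMsg.Position(1:7)) % seven arm joint angles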
After launching the required ROS nodes and connecting to the ROS network, click Simulate on the Simulink toolbar and then click Run. The Display blocks in Simulink display the current joint angles and gripper position of the robot.
Control Individual Joint Angle of KINOVA Gen3 Robot
This example shows you how to connect to the KINOVA Gen3 7-DoF Ultralightweight Robot arm with Simulink® using the Robot Operating System (ROS). The example includes ROS Subscribe and Publish blocks to get feedback from the robot and then send commands to the robot to reach a particular joint configuration.
Products Required
• MATLAB®
• Simulink
• Robotics System Toolbox™
• ROS Toolbox
Hardware Required
Introduction
Robotics System Toolbox Support Package for KINOVA Gen3 Manipulators enables you to control
manipulators using MATLAB and Simulink. This support package utilizes ROS packages provided by
robot manufacturers to acquire various sensor data, simulate robot models, and control the robot.
You can prototype algorithms and perform simulations of these robots using rigid body tree models
from Robotics System Toolbox or Simscape™ Multibody™ robot models. This support package also
allows you to connect with the robot hardware to test and validate your algorithms.
In this example, the sensor_msgs/JointState ROS message is used to get the current joint configuration of the Kinova Gen3 robot.
Prerequisites
Model
The model consists of a ROS Subscribe block, which is configured for the /my_gen3/joint_states ROS topic. As explained on the GitHub page of Kinova Robotics, the first part of the ROS topic, 'my_gen3', might differ based on the robot name set during the roslaunch command. The current joint angles are extracted from the received message and then converted to degrees. A user-specified offset in each joint angle is added to the current value and given to the Send Joint Trajectory subsystem.
This subsystem consists of three different blocks. The Blank ROS Message block, first from the left, is configured to create a blank message for the kortex_driver/PlayJointTrajectoryRequest ROS service request. The MATLAB Function block, packJointTraj, assigns various values, such as the desired joint configuration, type of constraint, and maximum velocity, to the blank message fields. The Call Service block then calls the corresponding ROS service to send the assembled request to the robot.
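A hedged MATLAB equivalent of this flow (the service name is an assumption based on the services exposed by the kortex_driver ROS package):

trajClient = rossvcclient('/my_gen3/base/play_joint_trajectory');
trajReq = rosmessage(trajClient);   % blank PlayJointTrajectoryRequest
% ... populate trajReq with the desired joint configuration and constraints ...
trajResp = call(trajClient,trajReq,'Timeout',5);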
Note: Ensure that the E-STOP is close to you to stop the movement of the robot, because the robot moves according to the specified offset.
After launching the required ROS nodes and connecting to the ROS network, click Simulate on the Simulink toolbar and then click Run. The robot does not move from its current position at the start because there is no change in the joint angle offset. Now change the offset values and apply the change. This triggers the Send Joint Trajectory subsystem, and the robot moves to the desired joint angle configuration.
Send KINOVA Gen3 Robot Arm to Desired Cartesian Pose
This example shows you how to connect to the KINOVA Gen3 7-DoF Ultralightweight Robot arm with Simulink® using the Robot Operating System (ROS). The example includes ROS Subscribe and Publish blocks to get feedback from the robot and then send commands to the robot to reach a particular Cartesian configuration.
Products Required
• MATLAB®
• Simulink
• Robotics System Toolbox™
• ROS Toolbox
Hardware Required
Introduction
Robotics System Toolbox Support Package for KINOVA Gen3 Manipulators enables you to control
manipulators using MATLAB and Simulink. This support package utilizes ROS packages provided by
robot manufacturers to acquire various sensor data, simulate robot models, and control the robot.
You can prototype algorithms and perform simulations of these robots using rigid body tree models
from Robotics System Toolbox or Simscape™ Multibody™ robot models. This support package also
allows you to connect with the robot hardware to test and validate your algorithms.
Prerequisites
Model
The model consists of a ROS Subscribe block, which is configured for the /my_gen3/joint_states ROS topic. As explained on the GitHub page of Kinova Robotics, the first part of the ROS topic, 'my_gen3', might differ based on the robot name set during the roslaunch command. From the received message, information about the current Cartesian pose is extracted and shown on various Display blocks. You can use these values as a reference while providing the Cartesian setpoint to the robot. The user-specified Cartesian setpoint is then given to the Send Desired Pose subsystem.
This subsystem consists of three different blocks. The Blank ROS Message block, first from the left, is configured to create a blank message for the kortex_driver/PlayCartesianTrajectoryRequest ROS service request. The MATLAB Function block, packCartesianTraj, assigns various values, such as the desired Cartesian configuration, type of constraint, and maximum velocity, to the blank message fields. The Call Service block then calls the corresponding ROS service to send the assembled request to the robot.
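As with the joint-angle model, a hedged MATLAB equivalent (the service name is again an assumption based on the kortex_driver ROS package):

cartClient = rossvcclient('/my_gen3/base/play_cartesian_trajectory');
cartReq = rosmessage(cartClient);   % blank PlayCartesianTrajectoryRequest
% ... populate cartReq with the desired Cartesian pose and constraints ...
cartResp = call(cartClient,cartReq,'Timeout',5);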
Note: Ensure that the E-STOP is close to you to stop the movement of the robot, because the robot moves according to the specified Cartesian setpoint.
After launching the required ROS nodes and connecting to the ROS network, click Simulate on the Simulink toolbar and then click Run. The robot does not move from its current position at the start because there is no change in the desired Cartesian pose. The 'Move Robot' manual switch is initially connected to a ground, so the Send Desired Pose subsystem does not trigger. Now, change the desired Cartesian pose and then change the state of the 'Move Robot' manual switch. This triggers the Send Desired Pose subsystem, and the robot moves to the desired Cartesian configuration.
Send Precomputed Trajectory to KINOVA Gen3 Robot Arm
This example shows you how to connect to the KINOVA Gen3 7-DoF Ultralightweight Robot arm with Simulink® using the Robot Operating System (ROS). The example includes ROS Subscribe and Publish blocks to get feedback from the robot and then send commands to the robot to follow a precomputed joint trajectory.
Products Required
• MATLAB®
• Simulink
• Robotics System Toolbox™
• ROS Toolbox
Hardware Required
Introduction
Robotics System Toolbox Support Package for KINOVA Gen3 Manipulators enables you to control
manipulators using MATLAB and Simulink. This support package utilizes ROS packages provided by
robot manufacturers to acquire various sensor data, simulate robot models, and control the robot.
You can prototype algorithms and perform simulations of these robots using rigid body tree models
from Robotics System Toolbox or Simscape™ Multibody™ robot models. This support package also
allows you to connect with the robot hardware to test and validate your algorithms.
In this example, the /my_gen3/joint_states ROS topic is used to get the current joint configuration of the KINOVA Gen3 robot. The follow joint trajectory custom ROS action is then used to command the robot to follow a desired precomputed joint trajectory. For more information on the advantages and limitations of precomputed trajectories, see “Track Pre-Computed Trajectory of Kinova Gen3 Robot End-Effector Using Inverse Kinematics and KINOVA KORTEX MATLAB API” on page 2-20.
Prerequisites
Model
The model consists of a ROS Subscribe block, which is configured for the /my_gen3/joint_states ROS topic. As explained on the GitHub page of Kinova Robotics, the first part of the ROS topic, 'my_gen3', might differ based on the robot name set during the roslaunch command. The current joint configuration and the Cartesian offset waypoints, along with the time to achieve each waypoint, are fed to the Package and Send Trajectory subsystem. This subsystem calculates the joint configuration for each waypoint using the Inverse Kinematics block and then calculates a smooth trajectory between waypoints using trapezoidal velocity profile interpolation. This precomputed joint trajectory command is then given to the robot using the /my_gen3/gen3_joint_trajectory_controller/follow_joint_trajectory/goal ROS action.
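For reference, a minimal sketch of the trapezoidal interpolation that underlies this step, using trapveltraj from Robotics System Toolbox (the waypoints here are placeholders, not those used in the model):

% Two example 7-DoF joint configurations as columns of the waypoint matrix
wpts = [zeros(7,1), 0.5*ones(7,1)];
% Interpolate 100 samples between them with a trapezoidal velocity profile
[q,qd,qdd] = trapveltraj(wpts,100);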
The waypoint offsets are considered from the home position. Hence, the model contains a subsystem and a push button that send the robot to the home position from its current position. The structure of this subsystem is similar to the example “Send KINOVA Gen3 Robot Arm to Desired Cartesian Pose” on page 2-66.
This subsystem consists of three primary blocks related to ROS and an auxiliary subsystem to calculate the joint configuration for each waypoint. The Blank ROS Message block is configured to create a blank message for the control_msgs/FollowJointTrajectoryActionGoal ROS action goal. The MATLAB Function block, packageTrajectory, calculates the interpolated joint angle trajectory and assigns various values to the blank message fields. Finally, the publisher block is configured for the /my_gen3/gen3_joint_trajectory_controller/follow_joint_trajectory/goal ROS action and sends commands to the robot to follow the precomputed joint angle trajectory.
This subsystem consists of an auxiliary algorithm to calculate the joint configuration for each waypoint. Primarily, it uses certain frame transformations and the Inverse Kinematics block to calculate the joint angle configuration for each Cartesian waypoint. For more information on inverse kinematics, see “Control Cartesian Position of KINOVA Gen3 Robot End-Effector Using Inverse Kinematics Block and KINOVA KORTEX System Object” on page 2-49.
Note: Ensure that the E-STOP is close to you to stop the movement of the robot, because the robot moves according to the specified waypoints.
After launching the required ROS nodes and connecting to the ROS network, click Simulate on the Simulink toolbar and then click Run. The robot does not move when the simulation starts. Press the Send To Home button to command the robot to move from its current position to the home position. Wait for the robot to stop moving completely, and then press Send Trajectory. This triggers the Package and Send Trajectory subsystem, and a smooth interpolated trajectory is calculated based on the desired waypoints. The trajectory calculation process is computationally intensive and can take a few minutes to complete. After the trajectory is calculated, it is provided to the robot, and the robot starts moving. The precomputed trajectory has to meet certain requirements to be accepted by the robot. For more information on the limitations and requirements of precomputed trajectories, see the GitHub page of Kinova Robotics. The trajectory followed by the robot with the default waypoint offsets is similar to the one in the “Generate a Trajectory Using a Set of Waypoints for KINOVA Gen3 Robot End-Effector” example on page 2-37.
Simulate a Detect and Pick Algorithm Using OpenCV Interface and Rigid Body Tree Robot Model
This example shows how to simulate an end-to-end rigid body tree workflow for a KINOVA® Gen3
robot to detect a rectangular object using OpenCV Interface, and then pick and move the object with
inverse kinematics.
This example uses an algorithm created using the Computer Vision Toolbox™ Interface for OpenCV in MATLAB to detect the object from the image (either a simulated image or a captured image) and calculate its position and orientation. Robotics System Toolbox™ is used to model, simulate, plan motion, and visualize the manipulator.
Note: Execute the script section-by-section as mentioned in this example. Do not skip any section, because each section depends on variables defined in earlier sections.
Required Products
• MATLAB®
• Robotics System Toolbox™
• Robotics System Toolbox™ Support Package for Manipulators
• Computer Vision Toolbox™ (required only if you want to use the detectObjectOCV MEX function
for object detection, provided with this example)
• Image Processing Toolbox™ (required as a dependency of Computer Vision Toolbox)
• Computer Vision Toolbox Interface for OpenCV in MATLAB (required only if you want to modify
the detectObjectOCV MEX function, provided with this example)
Create a rigid body tree model for the Kinova Gen3 robot, define the initial pose, and get the end-effector transformation.
The initial pose is defined such that the focus of the wrist-mounted camera on the Gen3 robot points exactly downward, and the camera axis is perpendicular to the surface of the bench (to be created in the next step).
% Define the initial pose by specifying the joint values (in radians) for the seven arm joints
q_home = [0 0.35 pi -1.20 0 -pi/2 pi/2]';
Create a simulation environment with a bench (to which the base of the manipulator is fixed) and a box placed on the bench.
To create the collision geometry for both objects for visualization, you can use the collisionBox function. To set the desired pose for the collision geometry, use a homogeneous transformation matrix.
Note: The object must be placed within the field-of-view limits of the camera for successful object detection. For the given camera scan position, the camera field-of-view limits are [0.365, 0.635] and [-0.239, 0.243] for the X and Y axes, respectively. The object height must be within the appropriate limit ([0.03 0.1]) as per the gripper geometry.
% Create bench mesh (X_length, Y_length, Z_length)
bench = collisionBox(1.3, 1.3, 0.05);
% Set the bench pose with a homogeneous transformation (the translation
% values here are an assumption for illustration)
bench.Pose = trvec2tform([0.5 0 -0.025]);
Visualize the robot model and the environment objects. Set the color of the bench and the box
objects.
% Close the previous simulation figure window if it is present while running this script again
close(findobj('type','figure','name','Simulation of Gen3 to Detect and Pick Object'))
show(gen3,q_home,'PreservePlot',false,'FastUpdate',true,'Frames',"off");
set(gcf,'Visible','on');
hold on
Find Camera Transformation with Reference to the Base of Kinova Gen3 Robot
In the Kinova Gen3 robot, the camera is offset by 0.041 m in the Y direction in the local frame of the end effector.
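A hedged sketch of composing the camera transform from this offset (the end-effector body name is an assumption based on the loadrobot Gen3 model; the variable names mirror those used later in this example):

% Camera transform = end-effector transform composed with the fixed Y offset
endEffectorTransf = getTransform(gen3,q_home,'EndEffector_Link');
actualCameraTransf = endEffectorTransf * trvec2tform([0 0.041 0]);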
However, if your robot is situated at some height from the reference plane, consider that value in the zDistance parameter.
Note: If you are using a different object instead of the one provided in this simulation, update the
boxHeight parameter accordingly. This value is used to calculate the location of the object in
subsequent sections.
Note: The parameters defined here are based on the vision module provided by Kinova for the Gen3
robot. If you are using any other camera module, update the camera parameters accordingly.
Select the type of image that is the input for the OpenCV object detection function. Select Simulated Image to use the simulated image, or select Camera Image if you want to use an actual image captured by the vision module camera for the simulation.
To use the Kinova vision module to capture an actual image for the simulation, you need to complete steps 1 and 2 explained in “Configure MATLAB to Acquire Images from Vision Module” on page 2-6.
selectImageType = "Simulated Image"; % drop-down selection in the live script
rgbImg = exampleHelperAcquireImage(environmentWorld,actualCameraTransf,selectImageType,cameraParams);
Note: If you want to use an existing image saved on your computer, load the image (using the imread function) into the rgbImg variable after running this section. Also, update the camera parameters accordingly and select the image type as Simulated Image in the drop-down menu so that you can proceed without a hardware connection.
Use OpenCV Object Detection Algorithm and Compute Object Location in a Global or Base
Frame of Kinova Gen3 Robot
Select Use detectObjectOCV MEX Function from the drop-down menu below to use the MEX function generated from the OpenCV file (detectObjectOCV.cpp) to detect the rectangular object and extract the position and orientation parameters in pixels and degrees, respectively.
Note: To use the detectObjectOCV MEX function, Computer Vision Toolbox is a required product. Ensure that this toolbox is installed on your system. To check the list of installed products, use the ver command in the MATLAB Command Window.
If Computer Vision Toolbox is not installed on your system, select Skip Object Detection to skip the OpenCV object detection and avoid issues related to MEX file dependencies.
selectObjectDetectionWorkflow = "OpenCVWorkflowEnabled"; % drop-down selection in the live script
The overall algorithm to detect an object's position and orientation is similar to the one explained in
“Detect Position and Orientation of a Rectangular Object Using OpenCV” on page 3-8.
Note: If you are using an actual image from the vision module or a captured custom image for the simulation, you may need to change the threshold parameters of the OpenCV object detection logic. Use the following command in the MATLAB Command Window to open the detectObjectOCV.cpp file and make the necessary changes.
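A hedged reconstruction of the command, assuming the file is located in the support package resources folder referenced later in this section:

edit(fullfile(codertarget.robotmanipulator.internal.getSpPkgRootDir, ...
    'resources','opencv_mex','detectObjectOCV.cpp'))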
If you have made changes in the OpenCV source file (detectObjectOCV.cpp), then follow these
steps to generate an updated MEX file that reflects the changes (for more information regarding
these steps, see Computer Vision Toolbox™ Interface for OpenCV in MATLAB):
The following section of code calculates the absolute position of the object in a global frame of reference.
if (strcmp(selectObjectDetectionWorkflow,"OpenCVWorkflowEnabled"))
    % Add path to the MEX function location and use it to compute the
    % location and orientation of the object
    dirPath = codertarget.robotmanipulator.internal.getSpPkgRootDir;
    dirFullPath = fullfile(dirPath,'resources','opencv_mex');
    addpath(dirFullPath);
    result = detectObjectOCV(rgbImg);
    % Convert the result into double data type for further calculations
    result = double(result);
Note: If you are using an actual image of a different object, change the boxHeight parameter accordingly in the boxCenterPoint definition.
Update the simulation environment to display the object in the simulation world as per the input
image and the computed object location.
% Note: If you want to update the object dimension as per the actual
% object, then update it here by changing box.X, box.Y, box.Z parameter
% values.
Motion Planning to Pick and Move the Object with Inverse Kinematics
% Define the sample time and trajectory time array (For simulation
% purpose, we are using the same trajectory time for all the three tasks)
waypointTimes = [0;4];
% Sample time
ts = 0.2;
% Creating a trajectory time array
trajTimes = waypointTimes(1):ts:waypointTimes(end);
In this task, compute the waypoints, orientations and the initial guess for inverse kinematics.
pickedPartTransf = getTransform(gen3,q_home,'pickedPart');
% Define the reach-to height (+Z direction) for Task-1 and Task-3
reachToHeight = 0.15;
% Configure the waypoint to reach 0.15 m above the object for the first task
goalPointForTask1 = boxCenterPoint' + [0;0;reachToHeight];
waypointForTask1 = horzcat([pickedPartTransf(1,4); pickedPartTransf(2,4); pickedPartTransf(3,4)], goalPointForTask1);
ikInitGuess = q_home';
% orientations (end-effector Euler angles per waypoint) is assumed to be
% defined as in Task-2 below; trailing arguments reconstructed (assumption)
computedTrajForTask1 = exampleHelperGenerateTrajectory(gen3,waypointForTask1,orientations,trajTimes,ikInitGuess);
r = rateControl(1/ts);
for idx = 1:numel(trajTimes)
    config = computedTrajForTask1.position(:,idx)';
    show(gen3,config','PreservePlot',false,'FastUpdate',true,'Frames',"off");
    waitfor(r);
end
pickedPartTransf = getTransform(gen3,config','pickedPart');
% Second waypoint reconstructed as the object center (assumption)
waypointForTask2 = horzcat([pickedPartTransf(1,4); pickedPartTransf(2,4); pickedPartTransf(3,4)], boxCenterPoint');
orientations = [deg2rad(orientationAtGraspPose) pi 0; deg2rad(orientationAtGraspPose) pi 0]';
ikInitGuess = config;
% Trailing arguments reconstructed as in Task-1 (assumption)
computedTrajForTask2 = exampleHelperGenerateTrajectory(gen3,waypointForTask2,orientations,trajTimes,ikInitGuess);
r = rateControl(1/ts);
for idx = 1:numel(trajTimes)
    config = computedTrajForTask2.position(:,idx)';
    show(gen3,config','PreservePlot',false,'FastUpdate',true,'Frames',"off");
    waitfor(r);
end
partBody = getBody(gen3,'pickedPart');
addCollision(partBody,"box",[Box.mesh.X, Box.mesh.Y, Box.mesh.Z]);
CurrentRobotJConfig = config';
% Compute current transformation matrix for pickedPart for setting up the pose of the object mesh
CurrentRobotTaskConfig = getTransform(gen3,CurrentRobotJConfig,'pickedPart');
patchObj(2).Visible = 'off';
show(gen3,CurrentRobotJConfig,'PreservePlot',false,'FastUpdate',true,'Frames',"off");
Box.mesh.Pose = CurrentRobotTaskConfig;
Box.plotHandle.Matrix = Box.mesh.Pose;
Task-3: Reach 15 cm above the surface of the bench after grabbing the object
Compute the waypoints, orientations, and the initial guess for inverse kinematics for Task-3.
pickedPartTransf = getTransform(gen3,config','pickedPart');
goalPointForTask3 = [pickedPartTransf(1,4);pickedPartTransf(2,4);reachToHeight];
waypointForTask3 = horzcat([pickedPartTransf(1,4); pickedPartTransf(2,4); pickedPartTransf(3,4)], goalPointForTask3);
ikInitGuess = config;
% Trailing arguments reconstructed as in Task-1 (assumption)
computedTrajForTask_3 = exampleHelperGenerateTrajectory(gen3,waypointForTask3,orientations,trajTimes,ikInitGuess);
r = rateControl(1/ts);
for idx = 1:numel(trajTimes)
    config = computedTrajForTask_3.position(:,idx)';
    show(gen3,config','PreservePlot',false,'FastUpdate',true,'Frames',"off");
    % Compute the current transform of the picked part and move the box mesh with it
    CurrentRobotTaskConfig = getTransform(gen3,config','pickedPart');
    Box.mesh.Pose = CurrentRobotTaskConfig;
    Box.plotHandle.Matrix = Box.mesh.Pose;
    waitfor(r);
end
disp('--*All simulation tasks of picking and moving the object are completed*--');
--*All simulation tasks of picking and moving the object are completed*--
clear partBody patchObj Bench CurrentRobotTaskConfig robot transf vfocalLength hfocalLength orientations ...
    orientationAtGraspPose i idx Rz tempPose TBench worldCenterBoxwrtCenterPixel actualpartXY ...
    reachToHeight r ikInitGuess pickedPartTransf config correctPose CurrentRobotJConfig ans ax bench ...
    cameraZ environmentWorld goalPointForTask1 goalPointForTask3 actualCameraTransf boxHeight boxCenterPoint ...
    endEffectorTransf centerBoxwrtCenterPixel zDistance boxCopy dirPath path
Read Current Joint Angle of KINOVA Gen3 and Publish to ROS Network
This example shows how to deploy robotics algorithms developed in Simulink as a standalone ROS node to NVIDIA Jetson compute boards. The example includes a ROS Subscribe block to get feedback from the robot and relay the same information over a new ROS topic using a ROS Publish block.
Required Products
• MATLAB®
• Simulink®
• Robotics System Toolbox™
• ROS Toolbox
• Simulink Coder
Required Hardware
Introduction
Robotics System Toolbox Support Package for Manipulators enables you to control manipulators using MATLAB and Simulink. The support package utilizes ROS packages provided by robot manufacturers to acquire various sensor data, simulate robot models, and control the robot.
You can further utilize the standalone ROS node generation feature from ROS Toolbox to convert robotics algorithms into a standalone ROS node. Such a node can be automatically deployed to various compute boards using Simulink. Deploying the robotics algorithm eliminates the requirement of constant communication between the host PC (with MATLAB installed) and the robot, which is beneficial in many application workflows. For more information on the deployment workflow, refer to Object Detection and Motion Planning Application with Onboard Deployment.
In this example, the sensor_msgs/JointState ROS message is used to get the current joint configuration of the Kinova Gen3 robot. One of the joint angles is then extracted from the received data and published via a new ROS topic with a std_msgs/Float32 message.
Prerequisites
• Perform the setup and configuration of the Jetson compute board as per the instructions in “Configure NVIDIA Jetson Board for Object Detection and Motion Planning” on page 3-2.
• Refer to Read Current Joint Angles from KINOVA Gen3 Robot Arm to know more about the ROS interface for the Kinova Gen3 robot.
Connect to KINOVA Gen3 Robot and Initiate Required ROS Nodes to Control the Robot
Open the terminal from the catkin workspace containing the ros_kortex ROS packages on the Jetson board. Execute 'source devel/setup.bash' and then the following command in the terminal to launch the required ROS nodes.
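A hedged reconstruction of the launch command, based on the ros_kortex package from KINOVA Robotics (replace the IP address and gripper name to match your setup):

source devel/setup.bash
roslaunch kortex_driver kortex_driver.launch ip_address:=192.168.1.10 gripper:=robotiq_2f_85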
Also, replace the IP address of the robot and gripper based on your actual hardware setup. For more information on the roslaunch command, see the GitHub page of KINOVA Robotics.
If the hardware setup and the IP address are correct, a message 'The Kortex driver has been
initialized correctly!' displays on the terminal window.
Configure the ROS network parameters for connectivity and deployment as per the setup instructions under the section Configure the Connection to the ROS Device in Generate a Standalone ROS Node from Simulink.
Note: The workspace folder must be the same one into which you downloaded the ROS packages from Kinova Robotics.
Because you already started the ROS core in the earlier step, you need not start the ROS core on the rosdevice using the runCore command from MATLAB.
Model
The model consists of a ROS Subscribe block, which is configured for the /my_gen3/joint_states ROS topic. As explained on the GitHub page of Kinova Robotics, the first part of the ROS topic, my_gen3, might differ based on the robot name that is set during the roslaunch command. The joint angle positions are extracted from the received message, and then the value of the first joint angle is passed to the subsystem block.
The subsystem creates a std_msgs/Float32 blank message and populates its data with the first joint angle. The populated message is then published on the /relay_joint_states topic.
In the previous step, the required ROS nodes were already launched from the Jetson board to ensure the connection between the robot and the compute board. Click Simulate on the Simulink toolbar and then click Run. This establishes a connection between MATLAB and the ROS master running on the Jetson board.
To verify the intended behavior of the Simulink model, type rostopic list in the MATLAB command
window. The ROS topic /relay_joint_states should be listed in the output.
Navigate to the ROS tab and click Build & Deploy. This generates code from the model, transfers it to the specified workspace on the Jetson board, and builds it.
You can use the isNodeRunning, runNode, and stopNode functions of rosdevice to interact with the deployed ROS node. Repeat the same verification steps as mentioned in the earlier section to verify the intended behavior of the autogenerated ROS node.
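A hedged sketch of this interaction (the device address, credentials, and node name below are placeholders, not values from this example):

gclient = rosdevice('192.168.1.20','user','password');
if ~isNodeRunning(gclient,'relayJointStates')   % placeholder node name
    runNode(gclient,'relayJointStates');
end
stopNode(gclient,'relayJointStates');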
Detect and Pick Object Using KINOVA Gen3 Robot Arm with
Joint Angle Control and Trajectory Control
This example shows how to deploy object detection and motion planning workflows developed in Simulink® as a standalone ROS node to NVIDIA® Jetson® compute boards. The example receives key vision sensor parameters, the current state of the KINOVA® Gen3 robot, and the position and orientation of the object in the captured image over the ROS network. The gathered data is then used to localize the object in the world frame, and the robot is commanded to reach and pick the object.
This example includes two models: one specific to joint angle control and the other specific to
trajectory control.
For a comparison between the two control strategies, see “Joint Angle Control Versus Trajectory
Control” on page 3-13.
Required Products
• MATLAB®
• Simulink
• Robotics System Toolbox™
• ROS Toolbox
• Simulink Coder™
• Stateflow®
Required Hardware
Introduction
Robotics System Toolbox Support Package for Manipulators enables you to control manipulators using MATLAB and Simulink. The support package utilizes ROS packages provided by robot manufacturers to acquire various sensor data, simulate robot models, and control the robot.
You can further utilize the “Generate a Standalone ROS Node from Simulink” (ROS Toolbox) feature from ROS Toolbox to convert robotics algorithms into a standalone ROS node. Such a node can be automatically deployed to various compute boards using Simulink. Deploying the robotics algorithm eliminates the requirement of constant communication between the host PC (with MATLAB installed) and the robot, which is beneficial in many application workflows. For more information on the deployment workflow, refer to “Object Detection and Motion Planning Application with Onboard Deployment” on page 3-4.
In this example, several ROS Subscribe blocks are used to gather key robot and camera sensor parameters. These parameters, along with the current joint configuration of the robot, the position of the camera sensor in the world frame, and the object position and orientation relative to the image, are used to localize the object in the world frame. Once the object position and orientation are known in the world frame, various functionalities offered by Robotics System Toolbox are used to command the robot.
This example demonstrates a workflow to design and validate complex robotics applications that require integration of various open-source utilities with MATLAB, and to control the robot using ROS drivers. The example has been tested with a KINOVA Gen3 robot with the vision module and a Robotiq 2f-85 gripper. However, the overall design of the example is modular and can be easily modified for different manipulators that can be controlled using a ROS driver.
Prerequisites
• Understand the algorithm to detect position and orientation, as explained in “Detect Position and
Orientation of a Rectangular Object Using OpenCV” on page 3-8.
• Perform the Setup and Configuration of the Jetson compute board as per instructions mentioned in
“Configure NVIDIA Jetson Board for Object Detection and Motion Planning” on page 3-2.
• Refer to the “Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo,
OpenCV, and MATLAB” on page 2-99 example for more information on the object localization
algorithm.
Connect to KINOVA Gen3 Robot and Initiate Required ROS Nodes to Control the Robot
1. Open the terminal from the catkin workspace containing the ros_kortex ROS packages on the Jetson board. Execute 'source devel/setup.bash' and then the following commands in the terminal to launch the required ROS nodes.
Also, replace the IP address of the robot and gripper based on your actual hardware setup. For more information on the roslaunch command, see the GitHub page of KINOVA Robotics.
If the hardware setup and the IP address are correct, a message 'The Kortex driver has been
initialized correctly!' displays on the terminal window.
2. Open a new terminal window at the same folder location and execute 'source devel/setup.bash'. Launch the kortex_vision node by executing the following command.
3. Open a new terminal window at the same folder location and execute 'source devel/setup.bash'. Launch the detect_object node by executing the following command.
This launches two image windows: one is the output of Canny edge detection, and the other is the original image with an overlay of a red rectangle. In an ideal setting, the red rectangle should match the boundary of the object. If the object detection is not as desired, refer to “Troubleshooting Object Detection” on page 3-12 for more details.
Configure the ROS network parameters for connectivity and deployment as per the setup instructions under the section Configure the Connection to the ROS Device in “Generate a Standalone ROS Node from Simulink” (ROS Toolbox).
Note: The workspace folder must be the same one into which you downloaded the ROS packages from Kinova Robotics.
Because you already started the ROS core in the earlier step, you need not start the ROS core on the rosdevice using the runCore command from MATLAB.
The model consists of a state machine, which performs object localization and motion planning for a
robotic manipulator.
SendToJointConfiguration Subsystem
This subsystem commands the robot to move to a particular joint angle configuration. It also
commands the gripper to open, and uses ROS interface for Kinova Gen3 robot. For more information
refer “Control Individual Joint Angle of KINOVA Gen3 Robot” on page 2-63. It also monitors if the
robot has moved to it's commanded position. If you are using a different manipulator which can be
controlled by ROS driver, then modify this subsystem accordingly.
getGraspPose Subsystem
This subsystem calculates the robot joint configuration required to reach the object. To compute the desired joint configuration, it uses vision sensor properties such as image height, image width, and focal length; the current joint configuration of the robot; and the relative position and orientation of the object. The algorithm to calculate the desired joint configuration is similar to the one described in “Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo, OpenCV, and MATLAB” on page 2-99.
Move to Position 1
Once the desired joint configuration has been calculated, the robot is commanded to move to the first position, which is about 15 cm directly above the object. Because the rotation of the object is considered while calculating the desired joint configuration, the gripper should align with the orientation of the object when the robot reaches Position 1.
Move to Position 2
Once the robot reaches the first position, it is commanded to move to the second position. At the end of the robot motion, the object should be inside the grasping region of the robot.
At this stage, the robot gripper is at the desired position to pick the object. The subsystem sends the command to close the gripper. The desired closing position and effort might change with the size and material of the object.
Once the object is firmly picked up by the gripper, the robot is commanded to retract to the first position. Once the robot motion is complete, the execution of the model stops.
Modify Offsets
The image localization and path planning algorithm considers certain offsets while calculating the desired grasping pose. Default values are assigned to these offsets in the model callbacks (see “Types of Callbacks” (Simulink)). The description of each offset is as follows:
• heightOffset: Offset between the robot base and the base of the object. If the object is lying on the same surface as the robot base, this offset is zero.
• graspPoseZOffset: Z offset for the first position. This decides the height of the robot gripper above the object at the first position.
• graspPoseOffset: Final correction in the calculation of the grasp pose. Use this offset to remove any modeling errors and to set the robot gripper height above the surface at the second position.
• cameraOffset: Offset between the robot body and the camera sensor. Fixed for the Kinova Gen3 robot.
Ensure that you change these values accordingly before proceeding further. You can start with estimated values and then fine-tune them by observing the robot's behavior.
In the previous step, the required ROS nodes were already launched from the Jetson board to ensure the connection between the robot and the compute board. Place the object in the field of view of the vision sensor. Click Simulate on the Simulink toolbar and then click Run. This establishes a connection between MATLAB and the ROS master running on the Jetson board.
Observe the robot motion and confirm that the robot is picking the object as intended.
Navigate to the ROS tab and click Build & Deploy. This generates code from the model, transfers it to the specified workspace on the Jetson board, and builds it.
You can use the isNodeRunning (ROS Toolbox), runNode (ROS Toolbox), and stopNode (ROS Toolbox) functions of rosdevice to interact with the deployed ROS node. Repeat the same verification steps as mentioned in the earlier section to verify the intended behavior of the autogenerated ROS node.
In the previous model, the desired joint angle configuration is sent to the robot. However, you do not have any explicit control over the intermediate trajectory between the current joint configuration and the desired joint configuration. For accurate control of the end-effector position, you can control the robot by sending precomputed joint position, velocity, and acceleration commands.
To know more about this control strategy, open the manipdemo_DetectAndPick_Traj model.
The model consists of a state machine, which performs object localization and motion planning for a
robotic manipulator. The overall state machine is similar to the state machine described in the model
for joint angle control. For example, the subsystem to move to position 2 looks like this:
The Package and Send Trajectory subsystem in Move_Robot_Position2 consists of three primary blocks related to ROS and an auxiliary subsystem to calculate the joint configuration for each waypoint. The Blank ROS Message block is configured to create a blank message for the control_msgs/FollowJointTrajectoryActionGoal ROS action goal. The MATLAB Function block, packageTrajectory, calculates the interpolated joint angle trajectory and assigns various values to the blank message fields. Finally, the publisher block is configured for the /my_gen3/gen3_joint_trajectory_controller/follow_joint_trajectory/goal ROS action and sends commands to the robot to follow the precomputed joint angle trajectory.
You can verify the entire model for trajectory control using the same steps as mentioned in Run and Verify the Model Using MATLAB Connectivity and Run and Verify the Model Using Standalone ROS Deployment (for the manipdemo_DetectAndPick_Joint model in the previous sections).
Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo, OpenCV, and MATLAB
This example shows how to design and validate complex application workflows for a robotic
manipulator like the Kinova Gen3 7-DoF Ultralightweight Robot using Gazebo® physics simulator,
OpenCV, Robotics System Toolbox™ and ROS Toolbox.
Overview
This example uses functionalities offered by OpenCV to detect the location and orientation of an
object from a video stream. This information is then passed to the motion planning algorithm using
the ROS network. The motion planning algorithm uses functionalities offered by Robotics System
Toolbox to calculate the location of the object in the world frame and commands the simulated robot
to move and pick the object.
In this example, the Gazebo physics simulator is used with a simulated robot and an object. All the
necessary files required for the ROS setup are included in the virtual machine (VM) available for
download.
Clear all output in the live script by right-clicking anywhere in the live script and then clicking 'Clear All Output', or navigate to VIEW > Clear All Output.
Required Products
This example builds on key concepts from the following related examples:
Start a ROS-based simulator for a KINOVA Gen3 robot and configure the MATLAB® connection with
the robot simulator.
This example uses a virtual machine (VM) that is available for download. To download and install the VM, use the link available under the Download Virtual Machine section in Get Started with Gazebo and a Simulated TurtleBot (ROS Toolbox). After the installation is complete, follow the steps below (ignore the steps after the first step, Download and install the ROS Virtual Machine, mentioned in Get Started with Gazebo and a Simulated TurtleBot).
1 After installing the ROS Virtual Machine, launch the Ubuntu® virtual machine desktop.
2 On the Ubuntu desktop, click the Gazebo Recycling World - Object Detection icon to start the
Gazebo world built for this example.
3 Specify the IP address and port number of the ROS master in Gazebo so that MATLAB can
communicate with the robot simulator. For this example, the ROS master in Gazebo uses the IP
address of 172.19.98.176 displayed on the Desktop. Adjust the rosIP variable based on your VM.
4 Start the ROS 1 network using rosinit.
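For reference, steps 3 and 4 map to a short MATLAB snippet (replace the IP address with the one shown on your VM desktop):

rosIP = '172.19.98.176';   % IP address of the ROS master in Gazebo
rosinit(rosIP,11311);      % default ROS master port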
The simulated world consists of a Kinova Gen3 robot mounted on a table and a rectangular object.
The robot also has a simulated camera sensor attached, which transmits a video stream over a ROS
network.
To simulate and control the robot arm in Gazebo, the VM contains the ros_kortex ROS package, which is provided by KINOVA. The package uses ros_control to drive the joints to the desired joint positions.
First, unpause the Gazebo simulation engine. This starts the video stream from the simulated camera sensor mounted on the robot.
physicsClient = rossvcclient('gazebo/unpause_physics');
physicsResp = call(physicsClient,'Timeout',3);
The ROS node with the object detection logic is placed at the /home/user/catkin_ws/src/opencv_node location in the virtual machine. The detect_object.cpp file contains the algorithm to subscribe to the video feed and use OpenCV to detect the rectangular object. The position and orientation of the detected object are then transmitted to the existing ROS network via different ROS topics.
The overall algorithm to detect an object's position and orientation is similar to the one explained in “Detect Position and Orientation of a Rectangular Object Using OpenCV” on page 3-8. The necessary modifications are made in the algorithm to enable the interface with ROS.
In the initial robot pose, the object is not entirely visible in the field of view of the camera sensor. The following commands move the simulated robot to a predefined position such that a large area of the table is captured.
Gripper open...
[trajPub,trajCmd] = rospublisher('/my_gen3/gen3_joint_trajectory_controller/command');
jointWaypointTimes = 5;
reachJointConfiguration(trajPub,trajCmd,jointWaypoints,jointWaypointTimes);
Moving to Position
Position Reached
Once the robot reaches the scan position, the detected object can be seen in the two image windows launched by the object detection node. The algorithm superimposes a red rectangle over the detected object so that you can verify that the correct object has been identified. You can change the slider in the Binary Image window and observe the effect. This slider is associated with the threshold value of the Canny edge detection algorithm.
To change the position of the object on the table, select the object and press t. This enables Translation Mode in Gazebo, and an axes marker is displayed on the object. Interact with this marker to change the position of the object.
Similarly, to change the orientation of the object, press r. This enables Rotation Mode in Gazebo, and an axes marker is displayed on the object. Interact with this marker to change the orientation of the object.
Use the Reset Model Poses option to reset the robot and object positions. Do not use the Reset World option, because it results in a loss of communication between the ROS master and MATLAB.
The object detection node provides the location of the object with reference to the captured image. The position coordinates and the rotation of the object with respect to the horizontal axis are published on three different ROS topics. The following section of code utilizes the object position information, the properties of the simulated camera, and the robot's current pose to localize the object in the global world frame.
% jMsg holds the camera_info message received from the simulated camera
imageWidth = single(jMsg.Width);
imageHeight = single(jMsg.Height);
focalLength = single(jMsg.K(1));
%% End Effector
endEffectorFrame = "gripper";
% Receive the object position (pixels) and rotation (degrees) published by
% the object detection node; the /opencv/centerx subscriber is an assumed
% reconstruction mirroring the two subscribers shown below
tmpsub = rossubscriber('/opencv/centerx');
tmp = receive(tmpsub,10);
result(1) = tmp.Data;
tmpsub = rossubscriber('/opencv/centery');
tmp = receive(tmpsub,10);
result(2) = tmp.Data;
tmpsub = rossubscriber('/opencv/angle');
tmp = receive(tmpsub,10);
result(3) = tmp.Data;
The following section of code calculates the desired grasping pose to reach the object. Once the desired grasping pose is available, the joint angle configuration for that pose is computed using inverse kinematics. The desired grasping pose also considers the rotation of the object to align the gripper with the object.
% Create the IK solver from the rigid body tree model; disable random
% restarts so that the solver output is repeatable
ik = inverseKinematics('RigidBodyTree',gen3);
ik.SolverParameters.AllowRandomRestart = 0;
Move to Position 1
The robot is first commanded to move 15 cm directly above the object. In the process, the gripper is
also aligned with the object.
%% Go to Position 1
reachJointConfiguration(trajPub,trajCmd,jointWaypoints1,jointWaypointTimes);
Moving to Position
Position Reached
Move to Position 2
Now the robot is commanded to reach the grasping pose. At the end of this motion, the object will be
inside the grasping range of the gripper.
jointWaypointTimes = 3;
reachJointConfiguration(trajPub,trajCmd,jointWaypoints2,jointWaypointTimes);
Moving to Position
Position Reached
pause(1);
Gripper closed...
pause(3);
jointWaypointTimes = 5;
reachJointConfiguration(trajPub,trajCmd,jointWaypoints1,jointWaypointTimes);
Moving to Position
Position Reached
rosshutdown;
clear;
Helper Function to Control Joint Angles of the Robot and Activate the Gripper
The following function utilizes the ros_kortex ROS package from Kinova to control the robot.
function reachJointConfiguration(trajPub,trajCmd,jointWaypoints,jointWaypointTimes)
% Populate the trajectory command before sending it; this portion is a
% hedged reconstruction (joint names assumed from the ros_kortex package)
trajCmd.JointNames = {'joint_1','joint_2','joint_3','joint_4', ...
    'joint_5','joint_6','joint_7'};
point = rosmessage('trajectory_msgs/JointTrajectoryPoint');
point.Positions = jointWaypoints;
point.TimeFromStart = rosduration(jointWaypointTimes);
trajCmd.Points = point;
send(trajPub,trajCmd);
pause(1);
disp('Moving to Position');
% Wait until all arm joints have (nearly) stopped moving
jSub = rossubscriber('/my_gen3/joint_states');
err = 1;
while (err > 1e-2)
    jMsg = receive(jSub);
    err = max(abs(jMsg.Velocity(2:8)));
end
disp('Position Reached');
end
The following function utilizes the ros_kortex ROS package from Kinova to activate the gripper by
sending a ROS action command.
function CommandActivateGripperROSGazebo(state)
if strcmp(state,'on') == 1
    % Activate gripper
    [gripAct,gripGoal] = rosactionclient('/my_gen3/custom_gripper_controller/gripper_cmd');
    gripperCommand = rosmessage('control_msgs/GripperCommand');
    gripperCommand.Position = 0.1;   % 0.1 fully closed, 0 fully open
    gripperCommand.MaxEffort = 500;
    gripGoal.Command = gripperCommand;
    pause(1);
    % Send command
    sendGoal(gripAct,gripGoal);
    disp('Gripper closed...');
else
    % Deactivate gripper
    [gripAct,gripGoal] = rosactionclient('/my_gen3/custom_gripper_controller/gripper_cmd');
    gripperCommand = rosmessage('control_msgs/GripperCommand');
    gripperCommand.Position = 0.0;   % 0.1 fully closed, 0 fully open
    gripperCommand.MaxEffort = 500;
    gripGoal.Command = gripperCommand;
    pause(1);
    % Send command
    sendGoal(gripAct,gripGoal);
    disp('Gripper open...');
end
pause(2);
end
Run on Target Hardware
3
• “Configure NVIDIA Jetson Board for Object Detection and Motion Planning” on page 3-2
• “Object Detection and Motion Planning Application with Onboard Deployment” on page 3-4
• “Detect Position and Orientation of a Rectangular Object Using OpenCV” on page 3-8
• “Joint Angle Control Versus Trajectory Control” on page 3-13
Configure NVIDIA Jetson Board for Object Detection and Motion Planning
Note The workflow mentioned in this topic is tested on NVIDIA Jetson Nano™, NVIDIA Jetson TX2, and NVIDIA Jetson AGX Xavier™ compute boards running Ubuntu 18.04 with ROS Melodic and OpenCV 3.2.
This topic is part of the overall workflow described in “Object Detection and Motion Planning
Application with Onboard Deployment” on page 3-4.
To configure NVIDIA Jetson board for object detection and motion planning:
1 Ensure that you configure the NVIDIA Jetson compute board as per the instructions provided on
the respective web pages - NVIDIA Jetson Nano, NVIDIA Jetson TX2 or NVIDIA Jetson AGX
Xavier.
2 Perform Ubuntu Install of ROS Melodic on the Jetson board.
3 Install ROS packages and dependencies for the Kinova Gen3 robot, as described in “Install ROS Packages and Dependencies for ROS on External Computer” on page 1-17, on the Jetson board. Because you are using the vision module to detect the object, the ROS packages for the vision module also need to be installed.
4 Check the libopencv-dev version by executing the following command in the terminal:
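On a Debian-based install, a hedged example of such a command is:

dpkg -l libopencv-dev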
Execute the following command in the MATLAB Command Window to locate the resources folder of the support package:
>> fullfile(codertarget.robotmanipulator.internal.getSpPkgRootDir,'resources')
8 The resources folder contains a ZIP file, opencv_node.zip. Extract the contents of the ZIP file and copy the opencv_node folder to the src folder of the catkin workspace that you created in Step 3.
If you only have access to the Jetson board via an SSH terminal, you can use the wget command to download the ZIP archive and then use the unzip command to extract the contents into the src folder. You can host opencv_node.zip on any file-hosting platform and generate a shareable link.
9 Execute this command to build the opencv node.
Once the build is complete, execute the command 'source devel/setup.bash' from the catkin workspace folder that you created in Step 3.
See Also
“Object Detection and Motion Planning Application with Onboard Deployment” on page 3-4
Object Detection and Motion Planning Application with Onboard Deployment
Based on the control interface provided by the robot manufacturer, you can choose between a MATLAB-only workflow or a MATLAB and ROS workflow for the design, simulation, development, and verification.
The code generation and deployment require ROS Toolbox to be installed along with Robotics System
Toolbox Support Package for Manipulators.
If the MATLAB-only workflow is preferred, then the Computer Vision Toolbox Interface for OpenCV in MATLAB can be used to convert the OpenCV logic into a MATLAB MEX function. This helps you call the MEX function for object detection directly from the MATLAB script.
If you prefer the MATLAB and ROS workflow, you can use the cv_bridge ROS package to convert the
OpenCV logic into a standalone ROS node. The node can subscribe to the video feed from the
camera sensor and publish the position and orientation of the detected object. This enables you to
focus on the motion planning and localization algorithm while directly subscribing to the ROS topics
published by the object detection node.
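For example, a minimal sketch of the MATLAB side, assuming the detection node publishes a
geometry_msgs/Pose2D message on a hypothetical topic named /object_pose (the master IP address
is also illustrative):
rosinit('192.168.1.10');                         % connect to the ROS master (example IP)
poseSub = rossubscriber('/object_pose','geometry_msgs/Pose2D');
poseMsg = receive(poseSub,10);                   % wait up to 10 seconds for a detection
fprintf('x = %.3f, y = %.3f, theta = %.3f\n',poseMsg.X,poseMsg.Y,poseMsg.Theta);
rosshutdown;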
In the two workflows described in this section, the tight integration with OpenCV represents the use
of any external functionality or tool. Many applications require integrating the functionalities
offered by MATLAB with external software packages, and these workflows demonstrate a design
process that involves such use cases.
A typical algorithm that you can use to detect the position and orientation of a rectangular object is
explained in “Detect Position and Orientation of a Rectangular Object Using OpenCV” on page 3-8.
For the design process using the MATLAB-only workflow, you start the development by utilizing
functionalities offered by Robotics System Toolbox. You can use the rigid body tree robot model to
simulate a robot at a particular joint configuration, and then use the motion planning features to
create a trajectory to pick the object.
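For example, a minimal sketch that loads the Gen3 rigid body tree model and displays a joint
configuration:
robot = loadrobot('kinovaGen3','DataFormat','row');  % rigid body tree model of the Gen3 robot
q = homeConfiguration(robot);                        % home joint configuration (row vector)
show(robot,q);                                       % visualize the robot at this configuration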
As mentioned in the earlier section, you can reuse the object detection algorithm developed in
OpenCV and convert it to a MEX function using the Computer Vision Toolbox Interface for OpenCV in
MATLAB. Finally, you can use the robot visualization features to observe and validate the robot
motion. For more details on this workflow, refer to the example “Simulate a Detect and Pick
Algorithm Using OpenCV Interface and Rigid Body Tree Robot Model” on page 2-73.
Once the desired performance is achieved and validated in simulation, you can leverage the various
control interfaces provided by the robot manufacturers to control the robot. KINOVA Robotics
provides MATLAB APIs to control their Gen3 robot. For more information on how to proceed from
simulation to controlling the actual hardware, refer to “Track Pre-Computed Trajectory of Kinova
Gen3 Robot End-Effector Using Inverse Kinematics and KINOVA KORTEX MATLAB API” on page 2-20.
For the design process using the MATLAB and ROS workflow, you start the development by utilizing
functionalities offered by Robotics System Toolbox, ROS Toolbox, and the Gazebo physics simulator.
A simulated world consisting of an object and the robot can be created in Gazebo. A simulated
vision sensor can be mounted on the robot, and the video feed from the camera sensor can be
published over a ROS network.
As stated in the previous section, you can use the cv_bridge ROS package to convert the OpenCV
logic into a standalone ROS node. The motion planning algorithm is developed using Robotics System
Toolbox features, and ROS Toolbox functionalities are used to communicate with the ROS master. For
more details, refer to the example “Design and Validate Object Detection and Motion Planning
Algorithms Using Gazebo, OpenCV, and MATLAB” on page 2-99.
Once the desired performance is achieved and validated using simulation, ROS driver interfaces
provided by the robot manufacturers can be leveraged to control the robot. KINOVA Robotics
provides ROS driver packages to control their Gen3 robot via ROS.
ROS Toolbox allows you to generate a standalone ROS node from a Simulink model, which can be
directly deployed to various boards. “Read Current Joint Angle of KINOVA Gen3 and Publish to ROS
Network” on page 2-84 provides a basic example of such a workflow. For more information on how to
proceed from simulation to controlling the actual hardware and deployment, refer to “Detect and Pick
Object Using KINOVA Gen3 Robot Arm with Joint Angle Control and Trajectory Control” on page 2-87
example.
See Also
Related Examples
• “Simulate a Detect and Pick Algorithm Using OpenCV Interface and Rigid Body Tree Robot
Model” on page 2-73
• “Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo, OpenCV,
and MATLAB” on page 2-99
• “Detect and Pick Object Using KINOVA Gen3 Robot Arm with Joint Angle Control and Trajectory
Control” on page 2-87
Detect Position and Orientation of a Rectangular Object Using OpenCV
This topic is part of the overall workflow described in “Object Detection and Motion Planning
Application with Onboard Deployment” on page 3-4.
The figure shows the original image captured by the vision module of a Kinova Gen3 robot. The
image consists of a rectangular object lying on a uniform background. The image also shows a small
portion of the gripper.
1 To find the position and orientation of a rectangular object, convert the image to a grayscale
image for efficient object detection. You can also apply a median blur to the image to smooth any
salt-and-pepper noise. The figure below shows the grayscale image derived from the original
image.
2 Detect the edges of the rectangular object. For this, you can use Canny edge detection. The
output of the Canny edge detector is a binary black-and-white image. You can adjust the
thresholds of the Canny edge detector so that only the periphery of the object is visible in the
binary image.
3 Once you have the binary image, detect contours and approximate these contours with shapes
that have fewer vertices.
4 Because you have prior information about the shape of the object, you can further compute a
minimum-area rectangle that completely encloses the simplified shape (derived by approximating
the original contours). The minimum-area rectangle contains the location of the object and its
rotation with respect to the horizontal axis.
5 To avoid any false detection, you can check the area of the rectangle. If the area of the rectangle
falls inside a predefined range, the detected rectangle can be accepted as an approximation of
the original object, and the position and orientation information can be passed to the motion
planning algorithm.
In the figure below, you can see the minimum-area rectangle with red boundaries, superimposed
on the original image. The rectangle closely resembles the rectangular object.
The key OpenCV functions used in this workflow are the following:
• cv::cvtColor
• cv::medianBlur
• cv::Canny
• cv::findContours
• cv::approxPolyDP
• cv::minAreaRect
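A rough MATLAB analogue of this pipeline can be useful for prototyping the thresholds before
writing the OpenCV node. This sketch assumes Image Processing Toolbox is available and uses
illustrative threshold values and a hypothetical image file name:
img   = imread('scene.png');             % hypothetical captured image
gray  = medfilt2(rgb2gray(img),[5 5]);   % grayscale conversion plus median blur
bw    = edge(gray,'canny',[0.1 0.3]);    % binary edge image (illustrative thresholds)
bw    = imfill(bw,'holes');              % fill the object interior to get a solid blob
stats = regionprops(bw,'Area','Centroid','Orientation');
% Keep only blobs whose area falls in the expected range (cf. AreaThreshMin/AreaThreshMax)
stats = stats([stats.Area] > 2000 & [stats.Area] < 20000);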
1 Perform the setup and configuration of the Jetson compute board as mentioned in “Configure
NVIDIA Jetson Board for Object Detection and Motion Planning” on page 3-2.
2 The intended motion of the robot is described in the “Detect and Pick Object Using KINOVA Gen3
Robot Arm with Joint Angle Control and Trajectory Control” on page 2-87 example. The robot
first moves to a scan position where the vision module looks straight down. At this point, the
algorithm waits for the position and orientation data of the object from the object detection node.
Hence, the node must be able to detect the object when the robot is at the scan position.
Manually move the robot near the scan position using the joystick or the web app. The joint
configuration for the scan position is [0 0 180 266 0 272 90] degrees.
3 Open a new terminal window at the folder location where you downloaded the ROS packages
from KINOVA, and execute source devel/setup.bash. Launch the kinova_vision node by
executing the following command:
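A typical invocation, assuming the kinova_vision package provided with the KINOVA vision ROS
packages (check the exact launch file name in your package):
roslaunch kinova_vision kinova_vision.launch device:=<ipaddress>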
Replace the <ipaddress> field in the command with the actual IP address of the robot.
4 Open a new terminal window at the same folder location, and execute source devel/setup.bash.
Launch the detect_object node by executing the following command:
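For example, assuming the node built from opencv_node is exposed as an executable named
detect_object (adjust if your package provides a launch file instead):
rosrun opencv_node detect_object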
This launches two image windows. One window shows the output of the Canny edge detection, and
the other shows the original image with an overlay of a red rectangle. In an ideal setting, the red
rectangle matches the boundary of the object.
• If the red rectangle does not match the object, adjust the slider on the Canny edge detection
image so that only the object boundary is visible.
• If the red rectangle is not visible at all, or it is not on the actual object, modify the minimum and
maximum area thresholds. Navigate to the /opencv_node/src folder and open the
detect_object.cpp file. Modify the AreaThreshMin and AreaThreshMax values accordingly.
Rebuild and relaunch the node to validate the object detection.
• If you are not sure about the threshold values, uncomment the line that prints the area of all
detected contours, rebuild the node, and then launch the detect_object node again. This prints
the area values of all detected contours on the terminal. Note the area of the desired object and
adjust the area threshold values accordingly. Rebuild and relaunch the node to validate the
object detection.
Joint Angle Control Versus Trajectory Control
Joint Angle Control
In joint angle control, you command the robot with only the desired joint angle configuration, and
the internal controller of the robot plans and executes the motion to reach it.
However, if you trace the end-effector position during the movement, the trace is not exactly as
intended. Commanding only the desired joint angle configuration has inherent issues, such as
deviation from the desired motion. For example, if the desired motion from one position to another
is only along the z-axis, the actual motion is not exactly restricted to the z-axis, because you are not
explicitly controlling the end-effector position in joint angle control. Such variance can be
undesirable when the manipulator operates in a confined space or in an environment with many
obstacles.
Trajectory Control
In trajectory control, you explicitly define the intermediate path for the robot, and the end effector
tightly follows the desired path. Because the robot is not commanded to utilize the maximum
allowed joint velocity and acceleration, the entire motion in trajectory control takes more time to
complete compared to joint angle control.
For trajectory control, you typically use trapveltraj to compute trapezoidal velocity profiles that
reach the desired joint configuration. The data points are then interpolated to generate a
precomputed trajectory profile that includes joint position, velocity, and acceleration commands.
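For example, a minimal sketch that computes trapezoidal velocity profiles between two example
joint configurations of a seven-joint arm (the goal configuration reuses the scan position mentioned
earlier):
q0 = zeros(1,7);                            % start configuration (example, radians)
q1 = deg2rad([0 0 180 266 0 272 90]);       % goal configuration (scan position, converted to radians)
[q,qd,qdd,t] = trapveltraj([q0' q1'],100);  % 100 samples of position, velocity, and acceleration
plot(t,q); xlabel('Time (s)'); ylabel('Joint position (rad)');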
Here, in the first case, the robot is controlled using a joint angle configuration command to pick the
object. In the second case (trajectory control), the object is placed at the same location and the
robot is controlled using a precomputed trajectory command. It is evident that all actuators follow a
similar trajectory in both cases. However, in the second case, the robot takes more time to complete
the task.
The similarities are clearer when the time history is plotted using the normalized time for both
cases.
From the time history of the filtered joint velocities, the main reasons for the difference in time are:
• When the robot is controlled using a joint angle configuration command, the internal motion
planning algorithm utilizes the maximum allowed acceleration to achieve the desired joint velocity.
• When the robot is controlled using a precomputed trajectory, the allowed acceleration and velocity
limits are conservative. If the precomputed trajectory violates these limits, the robot rejects the
trajectory.
Hence, while generating the trapezoidal velocity profiles, a conservative time estimate is used to
complete the motion. The trapezoidal shape of the joint velocities in the example confirms that the
robot is able to follow the commanded joint angle and velocity profiles.