
International Journal of Engineering and Technical Research (IJETR)

ISSN: 2321-0869, Volume-2, Issue-4, April 2014

Visual Servoing and Motion Control of a Robotic Device in Inspection and Replacement Tasks

Madhusmita Senapati, J. Srinivas, V. Balakrishnan

Abstract— This paper presents a generalized framework for image-based visual servoing of an articulated arm that is to be deployed inside a simulated reactor vessel environment. Wall tiles of the vessel, idealized as rectangular grids on a surface, are to be inspected, and an attempt is made to replace the damaged tiles during shutdown periods of the machine. The vision sensing methodology of the proposed arm is explained. The arm has a camera located at the wrist (eye-in-hand), and the control action takes place at joint level. Only preliminary results are illustrated.

Index Terms— In-vessel inspection, Kinematics, Manipulator deployment, Serial robot, Visual servoing.

Manuscript received April 20, 2014.
Ms. Madhusmita Senapati, Department of Mechanical Engineering, National Institute of Technology, Rourkela, India, Ph: 9040635247.
Dr. J. Srinivas, Department of Mechanical Engineering, National Institute of Technology, Rourkela, India, 769008, Ph: +91-661-2462503.
Dr. V. Balakrishnan, Institute of Plasma Research, Gandhinagar, India, Phone: +91-7923-962183.

I. INTRODUCTION

In recent years, a wide variety of applications involving autonomous robot behaviour in unknown environments have been developed. The new generation of robots adapts to changing conditions in real time. Such behaviour is especially necessary when facing difficult practical tasks like search and rescue missions, reconnaissance, surveillance and inspection in complex and dangerous surroundings. As an example, remote handling robots used in inspection and maintenance of in-vessel components of fusion devices require a robust non-contact sensing system. In such instances, robot vision is crucial, since it mimics the human sense and allows non-contact measurement of the environment. The control inputs for the robot motors are produced by processing image data (extraction of contours, features, corners and other visual primitives). The basic purpose of visual control is to control the pose of the robot's end-effector relative to a target object or a set of target features. Visual servoing, or visual servo control (VSC), involves techniques from image processing, computer vision and control theory. Using such an approach, systems with low-cost sensors and actuators can be developed. In VSC, the information from the camera is used within the control loop to position the tracking device as per the requirement. The vision data may be acquired either from a camera mounted directly on the manipulator (eye-in-hand) or from a fixed location in the scene (eye-to-hand). The features on the image plane are servo-controlled to their goal positions.

There are two traditional approaches among the vision-based control schemes [1]: (i) position-based VSC and (ii) image-based VSC. In a position-based system, the control is performed in task space based on the three-dimensional information retrieved from the image. Here, the camera pose is estimated using visual information, and the control design is a classical state-space design. The quality of the response depends on the quality of the pose estimation, which makes the control sensitive to camera calibration errors. In an image-based system, feedback is defined in terms of image features, and the controller is designed to drive the image features towards a goal configuration. Thus, it implicitly solves the Cartesian motion planning problem. The approach is therefore relatively robust to camera calibration and target modelling errors. Image-based approaches basically exploit 2D visual measurements such as points or lines tracked in the image during task execution.

A robot has several links and joints, each requiring a positioning reference in relation to a predefined origin point. The vision system defines image coordinates based on where the camera points, without regard to a fixed reference origin. Pixel locations within an image frame must be related to the corresponding robot coordinates for proper visual robotic guidance. Several works relating to vision-guided robotic systems have been reported in the literature. As early as 1985, Sanderson et al. [1] proposed an adaptive control approach for the nonlinear time-varying relationship between the robot pose and image features in image-based servoing. They described detailed simulations of image-based visual servoing for a variety of 3-degree-of-freedom manipulators. Saedan and Ang [2] worked on relative target-object (rigid-body) pose estimation for vision-based control of industrial robots. They developed and implemented a closed-form target-pose estimation algorithm. Feddema [3] applied an explicit feature-space trajectory generator and closed-loop joint control to overcome problems due to the low visual sampling rate; experimental work based on image-based visual servoing of a 4-degree-of-freedom robot was presented. Hashimoto et al. [4] also illustrated simulations comparing position-based and image-based approaches. Korayem et al. [5] designed and simulated vision-based control and performance tests for a 3-P robot in Visual C++. They used a camera installed on the end-effector of the robot to find a target, and feature-based visual servoing control of the end-effector was used to reach it. Jara et al. [6] employed Java to develop an interactive tool for industrial robot simulations. Pinto et al. [7] proposed an eye-on-hand system in which the camera is replaced by a 2D laser range finder attached to a robotic manipulator executing a predefined path to produce grayscale images of the workstation. Fang et al. [8] proposed augmented reality for programming a robot for trajectory planning and transformation into task-optimized executable robot paths. Therefore, the impact of pose estimation in visual

324 www.erpublication.org
servoing, where the relative pose between a camera and a target can be used for real-time control of robot motion, is a topic of present interest.

In a fusion reactor, the first wall inside the shielded blanket is a basic in-vessel component that is often affected by plasma strokes. The tiles of the first wall are supposed to withstand the intense flux of energetic particles (hydrogen isotopes and neutrons) as well as heat loads. This requires frequent inspection of the wall tiles during shutdown periods, so remote in-vessel inspection and guided robotic systems are needed. Several earlier works [9-13] have illustrated the implementation issues of robots in fusion reactor vessels with ITER standards. Designing such a robotic system involves multiple modules: a flexible manipulator mechanism that advances freely into the ports of the vessel, gripper design for handling the wall tiles, a vision-based inspection scheme for monitoring, as well as remote control of the joints as per the requirements. In the present work, the vision module is presented for this specific application. The proposed 7-degree-of-freedom articulated redundant robot manipulator configuration is first explained, kinematics issues are briefly outlined, and the vision sensing methodology of the proposed manipulator is described.

II. DESCRIPTION OF ROBOTIC MANIPULATOR

The manipulator considered in the present work is an articulated serial redundant platform. It can be controlled by a teach pendant or a joystick device. It has a sturdy base that can be moved on rails and locked at a particular position. Further, there is a waist which can be swivelled about the vertical axis just like other commercial industrial arms. This is controlled by a high-torque DC motor through a metallic gear train. At the end of the waist there is a shoulder joint, which is driven by another DC motor through a belt transmission. This link is further connected in succession with two more links as shown in Fig. 1.

[Fig. 1 shows the arm assembly with labels: Camera, Wrist, Elbow-2, Elbow-1, Shoulder, End-effector, Torso.]
Fig. 1 Schematic of the P6R manipulator system

The end of the final link is a place for joining with a wrist, which is controlled by joint motors facilitating the end-effector that holds a tool to advance to a required posture. In fact, the wrist pitch is controlled by a similar DC motor located at the base, while roll is guided by a relatively smaller torque-rating motor mounted at the wrist. The end-effector is a two-state gripper with rubber pads and is operated by a worm-wheel based four-bar mechanism. As the gripper is activated only after the end-effector is aligned with the target point, this translational degree of freedom is generally not counted in the overall degrees of freedom of the manipulator. The visual sensor in this work is a digital camera mounted before the end-effector to monitor the target motion in a three-dimensional (3D) workspace. The camera is assumed calibrated, and the intrinsic and extrinsic parameters, such as the focal length, the physical size and resolution of the image sensor, and the transformation matrix between the camera and the end-effector, are known.

A. Kinematic Model

The kinematic model refers to the methodology of deriving the relationship between the joint angles and the end-effector pose. The conventional Denavit-Hartenberg (D-H) notation of link frames is adopted and the parameters are first identified. The link homogeneous transformation matrices are then obtained from the table of known and unknown variables. Fig. 2 shows the kinematic link frames considered for further analysis.

[Fig. 2 shows the link frames (X0, Z0) through (X7, Z7), the offsets d1, d2 and d8, the link lengths a3, a4, a5, and the wrist pitch and roll axes.]
Fig. 2 Kinematic model of the manipulator

The D-H parameters of the manipulator are shown in Table I.

TABLE I
D-H PARAMETERS OF THE MANIPULATOR

Link   θ     d     a     α      Joint limits
1      0     d1    0     0      0-800 mm
2      θ2    d2    0     -π/2   -170° to 170°
3      θ3    0     a3    0      -60° to 90°
4      θ4    0     a4    0      -45° to 70°
5      θ5    0     a5    0      -45° to 70°
6      θ6    0     0     -π/2   -170° to 170°
7      θ7    0     0     0      -100° to 100°

The transformation matrix of the coordinate system (frame) i, represented in frame i-1, can be written as:

            | cθi   -sθi cαi    sθi sαi    ai cθi |
 [i-1Ti] =  | sθi    cθi cαi   -cθi sαi    ai sθi |  ,  (i = 1, 2, ..., 7)   (1)
            |  0       sαi        cαi        di   |
            |  0        0          0         1    |

where θi, di, ai, αi are the D-H parameters of link i as shown in Table I, with c = cos and s = sin. The link lengths are a3 = a4 = a5 = 240 mm and the waist height is d1 = 300 mm. The overall forward kinematics [07T] = [01T][12T][23T]...[67T] of the manipulator can then be obtained by multiplying the individual link transformation matrices.
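As a numerical illustration of eq. (1) and Table I, the forward kinematics can be obtained by chaining the link transforms. The sketch below is not the authors' code: the numpy implementation, the zero joint configuration and the offset d2 = 100 mm are assumptions made purely for illustration.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous link transform of eq. (1), standard D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(q, d1=300.0, d2=100.0, a=240.0):
    """q = [theta2 ... theta7]; returns [0_7 T] per Table I.

    d1 (prismatic joint) is held at 300 mm; d2 is an assumed offset,
    since Table I does not give its numerical value.
    """
    params = [                       # (theta, d, a, alpha) rows of Table I
        (0.0,   d1, 0.0,  0.0),
        (q[0],  d2, 0.0, -np.pi / 2),
        (q[1], 0.0,   a,  0.0),
        (q[2], 0.0,   a,  0.0),
        (q[3], 0.0,   a,  0.0),
        (q[4], 0.0, 0.0, -np.pi / 2),
        (q[5], 0.0, 0.0,  0.0),
    ]
    T = np.eye(4)
    for theta, d, a_i, alpha in params:
        T = T @ dh_transform(theta, d, a_i, alpha)  # chain the link transforms
    return T

T = forward_kinematics(np.zeros(6))
print(np.round(T[:3, 3], 3))  # end-effector position at the zero configuration
```

At the zero configuration the three 240 mm links stretch along one axis and the waist and assumed d2 offset stack vertically, which gives a quick sanity check on the D-H table.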


This can be done with symbolic programming available in a MATLAB/Maple environment. In the control task, the joint motors are actuated as per the sensing information available from Cartesian space. Inverse kinematics therefore computes the required joint angles when the pose of the end-effector is supplied (see the Appendix). The Jacobian matrix describing the relationship between the joint angular velocities and the corresponding end-effector velocities can then be obtained with the method of differential transformations.

B. Frames of Reference

It is assumed that a camera is rigidly mounted over the wrist and that the object is placed in the camera's field of view, as shown in Fig. 3.

[Fig. 3 shows the camera frame {C}, hand frame {H}, object frame {O} and base frame {R}.]
Fig. 3 Frames of reference

The four relevant coordinate frames are: the object frame {O} centered at the object, the camera frame {C} centered at the camera lens, the manipulator hand frame {H} centered at the robot end-effector, and the robot base frame {R} centered at the robot base. In practice, locating an object with respect to the robot base requires: (i) camera calibration, which describes the relative position and orientation between the object and the camera, [COT]; (ii) hand-eye calibration, which describes the relative position and orientation between the camera and the robot hand, [HCT]; and (iii) robot calibration, which is the manipulator kinematics relating the hand frame to the base, [RHT]. Given a point P on the object (in homogeneous coordinates), the point described in the robot frame is given by

{RP} = [ROT]{OP}   (2)

where

[ROT] = [RHT][HCT][COT]   (3)

is a 4×4 homogeneous Euclidean transformation between the object and robot coordinate frames.

III. INTRODUCTION TO IMAGE-BASED VISUAL SERVOING

A visual servoing task is in fact the minimization of an error vector defined in the image plane. When a number of image features on the image plane are given, it is required to select a set of generalized image coordinates to characterize them. Let s(q(t), ξ) be the generalized image coordinates, with ξ representing the geometric parameters associated with the features in 3-D space. Then the error vector is defined as:

e(t) = s(q(t), ξ) - s*   (4)

Here, s* is the desired feature information vector. The definition of the parameter vector s determines the visual servo control scheme. To design a visual servo controller, a relationship between the time derivative of s and the camera velocity vc is first determined:

ṡ = Ls vc   (5)

where Ls is called the image Jacobian matrix. Now, the relationship between the camera velocity and the error vector is obtained by considering s* to be a constant parameter (due to the fixed goal pose):

ė = Ls vc   (6)

In order to decrease the error, ė = -λe is chosen, resulting in

vc = -λ Ls⁺ e   (7)

Here, Ls⁺ is the pseudo-inverse matrix of Ls, which cannot be calculated exactly in real conditions, and hence an approximation L̂s⁺ is often used. Fig. 4 shows the relationship between the camera frame and the image frame.

[Fig. 4 shows a 3D point P, the camera frame (xc, yc, zc), the optic axis, and the image plane with coordinates (u, v) and principal point (u0, v0).]
Fig. 4 Image and camera frames

A 3D point P can be projected onto the image plane as a 2D point using the perspective projections:

x = X/Z = (u - u0)/f   (8)

y = Y/Z = (v - v0)/f   (9)

The parameters u0, v0 and f are called the camera intrinsic parameters. The velocity of the 3D point referred to the camera frame is:

Ṗ = -vc - ωc × P   (10)

where vc and ωc are the instantaneous linear and angular velocities of the camera, and P = [X Y Z]T is a point on the 3D object. Using eqs. (8) and (9) along with eq. (10), we can simplify and write the relations as:

Ẋ = Lx Vc   (11)

where X = [x y]T and Vc = [vx vy vz ωx ωy ωz]T. The matrix Lx is the image Jacobian given by [14]:


       | -1/Z     0     x/Z     xy      -(1 + x²)    y  |
Lx =   |   0    -1/Z    y/Z   1 + y²      -xy       -x  |     (12)

Here, the depth Z should be accurately measured; otherwise an estimate L̂x should be used.

IV. PROPOSED METHODOLOGY

The control problem can be defined as follows: design a set of joint trajectories such that the end-effector can move to a desired grasp position using the pose of the target in the workspace, as obtained from the vision system. Fig. 5 shows the workspace of the manipulator. The assembly of the articulated robotic manipulator is tested in CATIA and the parts are exported to ADAMS View (R2013). The various joints are applied to visualize the kinematic simulations.

[Fig. 5 shows the reachable work volume of the arm.]
Fig. 5 Work volume at the end-effector

The steps involved in the process of vision-based monitoring are: (i) digital image acquisition; (ii) pre-processing (filtering, etc.); (iii) feature extraction (to acquire important image features like lines and edges); (iv) detection/segmentation (to determine image points or regions for further processing); and (v) decision making. The automated tile-inspection procedure is as follows. To capture the image, a digital camera (Basler acA640-90gm) with a Sony ICX424 CCD sensor delivering 90 frames per second is employed; it has 659 (horizontal) × 494 (vertical) pixels. Fig. 6 shows the camera used. It can be interfaced with a PC over an Ethernet port and operates with a 12 V DC power supply. It is to be mounted at the end of the arm, and care has to be taken to reduce image blurring.

[Fig. 6 shows the Basler digital camera.]
Fig. 6 Digital camera employed

Image processing techniques like median filtering, contrast and brightness adjustment, and edge sharpening have to be applied to enhance the quality of the points of interest and to remove noise. There are basically two kinds of image enhancement techniques: spatial-domain methods and frequency-domain methods. Spatial-domain methods deal directly with image pixels: the pixel values are manipulated to achieve the desired enhancement. In frequency-domain methods, the image is first transferred into the frequency domain (i.e., the Fourier transform of the image is computed), all enhancement operations are performed there, and finally the inverse Fourier transform is taken to get the resultant image.

In typical visual inspection of the tiles of the reactor vessel, a sector of the entire surface is scanned from left to right in the top-to-bottom direction. For the sake of simulation, tiles are represented by rectangular grids drawn on a paper, and the paper is affixed to a cylindrical tank as shown in Fig. 7.

[Fig. 7 shows the grid paper wrapped around a cylindrical tank.]
Fig. 7 Simulated set-up of vessel tile

Some of the grids contain a point (crack), a vertical line (crack) or an inclined line (crack), as shown in Fig. 8. The task is first to predict the nature of the crack using visual image processing.

[Fig. 8 shows a tile grid with an inclined line.]
Fig. 8 Tile having inclined crack

The image processing operation is planned through LabVIEW software as shown in Fig. 9: image acquisition, image filtering (Vision Assistant), image template searching, computation of the image coordinates (Vision Builder), and setting of the joint variables as per inverse kinematics.

Fig. 9 Real-time maintenance
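The image-based control step of eqs. (4)-(7), using the point-feature Jacobian of eq. (12), can be sketched numerically. This is an illustrative example rather than the authors' implementation: the feature coordinates, depths and gain λ are assumed values.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian Lx of eq. (12) for one normalized point feature (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z,      0.0, x / Z,       x * y, -(1.0 + x * x),  y],
        [     0.0, -1.0 / Z, y / Z, 1.0 + y * y,         -x * y, -x],
    ])

def ibvs_velocity(features, goals, depths, lam=0.5):
    """One IBVS step: stack Lx per point, form e = s - s* (eq. 4),
    and return vc = -lam * Ls^+ e (eq. 7)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(goals)).ravel()
    return -lam * np.linalg.pinv(L) @ e   # pseudo-inverse as in eq. (7)

# Four point features slightly offset from their goal configuration (assumed values).
s      = [(0.12, 0.10), (-0.10, 0.11), (-0.11, -0.09), (0.10, -0.10)]
s_star = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
Z      = [1.0, 1.0, 1.0, 1.0]

vc = ibvs_velocity(s, s_star, Z)
print(vc)  # commanded camera twist [vx, vy, vz, wx, wy, wz]
```

When the features coincide with their goals the error vanishes and the commanded camera twist is zero, which is the stopping condition of the servo loop.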

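The frequency-domain enhancement route described above (forward transform, filtering, inverse transform) can be illustrated with a simple FFT low-pass sketch. The synthetic tile image and the cut-off fraction are assumptions for illustration, not the LabVIEW processing chain used in the set-up.

```python
import numpy as np

def fft_lowpass(img, keep_frac=0.1):
    """Frequency-domain enhancement sketch: transform, zero high frequencies, invert."""
    F = np.fft.fftshift(np.fft.fft2(img))   # 2-D Fourier transform, DC moved to center
    h, w = img.shape
    mask = np.zeros_like(F, dtype=bool)
    kh, kw = int(h * keep_frac), int(w * keep_frac)
    mask[h // 2 - kh:h // 2 + kh + 1, w // 2 - kw:w // 2 + kw + 1] = True
    F[~mask] = 0.0                          # suppress high-frequency content (noise)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Synthetic "tile" image: uniform background with salt noise (assumed stand-in data).
rng = np.random.default_rng(0)
tile = np.full((64, 64), 0.5)
tile[rng.integers(0, 64, 40), rng.integers(0, 64, 40)] = 1.0

smoothed = fft_lowpass(tile)
print(float(tile.std()), float(smoothed.std()))  # spread shrinks after filtering
```

Because the DC component is retained, the mean intensity of the image is preserved while the broad-spectrum salt noise is attenuated.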
V. CONCLUSION

In this paper, an outline of an image-based visual servoing approach with a redundant 7-DOF manipulator possessing an eye-in-hand configuration has been presented. The kinematics of the manipulator and the vision system employed were explained, and the simulated environment of the wall tiles of a reactor vessel was described. As future scope, the camera is to be attached to the end of the arm and the captured images are to be processed for prediction of the type of fault; gripper tactile sensing issues as well as faulty tile removal will also be considered.

APPENDIX

The derivation of the inverse kinematics of the present manipulator is based on the derivation of the inverse kinematics of the PUMA 560 robot. The rotation θ2 of the first rotational axis is obtained by writing the kinematics in the following form:

[02T]⁻¹[07T] = [23T][34T][45T][56T][67T] = [02T]⁻¹[T]   (A1)

where [T] is the actual orientation and position of the end-effector, given by:

      | nx  ox  ax  px |
[T] = | ny  oy  ay  py |   (A2)
      | nz  oz  az  pz |
      |  0   0   0   1 |

Equating the (2,4) elements on both sides of eq. (A1), we get

-s2 px + c2 (d1 - pz) = 0   (A3)

which gives

θ2 = atan2(d1 - pz, px)   (A4)

θ2 = π + atan2(d1 - pz, px)   (A5)

When θ2 is known, the transform [02T(d1, θ2)] is fully defined. The rotation θ4 is obtained by equating elements (1,4) and (3,4) on both sides of eq. (A1):

c2 px + s2 (d1 - pz) = a4 c3 c4 + a5 s3 s4 + a4 c3 + a3   (A6)

py - d2 = -a5 s3 c4 - a5 c3 s4 - a4 c3   (A7)

These equations give two sets of θ4. The rotation θ3 can then be obtained by writing

[04T]⁻¹[07T] = [45T][56T][67T]   (A8)

Equating elements (1,4) and (3,4) on both sides of eq. (A8), we get an expression of the form

θ3 + θ4 = atan2(K1, K2)   (A9)

Since 4 combinations of solutions of θ2 and θ4 exist, θ3 will have 4 possible solutions. The process is continued for θ5. Finally, a given wrist position can be achieved by 4 combinations of the 4 joint rotations θ2, θ3, θ4 and θ5. The pitch and roll angles of the wrist, θ6 and θ7, are obtained by equating the terms in the rotation matrices.

ACKNOWLEDGMENT

The authors thank the Board of Research in Fusion Science and Technology (BRFST) for sponsoring and financially supporting this project.

REFERENCES

[1] A.C. Sanderson, L.E. Weiss, and C.P. Neuman, "Dynamic visual servo control of robots: an adaptive image-based approach," Proc. IEEE Robotics and Automation, Pittsburgh, 1985, Vol. 2, pp. 662-667.
[2] M. Saedan and M.H. Ang, "3D vision-based control of an industrial robot," Proc. IASTED Int. Conf. Robotics and Applications, Nov. 19-22, 2001, Florida, pp. 152-157.
[3] J.T. Feddema, C.S.G. Lee and O.R. Mitchell, "Weighted selection of image features for resolved rate visual feedback control," IEEE Trans. Robotics and Automation, Vol. 7, 1991, pp. 31-47.
[4] H. Hashimoto, T. Kimoto and T. Ebine, "Manipulator control with image-based visual servoing," Proc. IEEE Conf. Robotics and Automation, 1991, pp. 2267-2272.
[5] M.H. Korayem, K. Khoshhal and A. Aliakbarpour, "Vision-based robot simulation and experiment for performance tests of robot," Int. J. Adv. Manuf. Tech., Vol. 25, 2005, pp. 1218-1231.
[6] C.A. Jara, F.A. Candelas, P. Gil, F. Torres, F. Esquembre and S. Dormido, "EJS+EjsRL: An interactive tool for industrial robot simulation, computer vision and remote operation," Robotics and Autonomous Systems, Vol. 59, 2011, pp. 389-401.
[7] A.M. Pinto, L.F. Rocha and A.P. Moreira, "Object recognition using laser range finder and machine learning techniques," Robotics and Computer-Integrated Manufacturing, Vol. 29, 2013, pp. 12-22.
[8] H.C. Fang, S.K. Ong, and A.Y.C. Nee, "Interactive robot trajectory planning and simulation using augmented reality," Robotics and Computer-Integrated Manufacturing, Vol. 28, 2012, pp. 227-237.
[9] L. Gargiulo, P. Bayetti, V. Bruno, J.J. Cordier, "Development of an ITER relevant inspection robot," Fusion Engineering and Design, Vol. 83, 2008, pp. 1833-1836.
[10] A. Mutka, I. Draganjac, Z. Kovacic, Z. Postruzin and R. Munk, "Control system for reactor vessel inspection manipulator," 18th IEEE Int. Conf. Control Applications, St. Petersburg, Russia, 2009, p. 1312.
[11] J.M. Travere, "In-vessel component imaging systems: From the present experience towards ITER safe operation," Fusion Engineering and Design, Vol. 84, 2009, pp. 1862-1866.
[12] M. Houry, P. Bayetti, D. Keller, L. Gargiulo, V. Bruno and J.C. Hatchressian, "Development of in-situ diagnostics and tools handled by a light multipurpose carrier for tokamak in-vessel interventions," Fusion Engineering and Design, Vol. 85, 2010, p. 1947.
[13] X. Peng, J. Yuan, W. Zhang, Y. Yang, Y. Song, "Kinematic and dynamic analysis of a serial robot for inspection process in EAST vacuum vessel," Fusion Engineering and Design, Vol. 87, 2012, pp. 905-909.
[14] B.P. Larouche and Z.H. Zhu, "Autonomous robotic capture of non-cooperative target using visual servoing and motion predictive control," Autonomous Robots, 2014, DOI 10.1007/s10514-014-9383-2.

Madhusmita Senapati is pursuing M.Tech Research in the area of robotics and vision-based inspection at NIT Rourkela. She graduated in the Mechanical Engineering discipline from Utkal University.

Dr. J. Srinivas is an Associate Professor in the Department of Mechanical Engineering, NIT Rourkela. His topics of interest include robotics and intelligent controls, dynamics and modeling. He has guided various graduate and doctoral projects. He is a member of the Institution of Engineers and has to his credit around 80 papers published in various national and international conferences/journals. He is the main author of a book, Robotics: Control and Programming, published by Narosa Publishing House.

Dr. V. Balakrishnan is a senior scientist at the Institute of Plasma Research, Gandhinagar, and is presently working on the indigenous tokamak vessel Aditya. He has good expertise with the earlier vessels SSR-I and SSR-II.
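The θ2 branches of eqs. (A4) and (A5) in the Appendix can be verified numerically: both satisfy the constraint of eq. (A3). The pose components below are assumed values for illustration.

```python
import numpy as np

def theta2_solutions(px, pz, d1):
    """Both roots of -sin(t)*px + cos(t)*(d1 - pz) = 0, per eqs. (A4) and (A5)."""
    t = np.arctan2(d1 - pz, px)
    return t, t + np.pi

def residual(t, px, pz, d1):
    """Left-hand side of eq. (A3); zero when t is a valid theta2."""
    return -np.sin(t) * px + np.cos(t) * (d1 - pz)

# Assumed end-effector position components (mm), not from the paper.
px, pz, d1 = 450.0, 120.0, 300.0
for t in theta2_solutions(px, pz, d1):
    print(t, residual(t, px, pz, d1))  # residual is ~0 for both branches
```

The two branches correspond to the elbow-type ambiguity noted in the Appendix, which is why four combinations of (θ2, θ4) arise downstream.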

