Invisible Leash: Object-Following Robot
All content following this page was uploaded by Daniel Flippo on 01 November 2016.
Journal of Automation, Mobile Robotics & Intelligent Systems VOLUME 10, N∘ 1 2016
Fig. 1. Concept map for object following
2.1. Proof of Concept

InviLeash uses the KIPR platform from the KISS Institute for Practical Robotics [5]. This platform was selected because it has "blob detection" built in, making it ideal for a proof of concept. It is also an open-source platform and has a built-in motor-control library. The KISS IDE environment for Windows was used for programming. More information on the KIPR Link and the KISS IDE can be found in the KIPR Link Manual or the KIPR Link C Standard Library (provided with the KISS IDE download).

2.2. Design

The KIPR module with USB camera was mounted to a four-wheel-drive remote-control car. The servo motors of the car were controlled by the on-board KIPR motor controllers. Fig. 2 shows the stripped car with module and camera mounted. One motor drives the front axle, another motor drives the rear axle, and a third motor turns the wheels on the front axle.

Fig. 3 shows the direction conventions for the robot, as a top view. Forward is a negative z-value, reverse is positive. A negative x-value is left and right is positive.

Fig. 3. Direction conventions (top view)

A ping-pong ball was used as the ball to follow. In order to avoid confusion with other objects in the vicinity, the ball was painted pink (a distinct and contrasting color not found in the environment). Using the camera channel settings on the KIPR touchscreen, the camera was set to detect the pink color.

For each image that the camera acquires, a box is identified around the largest "blob" of pink (it is assumed that this is the ball, as there is no other pink in the environment) using the built-in features of the KIPR. The coordinates of the box and the length of the horizontal sides are used to determine the ball's location in the image. Fig. 4 shows the parameters for the desired ball image location (white box) and the actual ball image location (shaded box). It was determined through iteration that the horizontal width of the blob provided more repeatable results than the vertical height of the blob.

Fig. 6. Calibration of ball distance

Determining Z-Position. The z-distance of the ball can be estimated simply from the width of the blob. The relation between z-distance and blob width was calibrated using a second-order polynomial. Fig. 5 shows the results for six trials each at four different ball distances. The resulting calibration equation was fit to all 24 data points.
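The second-order calibration described above (blob width in pixels mapped to ball distance) can be sketched as a least-squares quadratic fit. This is a minimal illustration, not the authors' implementation: the (width, distance) pairs below are hypothetical placeholders, not the paper's 24 calibration points.

```python
# Sketch of the width-to-distance calibration described in the text.
# The (blob_width_px, distance_cm) pairs are illustrative placeholders,
# NOT the authors' calibration data.

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations."""
    # Power sums for the 3x3 normal-equation matrix.
    s = [sum(x**k for x in xs) for k in range(5)]               # s[k] = sum(x^k)
    t = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    # Normal equations: M * [c, b, a]^T = t
    M = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    rhs = list(t)
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 3):
                M[r][k] -= f * M[col][k]
            rhs[r] -= f * rhs[col]
    # Back-substitution on the upper-triangular system.
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (rhs[r] - sum(M[r][k] * coef[k] for k in range(r + 1, 3))) / M[r][r]
    c, b, a = coef
    return a, b, c

# Hypothetical calibration points: a wider blob means a closer ball.
widths = [10.0, 20.0, 40.0, 80.0]
dists = [120.0, 60.0, 30.0, 15.0]
a, b, c = fit_quadratic(widths, dists)

def z_distance(width_px):
    """Estimated ball distance (cm); only valid inside the calibrated width range."""
    return a * width_px**2 + b * width_px + c
```

As with any quadratic calibration, the fit should only be evaluated within the range of widths it was calibrated over; outside that range the parabola turns back on itself.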
Fig. 5. Calibration of ball distance

Determining Z-Motor Command. The servo motor command for the z-direction must also be calibrated. The relation between motor command and traveled distance is almost linear, but deviates from linear at the two extremes of distance (near and far). The calibration data is shown in Fig. 6, with a linear and a 2nd-order polynomial fit. The 2nd-order polynomial was selected as the better fit. The equation is

z_motor = 0.0002*(z_trav)^2 + 0.3208*(z_trav) - 4.9257    (3)

where z_trav is the required travel distance in cm, and z_motor is the necessary servo motor command.

Motion. Finally, the three motor commands (one for the x-direction, two identical for the z-direction) are sent, and the motors are shut off after the commanded positions are reached. From this point, the loop is repeated.

2.3. Results

Overall, the InviLeash had satisfactory performance, considering its limitations (discussed later). The average response and standard deviation from six trials are summarized in Tab. 1. It can be noted that as the ball moves further from the camera (in the direction of decreasing desired travel), the standard deviation increases. This is likely related to the image resolution of the system. Fig. 7 and Fig. 8 show a visual representation of the results. The average response was promising.
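The follow loop described above (locate the blob, compute x and z commands, move, stop, repeat) can be sketched as below. Eq. (3) is taken directly from the text; everything else, including the desired blob width, the image-center offset, and the width-to-distance calibration argument, is a hypothetical stand-in and does not reflect the actual KIPR library API.

```python
# Sketch of one iteration of the InviLeash follow loop. Eq. (3) is from
# the paper; all other names and constants are hypothetical stand-ins,
# NOT the real KIPR interface.

IMG_W = 160          # fixed KIPR image width in pixels (from the paper)
DESIRED_W = 40       # hypothetical desired blob width (pixels)

def z_motor_command(z_travel_cm):
    """Eq. (3): servo motor command for a required z travel distance in cm."""
    return 0.0002 * z_travel_cm**2 + 0.3208 * z_travel_cm - 4.9257

def x_offset(box_center_x):
    """Signed horizontal offset of the blob from image center, in pixels.
    Negative means the ball is to the left (negative x in the paper's
    direction convention)."""
    return box_center_x - IMG_W / 2

def follow_step(box_center_x, box_width_px, width_to_cm):
    """One loop iteration: returns (x_command, z_command).
    width_to_cm is a Fig. 5-style calibration mapping blob width to
    distance; any callable with that shape will do."""
    z_actual = width_to_cm(box_width_px)     # where the ball is now
    z_desired = width_to_cm(DESIRED_W)       # where we want it to appear
    z_travel = z_actual - z_desired          # cm still to drive forward
    return x_offset(box_center_x), z_motor_command(z_travel)
```

In the paper's loop, the returned x command steers the front axle and the z command is sent identically to both drive motors; the motors are then shut off once the commanded positions are reached, and the next camera frame restarts the cycle.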
Fig. 7. Repeated Response

Fig. 8. Average Response

Limitations and Problems. The results of the InviLeash were limited by some important factors, summarized in Tab. 2. The largest issue was with the limitations in image resolution. Regardless of the camera, the KIPR system has a set resolution of 160x120. This greatly degrades the accuracy of the system. In future work, a different platform will be used, such as OpenCV, an open-source computer vision library.

Tab. 2. Issues with Proof of Concept

Issue | Cause | Solution
Distances are approximate | Limited image resolution on KIPR | Use a different platform, such as OpenCV
Limited troubleshooting | Small screen on KIPR, no read-out to computer screen | Use a different platform, such as OpenCV

4. Conclusions

Although this is a very simple platform, the potential for future work is abundant. The initial results were promising regarding the ability to follow an object. There are limitations in the platform, including processing speed, which limits the speed of following, and inconsistent identification of the target, but there are plans to minimize these limitations. Employing a different computing platform such as Arduino or LabVIEW myRIO may help improve processing and following speed. Additionally, using a distinct flashing light as the target may help in consistent identification of the target.

AUTHORS

Elizabeth Frink – Kansas State University, Manhattan, KS, e-mail: enfrink@ksu.edu.

Daniel Flippo* – Kansas State University, Manhattan, KS, e-mail: dkflippo@ksu.edu, www: http://www.bae.ksu.edu/people/faculty/flippo/index.html.

Ajay Sharda – Kansas State University, Manhattan, KS, e-mail: asharda@ksu.edu, www: http://www.bae.ksu.edu/people/faculty/sharda/index.html.

* Corresponding author

REFERENCES

[1] R. Araujo and A. de Almeida, "Mobile robot path-learning to separate goals on an unknown world". In: Intelligent Engineering Systems Proceedings, Budapest, 1997.

[2] National Science Foundation, "National robotics initiative", 2014.

[3] C. Gao, X. Piao, and W. Tong, "Optimal motion control for IBVS of robot". In: Proceedings of the 10th World Congress on Intelligent Control and Automation, Beijing, China, 2012.
[4] C.-H. Hu, X.-D. Ma, and X.-Z. Dai, "Reliable person following approach for mobile robot in indoor environment". In: Proceedings of the 9th International Conference on Machine Learning and Cybernetics, Baoding, 2009.
[5] KISS Institute for Practical Robotics, "KISS hardware/software", 2014.
[6] H. J. Min, N. Papanikolopoulos, C. Smith, and V. Morellas, "Feature-based covariance matching for a moving target in multi-robot following". In: 19th Mediterranean Conference on Control and Automation, Corfu, Greece, 2011.
[7] L. O'Sullivan, P. Corke, and R. Mahony, "Image-based visual navigation for mobile robots". In: 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 2013.
[8] L. Pari, J. Sebastian, A. Traslosheros, and L. Angel, "Image based visual servoing: Estimated image Jacobian by using fundamental matrix vs analytic Jacobian", Image Analysis and Recognition, vol. 5112, 2008, 706–717.
[9] F. Wen, K. Yuan, W. Zou, X. Chai, and R. Zheng, "Visual navigation of an indoor mobile robot based on a novel artificial landmark system". In: Proceedings of the 2009 IEEE International Conference on Mechatronics and Automation, Changchun, China, 2009.