

Journal of Automation, Mobile Robotics & Intelligent Systems, Volume 10, No. 1, 2016

Invisible Leash: Object-Following Robot

Submitted: 10th November 2015; accepted: 21st December 2015

Elizabeth Frink, Daniel Flippo, Ajay Sharda

DOI: 10.14313/JAMRIS_1-2016/1

Abstract:
This study looked at developing a simple concept for an object-following robot, meant to follow a person in an agricultural setting. A KIPR Link was used as the framework, and simple mathematical relationships were derived to control the motion of the robot. The work described is meant as a proof of concept, not as a full-scale development. Reasonable results were obtained, indicating that the robot could successfully follow the object.

Keywords: object-following, agriculture, KIPR Link, unmanned robot

1. Introduction
Unmanned robots are currently of significant interest. There is a push from the NSF (National Science Foundation) to accelerate the development of robots that can work beside or cooperatively with people [2]. In the past, several different approaches have been used to achieve unmanned control; these approaches are summarized in the following sections. The goal of this work is to develop a person-following method that will work in many settings, where obstacles and equipment are always changing. Agriculture could use this method to haul equipment around the farm or field while the user's hands stay free for other tasks. Outside of agriculture, this method could also be used by handicapped persons who need help carrying multiple items, such as groceries. Additionally, any manual laborers who have a large amount of equipment to carry around could benefit from this person-following method.

1.1. Path Learning
Path learning has been applied in several techniques. Araujo and de Almeida [1] use a trial-and-error method, with learning, to navigate a path. Spatial resolution is chosen using a game-theoretic cell-splitting criterion. This method is able to avoid obstacles, and with each repeat of the same path or environment its navigation becomes quicker (less learning is necessary). It does not fit as a direct solution for the problem here, but it is of particular interest as an addition for future work, allowing an unmanned robot to navigate obstacles while following a person.

1.2. Image-Based Visual Servoing
Multiple projects [3, 7, 8] have used image-based visual servoing, or a modification of the method. This method uses the image Jacobian to determine velocity vectors. It is necessary to know points in the environment in this method, but a modification of it, Image-Based Visual Navigation developed by [7], does not need this information. Instead, motion vectors from consecutive images are obtained and compared to the desired motion vectors. Both methods are mathematically involved, but a simpler solution is desired for this application.

1.3. Person Following
Hu et al. [4] track the size of a human torso, combined with a fuzzy PID, in order to follow the human. This method is limited in its ability to keep up with the human, but it is able to discern the target human from other humans in the environment and reliably follow the target at slower speeds. If it were not for the limited ability to keep up with a human, this could be a potential method for the application. Future work will involve including additional vision processors.

1.4. Feature-Matching
Geometrical features of strategically placed landmarks (distinct geometric images), with localization and line-tracking algorithms, are used to navigate indoors in work by Wen et al. [9]. The robot from this work can achieve a speed of 18 cm/s and is able to accurately discern its location from the landmarks. In order for a robot to follow another robot, geometrical and color-coded patterns can be used, as shown by [6]; covariance matching then allows the following robot to correctly identify its target robot/pattern. It is not reasonable to place geometric features strategically in an agricultural field, so this method would be less suitable for the desired application.

1.5. Project Objective
This paper will describe a proof of concept for a simple object-following robot, where the object could be carried by or attached to a human. The concept will be referred to as "InviLeash". The feasibility of this concept will be illustrated with preliminary results. As far as the authors are aware, an object-following approach of this simplicity has not been developed yet.

2. Methods
The concept behind InviLeash is straightforward. The robot follows a single-color ball, using the process illustrated in fig. 1 and described in later sections. Initial calibration requires acquiring an image of the ball at the desired distance and determining the


image coordinates of the ball. The desired distance was selected as a reasonable following distance of 45.7 cm. After initial calibration, a loop is iterated: acquiring an image, determining the current ball coordinates, comparing them to the desired coordinates, determining a direction vector, and translating it to motor motion.
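Expressed procedurally, this loop has four stages per iteration. The sketch below is an illustrative Python rendering of that structure, not the authors' implementation (the prototype is programmed in C through the KISS IDE); the four callables are hypothetical stand-ins for the KIPR camera and motor routines.

```python
# Illustrative sketch of the following loop. The helper callables are
# hypothetical stand-ins for KIPR camera/motor routines, passed in so
# the control structure itself stays platform-independent.

DESIRED_DISTANCE_CM = 45.7  # following distance chosen in the paper

def follow_step(acquire_image, locate_blob, blob_to_commands, drive):
    """One loop iteration: image -> ball coordinates -> motor commands."""
    frame = acquire_image()            # grab the current camera frame
    blob = locate_blob(frame)          # bounding box of the tracked blob
    commands = blob_to_commands(blob)  # compare to the calibrated target
    drive(commands)                    # send servo position commands
    return commands
```

In use, `blob_to_commands` would encapsulate the calibration relations derived in the following subsections.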

Fig. 1. Concept map for object following

Fig. 2. Object-following robot prototype

Fig. 3. Direction conventions (top view)

2.1. Proof of Concept
InviLeash uses the KIPR platform from the KISS Institute for Practical Robots [5]. This platform was selected because it has "blob detection" built in, making it ideal for a proof of concept. It is also an open-source platform and has a built-in motor-control library. The KISS IDE environment for Windows was used for programming. More information on the KIPR Link and the KISS IDE can be found in the KIPR Link Manual or the KIPR Link C Standard Library (provided with the KISS IDE download).

2.2. Design
The KIPR module with USB camera was mounted on a four-wheel-drive remote-control car. The servo motors of the car were controlled by the on-board KIPR motor controllers. Fig. 2 shows the stripped car with module and camera mounted. One motor drives the front axle, another motor drives the rear axle, and a third motor turns the wheels on the front axle.

Fig. 3 shows the direction conventions for the robot, as a top view. Forward is a negative z-value, reverse is positive. A negative x-value is left and a positive x-value is right.

A ping-pong ball was used as the ball to follow. In order to avoid confusion with other objects in the vicinity, the ball was painted pink (a distinct and contrasting color not found in the environment). Using the camera channel settings on the KIPR touchscreen, the camera was set to detect the pink color.

For each image that the camera acquires, a box is identified around the largest "blob" of pink (it is assumed that this is the ball, as there is no other pink in the environment) using the built-in features of the KIPR. The coordinates of the box and the length of its horizontal sides are used to determine the ball's location in the image. Fig. 4 shows the parameters for the desired ball image location (white box) and the actual ball image location (shaded box). It was determined through iteration that the horizontal width of the blob provided more repeatable results than the vertical height.

Determining Z-Position: The z-distance of the ball can be estimated simply using the width of the blob. The relation between z-distance and blob width was calibrated using a second-order polynomial. Fig. 5 shows


the results for six trials each at four different ball distances. The resulting calibration equation, fit to all 24 data points, is

z_dist = -0.0006a^3 + 0.1009a^2 - 5.6627a + 117.7    (1)

where a is the width of the ball in the image, in pixels, and z_dist is the z-distance of the ball from the robot, in cm. From this, the distance to travel, in cm, can be determined by subtracting the 45.7 cm desired distance. It is likely that the calibration is not linear because of the non-linear optical properties of a camera lens.

z_trav = z_dist - 45.7    (2)

Fig. 4. Size parameters of desired blob (white: d1, d2, d3, dw) and actual blob (shaded: a1, a2, a3, aw)

Fig. 5. Calibration of ball distance

Determining Z-Motor Command: The servo motor command for the z-direction must also be calibrated. The relation between motor command and traveled distance is almost linear, but deviates from linear at the two extremes of distance (near and far). The calibration data is shown in fig. 6, with a linear and a 2nd-order polynomial fit. The 2nd-order polynomial was selected as the better fit. The equation is

z_motor = 0.0002(z_trav)^2 + 0.3208(z_trav) - 4.9257    (3)

where z_trav is the required travel distance in cm, and z_motor is the necessary servo motor command (a position between 0 and 2047 ticks) for the required travel distance. For the z-direction motors, a position command of 250 ticks results in 50.8 cm travel distance for the vehicle used here.

Fig. 6. Calibration of ball distance

Determining X-Position: The final calculations are for the x-direction. Using a1, a3, d1, d3, and the blob width, the difference between the horizontal center of the blob and the horizontal center of the desired position is calculated; then the y-component of the distance is removed and only the x-component is considered.

x_dist = ((a1 + a3)/2 - (d1 + d3)/2) / (0.03667a + 0.59667)    (4)

where x_dist is the x-component of the distance from the actual center to the desired center (as described above) and the other parameters are as shown in fig. 4.

Determining X-Motor Command: The calibration for the x-direction motor command would depend on how far the x-direction motors travel, so to simplify this portion of the method a set motor command was used for the x-direction: +50 ticks if the robot needs to move right, and -50 ticks if it needs to move left.

Motion: Finally, the three motor commands (one for the x-direction, two identical commands for the z-direction) are sent, and the motors are shut off after the commanded positions are reached. From this point, the loop is repeated.

2.3. Results
Overall, the InviLeash had satisfactory performance, considering its limitations (discussed later). The average response and standard deviation from six trials are summarized in tab. 1. It can be noted that as the ball moves further from the camera (in the direction of decreasing desired travel), the standard deviation increases. This is likely related to the image resolution of the system. Fig. 7 and fig. 8 show a visual representation of the results. The average response was promising.
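Taken together, eqs. (1)–(4) plus the fixed ±50-tick steering rule define the per-iteration command computation. The following is an illustrative Python transcription, not the authors' KIPR C code: the exponents in eq. (1) are reconstructed from the printed coefficients, and returning a zero command when the blob is exactly centred is an assumption (the paper covers only the left/right cases).

```python
# Illustrative transcription of eqs. (1)-(4); the prototype itself runs
# C code on the KIPR Link. Coefficients are as printed in the paper.

DESIRED_CM = 45.7   # desired following distance [cm]
X_TICKS = 50        # magnitude of the fixed x-motor command [ticks]

def z_distance_cm(a):
    """Eq. (1): ball z-distance [cm] from blob width a [px]
    (exponents reconstructed from the four printed coefficients)."""
    return -0.0006 * a**3 + 0.1009 * a**2 - 5.6627 * a + 117.7

def z_travel_cm(a):
    """Eq. (2): required travel = estimated distance minus the setpoint."""
    return z_distance_cm(a) - DESIRED_CM

def z_motor_ticks(z_trav):
    """Eq. (3): servo position command (0-2047 ticks) for a travel [cm]."""
    return 0.0002 * z_trav**2 + 0.3208 * z_trav - 4.9257

def x_distance_cm(a1, a3, d1, d3, a):
    """Eq. (4): horizontal offset of the actual blob centre from the
    desired centre, scaled from pixels by the printed denominator."""
    return ((a1 + a3) / 2 - (d1 + d3) / 2) / (0.03667 * a + 0.59667)

def x_motor_ticks(x_dist):
    """Fixed-step steering: +50 ticks to move right, -50 to move left;
    zero when centred is an assumption."""
    if x_dist > 0:
        return X_TICKS
    if x_dist < 0:
        return -X_TICKS
    return 0
```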


Tab. 1. Actual Travel Results

Desired Travel [cm]   Actual Travel [cm]
                      Avg      StDev
 30.5                  22.3    3.9
 15.2                  13.2    1.5
  0                    -4.2    5.4
-15.2                 -11.0    8.8

Fig. 7. Repeated Response

Fig. 8. Average Response

Limitations and Problems: The results of the InviLeash were limited by some important factors, summarized in tab. 2. The largest issue was the limited image resolution: regardless of the camera, the KIPR system has a set resolution of 160x120, which greatly degrades the accuracy of the system. In future work, a different platform will be used, such as OpenCV, an open-source computer vision library.

Tab. 2. Issues with Proof of Concept

Issue                       Cause                              Solution
Distances are approximate   Limited image resolution on KIPR   Use a different platform, such as OpenCV
Limited troubleshooting     Small screen on KIPR, no readout   Use a different platform, such as OpenCV
                            to computer screen

3. Applications
Future work will focus on applying this technique to agricultural applications, such as a helper robot that follows a worker around with equipment or tools. The robot will incorporate path learning so that it can also function without user interaction. Obstacle avoidance will also be necessary, as well as the ability to navigate rugged terrain. This technique could also be used in industrial settings, to tow trailers or to haul pallets around a warehouse.

This platform could link with sensors such as LIDAR for local steering, or could be linked with GPS to supplement or replace navigation information in locations where GPS information is limited. Additionally, it is envisioned that color-lighted wands or balls could be used as the object to follow, so that the robot can distinguish between two different people. Along the same lines, a serial pulse code could be employed.

4. Conclusions
Although this is a very simple platform, the potential for future work is abundant. The initial results were promising regarding the ability to follow an object. There are limitations in the platform, including processing speed, which limits the following speed, and inconsistent identification of the target, but there are plans to minimize these limitations. Employing a different computing platform, such as an Arduino or LabVIEW myRIO, may help improve processing and following speed. Additionally, using a distinct flashing light as the target may help in consistent identification of the target.

AUTHORS
Elizabeth Frink – Kansas State University, Manhattan, KS, e-mail: enfrink@ksu.edu.
Daniel Flippo* – Kansas State University, Manhattan, KS, e-mail: dkflippo@ksu.edu, www: http://www.bae.ksu.edu/people/faculty/flippo/index.html.
Ajay Sharda – Kansas State University, Manhattan, KS, e-mail: asharda@ksu.edu, www: http://www.bae.ksu.edu/people/faculty/sharda/index.html.
* Corresponding author

REFERENCES
[1] R. Araujo and A. de Almeida, "Mobile robot path-learning to separate goals on an unknown world". In: Intelligent Engineering Systems Proceedings, Budapest, 1997.
[2] National Science Foundation, "National robotics initiative", 2014.
[3] C. Gao, X. Piao, and W. Tong, "Optimal motion control for IBVS of robot". In: Proceedings of the 10th World Congress on Intelligent Control and Automation, Beijing, China, 2012.


[4] C.-H. Hu, X.-D. Ma, and X.-Z. Dai, "Reliable person following approach for mobile robot in indoor environment". In: Proceedings of the 9th International Conference on Machine Learning and Cybernetics, Baoding, 2009.
[5] KISS, "KISS hardware/software", 2014.
[6] H. J. Min, N. Papanikolopoulos, C. Smith, and V. Morellas, "Feature-based covariance matching for a moving target in multi-robot following". In: 19th Mediterranean Conference on Control and Automation, Corfu, Greece, 2011.
[7] L. O'Sullivan, P. Corke, and R. Mahony, "Image-based visual navigation for mobile robots". In: 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 2013.
[8] L. Pari, J. Sebastian, A. Traslosheros, and L. Angel, "Image based visual servoing: Estimated image Jacobian by using fundamental matrix vs analytic Jacobian", Image Analysis and Recognition, vol. 5112, 2008, 706–717.
[9] F. Wen, K. Yuan, W. Zou, X. Chai, and R. Zheng, "Visual navigation of an indoor mobile robot based on a novel artificial landmark system". In: Proceedings of the 2009 IEEE International Conference on Mechatronics and Automation, Changchun, China, 2009.
