A Shadow Handler in A Video-Based Real-Time Monitoring System
M. Kilger
Siemens AG, Corporate Research and Development, ZFE ST SN 3
Otto-Hahn-Ring 6, D-8000 Munich 83
Abstract

A video-based system for traffic monitoring is presented. The objective of the system is to set up a high-level description of the observed traffic scene.

1. Introduction

Ever increasing traffic volumes demand more efficient and intelligent management and control strategies. As a prerequisite, on-line traffic data acquisition is necessary. Typical requirements are [8]:
A wide range of view of at least 100 m should be monitored,
The speed and position of each vehicle should be reported exactly,
Vehicles should be counted, tracked and classified.
Several authors [5]-[9] have presented similar work, but no one has yet met these requirements under the difficult illumination conditions typically found on a sunny day.
In [5] the vehicles are counted and their position and velocity are calculated. A wide area can be monitored and the algorithms run in real-time. However, this partial model-based approach makes classification extremely difficult. The approach described in [9] also does not classify the vehicles; furthermore, it requires a manual initialisation of the background image and the operating window. In [6] a quite different approach for detecting the vehicles is used: the whole region of interest is searched for regions of constant grey values, and if this grey value differs from that of the road, which was detected before, the region is assumed to be a vehicle. The system detects vehicles well, but with this approach no real-time detection has been possible with low-cost hardware until now. In [8] the vehicle is detected by subtracting two subsequent frames. The camera is positioned vertically above the road, therefore the range of view is limited. The detected vehicles are classified and their position and velocity calculated. In [7] the classification is made by matching the outlines of the detected vehicles with templates; the calculation is done in real-time by special-purpose hardware.
Obviously, the above-mentioned work has only been applied to scenes with ambient illumination. No results have been reported for scenes with bright sunlight and the corresponding shadows.
In general, the shape of the vehicles has been modelled by complex wire-frame models, which do not allow for real-time processing on low-cost hardware. On the other hand, classification by the bounding box is computationally feasible, but it is only robust if the vehicle can be reliably separated from its shadow.
In this paper a video-based traffic monitoring system is presented which includes a shadow handling algorithm. It is shown that traffic monitoring is possible even with low-cost hardware and under difficult illumination conditions. Furthermore, it is shown that a high-level description of the observed scene can be built up with the information obtained from all moving objects. From this description, expected results of low-level image routines (such as tracks etc.) can be computed. The difference between the actual and the expected results can be used to optimise low-level processing parameters or to select among different alternative routines, depending on the situation. Thus far the framework of this on-line
2.5. Tracking
The main purpose of the tracking algorithm is to track
the detected objects and to predict their respective
positions based on a state model [13]. The results of this
algorithm provide the following attributes for each
detected object:
Current position,
Predicted position for the next image frame,
Current speed,
Predicted speed for the next image frame,
Width and
Plausibility.
Our implementation uses a constant velocity state model,
where the middle of the front edge is tracked. The whole
filtering and prediction is based on real-world co-ordinates [2].
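As a hedged illustration of such a constant-velocity state model (the actual filter is described in [13]), the sketch below predicts and corrects the front-edge midpoint in real-world co-ordinates; the structure, the function names and the fixed gains are illustrative assumptions, not the paper's implementation.

/*
 * Minimal constant-velocity track sketch (illustrative only; the actual
 * filter is described in [13]).  Position and speed refer to the midpoint
 * of the vehicle's front edge in real-world co-ordinates.
 */
#include <stdio.h>

typedef struct {
    double x, y;     /* current position (m)                 */
    double vx, vy;   /* current speed (m/s)                  */
    double width;    /* object width (m)                     */
    double plaus;    /* plausibility of the track, in [0, 1] */
} cv_track;

/* Predict the position for the next image frame (constant velocity). */
static void cv_predict(const cv_track *t, double dt, double *px, double *py)
{
    *px = t->x + t->vx * dt;
    *py = t->y + t->vy * dt;
}

/* Blend the prediction with a new measurement (alpha-beta style update;
 * the gains are illustrative, not the paper's values). */
static void cv_update(cv_track *t, double mx, double my, double dt)
{
    const double alpha = 0.85, beta = 0.005;
    double px, py;
    cv_predict(t, dt, &px, &py);
    double rx = mx - px, ry = my - py;   /* innovation */
    t->x = px + alpha * rx;
    t->y = py + alpha * ry;
    t->vx += beta * rx / dt;
    t->vy += beta * ry / dt;
}

int main(void)
{
    cv_track t = { 0.0, 0.0, 0.0, 25.0, 1.8, 1.0 };  /* vehicle at ~25 m/s */
    const double dt = 0.2;                           /* 5 Hz frame rate    */
    for (int k = 1; k <= 3; ++k) {
        double mx = 0.0, my = 25.0 * dt * k;         /* fake measurements  */
        cv_update(&t, mx, my, dt);
        printf("frame %d: pos (%.2f, %.2f), speed (%.2f, %.2f)\n",
               k, t.x, t.y, t.vx, t.vy);
    }
    return 0;
}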
In Fig. 8 the current image with overlaid track numbers is
shown, where the track number identifies each vehicle.
Thereafter a classification of each vehicle can be made.
2.6. Classification
The vehicles have to be classified into the following categories:
trucks,
cars,
motorcycles, bicycles.
The selection of the features best suited for the classification depends on the situation. If the vehicle is moving towards the camera, the first feature to evaluate
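Classification by the bounding box (Section 1), with the vehicle width filtered over time (Section 4), suggests a simple width-threshold rule. The sketch below is only an illustration under that assumption; the thresholds and names are not taken from the paper.

/*
 * Width-threshold classification sketch.  The thresholds are illustrative
 * assumptions; only the use of the (time-filtered) vehicle width comes
 * from the paper.
 */
#include <stdio.h>

typedef enum { VEHICLE_MOTORCYCLE, VEHICLE_CAR, VEHICLE_TRUCK } vehicle_class;

/* Classify a vehicle from its time-filtered real-world width in metres. */
static vehicle_class classify_by_width(double width_m)
{
    if (width_m < 1.0) return VEHICLE_MOTORCYCLE;  /* motorcycles, bicycles */
    if (width_m < 2.2) return VEHICLE_CAR;
    return VEHICLE_TRUCK;
}

int main(void)
{
    static const char *names[] = { "motorcycle/bicycle", "car", "truck" };
    const double widths[] = { 0.7, 1.8, 2.5 };
    for (int i = 0; i < 3; ++i)
        printf("width %.1f m -> %s\n",
               widths[i], names[classify_by_width(widths[i])]);
    return 0;
}

In the running system the width used here would be the estimate filtered over time by the tracking stage.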
3. Implementation
The high-level algorithms (tracking, classification and
communication) are implemented on a PC with an Intel
i486 processor. The low-level routines (detection and
shadow separating algorithm) are implemented on a DSP (a Motorola DSP96002). The high-level algorithms are written in C and the time-critical low-level algorithms are written in DSP assembler.
The resolution of the image is 256x256 pixels with 256
grey levels. The frame rate (with full shadow separation)
is 5 Hz (without code optimisations).
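The division of labour between the DSP and the PC can be illustrated with a brief sketch. The constants follow the figures above, while the function names and their stub bodies are hypothetical stand-ins, not the system's actual interface.

/*
 * Sketch of the per-frame control flow on the PC side.  The function names
 * and stub bodies are hypothetical; only the constants and the split of
 * work between DSP and PC follow the text above.
 */
#include <stdio.h>
#include <stddef.h>

#define IMG_WIDTH     256   /* image resolution                         */
#define IMG_HEIGHT    256
#define GREY_LEVELS   256
#define FRAME_RATE_HZ 5     /* with full shadow separation, unoptimised */

typedef struct { double x, y, width; } object_t;

/* Stub for the low-level stage (detection and shadow separation on the
 * DSP96002); here it simply reports one fake, shadow-free object. */
static size_t dsp_get_objects(object_t *out, size_t max)
{
    if (max == 0) return 0;
    out[0].x = 10.0; out[0].y = 50.0; out[0].width = 1.8;
    return 1;
}

/* Stubs for the high-level routines running on the i486 PC. */
static void track_update(const object_t *obj, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        printf("track: object at (%.1f, %.1f), width %.1f m\n",
               obj[i].x, obj[i].y, obj[i].width);
}

static void classify_and_report(void) { /* classification, communication */ }

int main(void)
{
    object_t objects[64];
    size_t n = dsp_get_objects(objects, 64);  /* low-level result  */
    track_update(objects, n);                 /* tracking          */
    classify_and_report();                    /* high-level output */
    return 0;
}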
4. Results
The algorithms were tested under normal traffic and
daylight conditions for several image sequences lasting
several hours. The vehicle detection rate was over 99%.
In the set-up phase, with no a priori knowledge about the observed scene, the shadow detection rate was 95%; in these cases the shadow was detected correctly. In the remaining 5% of the cases, the shadow hypothesis was rejected after several minutes in 90% of these cases, because the shadow separating algorithm could not find shadows in this direction. The knowledge about the possible shadow directions during the daytime is saved and used for further shadow analysis. If a shadow appears later, which is the normal situation for the monitoring system, the possible direction of the shadow is already well known; the detection rate then increases to 98%. If the direction was found correctly, the shadow could be separated from the vehicles in 90% of these cases. After filtering the width of the vehicle over time, the classification success rate was more than 95%.
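As a hedged illustration of how the saved knowledge about possible shadow directions could be represented, the following sketch keeps a per-hour histogram of confirmed directions; the table layout, the number of direction bins and the function names are assumptions made for illustration only.

/*
 * Sketch of saving shadow-direction knowledge over the day.  The hourly
 * table, the 16 direction bins and the function names are assumptions,
 * not taken from the paper.
 */
#include <stdio.h>

#define HOURS    24
#define DIR_BINS 16   /* shadow direction quantised into 22.5 degree bins */

static unsigned int shadow_hist[HOURS][DIR_BINS];  /* confirmed observations */

/* Record a shadow direction confirmed by the shadow separating algorithm. */
void shadow_knowledge_update(int hour, int dir_bin)
{
    shadow_hist[hour % HOURS][dir_bin % DIR_BINS]++;
}

/* Return the most frequently observed direction bin for this hour, or -1
 * if nothing has been observed yet (set-up phase, no a priori knowledge). */
int shadow_knowledge_query(int hour)
{
    int best = -1;
    unsigned int best_count = 0;
    for (int d = 0; d < DIR_BINS; ++d) {
        unsigned int c = shadow_hist[hour % HOURS][d];
        if (c > best_count) { best_count = c; best = d; }
    }
    return best;
}

int main(void)
{
    shadow_knowledge_update(14, 5);   /* e.g. a shadow confirmed at 14:00 */
    shadow_knowledge_update(14, 5);
    printf("most likely direction bin at 14:00: %d\n",
           shadow_knowledge_query(14));
    return 0;
}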
6. Outlook
Work is currently in progress which extends the
approach as follows:
Switching between various algorithms depending on the illumination conditions,
Adapting low-level image processing parameters by using the knowledge of high-level routines,
Adapting high-level scene description parameters by using the information obtained from a sequence of images over several hours, and
Adaptation of the whole system (continuous parameter adaptation and choice of the best suited algorithms depending on the observed scene).
References
[1] M. Kilger: Video-based traffic monitoring. Proc. 4th International Conference on Image Processing and Its Applications, Maastricht, The Netherlands, Apr. 1992, pp. 89-92.
[2] W. Feiten, A. v. Brandt, G. Lawitzky, I. Leuthäusser: A video-based system for extracting traffic flow parameters. Proc. 13th DAGM Symposium, Munich, 1991, pp. 507-514 (in German).
[3] K.P. Karmann, A. v. Brandt: Moving object segmentation based on adaptive reference images. Proc. EUSIPCO 1990, Barcelona.
[4] R. Gerl: Detection of moving objects in natural environment using image sequences. Diplomarbeit, TU Munich, 1990 (in German).
[5] A. Bielik, T. Abramczuk: Real-time wide traffic monitoring: information reduction and model-based approach. Proc. 6th Scandinavian Conference on Image Analysis, Oulu, Finland, 1989, pp. 1223-1230.
[10]
[11]
[12]
[13]