Virtual Dressing Room Application
selected piece of clothing is mixed in and fitted to the customer's body.

In [7], an image processing design flow for virtual dressing room applications was presented. The algorithm for implementing the human-friendly interface consists of three stages: detecting and sizing the user's body, detecting reference points based on face detection and augmented reality markers, and superimposing the clothing over the user's image.
In [8], a system and method were presented for virtually fitting a piece of clothing onto an accurate representation of a user's body obtained by 3D scanning. The system allows the user to access a database of garments and accessories available for selection. Finite element analysis was applied to determine the shape of the combined user body and garment and to render an accurate visual representation of the selected garment or accessory.
In [9], a review of recent examples of virtual fitting rooms and their supporting technologies was presented. In [10], a mass-spring system was used to simulate the physical properties of the fabric, with adaptive sewing forces wrapping the pattern around a human model to form a virtual garment. Strain control was implemented to maintain the pattern size through velocity adjustment, combined with a collision detection and response routine.

III. METHODOLOGY
Because of the increasing importance of the Microsoft Kinect image sensor in the market, we used it together with WPF to capture the user's physical measurements.

1) Introduction to Kinect general components
The components of Kinect for Windows are mainly the following (Fig.1):
1. Kinect hardware: the Kinect sensor and the USB hub through which the sensor is connected to the computer;
2. Microsoft Kinect drivers: Windows 8 drivers for the Kinect sensor;
3. Microsoft Kinect SDK V1.0: the core of the Kinect for Windows API and functionality set. It supports fundamental image and device management features such as access to the Kinect sensors connected to the computer, access to image and depth data streams from the Kinect image sensors, and delivery of a processed version of image and depth data to support skeletal tracking.

Fig.1. Hardware and streams provided

The Kinect sensor mainly provides three streams: an image stream, a depth stream and an audio stream, with a detection range of 1.2 to 3.5 meters. At this stage, the first two streams are utilized for developing the human model, the cloth simulation and the GUI.

The middle camera is a 640×480 pixel @ 30 Hz RGB camera that provides the image stream, delivered as a succession of still-image frames to the application. The quality level of the color image determines how quickly data is transferred from the Kinect sensor array to the PC, which makes it easy to optimize the program for different platforms. The available color formats determine whether the image data returned to our application code is encoded as RGB.

The leftmost component is the IR light source, with the corresponding 640×480 pixel @ 30 Hz IR depth-finding camera (a standard CMOS sensor) on its right; together they provide the depth data stream. This stream delivers frames in which the high 13 bits of each pixel give the distance, in millimeters, to the nearest object at that particular x and y coordinate in the depth sensor's field of view.
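To make that depth format concrete, the following is a minimal Python sketch (using NumPy) of extracting the millimeter distances from such a frame. The raw_frame buffer and its acquisition are assumptions for illustration; the paper itself shows no code.

    import numpy as np

    def decode_depth_frame(raw_frame):
        # The high 13 bits of each 16-bit depth pixel carry the distance
        # in millimeters to the nearest object at that (x, y) position;
        # shifting right by 3 bits leaves the millimeter value.
        return raw_frame.astype(np.uint16) >> 3

    # Fake 640x480 frame in which every pixel encodes a 2000 mm reading.
    frame = np.full((480, 640), 2000 << 3, dtype=np.uint16)
    print(decode_depth_frame(frame)[0, 0])  # -> 2000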
• Skeleton API
Among the APIs, the Skeleton API provides information about the location of users standing in front of the Kinect sensor array, with detailed position and orientation information as in Fig.2. These data are provided to application code as a set of 20 points, namely the skeleton positions, which represent the user's current position and pose. Our application can therefore use the skeleton data both to measure the dimensions of different body parts and to control the GUI. Skeleton data are retrieved in the same way as images: by calling a frame retrieval method and passing a buffer. Our application can also use an event model, hooking an event to an event handler in order to capture the frame as soon as a new frame of skeleton data is ready.
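That event model can be sketched in Python as follows; KinectRuntime is a hypothetical stand-in for whatever binding exposes the sensor, not a real SDK class, and the joint names are illustrative.

    # Skeleton data: 20 named joints, each an (x, y, z) position in meters.
    class KinectRuntime:                       # placeholder binding (assumption)
        def __init__(self):
            self._handlers = []
        def on_skeleton_frame_ready(self, handler):
            # Hook an event handler that fires when a new frame is ready.
            self._handlers.append(handler)
        def _frame_ready(self, skeleton):      # would be called by the driver
            for handler in self._handlers:
                handler(skeleton)

    def handle_skeleton(skeleton):
        # All 20 joint positions arrive together; pick what measurement needs.
        print(skeleton["shoulder_left"], skeleton["shoulder_right"])

    runtime = KinectRuntime()
    runtime.on_skeleton_frame_ready(handle_skeleton)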
Microsoft Kinect SDK version 1.0 supports up to four Kinect sensors on a single computer, skeletal tracking, a Near Mode feature that lets the camera recognize objects just 40 cm away, and improved stability. The key points of our processing, described in the following subsections, are to convert the data, to create new joint positions, to prepare the 2D cloth, and to handle the interaction between the human and the 2D cloth. In addition, the application keeps calculating distances as the user's body moves, as in the sketch below:
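A minimal sketch of that distance calculation, assuming each joint is delivered as an (x, y, z) position in meters:

    import math

    def joint_distance_m(a, b):
        # Euclidean distance between two skeleton joints (Python 3.8+).
        return math.dist(a, b)

    # e.g. shoulder width, recomputed on every skeleton frame:
    print(joint_distance_m((-0.21, 0.45, 2.1), (0.18, 0.46, 2.1)))  # ~0.39 m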
1) Convert data
Distances calculated from joint-to-joint coordinates come out in meters, so we have to convert them to pixel data using a meter-to-pixel converter:
1 m = 3779.527559055 pixels.
We convert every distance to pixels in order to map the pixel images onto the user's body.
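The conversion is a single multiplication; a sketch using the constant from the text:

    PIXELS_PER_METER = 3779.527559055  # 1 m = 3779.527559055 pixels

    def meters_to_pixels(distance_m):
        # Map a joint-to-joint distance in meters onto image pixels.
        return distance_m * PIXELS_PER_METER

    print(meters_to_pixels(0.39))  # a 0.39 m shoulder width -> ~1474 px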
Fig.6. Body calculation when the user moves

2) Create new joint positions
At the beginning of the work we decided to use the key points found by the sensor. We faced many problems with some points (hip center, hip left, hip right): they were imprecise and inappropriate for what we needed. After many attempts to resolve this issue, we decided to create new points displaced from the original points to the left, right or downward, making use of the existing point and new point coordinates that differ from the original, and to use the new points in the application. After this operation the result was acceptable to some extent and closer to what we wanted.

When we started working on the trousers and tried to place them on the points, we faced the problem that the hip center point sat dramatically high and that the distance between hip left and hip right, as well as their height, was too narrow. We therefore created a new point directly below the hip center and, at the same time, two new points below and outside hip left and hip right. At first the trousers' placement was unsuitable (too high), but after this algorithm it became largely suitable.
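A sketch of this new-point construction, with offsets in meters; the offset values are illustrative, not the authors' tuned numbers:

    def offset_joint(joint, dx=0.0, dy=0.0, dz=0.0):
        # Derive a new point displaced from a tracked joint.
        x, y, z = joint
        return (x + dx, y + dy, z + dz)

    hip_center = (0.00, 0.20, 2.0)   # positions as delivered by the sensor
    hip_left   = (-0.10, 0.20, 2.0)
    hip_right  = (0.10, 0.20, 2.0)

    new_hip_center = offset_joint(hip_center, dy=-0.08)           # straight down
    new_hip_left   = offset_joint(hip_left,  dx=-0.04, dy=-0.08)  # down and outward
    new_hip_right  = offset_joint(hip_right, dx=+0.04, dy=-0.08)  # down and outward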
3) 2D cloth
We divide the clothes into parts of pixel data so that we can control the movement of the body and the cloth in the upper and lower frames. Using Photoshop CS6, we cut, divide and color the cloth so that it simulates a real body.
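The slicing can also be reproduced programmatically; a sketch using Pillow, where the garment file name and the 50/50 split are assumptions:

    from PIL import Image

    def split_garment(path, split_ratio=0.5):
        # Cut a garment image into upper and lower parts so that each part
        # can follow its own body frame.
        img = Image.open(path)
        w, h = img.size
        split_y = int(h * split_ratio)
        upper = img.crop((0, 0, w, split_y))
        lower = img.crop((0, split_y, w, h))
        return upper, lower

    upper, lower = split_garment("dress.png")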
4) Interaction between human and 2D cloth
• First, we take the skeleton data.
• Second, we take the depth data.
• Third, we take the RGB data.
• Fourth, we measure the upper frame and map the 2D cloth to it.
• Fifth, we measure the lower frame and map the 2D cloth to it.
• Sixth, we measure both the lower and upper frames and map the 2D cloth to both.
A simplified sketch of the mapping step follows.
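This sketch assumes a cloth segment as a Pillow image and a frame already measured in pixels; real placement would also use the depth and pose data:

    def map_cloth_to_frame(cloth, frame_width_px, frame_top_left):
        # Scale the cloth segment to the measured frame width and report
        # where to paste it in the RGB image.
        w, h = cloth.size
        scale = frame_width_px / w
        resized = cloth.resize((int(w * scale), int(h * scale)))
        return resized, frame_top_left

    # Overlay onto the current RGB frame (alpha channel used as the mask):
    # resized, origin = map_cloth_to_frame(upper, 1474, (380, 120))
    # rgb_image.paste(resized, origin, resized)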
2- The Kinect analyzes the person's body and identifies the key points; at the same time we calculate the necessary distances.
3- The application gives you advice on using it properly, which you can also hear as voice guidance.
4- You can now choose the type of clothes that you want to wear (t-shirt, trousers, dress).

Fig.9. Application flow chart

5- Once you click on one of these buttons, a list is shown containing all the pieces related to it on the opposite side, and you can choose an appropriate one and try it.
6- You can resize the pieces to fit your body perfectly.
7- The application will select the appropriate size for you automatically, based on calculation.
8- The application gives you tips on pieces that match harmoniously.
9- At the top of the interface you have options and characteristics.
10- Through this button you can listen to music while trying on clothes.
11- This button turns the music off.
12- From this button you can take a picture of yourself while wearing the piece.
13- To be able to take off the clothes and try others, you should press this button.
VI. CONCLUSION
After applying the cloth model with the improved joint positions, this application has become an acceptable virtual fitting room for users, providing:
1- Human measurements generated according to the user's body standing in front of the Kinect;
2- A flexible, real-looking cloth model for the user to "wear";
3- An easy-to-control, user-friendly and fashionable body-motion-based GUI;
4- Many interesting and useful functionalities for the user.
REFERENCES
[1] Higgins, K. R., Farraro, E. J., Tapley, J., Manickavelu, K., & Mukherjee, S. (2018). U.S. Patent No. 9,898,742. Washington, DC: U.S. Patent and Trademark Office.
[2] Isıkdogan, F., & Kara, G. (2012). A real time virtual dressing room application using Kinect. CMPE537 Computer Vision Course Project.
[3] Korszun, H. A. (1997). U.S. Patent No. 5,680,528. Washington, DC: U.S. Patent and Trademark Office.
[4] Protopsaltou, D., Luible, C., Arevalo, M., & Magnenat-Thalmann, N. (2002). A body and garment creation method for an Internet based virtual fitting room. In Advances in Modelling, Animation and Rendering (pp. 105-122). Springer, London.
[5] Steermann, M. C. (2014). U.S. Patent Application No. 14/215,649.
[6] Kjærside, K., Kortbek, K. J., Hedegaard, H., & Grønbæk, K. (2005). ARDressCode: augmented dressing room with tag-based motion tracking and real-time clothes simulation. In Proceedings of the Central European Multimedia and Virtual Reality Conference.
[7] Martin, C. G., & Oruklu, E. (2012). Human friendly interface design for virtual fitting room applications on Android based mobile devices. Journal of Signal and Information Processing, 3(04), 481.
[8] Curry, S. W., & Sosa, L. A. (2017). U.S. Patent No. 9,773,274. Washington, DC: U.S. Patent and Trademark Office.
[9] Pachoulakis, I., & Kapetanakis, K. (2012). Augmented reality platforms for virtual fitting rooms. The International Journal of Multimedia & Its Applications, 4(4), 35.
[10] Zhong, Y., & Xu, B. (2009). Three-dimensional garment dressing simulation. Textile Research Journal, 79(9), 792-803.