Innovative Applications in Smart Cities
Editors
Alberto Ochoa-Zezzatti
Universidad Autónoma de Ciudad Juárez
Genoveva Vargas-Solar
French Council of Scientific Research (CNRS)
Laboratory of Informatics on Images and Information Systems
France
Javier Alfonso Espinosa Oviedo
University of Lyon, ERIC Research lab
France
A SCIENCE PUBLISHERS BOOK
First edition published 2021
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
and by CRC Press
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
© 2021 Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, LLC
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted
to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission
to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us
know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized
in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying,
microfilming, and recording, or in any information storage or retrieval system, without written permission from the
publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the
Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not
available on CCC please contact mpkbookspermissions@tandf.co.uk
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification and explanation without intent to infringe.
Library of Congress Cataloging-in-Publication Data
Names: Ochoa Ortiz Zezzatti, Carlos Alberto, 1974- editor. | Vargas-Solar,
Genoveva, 1971- editor. | Espinosa Oviedo, Javier Alfonso, 1983- editor.
Title: Innovative applications in smart cities / editors, Alberto
Ochoa-Zezzatti, Universidad Ciudad Juárez, México, Genoveva
Vargas-Solar, French Council of Scientific Research (CNRS), Laboratory
of Informatics on Images and Information Systems, Cedex, France, Javier
Alfonso Espinosa Oviedo, University of Lyon, ERIC Research Lab, Cedex,
France.
Description: First edition. | Boca Raton : CRC Press, Taylor & Francis
Group, 2021. | “A science publishers book.” | Includes bibliographical
references and index. | Summary: “This research book is a novel,
innovative and adequate reference that compiles interdisciplinary
perspectives about diverse issues related with Industry 4.0 and Smart
Cities on different ways about Intelligent Optimisation, Industrial
Applications on the real world, Social applications and Technology
applications with a different perspective about existing solutions.
Chapters report research results improving Optimisation related with
Smart Manufacturing, Logistics of products and services, Optimisation of
different elements in the time and location, Social Applications to
enjoy our life of a better way and Applications that increase Daily Life
Quality. This book is organised into three scopes of knowledge: (1)
applications of Industry 4.0; (2) applications to improve the life of
the citizens in a Smart City; and finally (3) research associated with
the welfare of the working-age population and their expectations in
their jobs correlated with the welfare - work relationship”-- Provided
by publisher.
Identifiers: LCCN 2021000974 | ISBN 9780367820961 (hardcover)
Subjects: LCSH: Smart cities.
Classification: LCC TD159.4 .I486 2021 | DDC 307.760285--dc23
LC record available at https://lccn.loc.gov/2021000974
Preface

“Innovation” is a motto in the development of current and future Smart Cities. Innovation, understood as novelty, improvement and diffusion, is often promoted by Information and Communication Technologies (ICTs) that make it possible to automate, accelerate and change the perspective on the way economic and “social good” challenges can be addressed.
In economics, innovation is generally considered the result of a process that brings together various novel ideas to affect society and increase competitiveness. In this sense, the economic competitiveness of future Smart Cities societies is defined by increasing consumers’ satisfaction through the right price/quality ratio of products. Therefore, it is necessary to design production workflows that make the best use of resources to produce products and services of the right quality. Companies’ competitiveness refers to their capacity to produce goods and services efficiently (decreasing prices and increasing quality), making their products attractive in global markets. Thus, it is necessary to achieve high productivity levels that increase profitability and generate revenue. Beyond the importance of stable macroeconomic environments that can promote confidence and attract capital and technology, a necessary condition for building competitive societies is to create virtuous circles of creativity that can propose smart and disruptive applications and services able to spread across different social sectors and strata.
Smart Cities aim to create technology-supported environments that make urban, social and industrial spaces friendly, competitive and productive contexts in which natural and material resources are accessible to people, and where citizens can develop their potential skills in the best possible conditions. Since countries in different geographic locations and in different natural, cultural and industrial ecosystems have to adapt their strategies to these conditions, Smart Cities solutions are materialised differently. This book shows samples of experiences where industrial, urban planning, health and sanitary problems are addressed with technology, leading to disruptive data- and artificial intelligence-centred applications. Sharing applied research experiences and results, mostly from Latin American countries, the authors and editors show how they contribute to making cities and new societies smart through scientific development and innovation.
Contents
Preface iii
Prologue vii

Prologue
Khalid Belhajjame
Nowadays, through the democratisation of the Internet of Things and highly connected environments, we are living in the next digitally enriched generation of social media, in which communication and interaction around user-generated content are mainly focused on improving the sustainability of smart cities. Indeed, the development of digital technologies in the different disciplines in which cities operate, either directly or indirectly, alters the expectations of both local administrations and citizens. Every city is a complex ecosystem with many subsystems that make it work: work, food, clothing, housing, offices, entertainment, transport, water, energy, etc. As cities grow, there is more chaos: most decisions are politicised, there are no common standards, and data is overwhelming. The intelligence is sometimes digital, often analogue, and almost inevitably human.
The smart cities initiative aims to better exploit the resources of a city to offer higher-level services to people. Smart cities involve sensing the city’s status and acting in new, intelligent ways at different levels: people, government, cars, transport, communications, energy, buildings, neighbourhoods, resource storage, etc. A smart city is much more than a high-tech city; it is a city that takes advantage of the creativity and potential of new technologies to meet the challenges of urban life. A smart city also helps to solve sensitive issues for its citizens, such as insecurity, urban mobility problems, water resources management and solid waste. It is not the instruments themselves that make a smart city, but everything that is achieved through their implementation.
A vision of the city of the “future”, or even the city of the present, rests on the integration
of science and technology through information systems. This vision implies re-thinking the
relationships between technology, government, city managers, business, academia and the research
community. Conclusions and actions are determined by the social, cultural and economic reality of
the cities and the countries in which they are located. Therefore, beyond smart cities as an object
of study, it is important to think about urban spaces that can host smart cities of different types and
built with different objectives, maybe those that have more priority and that can ensure the well-
being of citizens.
The advent of technology and the existence of the internet have helped transform traditional
cities into cities that are more impressive and interactive. There are terms analogous to “smart
cities”, such as a digital, intelligent, virtual, or ubiquitous city. The definition and understanding of
these terms determine the way challenges are addressed and projects are proposed to go towards a
“smart urban complex ideal” [2,6,21].
Overall, the evolution of a city into a smart city must focus on the fact that network-based
knowledge must not only improve the lives of those connected but also bring those who remain
unconnected into the fold, creating public policies that truly see the problems faced by big cities
and everyday citizens. Several smart cities in the most important capitals of the world and well-known tourist destinations have developed urban computing solutions to address key issues of
city management, like transport, guidance to monuments, e-government, access to leisure and
culture, etc. In this way, citizens of different socio-economic groups, investors and government
administrators can have access to the resources of the city in an optimised and personalised manner.
Thus, a more intelligent and balanced distribution of services is provided thanks to technology that
can improve citizens’ lives and opportunities. This has been more or less possible in cities where
the socio-economic and technological gap is not too great. Normally, solutions assume that cities
provide good quality infrastructure, including internet connection, access to services (energy, water,
roads, health), housing, urban spaces, etc. Yet, not all cities are developed in these advantageous
conditions, there are regions in the world where exclusion prevails in cities and urban spaces, where
people have little or no access to electricity, technology and connectivity, and where services are
not regulated. It is in this type of city that smart cities technology and solutions face their greatest
challenges.
In Mexico, Smart Cities projects have sought to promote sustainable urban development through innovation and technology, with the objective of improving inhabitants’ quality of life. The areas promoted in Mexican Smart Cities are quite diverse, ranging from environment, safety and urban design to tourism and leisure. This book describes solutions to problems in these areas. Chapters describing use cases also analyse the degree of improvement in citizens’ quality of life, human logistics within urban spaces, logistics strategies, and the access to and distribution of services like transport, health or assistance during disasters and critical events. The experiments described in the chapters of the book show how academics, inspired by living labs promoted in other cities, have managed to study major smart cities problems and provide solutions according to the characteristics of the cities, the investment of governments and industry, and the willingness of people to participate in this change of paradigm. Indeed, citizen participation is a cornerstone that must not be left aside. After all, it is citizens who begin the transformation and who constantly evaluate the results of information integration. Citizen satisfaction is the best way to calibrate a smart city’s performance.
Urban computing1 is defined as the technology for acquisition, integration, and analysis of
big and heterogeneous data generated by a diversity of sources in urban spaces, such as sensors,
devices, vehicles, buildings and humans, for tackling the major issues that cities face. The study
of smart cities as complex systems is addressed through this notion of urban computing [23].
Urban computing brings computational techniques to bear on urban challenges such as pollution,
energy consumption, and traffic congestion. Using today’s large-scale computing infrastructure
and data gathered from sensing technologies, urban computing combines computer science with
urban planning, transportation, environmental science, sociology, and other areas of urban studies,
tackling specific problems with concrete methodologies in a data-centric computing framework.
1 Urban Computing and Smart Cities Applications for the Knowledge Society. Available from: https://www.researchgate.net/publication/301271847_Urban_Computing_and_Smart_Cities_Applications_for_the_Knowledge_Society [accessed 21 Jul 2020].
Table 1 presents a summary of the families of applications that can be developed in the context
of urban computing: urban planning, transportation, environment, social and entertainment, energy,
economy, and safety and security. Often, these applications can be organised on top of a general urban computing reference architecture, with enabling platforms that provide the underlying technical infrastructure [16] necessary for these applications to work and be useful for the different
actors populating and managing urban territories.
In urban computing, it is vital to be able to predict the impact of change in a smart city’s
setting. For instance, how will a region’s traffic change if a new road is built there? To what extent
will air pollution be reduced if we remove a factory from a city? How will people’s travel patterns
be affected if a new subway line is launched? Being able to answer these kinds of questions with
automated and unobtrusive technologies will be tremendously helpful to inform governmental
officials’ and city planners’ decision making. Unfortunately, the intervention-based analysis and
prediction technology that can estimate the impact of change in advance by plugging in and out
some factors in a computing framework is not yet well studied. The objective would be to use this technology to reduce exclusion and make citizens’ lives more equitable. How to guide people through urban spaces with little or no land registry? How to compute people’s commutes from home to work
when transport is not completely regulated? How to give access to services through applications that
can be accessible to all? For example, the Latin American cities that appear first in Smart Cities rankings (Buenos Aires, Santiago de Chile, São Paulo, Mexico City) are megacities of more than 10
million inhabitants. For many Smart Cities ideologies, the big urban “spots” are the antithesis of the
ideas and values of a truly smart city. Thus, there is room for scientific and technological innovation
to design smart cities solutions in these types of cities and thereby tackle requirements that will make
citizens’ lives better. This book is original in this sense because it describes smart cities solutions for
problems in this type of city. It provides use case examples of prediction solutions for addressing not
only smart cities issues, but urban computing as a whole.
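Questions like these can be explored, long before full intervention-analysis frameworks exist, with simple macroscopic models. As an illustrative sketch (not taken from this book, with invented figures), the classic Bureau of Public Roads (BPR) volume-delay function estimates how corridor travel time responds to an intervention such as a new road that adds capacity:

```python
# Toy "what-if" estimate for a traffic intervention, using the standard
# BPR volume-delay function. All numbers are hypothetical.

def travel_time(flow, capacity, free_flow_time=10.0, alpha=0.15, beta=4):
    """BPR function: travel time grows with the flow/capacity ratio."""
    return free_flow_time * (1 + alpha * (flow / capacity) ** beta)

# Baseline: 1800 vehicles/h on a corridor with capacity 1500 vehicles/h.
before = travel_time(flow=1800, capacity=1500)

# Intervention: a new parallel road raises total capacity to 2500 vehicles/h.
after = travel_time(flow=1800, capacity=2500)

print(f"travel time before: {before:.1f} min, after: {after:.1f} min")
```

Real intervention analysis would, of course, calibrate such models against observed traffic data; the point here is only that “what-if” questions can be framed as plugging modified factors into a computational model and comparing outcomes.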
the Middle East and Africa. In Canada, Ottawa’s “Smart Capital” project involves enhancing
businesses, local government, and communities using Internet resources. Quebec was a city highly
dependent upon its provincial government because of its weak industry until the early 1990s when
the city government kicked off a public-private partnership to support a growing multimedia sector
and high-tech entrepreneurship. In the United States, Riverside (California) has been improving
traffic flow and replacing ageing water, sewer and electric infrastructure through a tech-based
transformation. In San Diego and San Francisco, ICT has been a major factor in allowing these cities to claim to be “Cities of the Future” for the last 15 years. Concerning Latin America, the Smart
Cities council recognizes the eight smartest cities in Latin America: Santiago (Chile), Mexico City
(Mexico), Bogota (Colombia), Buenos Aires (Argentina), Rio de Janeiro (Brazil), Curitiba (Brazil),
Medellin (Colombia) and Montevideo (Uruguay). Each city focusses on different aspects, including
automating pricing depending on traffic, smart and eco-buildings, electrical and eco-subway,
public Wi-Fi and public tech job programs, weather, crime, emergency monitoring, university and
educational programs.
In Mexico, the first successful Smart City project “Ciudad Maderas” (2013–2020) was
developed in Querétaro in the central part of the country. This project included the construction
of technology companies, hotels, schools, shopping centres, residential areas, churches and
huge urban spaces dedicated as a natural reserve in El Marques district. The purpose has been to
integrate technological developments into the daily lives of Queretaro’s inhabitants. Concerning
e-governance, the State Government has launched the Querétaro Ciudad Digital Application. The
purpose of this application is to narrow the gap between the citizens and the government. The
application is regarded worldwide as second-to-none technology. Cities like Mexico City have
focused on key services, such as transportation. A wide range of applications is readily available
to residents to accomplish their daily journeys from A to B: Shared Travel Services, Uber, Easy,
Cabify. Since 2014, the city of Guadalajara has been working on the Creative Digital City project
to promote the digital and creative industry in the region. The city of Tequila, also in the state of
Jalisco, promotes the project “Intelligent Tequila” for attracting tourism to the region. One of the
smart technologies already in use is heat sensors, which help to measure massive concentrations of people in public places. In Puebla, the Smart Quarter project develops solutions for improving mobility, safety
and life quality for the inhabitants; for example, offering free Wi-Fi in public areas and bike tracks equipped with video-monitoring and alarm systems.
The European Union has put in place smart city actions in several cities, including Barcelona,
Amsterdam, Berlin, Manchester, Edinburgh, and Bath. In the United Kingdom, almost 15 years ago,
Southampton claimed to be the country’s first smart city after the development of its multi-application
smartcard for public transportation, recreation, and leisure-related transactions. Similarly, Tallinn
has developed a large-scale digital skills training program, extensive e-government, and an award-
winning smart ID card. This city is the centre of economic development for all of Estonia, harnessing
ICT by fostering high-tech parks. The European Commission has introduced smart cities in line 5
of the Seventh Framework Program for Research and Technological Development. This program
provides financial support to facilitate the implementation of a Strategic Energy Technology plan [4]
through schemes related to “Smart cities and communities”.
Statistics of the Chinese Smart Cities Forum report six provinces and 51 cities have included
Smart Cities in their government work reports in China [14,17]; of these, 36 are under new
concentrated construction. Chinese smart cities are distributed densely over the Pearl and Yangtze
River Deltas, Bohai Rim, and the Midwest area. Moreover, smart cities initiatives are spread in all
first-tier cities, such as Beijing, Shanghai, and Shenzhen. The general approach followed in these cities is to introduce some ICT during the construction of new infrastructure, with some attention
to environmental issues but limited attention to social aspects. A modern hi-tech park in Wuhan is considered a multi-functional, ecological urban complex. Wuhan is a high-tech and self-sufficient eco-smart city designed for exploring the future of the city: a natural and healthy environment and an incubator of high culture, expanding the meaning of the modern hi-tech park [18]. Taipei City clarified that the government must provide an integrated
infrastructure with ICT applications and services [10]. China has encouraged the transition to smart urbanism by improving public services and government efficiency, thereby enhancing the economic development of its cities. In 2008, a “digital plateau” was proposed; in 2009, more than ten provinces set goals to build a smart city. China has improved city construction, industrial structure, and social development. To start the implementation strategy, a good plan is necessary, as well as awareness of the importance of smart city construction.
Several Southeast Asian cities, such as Singapore [11], Taiwan, and Hong Kong, are following
a similar approach, promoting economic growth through smart city programs. Singapore’s IT2000
plan was designed to create an “intelligent island,” with information technology transforming work,
life, and play. More recently, Singapore has extensively been dedicated to implementing its Master
Plan in 2015 and has already completed the Wireless@SG goal of providing free mobile Internet
access anywhere in the city [7]. Taoyuan in Taiwan is supporting its economy to improve the quality
of living through a series of government projects such as E-Taoyuan and U-Taoyuan for creating
e-governance and ubiquitous possibilities.
South Korea is building its largest smart city initiative, Songdo, a new town built from the ground up over the last decade, which plans to house 75,000 inhabitants [13]. The Songdo project
aims at developing the most wired smart city in the world. The project is also focused on buildings
and has collaborated with developers to build its networking technology into new buildings. These
buildings will include telepresence capabilities and many new technologies. The plan includes
installing telepresence in every apartment to create an urban space in which every resident can
transmit information using various devices [12], whereas a city central brain should manage the
huge amount of information [19]. This domestic offering is only the first step; Cisco aims to link
energy, communications, traffic, and security systems into one smart network [20]. At present, there
are 13 projects in progress towards the smart city initiatives of New Songdo [9].
Despite an increase in projects and research to create smart cities, it is still difficult to provide
cities with all the features, applications, and services required. Future smart cities will require a re-
thinking of the relationships between technology, government, city managers, business, academia
and the research community. This book shows some interesting results of use cases where different
communities and sectors interact to find alternative solutions for cities that are willing to become
smart and address their problems innovatively and effectively.
platform can consolidate information on the use and operation of all means of transport at the local
authority level (bus, tramway, bicycles, car, transport conditions). Thanks to this consolidated and
interpreted information, the city can offer its citizens the most appropriate solution, taking into
account the context and requirements of the traveller. In terms of Data Science, just as for parking in
the city, intermodality is first and foremost a subject of optimisation of resources under constraints.
More broadly, the addition of new data (video, traffic conditions) and the identification of non-
linear patterns (formation of traffic jams, congestion measurement) makes the subject rich and
complex. Finally, scoring and customer-knowledge algorithms are exploited to take user preferences into account and improve recommendations: it is perhaps unnecessary to systematically propose bicycle travel times to a daily bus user.
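As a toy illustration of this idea (all names, weights and itineraries below are invented for the example), a recommender can rank intermodal itineraries by a preference-weighted score while skipping modes a traveller has never used, such as cycling for a daily bus rider:

```python
# Hypothetical sketch of preference-weighted intermodal recommendation.
# Weights and itineraries are invented; lower score is better.

def score(itinerary, prefs):
    """Weighted sum of duration, number of transfers and fare."""
    return (prefs["time"] * itinerary["minutes"]
            + prefs["transfers"] * itinerary["transfers"]
            + prefs["cost"] * itinerary["fare"])

def recommend(itineraries, prefs, used_modes):
    """Return the best-scoring itinerary among the modes the user actually uses."""
    candidates = [i for i in itineraries if i["mode"] in used_modes]
    return min(candidates, key=lambda i: score(i, prefs))

itineraries = [
    {"mode": "bus",     "minutes": 35, "transfers": 1, "fare": 1.5},
    {"mode": "bicycle", "minutes": 25, "transfers": 0, "fare": 0.0},
    {"mode": "tram",    "minutes": 30, "transfers": 2, "fare": 1.8},
]
prefs = {"time": 1.0, "transfers": 5.0, "cost": 2.0}

# A daily bus/tram user: the faster bicycle option is never proposed.
best = recommend(itineraries, prefs, used_modes={"bus", "tram"})
print(best["mode"])  # prints "bus"
```

A production system would learn the weights and the set of used modes from travel histories rather than hard-coding them; the sketch only shows how scoring turns the constrained-optimisation view of intermodality into a ranking.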
Providing fluid movement of people within urban spaces is another important problem that can be solved through data science. Focussing on traffic management, a daily problem in cities: the coordination of traffic lights can be an effective means of regulating road traffic and limiting
congestion situations. By smoothing the flow and reducing the number of vehicles passing through
bottlenecks, it is possible to increase the daily flow and reduce the level of traffic congestion. For
instance, the reduction in speed on the ring road has had the effect of reducing traffic jams and
congestion. Despite the counter-intuitive side of this effect, the physical explanation comes from
the fact that by reducing the maximum speed, the amplitude of speed variations has been reduced
(fluid mechanics enthusiasts already know that it is better to be in laminar rather than turbulent flow
situations). Another contribution of the knowledge of traffic conditions is the use of models to test
the impact of road works or the construction of new infrastructures. In terms of data science, the
aim is to identify forms of congestion and to detect and use the most effective means of counteracting them. Among these measures, one can generally act on the maximum speed allowed or on the regulation of
the timing of traffic lights according to traffic conditions (detected via cameras or GPS signals),
visibility conditions and, in general, the weather, the presence of pedestrians, the time of day or
other parameters that emerge as significant. Perhaps the most direct benefit expected is the reduction
of traffic jams and slowdowns and, indeed, the time required for a trip. Other collateral benefits are
the reduction of air and noise pollution and a reduction in the number of accidents caused by traffic
problems.
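One concrete, classical instance of such timing regulation is Webster’s formula for the optimal fixed cycle length of a signalised intersection. The sketch below uses invented flow figures; in practice, the flow ratios would be derived from the traffic conditions detected via cameras or GPS signals:

```python
# Hedged sketch: Webster's formula for the optimal fixed cycle length
# of a signalised intersection. The numeric inputs are hypothetical.

def webster_cycle(lost_time_s, flow_ratios):
    """Optimal cycle length C0 = (1.5 * L + 5) / (1 - Y), where L is the
    total lost time per cycle (s) and Y the sum of critical flow ratios."""
    Y = sum(flow_ratios)
    if Y >= 1.0:
        raise ValueError("intersection is over-saturated (Y >= 1)")
    return (1.5 * lost_time_s + 5) / (1.0 - Y)

# Two-phase intersection with 10 s total lost time and measured
# flow/saturation ratios of 0.30 and 0.35 on the critical approaches.
cycle = webster_cycle(lost_time_s=10, flow_ratios=[0.30, 0.35])
print(f"optimal cycle length: {cycle:.0f} s")  # prints "optimal cycle length: 57 s"
```

Adaptive signal control goes further by recomputing such timings continuously from live measurements, which is exactly where the sensing infrastructure described above feeds in.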
This book provides examples of solutions that can be proposed through data science solutions
that apply machine learning, data mining and artificial intelligence methods to data intended to make
people live better in their daily lives as citizens of cities with unfair distribution of services. It also
shows how data science can contribute to human logistics problems, particularly in the presence of critical events happening in cities with little infrastructure or with a huge population. The use cases come from the Mexican context, but similar situations arise across Latin American cities and, thus, the solutions can be reproduced in cities with similar characteristics. Having systems and solutions that promote foresight and planning is key in these kinds of urban places.
in the Latin American population, like obesity, breast cancer, colour-blindness and mental
workload.
- Part II: Applications to Improve a Smart City. The way people move around in cities and urban
spaces gives clues as to when many events of interest come up and in which hotspots. Part II
of the book consists of five chapters that describe technology-based techniques and approaches
to observing citizens’ behaviour and their quality of life. This part starts with an initial chapter
that surveys techniques for dealing with smart cities data, including collection strategies,
indexing and exploiting them through applications. Then, the four remaining chapters address
the analysis of citizens’ behaviour with the aim of proposing strategies for dealing with human
logistics in the presence of critical events (human avalanches, floods) and the way services
distribution to the population can be improved (distribution of shelter and evacuation routes).
- Part III: Industry 4.0, Logistics 4.0 and Smart Manufacturing. Mexico is the Latin American country with the highest number of Smart Cities, which offer economic advantages because they represent niches enabling potential economic activities that have not yet been considered; for example, the sustainable promotion of 4.0 technology in the energy and agricultural sectors. A wide range of growth opportunities lies ahead for companies in these categories. In the long run, Smart Cities push the Mexican economy towards diversification. This part of the book consists of six chapters that address the important services activating the economy of smart cities. Indeed, smart manufacturing is an important activator of industrial smart cities, enabled by techniques being developed in Industry and Logistics 4.0 ecosystems. Across its six chapters, this part addresses algorithms for managing orders in warehouses and supply chains in sectors like the automotive industry and retail, and the impact of using technology and data analytics methods in the aquaculture industry.
In conclusion, this book is important because it shows that key problems in non-ideal urban contexts, like those that characterize some cities in Latin America and particularly in Mexico, can find solutions from a smart cities perspective. Being smart is perhaps a good opportunity for academia, government and industry to work with society and find alternatives that reduce unequal access to services and that address sustainability and exclusion in complex urban megalopolises.
Bibliography
[1] Abdulrahman, A., Meshal, A. and Imad, F.T.A. 2012. Smart Cities: Survey. Journal of Advanced Computer Science and Technology Research, 2(2): 79–90.
[2] Deakin, M. and Al Waer, H. 2011. From intelligent to smart cities. Intelligent Buildings International, 3(3): 140–152.
[3] Douglas, D. and Peucker, T. 1973. Algorithms for the reduction of the number of points required to represent a line or its caricature. Canadian Cartographer, 10(2): 112–122.
[4] Duravkin, E. 2010. Using SOA for development of an information system “Smart city”. International Conference on Modern Problems of Radio Engineering, Telecommunications and Computer Science (TCSET).
[5] Giffinger, R., Fertner, C., Kramar, H., Kalasek, R., Pichler-Milanović, N. and Meijers, E. 2007. Smart Cities: Ranking of European Medium-sized Cities. Vienna: Centre of Regional Science.
[6] Gil-Castineira, F., Costa-Montenegro, E., Gonzalez-Castano, F.J., Lopez-Bravo, C., Ojala, T. and Bose, R. 2011. Experiences inside the Ubiquitous Oulu Smart City. Computer, 44(6): 48–55.
[7] IDA Singapore. 2012. iN2015 Masterplan. http://www.ida.gov.sg/~/media/Files/Infocomm%20Landscape/iN2015/Reports/realisingthevisionin2015.pdf.
[8] Ishida, T. 1999. Understanding digital cities. Kyoto Workshop on Digital Cities. Springer, Berlin, Heidelberg.
[9] Ishida, T. 2002. Digital City Kyoto. Communications of the ACM, 45(7): 78–81.
[10] Jin Goo, K., Ju Wook, J., Chang Ho, Y. and Yong Woo, L. 2011. A network management system for u-city. 13th International Conference on Advanced Communication Technology (ICACT).
[11] Kloeckl, K., Senn, O. and Ratti, C. 2012. Enabling the real-time city: LIVE Singapore! Journal of Urban Technology, 19(2): 89–112.
[12] Kuikkaniemi, K., Jacucci, G., Turpeinen, M., Hoggan, E., Mu, X. et al. 2011. From Space to Stage: How Interactive Screens Will Change Urban Life. Computer, 44(6): 40–47.
[13] Lee, J.H., Hancock, M.G. and Hu, M. 2014. Towards an effective framework for building smart cities: Lessons from Seoul and San Francisco. Technological Forecasting and Social Change.
[14] Liu, P. and Peng, Z. 2013. Smart Cities in China. IEEE Computer Society Digital Library, http://doi.ieeecomputersociety.org/10.1109/MC.2013.149.
[15] Pan, Y. et al. 2016. Urban big data and the development of city intelligence. Engineering, 2(2): 171–178.
[16] Psyllidis, A. et al. 2015. A platform for urban analytics and semantic data integration in city planning. International Conference on Computer-Aided Architectural Design Futures. Springer, Berlin, Heidelberg.
[17] Shi, L. 2011. The Smart City’s systematic application and implementation in China. International Conference on Business Management and Electronic Information (BMEI).
[18] Shidan, C. and Siqi, X. 2011. Making Eco-Smart City in the future. International Conference on Consumer Electronics, Communications and Networks (CECNet).
[19] Shwayri, S.T. 2013. A model Korean ubiquitous eco-city? The politics of making Songdo. Journal of Urban Technology, 20(1): 39–55.
[20] Strickland, E. 2011. Cisco bets on South Korean smart city. IEEE Spectrum, 48(8): 11–12.
[21] Townsend, A.M. 2013. Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia. New York: W.W. Norton & Company.
[22] Vanolo, A. 2014. Smartmentality: The smart city as disciplinary strategy. Urban Studies, 51(5): 883–898.
[23] Zheng, Y. et al. 2014. Urban computing: concepts, methodologies, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 5(3): 1–55.
PART I
One of the fundamental aspects of smart cities is the improvement of the health sector, providing
citizens with better care and with the prevention and detection of diseases. Breast cancer is one of
the most common diseases and the one with the highest incidence in women worldwide. A way for
smart cities to improve the quality of life of their citizens, especially women, is to diagnose breast
tumors in shorter periods with simpler, automated methods. In this chapter, a new deep learning
architecture is proposed to segment breast cancer tumors.
1. Introduction
According to the World Health Organization (WHO), breast cancer is one of the most common
diseases and the one with the highest incidence in women worldwide, with about 522 thousand deaths
estimated annually according to data collected in 2012 (OMS, 2019). In smart cities, the health sector
seeks to improve the quality of life of citizens (Kashif et al., 2020; Abdelaziz et al., 2019;
Rathee et al., 2019); thus, there is a need to diagnose breast tumors in shorter periods with simpler,
automated methods that can produce accurate results. The most common method for early
diagnosis is mammographic imaging; however, these images usually have noise and
low contrast, which can make it difficult for the doctor to distinguish different tissues. Some
mammograms contain both malignant tissues and normal dense tissues, but it is difficult to
separate them by applying simple thresholds when automatic methods are used (Villalba,
2016). Because of these problems, it is necessary to develop approaches that can correctly
identify the malignant tissues, which present higher intensity values compared to the background
and other regions of the breast. Regions where normal dense tissue has
intensities similar to the tumor region must also be excluded (Singh et al., 2015).
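To illustrate why a simple global threshold struggles here, the following minimal NumPy sketch (entirely synthetic data, not drawn from any mammography database) builds an image with a bright "tumor" patch and a normal dense-tissue patch of overlapping intensity; a single threshold flags both regions:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(40, 5, size=(64, 64))               # dark background
img[10:20, 10:20] = rng.normal(200, 10, (10, 10))    # "tumor" patch
img[40:50, 40:50] = rng.normal(190, 10, (10, 10))    # normal dense tissue

mask = img > 150                 # simple global threshold
tumor_only = np.zeros_like(mask)
tumor_only[10:20, 10:20] = True

# The threshold flags BOTH bright regions, so dense tissue is wrongly
# segmented as tumor -- motivating learned segmentation methods.
print(int(mask.sum()), int(tumor_only.sum()))
```

The thresholded mask contains roughly twice as many pixels as the true tumor region, since the dense-tissue patch passes the same threshold.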
Universidad Autónoma de Ciudad Juárez, Av. Hermanos Escobar, Omega, 32410 Cd Juárez, Chihuahua, México.
* Corresponding author: jose.mejia@uacj.mx
2 Innovative Applications in Smart Cities
2. Literature Review
Several diagnostic methods exist for timely detection, mammography being the one most used
by medical staff because of its effective and safe results. The examination is carried out by firm
compression of the breast between two plates, using ionizing radiation to obtain images of breast
tissue, which can be interpreted as benign or malignant (Marinovich et al., 2018). Here, we review
some methods for automatic segmentation/detection of malignant masses by processing the
mammography image.
In (Hanmandlu et al., 2010), two semi-automated methods were compared: the level-set method
and marker-controlled watershed. Although neither method is very accurate, both were found to
have short processing times.
In the work of (Lempka et al., 2013), two automated methods were presented based on the
improvement of region growing and the segmentation with cellular neural networks. In the first
stage, the segmentation was carried out through an automated region growing whose threshold is
obtained through an artificial neural network. In the second method, segmentation is performed
by cellular neural networks, whose parameters are determined by a genetic algorithm (GA).
Intensity, texture and shape characteristics are extracted from segmented tumors. The GA is used
to select appropriate functions from the set of extracted functions. In the next stage, ANNs are
used to classify mammograms as benign or malignant. Finally, they evaluate the performance of
different classifiers with the proposed methods, such as the multilayer perceptron (MLP), support
vector machines (SVM) and K-nearest neighbors (KNN). Among these, the MLP produced better
diagnostic performance in both methods. The sensitivity, specificity and accuracy indices obtained
are 96.87%, 95.94%, and 96.47%, respectively (Lempka et al., 2013).
In (Wang et al., 2014), a breast tumor detection algorithm for digital mammography was proposed,
based on the extreme learning machine (ELM). First, they use a median filter for noise reduction and
contrast improvement as data pre-processing. Next, wavelet transforms, morphological operations,
and the region growing are used for the segmentation of the edge of the breast tumor. Then, they extract
five textural features and five morphological features. Finally, they use ELM classifier to detect breast
tumors. In the comparison of the detection of breast tumors based on SVM, with the detection of breast
tumors based on ELM, not only does ELM have a better classification accuracy than the SVM, but
also a much-improved training speed. Also, the efficiency of classification, training and performance
testing of SVM and ELM were compared and the total number of errors for ELM was 84 while the
total number of errors for SVM was 96, showing that ELM has better abilities than SVM (Wang
et al., 2014).
In (Pereira et al., 2014), a computational method is presented as an aid to segmentation and mass
detection in mammographic images. First, a pre-processing method based on the wavelet transform
and Wiener filtering was applied for image noise removal and enhancement. Subsequently, masses
were detected and segmented through the use of thresholds, the wavelet transform, and a genetic
algorithm. The method was quantitatively evaluated using the area overlap metric (AOM); the
mean ± standard deviation of AOM for the proposed method was 79.2% ± 8%. The
Segmentation of Mammogram masses for Smart Cities Health Systems 3
method they propose showed great potential to be used as a basis for mass segmentation of
mammograms in the craniocaudal and mediolateral oblique views.
The work of (Pandey et al., 2018) presents an automatic segmentation approach carried out
in three steps. First, they used adaptive Wiener filtering and k-means clustering to minimize the
influence of noise, preserve edges, and eliminate unwanted artifacts. In the second step, they excluded
the heart area using a level set based on the active contour, where the initial contour points were
determined by the maximum entropy threshold and the convolution method. Finally, the pectoral
muscle is removed through morphological operations and local adaptive thresholds. The proposed
method was validated using 1350 breast images of 15 women, showing excellent segmentation
results compared to manually drawn semi-automated methods.
In (Chougrad et al., 2018) a computerized diagnostic system was developed based on deep
convolutional neural networks (CNN) that use transfer learning, which is ideal for handling small
data sets, such as medical images. After training some CNN architectures, they used precision and
AUC parameters to evaluate images from different databases, such as DDSM, INbreast, and BCDR.
The Inception v3 CNN model obtained the best results, with an accuracy of 98.94%, and so
it was used as the basis to build the Breast Cancer Screening Framework. To evaluate the proposed
CAD system and its efficiency to classify new images, they tested it in a database different from
those used previously (MIAS) and obtained an accuracy of 98.23% and 0.99 AUC (Chougrad
et al., 2018).
3. Methodology
In this section, we present the methodology used for the development of an architecture based on
deep learning neural networks, for segmenting tumors in digital mammography.
The schematic of the methodology is presented in Figure 1.
The images used in this chapter are from the CBIS-DDSM, a subset of the Digital Database
for Screening Mammography (DDSM), a database with 2620 mammography studies. It
contains normal, benign and malignant cases with verified pathological information. This database
is a useful tool in the testing and development of decision support systems. The CBIS-DDSM
collection includes a subset of the DDSM data selected by a trained medical doctor. The images
were decompressed and converted to the DICOM format. The database also includes updated ROI
segmentation and delimitation tables, and pathological diagnosis for training data (Lee et al., 2017).
In ROI annotations for anomalies in the CBIS-DDSM data subset, they provide the exact position
of the lesions and their coordinates to generate the segmentation mask. In Figure 2, three images
obtained from the CBIS-DDSM database are shown, with tumors.
Figure 2: Examples of images obtained from the CBIS-DDSM database; images are shown with false color.
Each image was cropped to the region of the tumor, since the original size was too large to process. The procedure was repeated for each image in the database.
In Figure 3, it is shown an image from the database, and the clipping of the region of the tumor and
its mask.
Figure 3: Pre-processing of acquired images. (a) Original mammographic image. (b) Trim delimiting the tumour area. (c)
Tumour mask located by the given coordinates.
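The cropping step can be sketched as follows; `crop_roi`, the 256-pixel patch size, and the lesion coordinates are illustrative assumptions for this sketch, not values taken from CBIS-DDSM:

```python
import numpy as np

def crop_roi(image, center_row, center_col, size=256):
    """Crop a size x size patch centered on the annotated lesion,
    clamping the window so it stays inside the image bounds."""
    h, w = image.shape
    half = size // 2
    r0 = min(max(center_row - half, 0), max(h - size, 0))
    c0 = min(max(center_col - half, 0), max(w - size, 0))
    return image[r0:r0 + size, c0:c0 + size]

# Hypothetical example: a 1024x1024 mammogram with a lesion at (300, 700).
mammogram = np.zeros((1024, 1024), dtype=np.uint16)
patch = crop_roi(mammogram, 300, 700)
print(patch.shape)  # (256, 256)
```

The same clamping keeps the patch size fixed even when the annotated lesion lies near an image border.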
For segmentation evaluation, the Intersection over Union (IoU) metric was used. This is
an evaluation metric commonly used to measure the accuracy of an object detector on a particular
data set; calculating it is as simple as dividing the area of overlap between bounding boxes
by the area of their union (Palomino, 2010; Rahman et al., 2016). The metric is also frequently used
to assess the performance of convolutional neural network segmentation (Rezatofighi et al., 2019;
Rosebrock, 2019).
The formula to calculate the IoU is
IoU = (Area of Overlap)/(Area of Union) (1)
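A minimal NumPy implementation of Equation (1) for binary segmentation masks might look like this (the convention for two empty masks is our assumption):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union for two boolean segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:                 # both masks empty: define IoU = 1
        return 1.0
    return float(np.logical_and(pred, truth).sum() / union)

a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True   # 16 pixels
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True   # 16 pixels
print(iou(a, b))   # overlap 4 / union 28 = 0.142857...
```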
Another metric used is the positive predictive value (PPV) (Hay, 1988; Styner, 2008):
PPV = TP/(TP + FP) (2)
Where:
• TP are the true positive pixels, i.e., pixels that are part of the mass (object) and detected as mass;
• FP are false positive pixels, i.e., pixels that are not part of the mass (background) and detected
as mass.
We also used the true positive rate (TPR), where a true positive is a pixel that is
correctly predicted to belong to the given class (González García, 2019). Its formula is:
TPR = TP/P = TP/(TP + FN) (3)
Where:
• FN are false negative pixels, i.e., pixels that are part of the mass (object) but are classified as
background.
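Equations (2) and (3) can be computed directly from the confusion counts of two boolean masks; the masks below are illustrative, not data from the chapter:

```python
import numpy as np

def ppv_tpr(pred, truth):
    """Precision (PPV, Eq. 2) and true positive rate (TPR, Eq. 3)
    computed from predicted and ground-truth boolean masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()    # mass predicted as mass
    fp = np.logical_and(pred, ~truth).sum()   # background predicted as mass
    fn = np.logical_and(~pred, truth).sum()   # mass predicted as background
    ppv = float(tp / (tp + fp)) if tp + fp else 0.0
    tpr = float(tp / (tp + fn)) if tp + fn else 0.0
    return ppv, tpr

truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True  # 16 mass px
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True    # shifted 1 row
print(ppv_tpr(pred, truth))   # TP=12, FP=4, FN=4 -> (0.75, 0.75)
```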
4. Results
This section presents the results obtained from the network on the test set, which consists of
150 images. In Figures 5 and 6, two images of the test set and the output of the proposed network
are presented.
In Figure 7, we show three graphical examples of the evaluation of the network. The green color
represents the true positives, the red color the false positives and the blue color the false negatives.
Figure 5: Image “1” of the test set. (a) Original image, (b) network output, (c) real mask delineation by a medical doctor.
Figure 6: Image “1” of the test set. (a) Original image, (b) network output, (c) real mask delineation by medical doctor.
Figure 7: Each row is the input and output for the same mammogram. First column is the input mammogram, second
column is the true mask, third column is the output mask from the network and finally the fourth column shows the TP, FP,
and FN in green, red and blue, respectively.
Table 1 shows the IoU metric values obtained for the first eight images of the database processed
by the network.
Table 2 presents the precision or positive predictive value (PPV) obtained for the first eight
images of the database.
The True Positive Rate (TPR) values obtained are presented in Table 3.
The averages of the metrics over all tests were as follows: IoU averaged 0.77, PPV averaged
0.85, and TPR averaged 0.88.
In general, the proposed architecture shows promising results; the total of TP is much greater
than the sum of FP and FN. From Figure 7, it can be seen that most of the tumor mass is correctly
detected and segmented, giving high TPR values. This is enough for a specialized doctor to note
a possible mass in the mammogram. In addition, the calculation of the total mass area could serve
as a quantitative measure to evaluate the response of the tumor to treatment. On the other
hand, it is also necessary to improve the network to reduce the number of FN and FP as much
as possible.
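As a sketch of such a quantitative measure, the segmented pixel count can be converted to a physical area given the image's pixel spacing; the 0.1 mm spacing below is an illustrative assumption, and in practice it would come from the mammogram's DICOM PixelSpacing tag:

```python
import numpy as np

def mass_area_mm2(mask, pixel_spacing_mm=(0.1, 0.1)):
    """Physical area of a segmented mass: number of mask pixels times
    the area of one pixel. pixel_spacing_mm is the (row, col) spacing;
    0.1 mm is illustrative, normally read from the DICOM header."""
    dy, dx = pixel_spacing_mm
    return int(np.count_nonzero(mask)) * dy * dx

mask = np.zeros((100, 100), dtype=bool)
mask[10:40, 10:40] = True            # 900 segmented pixels
print(mass_area_mm2(mask))           # 900 * 0.01 mm^2, about 9 mm^2
```

Tracking this area across follow-up studies would give the quantitative treatment-response measure described above.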
References
Abdelaziz, A., Salama, A.S., Riad, A.M. and Mahmoud, A.N. 2019. A machine learning model for predicting of chronic
kidney disease based internet of things and cloud computing in smart cities. In Security in Smart Cities: Models,
Applications, and Challenges (pp. 93–114). Springer, Cham.
Rosebrock, A. 2016. Intersection over Union (IoU) for object detection. PyImageSearch. [Online]. Available: https://www.
pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/.
Chollet, F. 2018. Keras: The python deep learning library. Astrophysics Source Code Library.
Chougrad, H., Zouaki, H. and Alheyane, O. 2018. Deep Convolutional Neural Networks for breast cancer screening. Comput.
Methods Programs Biomed., 157: 19–30.
Cortés Antona, C. 2017. Herramientas Modernas En Redes Neuronales: La Librería Keras. Univ. Autónoma Madrid, p. 60.
Dubey, R.B., Hanmandlu, M. and Gupta, S.K. 2010. A comparison of two methods for the segmentation of masses in the
digital mammograms. Comput. Med. Imaging Graph., 34(3): 185–191.
Flores Gutiérrez, H., Flores, R., Benja, C. and Benoso, L. 2015. Redes Neuronales Artificiales aplicadas a la detección de
Cáncer de Mama.
Gulli, A. and Pal, S. 2017. Deep Learning with Keras. Packt Publishing Ltd.
Guzman, M., Mejia, J., Moreno, N. and Rodriguez, P. 2018. Disparity map estimation with deep learning in stereo vision.
CEUR.
Hamidinekoo, A., Denton, E., Rampun, A., Honnor, K. and Zwiggelaar, R. 2018. Deep learning in mammography and breast
histology, an overview and future trends. Med. Image Anal., 47: 45–67.
Hay, A.M. 1988. The derivation of global estimates from a confusion matrix. International Journal of Remote Sensing, 9(8):
1395–1398.
Goodfellow, I., Bengio, Y. and Courville, A. 2016. Deep Learning. MIT Press.
González García, J.A. 2019. Pruebas diagnósticas (II): valores predictivos. Bitácora de Fisioterapia. [Online].
Karianakis, N., Fuchs, T.J. and Soatto, S. 2015. Boosting Convolutional Features for Robust Object Proposals.
Kashif, M., Malik, K.R., Jabbar, S. and Chaudhry, J. 2020. Application of machine learning and image processing for
detection of breast cancer. In Innovation in Health Informatics (pp. 145–162). Academic Press.
La, N., Palomino, S. and Concepción, L.P. 2010. Watershed: un algoritmo eficiente y flexible para segmentación de imágenes
de geles 2-DE, 7(2): 35–41.
Lee, R.S., Gimenez, F., Hoogi, A., Miyake, K.K., Gorovoy, M. and Rubin, D.L. 2017. A curated mammography data set for
use in computer-aided detection and diagnosis research. Scientific Data, 4: 170177.
Lempka, S.F. and McIntyre, C.C. 2013. Theoretical analysis of the local field potential in deep brain stimulation applications.
PLoS One, 8(3).
Marinovich, M.L., Hunter, K.E., Macaskill, P. and Houssami, N. 2018. Breast cancer screening using tomosynthesis or
mammography: a meta-analysis of cancer detection and recall. JNCI: Journal of the National Cancer Institute, 110(9):
942–949.
Noh, H., Hong, S. and Han, B. 2015. Learning deconvolution network for semantic segmentation. Proc. IEEE Int. Conf.
Comput. Vis., vol. 2015 International Conference on Computer Vision, ICCV 2015, pp. 1520–1528.
OMS. “Cáncer de mama: prevención y control”, Cáncer de mama: prevención y control, 2019. [Online]. Available: https://
www.who.int/topics/cancer/breastcancer/es/.
Pandey, D. et al. 2018. Automatic and fast segmentation of breast region-of-interest (ROI) and density in MRIs. Heliyon,
4(12): e01042.
Pereira, D.C., Ramos, R.P. and do Nascimento, M.Z. 2014. Segmentation and detection of breast cancer in mammograms
combining wavelet analysis and genetic algorithm. Comput. Methods Programs Biomed., 114(1): 88–101.
Rafegas, I. and Vanrell, M. 2018. Color encoding in biologically-inspired convolutional neural networks. Vision Res.,
151(February 2017): 7–17.
Rahman, M.A. and Wang, Y. (2016, December). Optimizing intersection-over-union in deep neural networks for image
segmentation. In International symposium on visual computing (pp. 234–244). Springer, Cham.
Rathee, D.S., Ahuja, K. and Hailu, T. 2019. Role of electronics devices for e-health in smart cities. In Driving the
Development, Management, and Sustainability of Cognitive Cities (pp. 212–233). IGI Global.
Rezatofighi, H., Tsoi, N., Gwak, J., Reid, I. and Savarese, S. 2019. Generalized Intersection over Union : A Metric and A Loss
for Bounding Box Regression, pp. 658–666.
Singh, A.K. and Gupta, B. 2015. A novel approach for breast cancer detection and segmentation in a mammogram. Procedia
Comput. Sci., 54: 676–682.
Styner, M. et al. 2008. 3D segmentation in the clinic: A grand challenge II: MS lesion segmentation. MIDAS Journal.
Villalba Gómez, J.A. 2016. Problemas bioéticos emergentes de la inteligencia artificial. Diversitas, 12(1): 137.
Wang, Z., Yu, G., Kang, Y., Zhao, Y. and Qu, Q. 2014. Breast tumor detection in digital mammography based on extreme
learning machine. Neurocomputing, 128: 175–184.
CHAPTER-2
A new report on childhood obesity is published every so often. Bad eating habits and the
increasingly sedentary life of children in a border society have caused an alarming increase in the
number of children who are overweight or obese. Formerly it seemed a problem of countries with
unhealthy eating habits, such as the United States, or Mexico in Latin America, where junk food is
part of the diet during childhood. However, obesity is a problem that is already around the
corner and one that is not so difficult to fight in children. The present research develops an
application that reduces the lack of physical movement among the children of a smart city,
considered a future problem. The main contribution of our research is the proposal of an improved
type of Serious Game, coupled with an innovative model to practice an Olympic
sport without the complexity of physically moving in an outside space and having to invest in a
space with high maintenance costs, considering adverse weather conditions such as wind, rain
and even dust storms. We use Unity to model each avatar associated with a set of specific sports,
such as Water polo, Handball, Rhythmic Gymnastics and others.
1. Introduction
The increase in childhood obesity, a problem of great importance in a smart city, determines the
challenges that must be addressed with respect to applications that involve Artificial Intelligence.
Computer games to combat childhood obesity are very important to reduce future problems in
our society. Children increasingly play less on the street and spend more time with video games
and computer games, so they lead a more sedentary life. This, together with bad eating habits,
increases the cases of obese children every year. What can parents do to prevent their children from
becoming overweight? A proposal that comes to us from the University of Western Australia,
Liverpool John Moores University and Swansea University in the United Kingdom is “exergaming”,
a term formed by joining “exercise” with “gaming”. These are games that run on consoles such as
the Xbox with Kinect or the Nintendo Wii, in which players interact through physical activity in
challenges in which they have to run, bike, bowl or jump hurdles. The researchers tested children
who performed high- and low-intensity exergaming and measured their energy expenditure. The
conclusion reached was that exergaming generated an energy expenditure comparable to exercise
of moderate or low intensity, depending on the difficulty
1
Universidad Autónoma de Ciudad Juárez, Av. Hermanos Escobar, Omega, 32410 Cd Juárez, Chihuahua, México.
2
Universiad Simón Bolívar, Sartenejas, 1080 Caracas, Distrito Capital, Venezuela.
* Corresponding author: david.roman@uacj.mx
Serious Game for Caloric Burning in Morbidly Obese Children 11
of the game. In addition, the game was satisfactory for the children, who enjoyed the activities they
did. It is a tool that parents can take advantage of to prevent children from spending so many hours
sitting in front of the console as it has been shown to offer long-term health benefits. In any case, it
must always be one of the means we can use to encourage children to do some physical activity but
not the only one. Going out to the street to play, run and jump must always be on the children’s
agenda, as shown in Figure 1.
The Serious Game represents a practical way to address problems associated with caloric
intake, because it combines the ludic aspects of a game with the regulations of a specific sport;
the research conducted therefore took into consideration a set of high-mobility sports together
with a control group of children with morbid obesity. The remainder of this
chapter is structured as follows: In Section §2, the approach of a serious game for caloric burning is
presented. Methodological aspects of the implementation of serious games are presented in Section
§3, where psychological and technological factors are considered to guide their development.
Section §4 introduces the method for estimating caloric burning in the implementation of a serious
game. Technical aspects for modeling of avatars in a serious game for caloric burning are given in
Section §5. Finally, the analysis of results and the conclusions are presented in Sections §6 and §7,
respectively.
workers found that the Kinect Sports 200 m Hurdles video game generated increases in heart
rate and energy expenditure patterns consistent with intense physical activity in children,
which was also related to effects that can be considered beneficial on vascular function [5].
Gao et al. conducted a study to compare the effect of physical activity performed by children in
physical education classes using active video games and those who performed physical activity
in regular physical education classes. The authors concluded that the positive effects of a regular
physical education class can be achieved with a physical education class using active play in
children with light to vigorous physical activity behavior, with similar energy expenditures in both
class modalities [6]. Studies show a positive impact of active video game use on body mass index
in children and adolescents [7,8]. All this reveals the potentialities of the use of video games in the
control of obesity and the prevention of illnesses associated with this condition.
A remarkable aspect of active video games is the fun and entertaining nature of the video game
itself, which makes it attractive to children and teenagers and represents a motivating component for
physical activity. Research reveals that engaging in physical activity through an active video game
is significantly more enjoyable than other traditional physical exercises, such as just walking [9] or
using a treadmill [10]. On the other hand, a study showed that adolescents with obesity who were
included in a physical activity program using active games reported an increase in physical activity
and showed high intrinsic motivation toward the use of active video games [11].
The goal of the present research is the development of serious games based on active sports
video games for increasing the burning of calories in morbidly obese children. In addition to the
application of active games, the serious game incorporates metabolic equivalent analysis for the
estimation of caloric burning based on the metabolic intensity indices described in the most recent
compendium of physical activities for young people [12].
3. Methodological Aspects
There are many important topics to consider when establishing a specific serious game
associated with technology that allows tracking the caloric expenditure of the exercise
carried out, as in:
self-expression, experimentation, and communication. While the child plays, they learn about the
world and explore relationships, emotions and social roles. It also gives the child the possibility
to externalize his personal history, thus releasing negative feelings and frustrations, mitigating the
effects of painful experiences and giving relief from feelings of anxiety and stress [5]. The play
therapist is specialized, trained in play and uses professional technical therapeutic methods adapted
to the different stages of child development. The importance of a specialized Serious Game is to
capture and understand the child’s emotions, as well as to get involved in the child’s game to create
a true relationship for the expression and management of the child’s internal conflicts, an aspect
of great relevance that favors Serious Games, as well as to release and understand their
emotions in order to properly manage them and not get trapped in them, making them able to
recognize and explore the problems that affect their lives [6]. A serious game does not replace the
play therapy of your therapist, but seeks to provide an auxiliary and also effective support tool for
those who are in contact with children who at some point show signs of moodiness. It could also be
used by health professionals as part of their therapeutic tools to control the emotions of their patients
and to be able to give a more adequate follow-up to their therapy, especially occupational therapy.
Figure 2: Design of a speed skating rink in an open and arboreal space to improve the ludic aspect of the practice of this sport.
3.6 A serious game associated with the appropriate heuristics for continuous
improvement
Artificial intelligence, using adequate heuristics, will make it possible to verify the correct
functioning of the avatar in the environment built for learning, achieving an adequate link between
the child and their avatar, as can be seen in Figure 2.
during childhood. At the same time, for the same physical activity, a child has a higher energy
expenditure per body mass than an adult or adolescent.
Thus, the most recent compendium of physical activities for youth establishes a metric for youth
MET (METy) that is significantly different from that of adults, and that is age-dependent [12]. The
compendium presents the METy values of 196 physical activities commonly performed by children
and young people, for the following discrete age groups: 6–9, 10–12, 13–15 and 16–18 years. For
the calculation of the BMR, the Schofield equations according to age group and sex are used:
Age BMR (kcal/min)
Boys
3–10 years [22.706 × weight (kg) + 504.3]/1440 (1)
10–18 years [17.686 × weight (kg) + 658.2]/1440 (2)
Girls
3–10 years [20.315 × weight (kg) + 485.9]/1440 (3)
10–18 years [13.384 × weight (kg) + 692.6]/1440 (4)
For the present work, we propose the use of the METy values of the youth compendium for
the different physical activities implemented in the designed serious games, which belong to
the category of active full body video games [12]. Table 1 shows the different METy values for the
age groups 6–9, 10–12 and 13–15, for which the use of serious games is intended.
Table 1: METy values of active video games (full body) for the physical activities of the serious games [12].
Then, knowing the BMR, the METy for the age group, and the duration of a physical activity,
the energy expenditure is calculated by:
EE = METy × BMR (kcal/min) × duration (min) (5)
For example, if a 10-year-old girl (37 kg), with a BMI greater than the first three quartiles
according to the population of her age, plays the serious game of Rhythmic Gymnastics (like Dance
in table 1, METy = 3.3) for 15 minutes twice a day, her daily caloric burning due to the practice of
that physical activity can be determined as follows:
• Using Schofield equation 3, BMR = [20.315 × 37 + 485.9]/1440 = 0.86 kcal/min.
• Total energy expenditure (EE) for this physical activity:
EE = 3.3 × 0.86 kcal/min × 30 min = 85 kcal
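The worked example above can be reproduced in code; `schofield_bmr` and `energy_expenditure` are our own helper names, and the age-group boundary at 10 years follows the chapter's choice of equation (3) for the 10-year-old girl:

```python
def schofield_bmr(weight_kg, age_years, sex):
    """Schofield BMR in kcal/min, equations (1)-(4); the /1440 factor
    converts kcal/day to kcal/min. Following the chapter's example,
    a 10-year-old is assigned the 3-10 year coefficients."""
    if sex == "boy":
        a, b = (22.706, 504.3) if age_years <= 10 else (17.686, 658.2)
    else:
        a, b = (20.315, 485.9) if age_years <= 10 else (13.384, 692.6)
    return (a * weight_kg + b) / 1440

def energy_expenditure(mety, bmr_kcal_min, duration_min):
    """Equation (5): EE = METy x BMR x duration."""
    return mety * bmr_kcal_min * duration_min

# Worked example: 10-year-old girl, 37 kg, Dance-like activity
# (METy = 3.3), 15 minutes twice a day (30 min total).
bmr = schofield_bmr(37, 10, "girl")
print(round(bmr, 2))                            # 0.86 kcal/min
print(round(energy_expenditure(3.3, bmr, 30)))  # 85 kcal
```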
city of El Paso, Texas, in the United States has shown a genuine interest in the project and has
offered to provide the necessary support for its realization. In Mexico, the National System for
Integral Family Development has childcare programs with interesting and very efficient strategies
for reducing morbid obesity in overweight children; we therefore intend to integrate this
intelligent tool into their support programs for these children. The commitment of both parties to
support the project described in this document will soon be formalized. In future research, we
intend to adapt a game based on collaborative group work (we are choosing rugby sevens), with a
high intensity of pressure on each child, and to study how bearing this kind of pressure relates to
the responsibility of a collective activity; a similar approach applies to Water polo, as shown in
Figure 3. A very relevant aspect to consider is that, if someone asks a user why he likes our Serious
Game, the user will be able to respond that it has a playful scope and an adequate match with the
avatar, so he could feel empathy for our proposal. By analyzing in more detail the group of people
who used our Serious Games, we determined that, like role-playing, it is a hobby that unites them
and gives them opportunities to help each other and their videogame community.
It is a safe environment in which players can experience social interactions, something fundamental
when the climate does not allow it, as in the BW-type climates (according to the Köppen climate
classification) of the place of our study. This group of users of our Serious Game says that they
have witnessed the personal growth of individuals in terms of their self-esteem and the expansion
of their social interactions as a result of the game. This is just one of the benefits of the game.
Our research showed that everyone can find some hours a week to “save the universe, catch the
villains or solve mysteries” while learning to practice Water polo, and that playing with the
computer is as much fun as any other activity in our research. Playing our Serious Game can
strengthen a variety of skills, such as math and reading online recommendations; increase the
ability to think and speak clearly and concisely when formulating and implementing plans, and
when cooperating and communicating with others; and increase the ability to analyze written and
verbal information. Once placed on the market, our Serious Game will encourage players to be
cohesive members of the group in multiplayer games, and it can help people develop leadership
skills and promote cooperation, teamwork, friendship, and open communication. In another study
related to this kind of Kinetic Serious Game, we compare our work with that of our colleagues
from Montenegro, who propose and develop an innovative Serious Game involving a model for
Fencing practitioners, as this sport is reaching high popularity in their society. A representative
model is shown in Figure 4.
What would users expect from our proposed Kinetic Serious Game for learning and practicing water polo? First, to improve the mood of users through components such as the background music, applying music-therapy techniques to lift the players' spirits as the game develops [11]. Another element of the strategy is the color of the scenery: by taking into account the effect of color on mood, a better color experience can be designed to keep the player in a positive emotional state [10]. The third element is the sounds of the game at each stage of development; with every success and failure, the sounds in the environment can represent the stage being played. The fourth element is the recognition of achievements: through badges, medals, trophies, scores, and mentions, the player is meant to feel satisfaction at the recognition of each achievement [12].
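The four feedback elements above (background music, scenery color, event sounds, and achievement recognition) can be sketched as a simple event-to-cue mapping. The following Python fragment is only an illustration of the idea; the event names, tracks, colors, and badges are invented for the example and are not taken from our implementation.

```python
from dataclasses import dataclass

@dataclass
class FeedbackCue:
    music: str   # music-therapy track chosen for the target mood
    color: str   # scenery palette intended to sustain a positive state
    sound: str   # short sound effect tied to the event
    badge: str   # achievement recognition, "" when none is awarded

# Event -> cue table (hypothetical values for the sketch).
CUES = {
    "goal_scored":   FeedbackCue("uplifting_theme", "warm_yellow", "crowd_cheer", "scorer_badge"),
    "save_made":     FeedbackCue("steady_theme", "calm_blue", "whistle", ""),
    "level_cleared": FeedbackCue("victory_theme", "bright_green", "fanfare", "level_trophy"),
}

def feedback_for(event: str) -> FeedbackCue:
    """Return the cue bundle for an event, defaulting to neutral feedback."""
    return CUES.get(event, FeedbackCue("ambient_theme", "neutral", "click", ""))

print(feedback_for("goal_scored").badge)  # scorer_badge
```

Keeping the four channels in one table makes it easy for a designer to tune music, color, sound, and rewards together for each game event.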
Figure 3: Our Kinetic Serious Game model using collaborative tasks to play water polo.
18 Innovative Applications in Smart Cities
practice a sport, together with the self-confidence generated in the players, which improves their performance in other areas of daily life through a model of emotional support for children, entailing a commitment and intrinsic complexity in their motor development. Considering that childhood is a vulnerable stage, in which children are still in full development and any event may cause negative effects that mark them for the rest of their lives [13, 15, 16, 17, 18, 19, 20], it is very important to focus on how to obtain results in which group performance is driven by the individual. That is why the future of the Serious Game will require deeper investigation, which could have great impact by offering an opportunity to help children and youth who do not have access to sports for various reasons. In the future, Serious Games may present a natural opportunity, due in large part to the acceptance that videogames enjoy at this age, and even more so given the easy access to technology of these generations (both Gen Z and now the children of the so-called Alpha generation). The implementation of Unity in diverse virtual sports is presented in the collage shown in Figure 5.
Table 2: Multivariable analysis to determine the relationship between the timely improvement of some sports using our Kinect proposal and an avatar associated with performance, coupled with an intelligent system for the control of caloric burn in morbidly obese children.
Figure 6: A Serious Game based on collective activities and related to the increase of social skills.
References
[1] Haddock, B.L. et al. 2012. Measurement of energy expenditure while playing exergames at a self-selected intensity.
Open Sports Sci. J., 5: 1–6.
[2] Graf, D.L., Pratt, L.V., Hester, C.N. and Short, K.R. 2009. Playing active video games increases energy expenditure in
children. Pediatrics, 124(2): 534–540.
[3] Siegel, S.R., Haddock, B.L., Dubois, A.M. and Wilkin, L.D. 2009. Active video/arcade games (Exergaming) and
energy expenditure in college students. Int. J. Exerc. Sci., 2(3): 165–174.
Serious Game for Caloric Burning in Morbidly Obese Children 21
[4] Barnett, A., Cerin, E. and Baranowski, T. 2011. Active video games for youth: a systematic review. J. Phys. Act. Heal.,
8(5): 724–737.
[5] Mills, A. et al. 2013. The effect of exergaming on vascular function in children. J. Pediatr., 163(3): 806–810.
[6] Gao, Z. et al. 2017. Impact of exergaming on young children’s school day energy expenditure and moderate-to-vigorous
physical activity levels. J. Sport Heal. Sci., 6(1): 11–16, Mar. 2017.
[7] Koenig, H.G. 2018. Impact of game-based health promotion programs on body mass index in overweight/obese
children and adolescents: a systematic review and meta-analysis of randomized controlled trials. Child. Obes.
[8] Hernández-Jiménez, C. et al. 2019. Impact of active video games on body mass index in children and adolescents:
systematic review and meta-analysis evaluating the quality of primary studies. Int. J. Environ. Res. Public Health,
16(13): 2424, Jul. 2019.
[9] Moholdt, T., Weie, S., Chorianopoulos, K., Wang, A.I. and Hagen, K. 2017. Exergaming can be an innovative way of
enjoyable high-intensity interval training. BMJ open Sport Exerc. Med., 3(1): e000258–e000258, Jul. 2017.
[10] McDonough, D.J., Pope, Z.C., Zeng, N., Lee, J.E. and Gao, Z. 2018. Comparison of college students’ energy
expenditure, physical activity, and enjoyment during exergaming and traditional exercise. J. Clin. Med., 7(11): 433,
Nov. 2018.
[11] Staiano, A.E., Beyl, R.A., Hsia, D.S., Katzmarzyk, P.T. and Newton, R.L., Jr. 2017. Twelve weeks of dance exergaming
in overweight and obese adolescent girls: Transfer effects on physical activity, screen time, and self-efficacy. J. Sport
Heal. Sci., 6(1): 4–10, Mar. 2017.
[12] Butte, N.F. et al. 2018. A youth compendium of physical activities: activity codes and metabolic intensities. Med. Sci.
Sports Exerc., 50(2): 246.
[13] McArdle, W.D., Katch, F.I. and Katch, V.L. 2006. Essentials of exercise physiology. Lippincott Williams & Wilkins.
[14] Piercy, K.L. et al. 2018. The physical activity guidelines for Americans. Jama, 320(19): 2020–2028.
[15] Kozey, S., Lyden, K., Staudenmayer, J. and Freedson, P. 2010. Errors in MET estimates of physical activities using 3.5
ml· kg−1· min−1 as the baseline oxygen consumption. J. Phys. Act. Heal., 7(4): 508–516.
[16] Byrne, N.M., Hills, A.P., Hunter, G.R., Weinsier, R.L. and Schutz, Y. 2005. Metabolic equivalent: one size does not fit
all. J. Appl. Physiol., 99(3): 1112–1119.
[17] Ainsworth, B.E. et al. 2011. Compendium of Physical Activities: a second update of codes and MET values. Med. Sci.
Sport. Exerc., 43(8): 1575–1581.
[18] Wilms, B., Ernst, B., Thurnheer, M., Weisser, B. and Schultes, B. 2014. Correction factors for the calculation of
metabolic equivalents (MET) in overweight to extremely obese subjects. Int. J. Obes., 38(11): 1383.
[19] Ferguson, C., van den Broek, E.L. and van Oostendorp, H. 2020. On the role of interaction mode and story structure in
virtual reality serious games. Computers & Education, 143.
[20] Liu, S. and Liu, M. 2020. The impact of learner metacognition and goal orientation on problem-solving in a serious
game environment. Computers in Human Behavior, 102: 151–165.
[21] Ivan A. Garcia, Carla L. Pacheco, Andrés León and José Antonio Calvo-Manzano. 2020. A serious game for teaching
the fundamentals of ISO/IEC/IEEE 29148 systems and software engineering—Lifecycle processes—Requirements
engineering at undergraduate level. Computer Standards & Interfaces 67.
[22] Beddaou, S. 2019. L'apprentissage à travers le jeu (Serious game): L'élaboration d'un scénario ludo-pédagogique.
Cas de l'enseignement-apprentissage du FLE. (Learning through play (Serious Game): the development of a
playful-pedagogical scenario. The case of teaching and learning French as a foreign language). Université Ibn Tofail,
Faculté des Lettres et des Sciences Humaines, Morocco, 2019.
[23] Zhipeng Liang, Keping Zhou and Kaixin Gao. 2019. Development of virtual reality serious game for underground
rock-related hazards safety training. IEEE Access, 7: 118639–118649.
[24] Anna Sochocka, Miroslaw Solarski and Rafal Starypan. 2019. “Subvurban” as an example of a serious game examining
human behavior. Bio-Algorithms and Med-Systems, 15(2).
[25] David Mullor, Pablo Sayans-Jiménez, Adolfo J. Cangas and Noelia Navarro. 2019. Effect of a Serious Game (Stigma-
Stop) on Reducing Stigma Among Psychology Students: A Controlled Study. Cyberpsy., Behavior, and Soc. Networking
22(3): 205–211.
[26] Jonathan D. Moizer, Jonathan Lean, Elena Dell’Aquila, Paul Walsh, Alphonsus Keary, Deirdre O’Byrne, Andrea
Di Ferdinando, Orazio Miglino, Ralf Friedrich, Roberta Asperges and Luigia Simona Sica. 2019. An approach to
evaluating the user experience of serious games. Computers & Education, 136: 141–151.
[27] Shadan Golestan, Athar Mahmoudi-Nejad and Hadi Moradi. 2019. A framework for easier designs: augmented
intelligence in serious games for cognitive development. IEEE Consumer Electronics Magazine, 8(1): 19–24.
CHAPTER 3
1. Introduction
Around the world, data can be found about food waste and some of its most important causes. To mention some figures: every year in the Madrid region, 30% of products destined for human consumption are lost or wasted through improper handling in the food supply chain (CSA), as Gustavsson et al. (2011) report. A study in the United States by the Natural Resources Defense Council (NRDC) found that up to 40% of food is lost between the producer's farm and the consumer's table (Gunders, 2012).
Losses of perishable products vary among countries around the world and in some, such as China, are even increasing. Reports indicate that only 15% of fresh products are transported under optimal temperature conditions, despite the knowledge that these products require refrigerated handling (Pang et al., 2011). The same authors note that fruits and vegetables are the most affected type of food: 50% of what is harvested is not consumed, mostly because of insufficient temperature control. Approximately one-third of the world's fruits and vegetables are discarded because their quality has fallen to the point that they lack acceptance and put food safety at risk.
1 Universidad Autónoma de Querétaro, Mexico.
2 Doctorado en Tecnología, UACJ, Mexico.
* Corresponding author: luis.maldonado@uaq.mx
Intelligent Selection of Best Fresh Products 23
It was observed in this study that approximately 72% of losses occur between pre-harvest and distribution; that is, in the early stages of the production chain and in taking products to their target market in retail, which could be the result of bad consumption habits.
Much of these losses are related to inadequate temperature control during CSA processes (production, storage, distribution and transport, and at home) (Jedermann et al., 2014). Similar studies have shown that food safety is frequently compromised by poor temperature management (Zubeldia et al., 2016). Environmental conditions, mainly temperature, have a great impact on the overall quality and shelf life of perishable foods (Do Nascimento Nunes et al., 2014).
These statistics reveal a scenario in which CSAs show deficiencies, and they support the importance of controlling and monitoring the cold chain, not only to solve the problem of food spoilage but also to address the general challenges of world food security. Good temperature management is the most important and easiest way to delay the deterioration and waste of these foods (Do Nascimento Nunes et al., 2014).
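The kind of temperature monitoring argued for above can be illustrated with a few lines of code that flag cold-chain readings outside an optimal band. This is a minimal sketch; the 0-4°C band is an assumed example range for fresh dairy, not a value taken from the cited studies.

```python
# Given a series of (hour, temperature °C) readings for a product
# category, return the readings where temperature left the optimal band.
def out_of_range(readings, low=0.0, high=4.0):
    """Return the (hour, temp) readings outside the [low, high] band."""
    return [(h, t) for h, t in readings if not (low <= t <= high)]

# Example log: the cold chain is broken between hours 2 and 3.
readings = [(0, 3.5), (1, 3.8), (2, 6.2), (3, 7.0), (4, 3.9)]
print(out_of_range(readings))  # [(2, 6.2), (3, 7.0)]
```

In a deployed system the violations list would feed an alert, so that handlers know which spans of the distribution route compromised product quality.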
Franco et al. (2017) comment that there is no doubt that our way of life depends on our ability to cool and control the temperature of storage spaces and means of food distribution.
Six criteria were used to evaluate refrigeration systems, and each technology was rated for the different sectors where refrigeration is required, such as commercial, domestic, and mobile air conditioning, and commercial and domestic refrigeration.
For commercial refrigeration, which is the focus of this work, Gauger et al. (1995) ranked refrigeration technologies from best to worst according to the criteria mentioned, as shown in the following table.
The refrigeration technologies with the best qualification and, therefore, the most suitable for
application in the commercial sector are steam compression and absorption.
Steam-compression technology is currently the most widely used refrigeration system for food preservation and air conditioning, in domestic, commercial, and mobile use alike. This system uses gases such as chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) as cooling agents. These gases have excellent thermodynamic properties for cooling cycles and are also economical and stable (Gauger et al., 1995).
The favourable environment for the storage of fruits and vegetables is low temperature and high humidity. This is reasonably achievable with steam-compression cooling at relatively low investment and energy consumption. Dilip (2007) reports that this type of refrigeration system achieves a favourable environment for the storage of fruits and vegetables, since the shelf life of perishable foods stored under these conditions increases from 2 to 14 days compared with storage at room temperature, so this technology is very favourable for the CSA. However, the gases used by this technology, such as the CFCs and HCFCs employed as refrigerants for many years, have depleted the ozone layer, while fluorocarbons (FCs) and hydrofluorocarbons (HFCs) have a high global warming potential (GWP) and contribute to global warming. For this reason, alternative technologies such as absorption have been pursued (Lychnos and Tamainot-Telto, 2018).
The absorption cooling system is attractive for commercial refrigeration and air conditioning, and if its complexity and maintenance requirements can be reduced, it could also be attractive for domestic applications (Gauger et al., 1995). Wang et al. (2013) consider this type of cooling a green technology that can provide cooling for heating, ventilation, and air conditioning, especially when silica gel is adopted, given its effectiveness in reducing greenhouse-gas emissions. Absorption systems use natural refrigerants, such as water, ammonia and/or alcohols, that do not damage the ozone layer and have little or no impact on global warming (Lychnos et al., 2018). However, certain drawbacks have hindered their practical application and commercialization: the discontinuous operation of the cycle, the large volume and relative weight of traditional refrigeration systems, the low specific cooling capacity, the low coefficient of performance, the long absorption/desorption time, and the low heat-transfer efficiency of the adsorbent bed (Wang et al., 2018).
On the other hand, Bhattad et al. (2018) observe that one of the greatest challenges in today's world is energy security. Refrigeration and air-conditioning applications demand a great deal of energy, and because energy resources are limited, research is being conducted on improving the efficiency and performance of thermal systems.
According to Zhang et al. (2017), the economic growth and technological development of each country today depend on energy. Heating, ventilation, air conditioning, and domestic and commercial refrigeration consume a large amount of energy, so refrigeration systems have great potential for energy savings. Lychnos et al. (2018) present the development of a prototype hybrid refrigeration system that combines steam-compression and absorption technologies. Preliminary tests showed that it can produce a maximum of 6 kW of cooling power with both systems running in parallel. It is designed as a water cooler with an evaporating temperature of 5°C and a condensing temperature of 40°C.
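As a point of reference for these operating temperatures, the ideal (Carnot) coefficient of performance can be computed from the evaporating and condensing temperatures quoted above; real vapour-compression and absorption cycles fall well below this upper bound. The arithmetic check below is ours, not a figure from Lychnos et al. (2018).

```python
# Ideal refrigeration COP from evaporating/condensing temperatures:
# COP_Carnot = T_evap / (T_cond - T_evap), with temperatures in kelvin.
def carnot_cop_cooling(t_evap_c: float, t_cond_c: float) -> float:
    t_evap_k = t_evap_c + 273.15
    t_cond_k = t_cond_c + 273.15
    return t_evap_k / (t_cond_k - t_evap_k)

# Operating point of the prototype water cooler: 5 °C / 40 °C.
print(round(carnot_cop_cooling(5.0, 40.0), 2))  # 7.95
```

So at best roughly 8 W of heat could be removed per watt of work at these temperatures; the gap between this bound and measured performance is where efficiency research, such as the nanofluid work of Bhattad et al. (2018), operates.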
For countries such as Mexico, promoting energy savings would be a competitive advantage, even more so in the commercial sector for micro-enterprises which, as mentioned above, often have a limited electricity supply at their points of sale and marketing of perishable products.
2. Methodology
The design methodology used to carry out the project is the "Double Diamond". The Design Council, an organization that advises the UK government on the fundamental role of design as a creator of value, argues that designers from all disciplines share perspectives at various points during the creative process, which they illustrate as "the Double Diamond". This design process is based on a visual map divided into four stages: Discover, Define, Develop, and Deliver.
The purpose of choosing this methodology is to discover the best solutions by testing and validating repeatedly, since the creative process is iterative; in this way, the weakest ideas are discarded.
2.1 Discover
The first stage of this model describes how to empathize with users to deeply understand the problem or problems they seek to solve. For this purpose, field visits were made to different micro-enterprises in the food sector in the municipality of San Juan del Río, Querétaro, where interviews were conducted to obtain data on the management and marketing of their products.
In order to obtain relevant data for this interview and to calculate the sample size at an 80% confidence level, a visit was made to the Secretariat of Economic Development of San Juan del Río to investigate the number of micro-enterprises operating in the municipality. It was found that there are no data on the number of micro-enterprises at either the municipal or state level: because they are such small enterprises, the vast majority are not registered with the Ministry of Finance and Public Credit, making reliable data very difficult to obtain. Because of this, the interview was applied to 10 micro-enterprises in the municipality of San Juan del Río, Querétaro. The questionnaire is presented below.
1. Company and business.
2. Place of operation of the company.
3. What kind of products do you sell?
4. Do you know at what temperature range they should be kept?
5. Do you produce or market the products you handle?
6. If you sell, do you receive the products at storage temperature?
7. What type of packaging do perishable products have?
8. How long does it take to move perishable products from where they are manufactured to where
they are exhibited or delivered?
Figure 2: Conceptual diagram of the implementation of an intelligent system that determines the greatest freshness through presentation thresholds and color analysis of various sandwich items in the stock of a store selling healthy products. Source: own preparation.
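The threshold-and-color analysis suggested by Figure 2 can be sketched as a simple hue-band score over an image's pixels. The green hue band and the pass threshold below are illustrative assumptions, not parameters of the actual system, which the chapter does not specify.

```python
import colorsys

def freshness_score(pixels, hue_band=(0.17, 0.45)):
    """Fraction of RGB pixels (0-255 ints) whose hue lies in hue_band.

    The default band roughly covers greens; a fresher leafy product
    should score higher under this assumption.
    """
    lo, hi = hue_band
    in_band = 0
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if lo <= h <= hi:
            in_band += 1
    return in_band / len(pixels)

# Mostly green pixels with a couple of brown ones: 8/10 in the band.
sample = [(40, 180, 60)] * 8 + [(120, 80, 40)] * 2
score = freshness_score(sample)
print(score >= 0.6)  # True -> passes the assumed presentation threshold
```

A real system would run this over camera frames of the stock and rank items, selecting the highest-scoring product as the freshest to display.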
2.2 Define
For this stage, the objective was to analyze how consumers in the municipality of San Juan del Río, in the state of Querétaro, perceive the quality of perishable products, in order to learn how relevant freshness, good presentation, and first-impression quality of food products are for consumers, and whether these factors affect the purchasing decision. As a measuring instrument, a questionnaire was designed to assess whether the quality, freshness, and presentation of food are relevant to people when purchasing raw
1. Please tell us your age range: 1) 18–28, 2) 29–38, 3) 39–48, 4) 49–60, 5) 60 or more.
2. Please indicate your gender: 1) Female, 2) Male.
3. Please tell us your highest level of education: 1) Secondary, 2) High school, 3) University, 4) Postgraduate.
4. Please tell us your occupation: 1) Government employee, 2) Private-sector employee, 3) Freelance, 4) Entrepreneur/Independent.
5. What is your level of concern about the quality of the food you eat? (1 not at all concerned to 5 rather worried): 1) not at all concerned, 2) somewhat worried, 3) concerned, 4) very concerned, 5) rather worried.
6. How do you rate your confidence in the handling (storage, transportation, refrigeration) of the food you buy in food businesses? (1 very suspicious to 5 very confident): 1) very suspicious, 2) somewhat suspicious, 3) indifferent, 4) somewhat trusting, 5) very confident.
7. For the following persons or organizations that handle food, rate from 1 to 5 the degree of information you believe they have about food quality, in terms of optimal temperature management for storage and transportation/distribution (1 not at all informed to 5 well informed): 1) not at all informed, 2) poorly informed, 3) informed, 4) very informed, 5) well informed.
• Producers or farmers (producers of cheese, flowers, fruits, and vegetables (food-business suppliers))
• Large food chains (McDonald's, Dairy Queen, Toks, etc.)
• Small food businesses (food trucks, food carts, local bakeries, etc.)
• Food logistics companies (Uber Eats, Rappi, SinDelantal)
• Supermarket chains (Soriana, Walmart, etc.)
8. Rate from 1 to 5, 1 being irrelevant and 5 quite relevant, the aspects you consider when buying a food for the first time: 1) irrelevant, 2) not very relevant, 3) relevant, 4) very relevant, 5) quite relevant.
• Correct handling of the product during the distribution chain.
• Information about the product and how it was produced.
• Affordable price.
• The quality of the product can be observed (colour, texture, etc.).
• The packaging of the product is not in poor condition (knocked, dented, broken, torn, etc.).
9. With 1 being no preference and 4 total preference, rate your preferred channel for buying the basic basket through the following ways: 1) no preference, 2) low preference, 3) preference, 4) total preference.
• Internet/application (Walmart online, Rappi, etc.)
• Local market (popular market, central stores, etc.)
• Supermarket (Comer, Soriana, Walmart, etc.)
• Small food businesses (corner shops, grocery stores, fruit shops, etc.)
10. With 1 being no preference and 4 total preference, rate your preferred channel for buying prepared food through the following ways: 1) no preference, 2) low preference, 3) preference, 4) total preference.
• Internet/application (Rappi, Uber Eats, SinDelantal, etc.)
• Local market (food area)
• Large food chains (Toks, McDonald's, Starbucks)
• Small food businesses (mobile businesses, food trucks, local bakeries, local food businesses)
Source: own preparation.
and prepared foods. Likewise, the aim is to find out how much consumers trust the different types of businesses that sell food, both as raw material for preparing dishes, which we call the "basic basket" in the questionnaire, and as prepared foods, that is, ready-made dishes sold by these businesses. This questionnaire was applied online, through the "Google Surveys" platform, to randomly selected consumers in the municipality of San Juan del Río.
In order to determine the sample, INEGI data were used, which indicate that the number of inhabitants between 18 and 60 years of age in 2015 was 268,408; we consider this the total number of potential consumers in our population. The calculation was made with a confidence level of 90% and a margin of error of 10%, yielding 68 as a reliable sample size for the total population.
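The reported sample size can be reproduced with the standard Cochran formula plus a finite-population correction, assuming p = 0.5 and z ≈ 1.645 for 90% confidence; the chapter does not state which formula was used, so this is a plausible reconstruction rather than the authors' exact calculation.

```python
import math

def sample_size(population: int, z: float, margin: float, p: float = 0.5) -> int:
    """Cochran sample size with finite-population correction, rounded up."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)           # finite-population correction
    return math.ceil(n)

# N = 268,408 potential consumers, 90% confidence (z ≈ 1.645), 10% margin.
print(sample_size(268408, z=1.645, margin=0.10))  # 68
```

Because the population is large relative to the required sample, the correction barely changes the result (67.65 before rounding), which matches the 68 respondents reported above.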
The first part of the questionnaire consists of four questions on the demographic data of the participants, which help us categorize them by age range, gender, highest degree of studies, and professional occupation. The second part is made up of six questions focused on obtaining data that give us a better picture of whether consumers are concerned about the quality of the perishable products they buy, their level of confidence in the businesses that sell these products, the aspects most relevant to the purchase decision, and their level of preference among the different businesses that sell both raw and prepared foods. For each of the items in the survey, Likert-scale answers were established to make responding more dynamic for the participants.
2.3 Develop
For this area of the second diamond, a hybrid cooling system combining steam-compression and absorption systems was tested to validate its operation. These tests were carried out in a laboratory of the company Imbera, located in the municipality of San Juan del Río in the state of Querétaro, in a controlled environment with a maximum temperature of 34°C and a relative humidity of 59%. A prototype was developed for the tests.
An evaluation protocol was developed for the tests to obtain the following information:
- Pull-down time (the time the system takes to generate ice blocks for temperature conservation).
- Temperatures reached by the cooling system during pull-down.
- Duration of the ice blocks without power supply.
- Temperatures reached by the cooling system without electrical energy.
The objective of this protocol was to delimit the categories of perishable products that can be optimally conserved by a hybrid refrigeration system.
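A test log gathered under this protocol could be analyzed with a short script like the following; the readings and the 5°C target are invented for the sketch, since the chapter does not publish the raw pull-down data.

```python
def pull_down_minutes(log, target_c):
    """Minutes until the cabinet first reaches target_c; None if never.

    log is a list of (minutes_elapsed, temperature_c) pairs in time order.
    """
    for minutes, temp in log:
        if temp <= target_c:
            return minutes
    return None

# Hypothetical pull-down log starting at the 34 °C ambient of the lab.
log = [(0, 34.0), (30, 22.5), (60, 12.0), (90, 6.1), (120, 4.8), (150, 4.5)]
print(pull_down_minutes(log, target_c=5.0))  # 120
```

The same scan, run over the log recorded after the power supply is cut, gives the hold time of the ice blocks: the minutes until the temperature climbs back above the target.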
After this, a preliminary cost analysis of materials and manufacturing processes was carried out to determine which are best suited to the investment capacities of the users. Two specific manufacturing processes were analyzed, roto-moulding and thermoforming, since plastic is the best option for manufacturing the conservation tool, given the variety of materials available and the versatility of qualities they offer. The following table describes prices of materials and tools.
Table 4: Comparison of manufacturing processes.
The most appropriate manufacturing process is thermoforming, because the estimated price is within the range that users are willing to spend on the conservation tool. With this information, the conceptual design of the conservation tool was developed with the considerations described above.
The conservation tool for small businesses is made of high-density polyethylene. This plastic, commonly used in packaging, safety equipment, and construction, offers lightness for maneuvering and resistance to withstand vibrations and shocks during distribution routes. The dimensions of the equipment are 100 cm wide, 40 cm deep, and 40 cm high, with a capacity of 74 liters. These dimensions make it easy to transport; it can be placed in any compact or cargo vehicle.
As for the hybrid refrigeration system, it is composed of a steam-compression system and an absorption system. The cooling agent of the absorption system is water, contained in two plastic tanks. Inside the plastic tanks runs a copper tube belonging to the steam-compression refrigeration system, which freezes the water. For the steam-compression system to work, it must be connected to mains electricity during the night, guaranteeing the correct formation of the ice blocks so that the absorption refrigeration system works correctly during the distribution routes.
The internal walls of the equipment, as well as those of the water containers, have slots through which metal separators can slide to create different spaces for different product presentations. To close this stage, a prototype was built that complies with the functionality of the concept, so that it can be evaluated in the next stage of the methodology.
2.4 Delivery
For this last stage of the double diamond model, the perishable-food logistics strategy was validated with the prototype of the conservation tool, using products developed at the Amazcala campus of the Autonomous University of Querétaro and marketed at the tianguis (open-air market) established at the central campus of the same university. The products placed in the prototype for evaluation are containers with 125 ml of milk and a milk-based dessert (custard), both produced on that campus.
As can be seen in the figure above, the samples are identified so that one is placed inside the prototype of the conservation tool and the other outside it. For the milk samples, the following initial data were taken before placing them in the prototype.
In the case of the milk-based dessert, the measurement will be visual, since this product presents syneresis (the expulsion of liquid from a mixture) and loses its texture if the cold chain is not maintained correctly during distribution.
The validation protocol for the conservation tool prototype consists of the following steps:
- The conservation tool prototype is mounted on the truck at 8:30 am.
- The products to be marketed are loaded onto the truck over a period of 30 minutes.
- The products leave the Amazcala campus for the point of sale at the downtown campus at 9:00 am.
- The van arrives around 10:15 am at the Centro campus, at the engineering faculty, and unloads the products at the point of sale. It then leaves to make other deliveries to different points of sale.
- At 1:00 pm the van returns to the engineering-faculty point of sale to pick up the unsold product.
- At 3:15 pm the van arrives at the Amazcala campus, at which point the samples are taken and analyzed.
Before this validation protocol, the equipment was left connected the night before, from 1:30 am to 8:30 am, a period of 7 hours, to ensure that the ice blocks formed correctly.
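The durations implied by this timeline (the 1:30-8:30 charging window and the time the samples spend off mains power between mounting and analysis) can be checked directly:

```python
from datetime import datetime

FMT = "%H:%M"

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two same-day 24-hour clock times."""
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    return delta.seconds / 3600

print(hours_between("1:30", "8:30"))   # 7.0  -> overnight charging window
print(hours_between("8:30", "15:15"))  # 6.75 -> hours off mains power
```

So the ice blocks formed over 7 hours must keep the cabinet cold for about 6 hours and 45 minutes, which is the figure the hold-time measurements of the evaluation protocol need to confirm.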
Initial data for the milk samples:
- Acidity: 16°D
- pH: 6.7
- Temperature: 4.1°C
- Storage system temperature: 5°C
Source: Esau, Amazcala campus.
The objective was to compare the use of the conservation-tool prototype with the dry box of the truck in which the products are normally transported from the Amazcala campus, in order to obtain data from the specialized tool, fine-tune the strategy, and be able to launch it on the market.
3. Project Development
The micro-enterprises considered for the research and application of the project are those categorized as micro-enterprises, formed by no more than 15 workers including the owner according to INEGI, within the trade sector in the branch of food and/or perishable products. These companies were approached with an interview questionnaire to learn about and analyze the tools and procedures they use in their supply chain. With this information, candidate companies for the project should have at least one of the following characteristics:
- They do not have clear, established logistics for the distribution of their perishable products.
- The tools with which they distribute and conserve perishable products are not adequate and damage the presentation of their products.
- Distribution logistics are complex, either because they don't know the ideal conservation tools and optimal temperature ranges for their products, or because they lack the economic justification to invest in specialized refrigerated transport.
Having explained the process of selecting companies for the development of the project, below
is a list of the activities to be carried out in order to design a logistics strategy for perishable products
for the microenterprise:
1. A problem is discovered through the observation of the actors.
2. Review of literature on issues associated with the problem.
3. Application of interviews to a population of micro-entrepreneurs on the application of the CSA in
their business. (The sample population comprises businesses whose line of business is the
commercialization of prepared or perishable foods in the state of Querétaro, specifically in the
municipality of San Juan del Río.)
4. Case studies and similar projects.
5. Analysis of the information collected.
6. Exploration of users’ needs and aspirations.
7. Definition of the design problem.
8. Definition of project specifications.
9. Stage of development of potential solutions.
10. Weighting of solutions to find the most feasible.
11. Conceptualization of the possible solution.
12. Design of support material for the communication of benefits of the strategy.
13. Prototyping of the product.
14. Execution of the strategy based on the prototype.
15. Validation of the strategy according to KPI.
- Quantity of perishable products that arrive in optimal conditions of presentation and
conservation.
- Time of handling and organization of the perishable products so as not to break the CF in the
CSA.
- Increase in sales due to adequate management of the conservation and presentation of
perishable products.
16. Analysis of results.
17. Conclusions.
- Smartphone for communication with users, as well as for taking videos and photographs as a record.
- Various materials for the prototype.
CHAPTER-4
1. Introduction
Mental workload is investigated in ergonomics and human factors and represents a topic of increasing importance. In working environments, high cognitive demands are imposed on operators, while physical demands have decreased (Campoya Morales, 2019).
These figures illustrate the serious public health problem that road accidents cause in the world and in Mexico, and the strong negative impact they generate on society and the economy. Hence, in 2011, the WHO created a program called the Decade of Action for Road Safety 2011–2020, through which it called on several countries to generate actions aimed at mitigating this problem.
In Mexico, the National Road Safety Strategy 2011–2020 was promoted in 2011 and, within the 2013 National Development Plan, the Road Safety Specific Action Program 2013–2018 (PAESV) was launched. The goal of reducing the mortality rate caused by road accidents by 50% was proposed, as well as minimizing injuries and disabilities, through 6 strategies and 16 lines of action concentrated in 5 main objectives:
1. To generate data and scientific evidence for the prevention of injuries caused by road accidents
2. To propose a legal framework on road safety that includes the main risk factors present in road
accidents
1 Universidad Autónoma de Querétaro
2 Universidad Autónoma de Ciudad Juárez
* Corresponding author: luis.maldonado@uaq.mx
Analysis of Metropolitan Bus Drivers Mental Workload 35
3. To contribute to the adoption of safe behaviors of road users to reduce health damage caused by
road accidents
4. To promote multisector collaboration at the national level for the prevention of road accident
injuries
5. To standardize prehospital medical emergency care of injuries.
3. Methodology
The development of this project will be based on a variant of the Double Diamond methodology proposed by the Green Dice company of the United Kingdom, which they called the Triple Diamond.
Figure 1: Methodology of the triple diamond proposed by the Green Dice company. Source: http://greendice.com/double-diamond.
Unlike the Double Diamond methodology, the Triple Diamond methodology incorporates a third, intermediate diamond: the Development stage moves to this intermediate diamond, and two stages are added, Distinction and Demonstration. This methodology was selected because, in addition to the steps typical of the development and implementation of a project, it integrates a stage essential to any innovation project: Distinction. In this stage, the aspects that show that the project has the character of an innovation are highlighted. Next, each stage of the Triple Diamond process is described in more detail.
Discovery
The discovery stage is the first in the methodology. It starts with the initial idea and inspiration. In this phase, the needs of the user are stated and the following activities are executed:
• Market research
• User research
• Information management
• Design of the research groups
Definition
In the definition stage, the interpretation of the user needs is aligned with the business or project
goals. The key activities during this phase are the following:
• Project planning
• Project Management
• Project closure
Development
During this phase, the activities planned for the development of the project are performed, based on the established plan, with iterations and internal tests in the company or group of developers.
The key activities during this stage are the following:
• Multi-disciplinary work
• Visual Management of the project development
• Methods Development
• Tests
Distinction
The distinction stage reveals and establishes the characteristics that distinguish the project proposal from the rest of the proposals; it also determines the strategy to follow to ensure that the target customers will actually choose the product or service developed in the project.
The key tasks during this stage are the following:
• Definition of critical and particular characteristics
• Definition of introduction strategy
• Development of market strategy
Demonstration
In the demonstration stage, prototypes are made to evaluate the level of fulfillment of the project's purpose and to ensure that the design addresses the problem for which it was created. Unlike the iterations made during the development stage, the tests carried out during the demonstration stage are performed with the final customer. The key tasks in this phase are the following:
• Requirements compliance analysis
• Prototype planning and execution
• Execution of tests with end users
• Evaluation of the test results
• Definition of design adjustments
Delivery
Delivery is the last stage of the Triple Diamond methodology. During this stage, the product or service developed is finalized and launched to the market and/or delivered to the final customer. The main activities that occur during this stage are the following:
• Final tests, approval and launch
• Evaluation of objectives and feedback cycles
• Structuring of the database. This is taken as an initial part of the development, as a strategic step to optimize the development and performance of the mobile application designed for the user (data usage, optimization of device memory and processing resources)
• Development of the first mobile application prototype to evaluate the user’s experience during
its use through focus groups
• Definition of Story Boards for the mobile application
• Development of the consultation platform
• Platform performance tests
• Establishment of servers and information management methods.
5. Development
5.1 Nature of driver behavior
General demographic factors that influence the behavior of drivers will be analyzed.
• The problem from the perspective of public health. The programs that have arisen worldwide to attack the problem, and their adoption, will be reviewed.
• Information technology applied to road safety. In this section, the available information tools and how they have been used in terms of road safety will be reviewed.
• Big Data. A brief overview of the term and its application to the project. It will also be explained how this concept is increasingly important in terms of commercial value and business opportunity.
• Mobile applications focused on driver assistance and accident prevention. We will review the main applications currently available in the market and their contribution to the solution of the same problem.
Figure 2: Visual representation of the analyzed sample, characterizing the diverse socio-economic aspects, the mobbing (including social blockade), and the reflected labor performance.
57 (F: 14; M: 33) and Sample 4—Metropolitan Area of Milan: 167 (F: 23; M: 44). The Querétaro bus drivers (see Figure 2) are the group that presents the greatest differences in the relation between salary and working day, and these differences lie in their place of origin: the bus drivers who suffer the most mobbing come from Oaxaca, Guerrero, and Veracruz; an intermediate group of bus drivers from Coahuila, Zacatecas, and Durango tries to band together to negotiate with the majority group; and, finally, the members of the most recent wave, coming from the Federal District, the State of Mexico, and Morelos, even become intimidators on their respective transport routes, because they have the greatest social capital of the group and tend to be accustomed to longer working days. This last group can therefore be considered completely heterogeneous in its relations with the majority group.
Through the public policies of the European Union, it is easy to identify that the group of Milan bus drivers ranks first, but the results are very dispersed: in the Salvador de Bahia samples work stress is greater; in Querétaro the work-life relationship is not the most appropriate; and in Palermo there is little or no recognition of bus drivers, which implies that there are more strikes there than in the rest of the groups.
positive impact on the road culture of future generations at the steering wheel, as is proposed for a Smart City in Figure 3.
Figure 3: Comparison of technology use in a Smart City model to improve the mental health of bus drivers in four societies, including the human factor.
Of all these applications, the most common worldwide is the so-called "Waze", which suggests the routes a user can take after setting a destination within the application, taking into account the conditions of each route as determined mainly by reports from other users of the same application. Recently, Volvo became the first automaker to launch radar-based safety systems in automobiles with the launch of the XC90 Hybrid in India in 2016. Features such as airbags and ABS have already begun to become standard safety features, and in the future we expect more of these features to become standard. The most specific implementation arises when more than one vehicle is moving at different speeds and there can be more than two collision points, as can be seen in Figure 4.
In the end, the ease with which applications for various purposes can be generated today, and the great boom in the use of mobile phones across different societies, allow us to view these conditions as an excellent source for data mining. This data was provided so that we could map it over the street map view of the city. We found that around 3,000 total accidents in the city involve at least one motorcycle. Using a Kriging model, it is possible to determine the correct tendency map for the future, as is shown in Figure 5.
Figure 4: Model of IoT based on sensors for the control of the car, with an added radar to identify obstacles at a proximity of up to one kilometer, enabling automatic braking and the implementation of a multi-objective security model for a smart city.
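Kriging estimates a value at an unsampled location as a weighted sum of nearby observations, with weights solved from a variogram model of spatial correlation. The chapter relies on QGIS's built-in implementation; the sketch below only illustrates the idea with a minimal ordinary-kriging solver, assuming a simple linear variogram and purely illustrative coordinates and values (not the chapter's data):

```python
import numpy as np

def ordinary_kriging(pts, vals, target, variogram=lambda h: h):
    """Estimate the value at `target` from observed points by ordinary kriging.

    pts: (n, 2) known coordinates; vals: (n,) observed values.
    Uses a linear semivariogram by default; a real study fits one to the data.
    """
    pts, vals = np.asarray(pts, float), np.asarray(vals, float)
    n = len(pts)
    # Semivariance between every pair of known points, bordered by a
    # Lagrange-multiplier row/column enforcing that the weights sum to 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(pts - np.asarray(target, float), axis=1))
    w = np.linalg.solve(A, b)  # last entry is the Lagrange multiplier
    return w[:n] @ vals

# Illustrative accident densities at three grid points
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [1.0, 2.0, 3.0]
print(ordinary_kriging(pts, vals, (1.0, 0.0)))  # exact at a data point: ~2.0
print(ordinary_kriging(pts, vals, (0.5, 0.5)))  # interpolated estimate
```

Note that ordinary kriging is an exact interpolator: at an observed location it reproduces the observed value, while between points it produces the smoothed surface seen in the kriging layers of Figures 5 and 7.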
Figure 5: A kriging model used to determine changes in an Ecological model and its Visual representation associated with
the modeling of loss of forest in a landscape. Source: http://desktop.arcgis.com/es/arcmap/10.3/tools/3d-analyst-toolbox/
how-kriging-works.htm.
We fetched exclusively this data to map it in QGIS. To make all this information fit in the map in an ordered way, we took the spreadsheet file and randomly generated the latitude and longitude of the points of the map. All the other data fields were preserved as they came. The spreadsheet software we used to generate the random positions, LibreOffice Calc, also needed to limit all the points so that they fit within the area of Juarez. Calc does not generate random numbers with the decimal precision required for cartographic accuracy in QGIS. To solve this problem, we created an algorithm to create the positions. We used two equations, one for each coordinate of a point in our model, shown in Equation 1 and Equation 2.
latitude = rand × (north − south) / 10,000    (1)

longitude = rand × (west − east) / 10,000    (2)
The formulas above constitute the integer random generator for the positions of the points. The actual cardinal limits of the city are of the same integer order (31.5997 south, 31.7845 north, –106.3077 east, –106.5475 west). We therefore multiplied all numbers by 10,000 to open the range and let the Calc random algorithm place the points on a finer grid; after a number is calculated, it is divided by 10,000 again.
Once all the points were calculated, the spreadsheet file was parsed as a CSV (comma separated
values) file, so the QGIS software can read the data of the records and print the layer of the locations.
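The scaled-integer workaround and the CSV export can be sketched in a few lines of Python. The bounds are the cardinal limits quoted above; the file name and column names are illustrative, not the project's actual schema:

```python
import csv
import random

# Cardinal limits of the Juarez area quoted in the text
SOUTH, NORTH = 31.5997, 31.7845
WEST, EAST = -106.5475, -106.3077

def random_position():
    """Emulate the Calc workaround: draw an integer in the range scaled
    by 10,000, then divide by 10,000 to recover 4 decimal places."""
    lat = random.randint(round(SOUTH * 10000), round(NORTH * 10000)) / 10000
    lon = random.randint(round(WEST * 10000), round(EAST * 10000)) / 10000
    return lat, lon

# Write the randomized points as a CSV layer that QGIS can load
with open("accident_points.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["latitude", "longitude"])
    for _ in range(3000):
        writer.writerow(random_position())
```

Each generated coordinate has exactly four decimal places and falls inside the city's bounding box, matching the precision the text requires for QGIS.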
Then the points were manually repositioned to the closest road, one by one. Also, some of the points, when compared with OSM, were outside the area of Juarez; these were relocated inside the desired area or, in some cases, deleted, as is shown in Figure 6.
Once this was done, the Kriging method was applied; QGIS has a built-in method to perform this. The kriging was applied after adjusting the grid and the area size and removing the OSM layer beneath, so that the newly calculated layer would not be too big. Since the points are still very small at the scale indicated by their absolute positions, the new kriging-calculated layer was difficult to adjust, as is shown in Figure 7.
Figure 7: Final analysis of each point where a traffic accident involving deaths associated with Italian scooters is predicted to occur. The layer beneath the yellow dots (reddish colors) is the kriging-generated layer, which shows the most probable places in lighter colors. Some sites outside the city were lightly shaded; these could be errors in the algorithm.
If we compare the map above with the OSM, we can see the areas where motorcycle accidents are frequent. In addition, we propose a decision-making tool using ubiquitous computing to help parents locate children during the tour in case they have forgotten some object required for class, as can be seen in Figure 8:
Figure 8: Representation of each node and their visualization in the streets related with our Kriging Model.
In the Google Maps screenshot of the city's traffic on a Friday afternoon, the places in dark red, red, and orange are where traffic is worse, while the areas in green can be considered fluid and empty. Comparing the reddish areas with the kriging prediction in Figure 4 shows, for example, the already mentioned Pronaf section and also the bridge crossing to the neighboring city, El Paso, which is very crowded due to security protocols in the United States. In Figure 9, we show all the avenues where the majority of traffic accidents related to Italian scooters occur.
The kriging applied to the map does not directly predict traffic, but traffic is directly related to accidents.
7. Conclusions
• Develop a mobile application (first stage) that serves as the interface for capturing reports of risk incidents. Through this application installed on their mobile phones, citizens will be able to report risk situations observed during their daily journeys.
• Design a gamification-based scheme that motivates users to constantly capture reports of traffic-risk incidents. This scheme must allow constant feeding of the database.
• Configure a database (second stage) that contains relevant parameters for the definition of specific traffic-risk patterns.
• Process the information contained in the database to present it in a geo-referenced way, classified by type of incident, schedule, type of vehicle, and incident coordinates through a consultation platform (third stage), graphically representing the red dots on a map, as in (Jiang et al., 2015).
• Implement a predetermined warning function that notifies other registered users directly about situations that can put them or other road users at risk and that have probably gone unnoticed before the notification (condition of lights, tires, etc.).
Figure 9: Google Maps traffic map over Juarez area. The green colors are fluid, and orange, red and dark red are jammed
sites.
This innovative application can be used in other smart cities, such as Trento, Kansas City, Paris,
Milan, Cardiff, Barcelona, and Guadalajara.
8. Future Research
Through the implementation of this project, we intend to generate a consultation platform that can provide accurate and relevant data for urban mobility research, so that those in charge of generating improvement proposals in this area have the information available for analysis. The database will contain the data that are most representative and important for describing the risk factors: date, time, type of vehicle (through the capture of license plates), coordinates, and weather conditions. The information will be presented on the consultation platform in a georeferenced manner. This format will make it possible to identify areas of conflict, and specific analyses can be made through filters that help visualize the study parameters of interest on the map of a given area. The platform will be limited to providing statistical information, so that it can be easily localized and adapted to local systems and legislation. Concerning the users of the mobile application, to allow constant feeding of the data, the gamification-based system will motivate them continuously, generating a status scheme within the application and giving them access to various benefits for their collaboration.
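The record described above (date, time, vehicle type, coordinates, weather) and the platform's consultation filters can be sketched as a small data structure. All names here are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    """One geo-referenced risk report, with the fields the text lists."""
    date: str          # e.g. "2020-03-15"
    time: str          # e.g. "17:40"
    vehicle_type: str  # e.g. "bus", "motorcycle"
    latitude: float
    longitude: float
    weather: str       # e.g. "rain"

def filter_reports(reports, vehicle_type=None, weather=None):
    """Apply the consultation platform's filters to a list of reports."""
    return [r for r in reports
            if (vehicle_type is None or r.vehicle_type == vehicle_type)
            and (weather is None or r.weather == weather)]

# Two illustrative reports inside the Juarez bounding box
reports = [
    IncidentReport("2020-03-15", "17:40", "motorcycle", 31.72, -106.45, "clear"),
    IncidentReport("2020-03-15", "18:05", "bus", 31.66, -106.42, "rain"),
]
print(len(filter_reports(reports, vehicle_type="motorcycle")))  # 1
```

Filtered records like these are what the platform would render as georeferenced dots on the map, classified by incident type and schedule.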
References
Campoya Morales, A.F. 2019. Different Equations Used During the Mental Workload Evaluation Applying the NASA-TLX Method. SEMAC 2019. Ergonomía Ocupacional. Investigaciones y Aplicaciones, Vol. 12. México. Retrieved from: http://www.semac.org.mx/images/stories/libros/Libro%20SEMAC%202019.pdf.
Casillas, Zapata. 2015. La influencia de la Infraestructura Vial del Área Metropolitana de Monterrey sobre el Comportamiento del Automovilista. México.
Dominguez, Karaisl. 2013. Más allá del costo a nivel macro: los accidentes viales en México, sus implicaciones socioeconómicas y algunas recomendaciones de política pública. México.
Hijar, Medina. 2014. Los accidentes como problema de salud pública en México. México.
Jain, S. 2017. What is missing in the Double Diamond Methodology? Retrieved from: http://green-dice.com/double-diamond
Jiang, Abdel-Aty, Hu and Lee. 2015. Investigating macro-level hotzone identification and variable importance using big data: A random forest models approach. United States.
List, Schöggl. 2000. Method for Analyzing the Driving Behavior of Motor Vehicles. US6079258. Austria.
Organización Panamericana de la Salud. 2011. Estrategia Mexicana de la Seguridad Vial. México.
Ponce Tizoc. 2014. Diseño de Política Pública: Accidentes de Tránsito Ocasionados por el uso del Teléfono Celular en la Delegación Benito Juárez. México.
CHAPTER-5
In 2015 alone, 23.1 million people were diagnosed with diabetes, according to data gathered by
the Centers for Disease Control and Prevention (CDC). This sum of people joined the estimated
415 million people living with diabetes in the world. The fast rise of diabetes prevalence and its
life-threatening complications (kidney failure, heart attacks, strokes, etc.) has made healthcare and
technology professionals find new ways to diagnose and monitor this chronic disease. Among its different types, type 2 diabetes is the most commonly found in adults and elderly people. Anyone
diagnosed with diabetes requires a strict treatment plan that includes constant monitoring of
physiological data and self-management. Ambient intelligence, through the implementation of
clinical dashboards and mobile applications, allows patients and their medical team to register and
to have access to the patient’s data in an organized and digital way. This paper aims to find the most
efficient mobile application for the monitoring of type II diabetes through a multicriteria analysis.
1. Introduction
For a disease to be considered chronic, it has to have one or more of the following characteristics:
They are permanent, leave residual disability, are caused by irreversible pathological alteration,
require special training of the patient for rehabilitation, or may be expected to require a long period
of supervision, observation, or care [1]. The last characteristic involves self-management and
strict monitoring of the patient’s health data, to avoid the development of life-threatening
complications [2].
Diabetes meets all of these characteristics; therefore, it is considered a chronic metabolic disorder
characterized by hyperglycemia caused by problems in insulin secretion or action [3]. Type II
diabetes, previously known as non-insulin-dependent diabetes or adult-onset diabetes, accounts for
about 90% to 95% of all diagnosed cases of diabetes [4]. Over the years, its prevalence has been
increasing all over the world and as a result, it is becoming an epidemic in some countries [5] with
the number of people affected expected to double in the next decade.
Furthermore, as it was previously mentioned, diabetes, as a chronic illness, requires constant
monitoring of a patient’s health parameters. Patient monitoring can be defined as “repeated or
1 Universidad Autónoma de Ciudad Juárez, Av. Hermanos Escobar, Omega, 32410 Cd Juárez, Chihuahua, México.
2 Universidad Nacional Autónoma de México, Ciudad Universitaria, 04510, CDMX, México.
* Corresponding author: mayra.elizondo@comunidad.unam.mx
continuous observations or measurements of the patient and their physiological function, to guide
management decisions, including when to make therapeutic interventions, and assessment of those
interventions” [6]. However, most patients have difficulty adhering to the self-management patterns
that diabetes monitoring requires.
This problem, as well as the rise of other chronic diseases, has forced the healthcare system to transform how it manages and displays information to meet its needs [7]. New emerging systems for diabetes care have the potential to offer greater access, provide improved care coordination, and deliver services oriented to higher levels of need. The development of tools and the implementation of ambient intelligence have brought technical solutions for doctors, as well as help for patients in the management of their health [8].
Some of the barriers to the use of mobile applications for diabetes management are cost, insufficient scientific evidence, limited usefulness in certain populations, data protection, data security, and regulatory barriers. Cost is a potential barrier for any new medical technology: it can be hard for some patients to use these technologies due to the high cost of smartphones and the lack of internet services. Regarding insufficient scientific evidence, there are not enough studies that show the effectiveness of these apps. As for limited usefulness in certain populations, many of these apps may not be useful for the elderly, non-English speakers, and the physically challenged. Also, the protection of the information uploaded to an application, as well as the proper use of this software and digital tools, are important [16].
Application domain
The different devices that exist in ambient technology, such as mobile apps that ease the monitoring of diabetes in everyday life, are helpful for patients and their medical team. Each application has different features that make one better than another; in the following work, seven different mobile applications will be analyzed to determine which one has the best performance.
Problem statement
When searching on their phone, a person with type II diabetes must choose among a large number
of applications. To choose the best one, four evaluation criteria (functionality, usability, user
information, and engagement with user), which were then divided into subsequent sub-criteria, were
considered for the selection between seven different mobile applications available in online mobile
stores.
2. Methodology
There are different mobile applications (apps) that are used to keep track of the important parameters
of a patient with diabetes, as well as encourage self-monitoring.
The seven chosen to be compared by a multicriteria analysis (the analytical hierarchy process)
are shown in Table 1.
Diabeto Log is a mobile app developed by Cubesoft SARL that allows the user to see their blood glucose tests and medication intake on a single screen (Figure 1). This app was designed specifically to help the user see trends and compare data from one day to another. It also registers parameters such as the number of hypoglycemias, the number of hyperglycemias, the estimated A1c, and the number of insulin units used.
Table 1: Mobile apps reviewed using multicriteria analysis.
Name                             Developer
Diabetes Tracker: Diabeto Log    Cubesoft SARL
gluQUO: Control your Diabetes    QUO Health SL
BG Monitor                       Gordon Wong
Bluestar Diabetes                WellDoc, Inc.
OnTrack Diabetes                 Vertical Health
Glucose Buddy+ for Diabetes      Azumio Inc.
mySugr                           mySugr GmbH
BG Monitor is a diabetes management app with a clean user interface and a filtering system that allows users to find what they are looking for (Figure 3). It provides statistics on blood glucose levels and can help identify trends and adjust insulin dosages. The app also has reminders to check blood glucose or administer insulin, and it creates reports that the user can email from within the app.
Bluestar Diabetes is advertised as a digital health solution for type 2 diabetes. It provides tailored guidance driven by artificial intelligence (Figure 4). It collects and analyses data to provide precise, real-time feedback that supports healthier habits in users and more informed conversations during care team visits. In addition to glucose, it tracks exercise, diet, lab results, symptoms, and medication.
Multicriteria analysis of Diabetes Clinical Dashboards 51
Figure 3: BG Monitor.
OnTrack Diabetes is a mobile app developed by Vertical Health. It tracks blood glucose, haemoglobin A1c, food, and weight. It generates detailed graphs and reports that the user can share with their physician, and it allows the user to easily keep track of daily, weekly, and monthly glucose levels (Figure 5).
Glucose Buddy+ helps in the management of diabetes by tracking blood sugar, insulin, medication, and food (Figure 6). The user can get a summary for each day, as well as long-term trends. It can be accessed as a mobile application, but it is also available for desktop and tablet.
MySugr is an application that provides a digital logbook and shows personalized data analysis, such as the estimated A1c (Figure 7). It also syncs data from glucometers via Bluetooth.
in the diagram of Figure 8. The information is structured with the goal of the decision at the top, through the intermediate levels (criteria on which subsequent elements depend), down to the lowest level (which is usually a set of the alternatives).
When searching about these mobile applications, each one has different parameters in the sub-
criteria mentioned above. The values for each criterion are shown in Tables 3–6, where each letter
represents an application: A is Diabeto Log, B gluQUO, C is OnTrack Diabetes, D is Bluestar
Diabetes, E is BG Monitor, F is Glucose Buddy+, and G is mySugr.
Figure 7: mySugr.
Criteria Sub-criteria
Functionality • Operating System
• Interfaces
• Capacity that it occupies in memory
• Update
• Rank
Usability • Languages Available
• Acquisition Costs
• Target user groups
User information • User rating
• Number of consumer ratings
• Number of downloads
Engagement with user • Reminders/alerts
• Passcode
• Parameters that can be registered
In the next step, with the information available, it is possible to construct a set of pairwise comparison matrices: each element in an upper level is used to compare the elements in the level immediately below it.
To make the comparisons, we need a scale of numbers that indicates how many times more
important or dominant one element is over another element with respect to the criterion or
property for which they are compared [18]. Each number represents an intensity of importance,
where 1 represents equal importance and 9 extreme importance (Table 7).
With this scale of numbers, it is possible to construct the pairwise comparison matrices for the
criteria and each sub-criterion, as shown in Table 8.
Once the comparison matrix of the criteria is done, the following step is to obtain the weights
through a normalized matrix. This is done by calculating the sum of each column, then
normalizing the matrix by dividing the content of each cell by the sum of its column, and finally
calculating the average of each row; see Table 9.
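The column-normalization and row-averaging steps above can be sketched in plain Python, using the pairwise comparison matrix of Table 8 (the helper name `ahp_weights` is ours, not the chapter's):

```python
# AHP weight derivation: normalize each column of the pairwise
# comparison matrix by its column sum, then average each row.
# Matrix taken from Table 8 (criteria: functionality, usability,
# user information, engagement with user).

M = [
    [1,   3,   6, 5],     # functionality
    [1/3, 1,   5, 4],     # usability
    [1/6, 1/5, 1, 1/3],   # user information
    [1/5, 1/4, 3, 1],     # engagement with user
]

def ahp_weights(matrix):
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    # Divide every cell by its column sum, then average each row.
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

weights = ahp_weights(M)
# Rounded, these reproduce the weights reported in the text:
# functionality 0.537, usability 0.285, user info 0.060, engagement 0.118
```

Running this reproduces the criteria weights reported below (Table 9 and Figure 9) to rounding.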
54 Innovative Applications in Smart Cities
Criterion: Functionality
OS Int. Memory Rank Update
A iOS Yes 55.8 MB 1,290 2019
B Both Yes 49.1 MB 1,355 2019
C Android Yes 1.8 MB N/A 2018
D Both Yes 45MB 381 2019
E Android No 2.8 MB N/A 2017
F iOS No 193.6 MB 103 2019
G Both Yes 109 MB 132 2019
Criterion: Usability
Languages Available   Acquisition Costs   Target User Groups
A 5 Freeware Patient
B 2 Freeware Patient
C 1 Freeware Both
D 1 Freeware Both
E 1 $119 MX Both
F 31 $39 MX Patient
G 22 Freeware Patient
Table 8: Pairwise comparison matrix of the main criteria with respect to the goal.
Criteria        Functionality   Usability   User info.   Engagement with user
Functionality   1               3           6            5
Usability       1/3             1           5            4
User info.      1/6             1/5         1            1/3
Engagement      1/5             1/4         3            1
Total           1.7             4.45        15.00        10.33
Table 9: Normalized pairwise comparison matrix and calculation of the weight criteria.
In conclusion, user information is the least important criterion with a weight of 0.060479,
followed by engagement with user with 0.11765, then usability with 0.28530; functionality is the
most important criterion with a weight of 0.53656 (Figure 9).
To verify that the comparisons made and the weights obtained are consistent, it is important to
check the system consistency. The first step of this check is to calculate the weight sums vector:
{Ws} = {M} . {W}
The weight sums vector is in Table 10.
λmax = (4.36 + 4.33 + 4.07 + 4.06) / 4 = 4.205                              (1)

Consistency Index = CI = (λmax − n) / (n − 1) = (4.205 − 4) / 3 = 0.068     (2)

Consistency Ratio = CR = CI / RI = 0.068 / 0.99 = 0.06                      (3)
The value of the consistency index is 0.068, while the value of the consistency ratio is 0.06.
Since CR < 0.1, the comparisons are acceptable and the system is consistent.
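The consistency check can be sketched the same way, using the weight sums vector of Table 10 and the RI = 0.99 value used in the text (variable names are ours):

```python
# Consistency check for the AHP comparison matrix (n = 4 criteria).
# Ws is the weight sums vector {Ws} = [M]·{W} from Table 10, and W the
# criteria weights; lambda_max is the average of the ratios Ws_i / W_i.

n = 4
Ws = [2.34353, 1.23710, 0.24617, 0.47769]    # weight sums vector (Table 10)
W = [0.53656, 0.28530, 0.060479, 0.11765]    # criteria weights (Table 9)

lambda_max = sum(ws / w for ws, w in zip(Ws, W)) / n
CI = (lambda_max - n) / (n - 1)   # consistency index
CR = CI / 0.99                    # consistency ratio, with RI = 0.99 as in the text

# CR < 0.1, so the pairwise comparisons are acceptably consistent.
```

The small differences from the values in Eqs. (1)–(3) come only from intermediate rounding in the text.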
Table 10: Weight sums vector: {Ws} = [M]·{W} = (2.34353, 1.23710, 0.24617, 0.47769).
Now, to choose the best mobile application among the seven alternatives we must obtain the
weight for each of the seven alternatives in each criterion. The first criterion analyzed is functionality
(Table 11 and Table 12).
Table 11: Pairwise comparison matrix for the sub-criteria with respect to functionality.
Functionality
Subcriteria O/S Int. Mem. Update Rank
O/S 1 6 3 5 7
Interfaces 1/6 1 1/6 5 1/6
Memory 1/3 6 1 6 7
Update 1/5 1/5 1/6 1 1/4
Ranking 1/7 6 1/7 4 1
Total 1.84 19.20 4.48 21.00 15.42
Table 12: Normalized matrix of the pairwise comparison of the sub-criteria with respect to functionality.
With the values obtained, the next step is to obtain the prioritization of the functionality sub-
criteria (Table 13 and Figure 10).
For the sub-criteria of usability, the same steps as before are used (Tables 14–16 and Figure 11).
Table 14: Pairwise comparison matrix for the sub-criteria with respect to usability.
Usability
Subcriteria Language Cost Target Group
Language 1 6 5
Cost 1/6 1 7
Target Group 1/5 1/7 1
Total 1.36 7.14 13
The same steps are followed for the user information criterion (Tables 17–19 and Figure 12).
Table 17: Pairwise comparison matrix for the sub-criteria with respect to user information.
User Information
Sub-criteria Rating No. Rating No. Downloads
Rating 1 7 5
No. Rating 1/7 1 4
No. Down. 1/5 1/4 1
Total 1.34 8.25 10
Table 18: Normalized matrix of the comparison matrix of the user information criterion.
Figure 12: Prioritization obtained of each sub-criterion of the user information criterion.
Finally, for the criterion of engagement with user, the weights and prioritization are obtained
(Tables 20–22 and Figure 13).
Once the priority values are obtained, the next step is to get the weight of each mobile application
for each sub-criterion. The weight of each alternative is obtained following the same steps as before.
Table 20: Pairwise comparison matrix for the sub-criteria with respect to the engagement with user.
Table 22: Prioritization of the sub-criteria with respect to the engagement with user.
Figure 13: Prioritization of each sub-criterion of the engagement with user criterion.
The criterion of functionality has the sub-criteria of operating systems (Table 23 and
Figure 14), interfaces (Table 24 and Figure 15), capacity that it occupies in memory (Table 25 and
Figure 16), last time it was updated (Table 26 and Figure 17), and the ranking it has among other
medical applications (Table 27 and Figure 18).
Table 23: Comparative matrix of the alternatives with respect to the operating system.
Figure 14: Weight of each alternative with respect to the operating system.
Table 24: Comparative matrix of the alternatives with respect to the interfaces.
Sub-criteria: Interfaces
Alt. A B C D E F G Weight
A 1 1 1 1 7 7 1 0.1891
B 1 1 1 1 7 7 1 0.1891
C 1 1 1 1 7 7 1 0.1891
D 1 1 1 1 7 7 1 0.1891
E 1/7 1/7 1/7 1/7 1 1 1/7 0.0270
F 1/7 1/7 1/7 1/7 1 1 1/7 0.0270
G 1 1 1 1 7 7 1 0.1891
Table 25: Comparative matrix of the alternatives with respect to the memory it occupies.
Figure 16: Weight of each alternative with respect to the capacity it occupies in memory.
Table 26: Comparative matrix of the alternatives with respect to the last update.
Figure 17: Weight of each alternative with respect to the last update.
Table 27: Comparative matrix of the alternatives with respect to the ranking.
Sub-criteria: Ranking
Alt. A B C D E F G Weight
A 1 3 8 1/6 8 1/7 1/7 0.0897
B 1/3 1 8 1/6 8 1/7 1/7 0.0735
C 1/8 1/8 1 1/8 1 1/8 1/8 0.0176
D 6 6 8 1 8 1/4 1/4 0.1593
E 1/8 1/8 1 1/8 1 1/8 1/8 0.0176
F 7 7 8 4 8 1 3 0.3210
G 7 7 8 4 8 3 1 0.3210
The next criterion is usability, which has the sub-criteria of languages available (Table 28 and
Figure 19), acquisition costs (Table 29 and Figure 20), and target user groups (Table 30 and Figure 21).
Table 28: Comparative matrix of the alternatives with respect to the languages available.
Figure 19: Weight of each alternative with respect to the languages available.
Table 29: Comparative matrix of the alternatives with respect to the acquisition costs.
Figure 20: Weight of each alternative with respect to the acquisition costs.
Table 30: Comparative matrix of the alternatives with respect to the target user groups.
Figure 21: Weight of each alternative with respect to the target user groups.
The user information criterion has the sub-criteria of user rating (Table 31 and
Figure 22), number of user ratings (Table 32 and Figure 23), and number of downloads (Table 33
and Figure 24).
Table 31: Comparative matrix of the alternatives with respect to the rating.
Sub-criteria: Rating
Alt. A B C D E F G Weight
A 1 7 6 4 2 1 1 0.2228
B 1/7 1 1/3 1/6 1/6 1/7 1/7 0.0247
C 1/6 3 1 1/5 1/6 1/7 1/7 0.0365
D 1/4 6 5 1 1/4 1/5 1/5 0.080
E 1/2 6 6 4 1 1/3 1/3 0.1366
F 1 7 7 5 3 1 1 0.2495
G 1 7 7 5 3 1 1 0.2495
Table 32: Comparative matrix of the alternatives with respect to the user ratings.
Figure 23: Weight of each alternative with respect to the number of user ratings.
Table 33: Comparative matrix of the alternatives with respect to the number of downloads.
Figure 24: Weight of each alternative with respect to the number of downloads.
For the engagement with user criterion, the sub-criteria to be analysed are the following: whether
the application sends alerts or reminders to the user (Table 34 and Figure 25), whether it allows a
passcode before showing the information that the user uploads to the mobile application (Table 35
and Figure 26), and the number of parameters that can be registered to manage diabetes
(Table 36 and Figure 27).
As the last step of the analytic hierarchy process, the values obtained before, shown in
Table 37, are analyzed. Each sub-criterion is numbered by the order of appearance in Table 2.
Table 38 presents the priority values of each sub-criterion.
Table 34: Comparative matrix of the alternatives with respect to the reminders.
Sub-criteria: Reminders/alerts
Alt. A B C D E F G Weight
A 1 1 7 1 1 1 1 0.1627
B 1 1 7 1 1 1 1 0.1627
C 1/7 1/7 1 1/7 1/7 1/7 1/7 0.0235
D 1 1 7 1 1 1 1 0.1627
E 1 1 7 1 1 1 1 0.1627
F 1 1 7 1 1 1 1 0.1627
G 1 1 7 1 1 1 1 0.1627
Table 35: Comparative matrix of the alternatives with respect to the passcode.
Sub-criteria: Passcode
Alt. A B C D E F G Weight
A 1 1 1 1 7 7 1 0.1891
B 1 1 1 1 7 7 1 0.1891
C 1 1 1 1 7 7 1 0.1891
D 1 1 1 1 7 7 1 0.1891
E 1/7 1/7 1/7 1/7 1 1 1/7 0.0270
F 1/7 1/7 1/7 1/7 1 1 1/7 0.0270
G 1 1 1 1 7 7 1 0.1891
Table 36: Comparative matrix of the alternatives with respect to the number of parameters.
Figure 27: Weight of each alternative with respect to the number of parameters.
Table 38: Priority values of each sub-criterion.
Sub-criteria Prioritization
Operating System 0.2379
Interfaces 0.0459
Memory 0.1562
Update 0.0236
Ranking 0.0726
Languages 0.1590
Costs 0.0649
User Groups 0.0593
User rating 0.0793
No. user ratings 0.0237
No. downloads 0.0105
Reminders 0.0047
Passcode 0.0125
Parameters 0.0524
Table 39 shows the values obtained in the final prioritization; each value was calculated by
multiplying the weight of the alternative for each sub-criterion by the priority value of that same
sub-criterion.
Table 39: Ultimate prioritization for each alternative.
Therefore, using the values obtained with the analytic hierarchy process as reference (see
Figure 28), the best alternative for a mobile application used to manage diabetes is mySugr, followed
by Bluestar, then Glucose Buddy+. Meanwhile, the least preferred option is BG Monitor.
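The final synthesis just described (global score = sum over sub-criteria of local weight × sub-criterion priority) can be sketched as follows; the sub-criterion priorities and local weights below are illustrative placeholders, not the chapter's full 14-sub-criterion data set:

```python
# Final AHP synthesis: each alternative's global score is the sum, over
# all sub-criteria, of (local weight of the alternative for that
# sub-criterion) x (priority of that sub-criterion).
# The numbers below are illustrative placeholders only.

priorities = {"os": 0.2379, "interfaces": 0.0459}   # sub-criterion priorities
local_weights = {                                    # alternative weight per sub-criterion
    "A": {"os": 0.10, "interfaces": 0.1891},
    "G": {"os": 0.20, "interfaces": 0.1891},
}

def global_score(alt):
    return sum(priorities[s] * local_weights[alt][s] for s in priorities)

scores = {alt: global_score(alt) for alt in local_weights}
best = max(scores, key=scores.get)   # the alternative with the highest global score
```

With the chapter's full data the same calculation yields the Table 39 scores (e.g., 0.2144 for mySugr).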
B) Grand Prix
The Grand Prix model is a tool for choosing among an array of alternatives while considering
human and economic factors. It can be applied to the selection of the best mobile application for
the monitoring of diabetes.
As mentioned before, according to the AHP method, the best mobile app is mySugr with a score
of 0.2144. The second-best option is Bluestar with a score of 0.1620, followed by Glucose Buddy+
with a score of 0.1539. It can be observed that, for these three options, the values obtained are not
far apart from each other. These three options are shown in Figure 29.
Considering the economic factor, mySugr and Bluestar are freeware, whilst Glucose Buddy+
is a paid option. Analysing this information, it can be observed that mySugr is still the best option
from an economic standpoint.
On the other hand, considering the human factor, we sought the application that is easiest to use
for a wide demographic, especially older adults, who are the main population affected by this disease.
It should also be available in a variety of languages and on both operating systems, so that it can
reach more people. Of these three options, the one that fits these characteristics is mySugr.
C) MOORA Method
MOORA (Multi-Objective Optimization on the basis of Ratio Analysis) is a method introduced
by Brauers and Zavadskas. It consists of two components: the ratio system and the reference point
approach. The basic idea of the ratio-system part of the MOORA method is to calculate the overall
performance of each alternative as the difference between the sums of its normalized performances
on the benefit and cost criteria.
The first step of the MOORA method is to construct a decision matrix of each alternative and
the different criteria:
               | x11 … x1n |
X = [xij]m×n = |  ⋮  ⋱  ⋮  |                                                (4)
               | xm1 … xmn |
This decision matrix shows the performance of different alternatives concerning the various
criteria (Table 40).
Next, from the decision matrix, the normalized decision matrix is obtained (Table 41). The
following equation is used to obtain the values.
x*ij = xij / √( Σ(i=1…m) xij² );   i = 1, 2, …, m and j = 1, 2, …, n        (5)
Table 40: Decision matrix (columns A1–A7 are the alternatives; rows C1–C14 the criteria).
      A1     A2     A3       A4     A5     A6      A7
C1    1      2      1        2      1      1       2
C2    2      2      2        2      1      1       2
C3    55.8   49.1   1.8      45     2.8    193.6   109
C4    1290   1355   5000     381    5000   103     132
C5    2019   2019   2018     2019   2017   2019    2019
C6    5      2      1        1      1      31      22
C7    0      0      0        0      119    39      0
C8    1      1      2        2      2      1       1
C9    4.8    3.3    3.6      4.2    4.6    4.8     4.8
C10   14     10     6088     143    94     30      284
C11   420    300    182640   4290   2820   900     8520
C12   2      2      1        2      2      2       2
C13   2      2      2        2      1      1       2
C14   7      7      6        7      4      7       10
Table 41: Normalized decision matrix.
      A1         A2         A3         A4         A5         A6          A7
C1    0.250000   0.500000   0.250000   0.50000    0.250000   0.250000    0.500000
C2    0.426401   0.426401   0.426401   0.426401   0.213201   0.213201    0.426401
C3    0.23388    0.205797   0.007545   0.188613   0.011736   0.811453    0.456861
C4    0.17608    0.184952   0.682481   0.052005   0.682481   0.014059    0.018018
C5    0.378045   0.378045   0.377857   0.378045   0.37767    0.378045    0.378045
C6    0.130101   0.052040   0.02602    0.02602    0.02602    0.806625    0.572443
C7    0.000000   0.000000   0.000000   0.0000000  0.950268   0.311432    0.000000
C8    0.250000   0.250000   0.500000   0.5000000  0.5000000  0.250000    0.2500000
C9    0.418151   0.287479   0.313613   0.365882   0.400728   0.4118151   0.418151
C10   0.002296   0.001640   0.998504   0.0234536  0.0154171  0.00492030  0.0465793
C11   0.002296   0.001640   0.998504   0.0234536  0.0154171  0.0049203   0.0465793
C12   0.400000   0.400000   0.200000   0.4000000  0.4000000  0.4000000   0.400000
C13   0.426101   0.426401   0.426401   0.426101   0.213201   0.213201    0.426401
C14   0.375239   0.375239   0.321634   0.375239   0.214423   0.375239    0.536056
Table 42: Weighted normalized decision matrix.
      A1         A2         A3         A4         A5         A6         A7
C1    0.0594     0.1188     0.0594     0.1188     0.0594     0.0594     0.1188
C2    0.019572   0.019572   0.019572   0.019572   0.009786   0.009786   0.019572
C3    0.036532   0.032146   0.001178   0.029461   0.009786   0.126749   0.071362
C4    0.004155   0.004365   0.016107   0.001227   0.001833   0.000332   0.000425
C5    0.027446   0.027446   0.0274325  0.027446   0.0274189  0.027446   0.027446
C6    0.020686   0.008274   0.004137   0.004137   0.004137   0.128253   0.091018
C7    0.000000   0.0000000  0.000000   0.0000000  0.0616724  0.0202120  0.000000
C8    0.014825   0.014825   0.02965    0.02965    0.02965    0.014825   0.014825
C9    0.033159   0.022797   0.02487    0.029014   0.031778   0.033159   0.033159
C10   0.0000544  0.0000389  0.0236645  0.0005559  0.0001166  0.0001166  0.0011039
C11   0.0000241  0.0000172  0.0104843  0.0002463  0.0000517  0.0000517  0.0004891
C12   0.00188    0.00188    0.00094    0.00188    0.00188    0.00188    0.00188
C13   0.00533    0.00533    0.00533    0.00533    0.002665   0.002665   0.00533
C14   0.019663   0.019663   0.016854   0.019663   0.011236   0.019663   0.028089
The overall performance of each alternative is then

yi = Σ(j=1…g) wj x*ij − Σ(j=g+1…n) wj x*ij

where g and (n − g) are the numbers of criteria to be maximized and minimized, respectively, and
wj is the weight of criterion j. The results are in Table 43 and Figure 30.

Table 43: Benefit and cost sums, overall performance yi, and ranking of the alternatives.
      Σ(j=1…g) wj x*ij   Σ(j=g+1…n) wj x*ij   yi         Ranking
A1    0.1642678           0.0784590            0.085808   4
A2    0.2248360           0.0503174            0.174518   3
A3    0.1341652           0.1054533            0.027811   5
A4    0.2828456           0.0041372            0.278708   1
A5    0.0656681           0.1921217            -0.12675   7
A6    0.2311133           0.2134249            0.017688   6
A7    0.3268881           0.086612             0.210276   2
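The ratio-system steps (the Eq. 5 normalization followed by the benefit-minus-cost score) can be sketched with toy data; the alternatives, criteria, and weights below are illustrative, not the chapter's:

```python
import math

# MOORA ratio system: normalize each criterion column by the square
# root of its sum of squares (Eq. 5), weight it, then score each
# alternative as the sum of its weighted benefit ratios minus the sum
# of its weighted cost ratios (y_i). Data here are illustrative only.

X = [            # rows = alternatives, columns = criteria
    [3.0, 4.0],  # alternative 0
    [4.0, 3.0],  # alternative 1
    [0.0, 0.0],  # alternative 2
]
weights = [0.6, 0.4]
benefit = [True, False]   # first criterion is a benefit, second a cost

m, n = len(X), len(X[0])
norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]

def moora_score(row):
    y = 0.0
    for j in range(n):
        term = weights[j] * row[j] / norms[j]
        y += term if benefit[j] else -term
    return y

scores = [moora_score(row) for row in X]
ranking = sorted(range(m), key=lambda i: scores[i], reverse=True)
```

Applied to the weighted normalized matrix of Table 42, the same computation produces the yi column and ranking of Table 43.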
3. Multivariate Analysis
Cluster analysis groups individuals or objects into clusters so that objects in the same cluster are
more similar to one another than to objects in other clusters. The attempt is to maximize the
homogeneity of objects within the clusters while also maximizing the heterogeneity between
clusters.
Cluster analysis classifies objects on a set of user-selected characteristics. The resulting clusters
should exhibit high internal homogeneity and high external heterogeneity. Thus, if the classification
is successful, the objects within clusters will be close together when plotted geometrically, and
different clusters will be far apart.
The process is to start with each observation as its own cluster, use the similarity measure to
combine the two most similar clusters into a new cluster, and then repeat this merging of the two
most similar clusters until the desired clustering is reached.
The results of the hierarchical clustering can be represented as a dendrogram, as shown in
Figure 31, which uses single linkage (rescaled cluster combination).
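The merging procedure described above can be sketched in Python with toy one-dimensional data (function names and data are illustrative):

```python
# Agglomerative clustering with single linkage: start with each
# observation in its own cluster and repeatedly merge the two closest
# clusters, where cluster distance is the distance between the closest
# pair of members. A minimal 1-D sketch with toy data.

def single_link_distance(c1, c2, points):
    # Single linkage: distance between the closest pair of members.
    return min(abs(points[i] - points[j]) for i in c1 for j in c2)

def agglomerate(points, target_clusters):
    clusters = [[i] for i in range(len(points))]  # one cluster per point
    while len(clusters) > target_clusters:
        # Find the pair of clusters with the smallest single-link distance.
        a, b = min(
            ((p, q) for p in range(len(clusters)) for q in range(p + 1, len(clusters))),
            key=lambda pq: single_link_distance(clusters[pq[0]], clusters[pq[1]], points),
        )
        merged = clusters[a] + clusters[b]
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [merged]
    return [sorted(c) for c in clusters]

data = [1.0, 1.2, 5.0, 5.1, 9.0]
clusters = agglomerate(data, 3)   # the two tight pairs merge first
```

Recording the distance at each merge is what yields the dendrogram of Figure 31.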
4. Discussion
There is a wide array of health mobile applications on the market. Since diabetes prevalence
continues to rise, mobile apps for the management of this disease are popular. Among these apps,
seven alternatives were analysed using the AHP, Grand Prix, and MOORA methods.
From the first set of criteria, using the AHP method, it was concluded that functionality is the
most important factor when looking for a mobile application, while user information was the least
important. These values were also used to find the priority values of each sub-criterion. From
the functionality sub-criteria, the operating system had the highest value and was thus deemed
the most important. From the usability sub-criteria, languages had the highest value. On the user
information criterion, the user rating sub-criterion was the most important. Furthermore, from the
engagement with user criterion, the parameters that could be registered were the most important.
Of each alternative, mySugr was considered the best alternative for the management of diabetes,
Bluestar the second and Glucose Buddy+ the third option. The alternative with the lowest value was
BG Monitor.
The MOORA method was also used to analyse which of the seven alternatives is the best option.
The results from this method are similar to the ones obtained through the AHP method. The number
one option from AHP is the second option using MOORA, while option number two from AHP is
the first and best option from MOORA.
References
[1] Kollman, A., Kastner, P. and Schreier, G. 2007. Chapter X: Utilizing mobile phones as patient terminals in managing
chronic diseases, in Web Mobile-Based Applications for Healthcare Management. IGI Global, pp. 227–257.
[2] Papatheodorou, K., Papanas, N., Banach, M., Papazoglou, D. and Edmonds, M. 2015. Complications of Diabetes,
Journal of Diabetes Research.
[3] American Diabetes Association, Diagnosis and Classification of Diabetes Mellitus, Diabetes Care, 31, 2009, pp. 62–67.
[4] Centers for Disease Control and Prevention, Diabetes [Online]. URL: https://www.cdc.gov/media/presskits/aahd/
diabetes.pdf. [Accessed 10 September 2019].
[5] Tabish, S. 2007. Is diabetes becoming the biggest epidemic of the twenty-first century? International Journal of Health
Sciences, 1(2): V.
[6] Gardner, R. and Shabot, M. 2006. In Biomedical Informatics, Springer, p. 585.
[7] World Health Organization, Integrated chronic disease prevention and control, Available: https://www.who.int/chp/
about/integrated_cd/en/. [Accessed 12 September 2019].
[8] Iakovidis, I., Wilson, P. and Healy, J. (eds.). E-health: current situation and examples of implemented and beneficial
e-health applications. Vol. 100. Ios Press, 2004.
[9] Cook, D.J., Augusto, J.C. and Jakkula, V.R. 2009. Ambient intelligence: Technologies, applications, and opportunities.
Pervasive and Mobile Computing, 5(4): 277–298.
[10] Dey, N. and Ashour, A. 2017. Ambient intelligence in healthcare: a state-of-the-art. Global Journal of Computer Science
and Technology, 17(3).
[11] Tang, P. and McDonald, C. 2006. Electronic Health Record Systems, in Chapter 12: Biomedical Informatics, Springer,
New York, NY, pp. 447–475.
[12] Panteli, N., Pitsillides, B., Pitsillides, A. and Samaras, G. 2006. Chapter IV: An e-Healthcare Mobile Application, in Web
Mobile-Based Applications for Healthcare Management (ed. L. Al-Hakim). Book chapter, Idea Group, accepted
for publication.
[13] Dowding, D., Randell, R., Gardner, P., Fitzpatrick, P., Dykes, P., Favela, J. and Hamer, S. 2015. Dashboards for
improving patient care: review of the literature. International Journal of Medical Informatics, 84(2): 87–100.
[14] Nouri, R., Niakan, S., Ghazisaeedi, M., Marchand, G. and Yasini, M. 2018. Criteria for assessing the quality of mHealth
apps: a systematic review. Journal of the American Medical Informatics Association, 25(8): 1089–1098.
[15] Arnhold, M., Quade, M. and Kirch, W. 2014. Mobile applications for diabetics: a systematic review and expert-based
usability evaluation considering the special requirements of diabetes patients age 50 years or older. Journal of Medical
Internet Research, 16(4): e104.
[16] Shah, V., Garg, S., Viral, N. and Satish, K. 2015. Managing diabetes in the digital age. Clinical Diabetes and
Endocrinology, 1(1): 16.
[17] Taherdoost, H. 2017. Decision making using the Analytic Hierarchy Process (AHP): A step by step approach,
International Journal of Economics and Management Systems, 2: 244–246.
[18] Saaty, T. 2008. Decision making with the analytic hierarchy process. International Journal of Services Sciences,
1(1): 83–98.
CHAPTER 6
Color blindness is a condition that affects the cones in the eyes; it can be congenital or acquired and
is considered a moderate disability that affects about 10% of the world's population. Children with
color blindness face particular difficulties when entering an educational environment with materials
developed for people with normal vision. This work focuses on modifying the Ishihara test to apply
it to preschool children. The proposed test helps to identify children who suffer from color blindness
so that the teacher who guides them in school can attend to them.
1. Introduction
The sense of sight in human beings, as in other organisms, depends on the eyes; these use two types
of cells for the perception of images, i.e., rods and cones (Richmond Products, 2012).
The rods are used to detect luminosity, i.e., the amount of light received from the environment,
and the cones are used to identify the color, or frequency within the spectrum, of the light received
(Colorblindor, 2018).
In most people there are three types of cones, each one perceiving a basic color: red, green, or
blue. All other colors are the result of combinations of the amounts of light received at the
frequencies of these basic colors (Deeb, 2004).
The world around us is designed to work with the colors that are perceived with three cones,
since most people can perceive the environment with three basic colors, i.e., they are trichromats.
However, there are reports of people with a fourth type of cone, which allows them to perceive more
colors than the average person. These people often have problems describing the environment and
the tones they perceive, since the world is not made with their sensory perception in mind
(Robson, 2016).
1 Universidad Politécnica de Aguascalientes, Calle Paseo San Gerardo, Fracc. San Gerardo, 20342 Aguascalientes, Aguascalientes, México.
2 Universidad Autónoma de Aguascalientes, México.
3 Universidad Juárez Autónoma de Tabasco, México.
4 Universidad Autónoma de Juárez, México.
* Corresponding author: jorge.rodas@uacj.mx
On the other hand, there are also people with reduced color perception. This condition is called
color blindness and is considered a moderate disability, since trichromatic color perception is used
in various activities, such as identifying objects in a conversation, identifying dangerous situations,
knowing when to advance at a traffic light, deciding what clothes to buy, and enjoying art forms
such as painting or photography (Kato, 2013).
Color blindness can be classified into variants according to the cones available to perceive
the environment: anomalous trichromacy, dichromacy, and monochromacy or achromatopsia
(Colorblindor, 2018).
The most common variant of color blindness is anomalous trichromacy, in which all the cones
for color perception are present, but one of them is deficient. Anomalous trichromacy can be
separated by severity into mild, medium, or strong and, depending on the color in which the
deficiency is presented, into deuteranomaly (deficiency in the perception of green), protanomaly
(in the perception of red), and tritanomaly (in the perception of blue) (Huang et al., 2011).
Another variant of color blindness, which occurs less frequently but is more severe, is dichromacy.
Here one type of cone is absent, i.e., the person cannot perceive one of the basic colors, which
causes problems with all colors that contain this tone. For example, a person who has problems
with the green cone will have problems with all forms of green, but also with yellow and brown,
because these are built on green as a base. Dichromacy can also be classified depending on the
absent cone: it is deuteranopia when the green cone is absent, protanopia when the red cone is
absent, and tritanopia when the blue cone is absent (Colorblindor, 2018).
Monochromacy, or achromatopsia, is the rarest form of color blindness; here all the cones are
absent and the environment is perceived only in gray scales, i.e., luminosity. Although it is very
rare, it represents a major difficulty for the people who suffer from it, since they cannot live a
normal life without assistance. These people cannot drink a liquid from a bottle without first
looking for a label confirming its contents, cannot identify whether a food is in good condition
before eating it, and cannot choose their clothes or identify a parking place, among other
difficulties (Colorblindor, 2018).
About 10% of people suffer from some color deficiency or color blindness; that is, about 700
million people, considering that the world population exceeds 7,000 million inhabitants. Table 1
shows the worldwide percentages of incidence for men and women with each of the variants of
color blindness (Colorblindor, 2018).
As in adults worldwide, 1 in 10 children is born colorblind, facing a world that is not designed
for them, which generates various difficulties even in their learning and school performance
(Pardo Fernández et al., 2003). Mexico has a similar situation, as detailed in the study of the
prevalence of color blindness in public school children in Mexico (Jimenéz Pérez et al., 2013).
Children with visual difficulties associated with color blindness may have school problems
when performing activities that involve color identification, such as using educational material with
colored content, for example, relating content in their textbooks and participating in games, among
other difficulties related to visual tasks (Pardo Fernández et al., 2003).
Despite all the difficulties that children with color blindness are exposed to, this condition is
one of the vision anomalies that takes longest to be detected by parents and teachers, because,
through their intelligence or the support they receive from other people, these children manage to
get ahead with their education (Pardo Fernández et al., 2003).
Teaching materials are often designed for people with normal vision, so it is important to
detect vision impairments before school age so that children receive appropriate assistance in their
educational processes (Jimenéz Pérez et al., 2013).
The detection of color blindness can be done through various tests designed according to the
variant presented and the colors that are confused in that condition. For this, it is important to
reproduce the perception of a colorblind person so that the tests are correctly designed. Several
algorithms adjust the model of a digital image represented with red, green, and blue parameters
(RGB) to simulate color blindness, and thus allow people with normal vision to see as people with
this moderate disability do (Tomoyuki et al., 2010).
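As a crude illustration of such simulation algorithms, the sketch below simulates monochromacy by collapsing each RGB pixel to its BT.601 luminance; this is only a toy stand-in, since the published dichromacy models cited above use more elaborate cone-space transforms:

```python
# Simulating monochromacy (achromatopsia) on a single RGB pixel: all
# color information collapses to luminance. The 0.299/0.587/0.114
# weights are the standard BT.601 grayscale conversion.

def simulate_monochromacy(pixel):
    r, g, b = pixel
    luma = round(0.299 * r + 0.587 * g + 0.114 * b)
    return (luma, luma, luma)  # gray with roughly the same brightness

# A saturated red and a mid-intensity green map to the same gray,
# illustrating why luminance alone cannot distinguish some colors.
red_as_gray = simulate_monochromacy((255, 0, 0))
green_as_gray = simulate_monochromacy((0, 130, 0))
```

Applying such a function to every pixel of an image is what produces simulated views like those in Figures 2–4.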
Figure 1 shows the color spectrum seen by an average trichromate, Figure 2 by a person with
protanopia, Figure 3 by deuteranopia, and Figure 4 by tritanopia, all of them obtained applying color
blindness simulation models.
The most accurate test used by ophthalmologists, based on the principle of detecting confusing
colors, is the Farnsworth-Munsell test, in which the patient is presented with discs spanning the
entire color spectrum and is asked to arrange them in the correct order. This test is highly accurate
in identifying any variant of color blindness and the severity with which it is presented; however,
its application is complicated, and to carry it out correctly specific ambient conditions are
required, such as a brightness of 25 candles at 6740 K, which describes the lighting conditions at
midday (Cranwell et al., 2015). Figure 5 shows a simplified Farnsworth-Munsell test manufactured
by Lea Color Vision Testing.
The most commonly used test for the detection of color blindness is the Ishihara plates, designed
for the detection of protanopia, protanomaly, deuteranopia, and deuteranomaly; other variants of
color blindness cannot be detected by the Ishihara plates, as the plates are built around the colors
confused by people with these four variants (Ishihara, 1973). Figure 6 shows the plate with the
number 42, which is seen by people with protanopia or strong protanomaly as a 2, and by people
with deuteranopia or strong deuteranomaly as a 4.
The procedures for performing the Ishihara test are widely known and simple to evaluate:
difficulty perceiving color is observable when the individual taking the test cannot see the number
inside a plate or finds it difficult to make out. In addition, depending on the plates with which the
individual has problems, the variant of color blindness presented can be identified. Table 2 is used
as a comparison list to identify the type of color blindness variant presented with the
Ishihara test.
The initial problem with this type of test is acquiring it; however, there are currently several
organizations' websites that allow a rapid assessment when color blindness is suspected.
An age-appropriate color blindness test (Jouannic, 2007) includes the possibility of detecting
color blindness in toddlers by using figures; however, given the options presented for each
plate, the test can still be complicated for a preschooler. One of the images presented in this test is
shown in Figure 7. In this slide, the child is asked whether he sees a letter B behind the mesh, a star
behind the mesh, or only the mesh itself.
Figure 5: Farnsworth-Munsell Test manufactured by Lea Color Vision Testing (Vision, 2018).
Detection of Color Blindness in Children 79
Table 2: Checklist for evaluation with the Ishihara test of 17 plates, where X marks the plates that cannot be read (Ishihara, 1973).
Developing a test that can be administered at the preschool level to a group of children would make
it possible to raise awareness of the difficulties experienced by some of the peers in that group, and
would allow the preschool teacher to identify students who have problems with color perception, in
order to adjust the activities for children who might face this type of condition in a preschool group.
The aim of this chapter is to review the issue of color blindness and the difficulties and guidelines
when it is present in preschool children, while proposing images that can be identified by sight by
groups of healthy children but differ from those perceived by people with some form of
color blindness. This chapter focuses primarily on the identification of protanopia, protanomaly,
deuteranopia, and deuteranomaly, so as to recommend that the child's parent go to a specialist to
confirm the evaluation with a specialized test such as the Farnsworth test. The tests are also mounted
on an application so that they can be taken from home by the preschool-age child's parent or tutor.
2. Background
Concern for color blindness in children is not a recent research topic, and several papers have
been presented around this moderate disability.
Several of these papers focus on identifying the incidence of color blindness in children in certain parts of the world. For example, in (Jimenéz Pérez et al., 2013) color blindness is detected in school-age children in Mexico; a similar incidence was studied in western Nepal (Niroula and Saha, 2010); and in (Moudgil et al., 2016) a comparable study is conducted on children between the ages of 6 and 15 in Jalandhar. All of these use Ishihara plates. Another group of works seeks to identify the problems that children with color blindness have and how to detect them. One of these studies shows that children with color blindness often have difficulties identifying colors, but fewer than the color blindness model would predict, as indicated in (Lillo et al., 2001).
The work proposed in (Nguyen et al., 2014) describes a computer interface for school-age children. It uses child-friendly images in a game that children with normal vision can solve correctly; however, the children need instructions and supervision, which makes it difficult to use with a group of children.
3. Development
Considering that the most widely used and most easily evaluated diagnostic tests rely on the identification of confusable colors, we designed images with these colors using plates similar to those of the Ishihara test; however, instead of numbers and abstract figures, we propose drawings similar to those recognized at an early age.
Another difficulty to keep in mind is keeping the instructions simple, asking the child only to confirm whether what he or she sees is correct. For this purpose, the speech interface of the Microsoft .NET assembly is used so that the computer tells the child what he or she should see in the developed images. When the application opens, the first thing shown is a description addressed to the applicator or teacher, which explains what color blindness is, the purpose of the test, and the instructions to follow in the application.
Once the applicator clicks the button that starts the test, each image is presented to the children for three seconds, following the indications of the original Ishihara test, since this makes it difficult for children with color blindness to rely on contrast and brightness cues to identify the images. At the same time, the speaker tells the child what he or she should see, and the children indicate to the applicator whether or not they could see the figure.
It is hoped that, with this, the applicator or teacher can identify which children have problems with colors, inform the parents, and have the child taken to a specialist to assess the child’s condition; furthermore, the teacher should consider the special condition of the affected children when preparing class materials.
4. Results
The plates, developed from the colors used in the Ishihara plates but with designs known to preschool children, are shown in Figures 8 to 13. Table 3 shows the images obtained using a dichromacy simulation model (protanopia, deuteranopia, and tritanopia); it shows how each plate looks in each of the most severe color blindness variants.
Table 3: Perception of the proposed pseudochromatic plates for children with different variants of dichromacy.
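The dichromacy simulation used to produce a table like Table 3 can be sketched as a linear transform of each pixel's RGB value. The chapter does not name the model it used, so the sketch below assumes the widely reproduced full-severity matrices of Machado et al. (2009); applying them directly to 8-bit sRGB values, as done here, is a common simplification.

```python
# Illustrative sketch: simulating dichromatic perception of an RGB color.
# The matrices are the full-severity dichromacy transforms of Machado et
# al. (2009) as commonly reproduced; strictly they apply to linear-light
# RGB, so using 8-bit sRGB directly is an approximation.
DICHROMACY_MATRICES = {
    "protanopia": [
        [0.152286, 1.052583, -0.204868],
        [0.114503, 0.786281, 0.099216],
        [-0.003882, -0.048116, 1.051998],
    ],
    "deuteranopia": [
        [0.367322, 0.860646, -0.227968],
        [0.280085, 0.672501, 0.047413],
        [-0.011820, 0.042940, 0.968881],
    ],
    "tritanopia": [
        [1.255528, -0.076749, -0.178779],
        [-0.078411, 0.930809, 0.147602],
        [0.004733, 0.691367, 0.303900],
    ],
}

def simulate(rgb, variant):
    """Apply a dichromacy matrix to an 8-bit RGB triple, clamping to [0, 255]."""
    m = DICHROMACY_MATRICES[variant]
    return tuple(
        min(255, max(0, round(sum(m[i][j] * rgb[j] for j in range(3)))))
        for i in range(3)
    )

def confusable(rgb_a, rgb_b, variant, tol=30):
    """True if two plate colors collapse to nearly the same color for a dichromat."""
    a, b = simulate(rgb_a, variant), simulate(rgb_b, variant)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 < tol
```

A check such as `confusable` can also be used while designing the plates, to verify that a chosen figure/background color pair really does collapse for the intended dichromacy variant.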
The screens of the application generated at each moment of the test are shown in Figures 14 to 20. Initially, instructions are shown when the application is opened, before the test starts.
When the applicator or teacher clicks the start-test button, the images are shown one by one (Figures 15 to 20) while the sound produced with the Microsoft .NET assembly tells the children which image they should see; the applicator then selects from the drop-down list whether the plate was identified correctly or incorrectly.
Figure 14: Instructions shown in the application when opening the test.
Figure 15: Face shown in the application when the Microsoft Voice Assistant says “You should see a Face”.
Figure 16: Tree shown in the application when the Microsoft Voice Assistant says “You should see a Tree”.
Figure 17: Candy shown in the application when the Microsoft Voice Assistant says “You should see a Candy”.
Figure 18: Ship shown in the application when the Microsoft Voice Assistant says “You should see a Ship”.
Figure 19: Sun shown in the application when the Microsoft Voice Assistant says “You should see the Sun”.
Figure 20: House shown in the application when the Microsoft Voice Assistant says “You should see a House”.
When the test is completed and there were failures in identifying the plates, a screen is shown indicating, both visually and by audio, that a doctor should be visited. The user then has the possibility to take the test again.
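The flow just described (a spoken prompt per plate, the applicator recording each response, and a referral message on failure) can be sketched as follows. The plate names come from Figures 15 to 20; the rule that any failed plate triggers the referral message is our reading of the chapter, not a clinical criterion.

```python
# Minimal sketch of the application's test flow: each plate is shown for
# three seconds while a spoken prompt says what the child should see, and
# the applicator records whether it was identified. Plate names follow
# Figures 15-20 of the chapter; the any-failure referral rule is assumed.
PLATES = ["Face", "Tree", "Candy", "Ship", "Sun", "House"]

def run_test(responses):
    """responses maps each plate name to True (seen correctly) or False.

    Returns the list of failed plates and whether to recommend a specialist."""
    failed = [plate for plate in PLATES if not responses.get(plate, False)]
    return failed, len(failed) > 0

def summary(responses):
    """Build the closing message shown (and spoken) at the end of the test."""
    failed, refer = run_test(responses)
    if refer:
        return ("Possible color-perception deficiency on: " + ", ".join(failed)
                + ". A doctor should be visited to confirm the evaluation.")
    return "All plates identified correctly."
```

In the real application the applicator's drop-down selections would populate the `responses` dictionary, and a positive result would lead to the screen of Figure 21.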
5. Conclusions
In this work, we designed plates with figures that preschool children can identify and used them in an application that provides an audio aid telling children what they should see on each plate. Thus, with the help of an applicator, who could well be the teacher, children can take the test. The design of each plate is intended to be difficult to read for people with the variants of color blindness detected by the Ishihara test, i.e., protanopia, protanomaly, deuteranopia, and deuteranomaly. The plates show different figures depending on the color blindness variant, as shown in the results section.
Figure 21: Conclusion of the test indicating that there is a deficiency in color perception and a doctor should be visited.
References
Colorblindor. 2018. Color Blind Essentials. Retrieved from https://www.color-blindness.com/color-blind-essentials.
Cranwell, M.B., Pearce, B., Loveridge, C. and Hurlbert, A.C. 2015. Performance on the Farnsworth-Munsell 100-hue test is significantly related to nonverbal IQ. Investigative Ophthalmology & Visual Science, 56(5): 3171. https://doi.org/10.1167/iovs.14-16094.
Deeb, S.S. 2004. Molecular genetics of color-vision deficiencies. Visual Neuroscience, 21(3): 191–196. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/15518188.
Huang, C.-R., Chiu, K.-C. and Chen, C.-S. 2011. Temporal color consistency-based video reproduction for dichromats. IEEE Transactions on Multimedia, 13(5): 950–960. https://doi.org/10.1109/TMM.2011.2135844.
Ishihara, S. 1973. Test for Colour-Blindness, 24 Plates Edition. Kanehara Shuppan Co. Ltd., Tokyo.
Jimenéz Pérez, A., Hinojosa García, L., Peralta Cerda, E.G., García García, P., Flores-Peña, Y., M-Cardenas, V. and Cerda Flores, R.M. 2013. Prevalencia de daltonismo en niños de escuelas públicas de México: detección por el personal de enfermería. CIENCIAUANL, 16(64): 140–144.
Jouannic, J. 2007. Color blindness test (free and complete). Retrieved December 17, 2019, from http://www.opticien-lentilles.com/daltonien_beta/new_test_daltonien.php.
Kato, C. 2013. Comprehending Color Images for Color Barrier-Free Via Factor Analysis Technique. In 2013 14th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (pp. 478–483). IEEE. https://doi.org/10.1109/SNPD.2013.39.
Lillo, J., Davies, I., Ponte, E. and Vitini, I. 2001. Colour naming by colour blind children. Anuario de Psicología, 32(3): 5–24.
Moudgil, T., Arora, R. and Kaur, K. 2016. Prevalance of Colour Blindness in Children. International Journal of Medical and Dental Sciences, 5(2): 1252. https://doi.org/10.19056/ijmdsjssmes/2016/v5i2/100616.
Nguyen, L., Lu, W., Do, E.Y., Chia, A. and Wang, Y. 2014. Using digital game as clinical screening test to detect color deficiency in young children. In Proceedings of the 2014 Conference on Interaction Design and Children - IDC ’14 (pp. 337–340). New York, NY, USA: ACM Press. https://doi.org/10.1145/2593968.2610486.
Niroula, D.R. and Saha, C.G. 2010. The incidence of color blindness among some school children of Pokhara, Western Nepal. Nepal Medical College Journal, 12(1): 48–50. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/20677611.
Pardo Fernández, P.J., Gil Llinás, J., Palomino, M.I., Pérez Rodríguez, A.L., Suero López, M.I., Montanero Fernández, M. and Díaz González, M.F. 2003. Daltonismo y rendimiento escolar en la Educación Infantil. Revista de Educación, (330): 449–462. Retrieved from https://dialnet.unirioja.es/servlet/articulo?codigo=624844.
Richmond Products. 2012. Color Vision Deficiency: A Concise Tutorial for Optometry and Ophthalmology (1st ed.). Richmond Products. Retrieved from https://pdfs.semanticscholar.org/06bf/712526f7e621e7bc7a09e7f9604c5bae6899.pdf.
Robson, D. 2016. Las mujeres con una visión superhumana. BBC News Mundo. Retrieved from https://www.bbc.com/mundo/noticias/2014/09/140911_vert_fut_mujeres_vision_superhumana_finde_dv.
Tomoyuki, O., Kazuyuki, K., Kajiro, W. and Yosuke, K. 2010. In Proceedings of SICE Annual Conference 2010 (pp. 18–21). Taipei, Taiwan: Society of Instrument and Control Engineers. Retrieved from https://ieeexplore.ieee.org/abstract/document/5602422.
Vision, L.C. 2018. Color Vision Test: Clinical Evaluation of Color Vision. Lea Color Vision Testing. Retrieved December 23, 2018, from www.good-lite.com.
CHAPTER-7
This chapter presents a Cognitive Innovation Model that formalizes the basic components, and the interactions between them, for the establishment of a Cognitive Architecture (CA). It argues for the convenience of moving toward an archetype that supports the implementation of innovative intelligent solutions in Smart Cities, with the client as a convenient means of validating the representation and processing of the knowledge expressed in the CA against those carried out by humans in their daily activities.
1. Introduction
Smart cities are a vital issue in this constantly changing world, dynamically shaped by science,
technology, nature, and society, which implies people still face many challenges, both individual
and social, where innovation plays a vital role. These constant challenges are now addressed more
frequently through Cognitive & Innovative Solutions (CgI-S) which establish new schemes—
innovation—of how to address them. Our current technological world uses a lot of pieces of
knowledge, this means high valuable information, useful to solve a problem or satisfy a particular
need and, of course, to drive innovation. Thus, the satisfaction of who has the problem or need is
achieved when the knowledge is capitalized by CgI-S. Hence, the importance of finding out how to
use and take advantage of as much of the creative expertise as possible, including imagination, even
though its use in a systematic way is a complex challenge, even if only to share it through traditional
ways, requires it to be made explicit. Even though dominating the challenge is a fundamental key for
the cognitive era to progress and its artificial intelligence technologies, machine learning, cognitive
computing, etc., coexist daily with humans.
In the cognitive era, Cognitive Architects (Cg.Ar), together with specialists from the domain to be treated, make up the Cognitive & Innovative Solution’s Architects & Providers team (CgI-SAP team), which provides CgI-S using highly specialized information, experience, and creativity coming from an ad hoc Collaborative Network (ahCN); this allows the team to do an adequate job, even with innovation. The CgI-SAP team also applies science and technology to take advantage of this knowledge in order to achieve the Capitalization of Experience or Knowledge in solutions or innovation. It is undeniable that the above represents a complex situation [1, 2], since it requires a complete orchestration of the process on the part of the Cg.Ar, resulting in a CgI-S that requires technological developments and changes in the processes of the organization, where the Cg.Ar must work side-by-side with the ahCN. This arduous labour must be supported by a Cognitive Architecture, particularly apt when cognitive approaches are required to meet the challenges of the cognitive era.
1 Universidad Autónoma de Ciudad Juárez, Av. Hermanos Escobar, Omega, 32410 Cd Juárez, Chihuahua, México.
2 Digital transformation at ITESM, México.
3 Stragile Co., Interactive Technology & Solutions Group.
4 Applied Artificial Intelligence Research Group.
5 Department of BioSciences, Rice University, Houston, TX 77005, USA.
* Corresponding author: jorge.rodas@uacj.mx
This chapter is an effort to match situations or needs that should be addressed with intelligent technologies and innovation processes at times when the environment is extremely dynamic, a characteristic typical of what is now called the cognitive era. This motivates us to provide a Conceptual Model of Cognitive Innovation (CgI-M) as an archetype with formal support in the Systematic Process for Knowledge Management (KMoS-REload), which formalizes the interaction between an ahCN, a Cognitive Architecture, and the CgI-S implementation process or particular treatment. The remainder of this chapter is structured as follows. Section §2 describes sensitive concepts and work related to the subject. The proposed Conceptual Model of Cognitive Innovation is presented in Section §3, where the ad hoc Collaborative Network, the Cognitive Architecture, and the dynamics and main characteristics of the Systematic Process for Knowledge Management (KMoS-REload) are also presented. As an application, Section §4 introduces the start-up of the KMoS-REload process through a client study to describe the benefits of using the Conceptual Model of Cognitive Innovation and then presents the results of this study. A brief discussion is given in Section §5. Finally, conclusions and future challenges are presented in Section §6.
Domain (ISD). An ISD is a complex domain that can be described by the characteristics of its data, information, and knowledge, and by how these are represented and communicated, in the following way:
• heterogeneous data and information; specialized knowledge that is highly informal, partial, and non-homogeneous; and
• knowledge that is mostly tacit and unstructured.
Besides, the ISD interacts with an ahCN that must understand the problem, need, or business; identify application opportunities; and obtain the knowledge requirements of this intricate knowledge ecosystem in order to propose a convenient, viable, and valuable CgI-S. Figure 1 characterizes an ISD by exemplifying the context or environment of whoever requires a CgI-S.
Finally, in the context of the ISD, the External Knowledge under the business concept must include the market and consumers; it is very important to understand the user experience from the beginning, under the integration approach of a value chain.
The pace of business development, the amount of data and knowledge companies handle from their clients, and the need to insist on the concept of strategic adaptation (as opposed to the traditional strategic planning approach) have forced companies to think about a new approach that differs from the traditional B2B.
We know the importance of the consumer-oriented approach (Business to Client, B2C), and we also know how the focus changes when we talk about collaboration between companies (Business to Business, B2B); however, under the current optics of artificial intelligence, machine learning, and cognitive technologies, it is necessary to evolve the latter concept into a new one: Business to Business to Client (B2B2C).
Under this concept, the need to understand the biometric profiles of final consumers adds an
additional element to the appropriate handling of data or knowledge and its impact on the development
of more efficient knowledge or predictive models. Companies supplying goods and services to other companies must now insist on, and collaborate in, understanding the factors that motivate the choice of one company’s offer over another’s. That is, to add value in a value chain, it is now necessary to understand not only the dynamics of the companies being served but also the factors that motivate their respective market niches.
• Adapting to change: In the technological world, where the environment changes drastically,
change is inevitable and innovation is the means, not only to keep a company afloat, but also to
ensure that it remains relevant and profitable.
• Maximization of globalization: Innovation is a necessity for solving needs and challenges and for taking advantage of the opportunities opened by markets around the world.
• Being competitive: Innovation can help a company establish or maintain its vanguard position, compete strategically within a dynamic world, and make strategic moves to overcome the competition.
• Evolution of workplace dynamics: Innovation is essential for using demographic data in the workplace, which changes constantly, and for ensuring the proper functioning of the product, service, or process.
• Knowing the changing desires and preferences of clients: Currently, clients have a wide variety
of products, services or processes at their disposal and are well informed to make their choices.
Therefore, it is imperative to keep up with changing tastes and also forge new ways to satisfy
the clients.
Cognitive System. A set of entities, definitions, rules, or principles that, when interrelated in an orderly manner, contribute to formalizing a cognitive process; at a minimum, the irreducible set of components used to explain or carry it out.
Figure 3: General overview of the KMoS-REload process, represented by an activity flow diagram.
Cognitive Innovation Archetype for Smart Cities Applications 97
1. The tacit identification of knowledge, where a discourse analysis is carried out with the objective of identifying the knowledge hidden behind linguistic indicators such as presuppositions;
2. The capture and updating of the specialized knowledge matrix: throughout the process, knowledge is associated with those involved in the domain, forming a matrix that captures experience in the domain; and
3. The assumption record: when learning a new domain, we associate our mental scheme with new concepts and relationships, so false assumptions become clear as the process progresses; these assumptions must be recorded to facilitate the learning of new members of the project.
The process begins with an initial interview between the Solution’s Architects and Providers (CgI-SAP team) and the Internal or External Knowledge (Domain Specialists) in a session where socialization predominates. Then, the Tacit Knowledge Identification, the Expert Matrix Update, and the Assumptions Record are developed in parallel by the Cg.Ar (or the CgI-SAP team), who perform a Cognitive Analysis in a socialized way, in order to verify the artifacts and decide whether to continue with the following phases or to validate them first. In fact, under a lean or agile innovation approach, living iterative processes exist that allow value to be added from the validation of their elements and proposals. The validation requires that the CgI-SAP team explain the models in order to validate the knowledge. The process in turn generates more knowledge, so the cycle starts again, and the process may end when all those involved in the CgI-M model reach an agreement. Finally, the process makes the team aware that, in order to develop a CgI-S, it is necessary to understand and formally define the knowledge requirements and the domain that circumscribes them [6]. The details of the KMoS-REload process application can be found in [15].
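The dynamics just described, three artifacts updated in parallel and a validation cycle that repeats until consensus, can be sketched as follows. KMoS-REload itself is a human process, not an algorithm, so the class and function names below are ours, for illustration only.

```python
# Illustrative sketch of the KMoS-REload dynamics: the three artifacts
# (tacit-knowledge items, expert matrix, assumption record) are updated
# each iteration, and the cycle repeats until participants reach consensus.
from dataclasses import dataclass, field

@dataclass
class Artifacts:
    tacit_knowledge: list = field(default_factory=list)  # knowledge behind linguistic indicators
    expert_matrix: dict = field(default_factory=dict)    # specialist -> captured domain experience
    assumptions: list = field(default_factory=list)      # recorded (possibly false) assumptions

def kmos_reload(sessions, max_iterations=10):
    """Run validation cycles until a session reports consensus.

    Each session is a callable that updates the Artifacts in place and
    returns True when every participant agrees the models are valid."""
    artifacts = Artifacts()
    rounds = 0
    for session in sessions[:max_iterations]:
        rounds += 1
        if session(artifacts):
            break  # consensus reached: the process may end
    return artifacts, rounds
```

In practice each "session" would be a socialized meeting between the CgI-SAP team and the ahCN; the loop simply makes the iterate-validate-agree structure of the process explicit.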
where companies allocate resources to encourage start-ups or small businesses to develop new
concepts, indeed much more economically accessible.
Cognitive Innovative-Solution Architects & Providers (CgI-SAP) team. This is a team of human talent that performs consultation and analysis of intelligent and cognitive information technology systems. The CgI-SAP team supports all its activities, within the CgI-M model, with a Systematic Process for Knowledge Management (KMoS-REload) to develop cognitive and, therefore, innovative solutions that bring great value to clients. It is well known how engineers or scientists become obsessed with past solutions and how the process of scientific discovery and the engineering design process can lead them to new solutions.
However, there is still much to understand about cognitive and innovative processes, particularly with respect to the underlying natural cognitive processes. Behind the KMoS-REload process there are theories and methods from several disciplines related to cognition and knowledge, such as cognitive psychology, social psychology, knowledge representation, and machine learning, used to analyze, structure, and formalize the complex cognitive processes that occur in the real world, the world of Informally Structured Domains. This implies that the CgI-SAP team is highly trained to be empathetic and to solve problems of a given Informally Structured Domain. Consequently, this team carries out two essential roles: as an architect of solutions, the team must have a balanced combination of technical, social, and business skills; as a provider, the team must offer solutions based on any combination of technologies, processes, analysis, commercialization, internal organizational environment, or consulting. Such solutions can be customized for its clients, or the team can provide solutions based on existing products or services.
Regardless of the roles played by the CgI-SAP team, the core of its activity is interacting with the elements of the triplet of Equation (2) and applying advances in science and technology to take advantage of all the surrounding knowledge, in order to achieve the Capitalization of Experience or Knowledge and to provide a CgI-S. It is undeniable that the above represents a complex situation [1, 2], but it is also an excellent opportunity for the CgI-SAP team.
Cognitive Analysis (CgAn). The CgAn is a process of examining a given ISD in detail in order to understand or explain it. Commonly, one or several strategies or processes are used that enable knowing and formalizing the existing relationships between certain types of functions, actions, and concepts related to the domain. The main objectives of performing the CgAn in a given ISD are:
(a) to obtain the best view of the domain’s own internal processes (e.g., in a business domain, how the market receives its products and services, customer preferences, how customer loyalty is generated, or other key questions whose precise answers provide a company with a competitive advantage); and
(b) to set up the cognitive architecture established by the semantic base and the components of the
appropriate cognitive system.
It is worth mentioning that the CgAn often focuses on predictive analysis, where data mining and other cognitive uses of the data can generate business and commercial predictions. Therefore, the practical problems surrounding such analyses involve the precise methods used to collect and store data in a dedicated location, as well as the tools used to interpret the data in various ways. Solution Cognitive Architects & Providers can provide analysis services and other useful help, but in the end the practical use of the analysis depends on the people who are part of the domain, who not only need to know how to collect data but also how to use it correctly.
• Five complex tasks, each including its own activities: establishment of basic specifications, analysis of building characteristics, analysis of air circulation patterns, selection of appropriate components, and analysis of the control system;
• Non-organized and incomplete data are present in it;
• Determination of the criteria, and decision-making about the achievement of the project, are carried out under the umbrella of an ahCN; and
• The project has a unique design and solves or addresses a particular situation.
To deal with the challenges of obtaining the knowledge requirements of an HVAC project, typified as belonging to an ISD, the company FLUTEC uses an empirical guide, the DNA document, composed of general attributes that gather the basic information necessary for each project. This document should guide the elicitation of the knowledge requirements that would allow a good design of an HVAC module. However, being an empirical and therefore informal document, it was a very flimsy communication bridge within the ahCN at FLUTEC. In addition, the DNA document and the additional FLUTEC processes related to the realization of a project often caused delays, rework, and high-cost problems.
Characterization of the CgI-M model through the determination of the peculiar attributes and additional activities related to the HVAC project. Once FLUTEC’s environment relative to the HVAC design process has been identified as an ISD, the Cognitive Architect starts the KMoS-REload process to characterize and, consequently, establish the CgI-M model:
• Distributed Tacit Knowledge: Tacit distributed technical knowledge, heterogeneous, diverse
degrees of specificity;
• Incomplete data: Unorganized and Incomplete Data of all the processes related to the HVAC
and that should be used in the development of the Cognitive Architecture Specification;
• ad hoc Collaborative Network: composed of multiple specialists in FLUTEC’s domain, the CgI-SAP team, and decision-makers; and
• Particular problems that must always be addressed when developing an HVAC; therefore, each is a unique project that requires a CgI-S.
Results of the use of the CgI-M model. In order to provide FLUTEC with an adequate cognitive solution (CgI-S), the CgI-SAP team identified the elements of the HVAC process that needed to be improved and established a consistent model to give them the corresponding support, based on the following:
• Analysis of the DNA guide document. As mentioned above, the DNA guidance document is
empirical and lacks the overall vision of the project. Therefore, the analysis should describe the
significant assumptions and conceptual relationships of all FLUTEC knowledge. Consequently,
the DMP confirmed that the DNA had the following deficiencies:
− Disorganization;
− Incompleteness: essential information for the proper development of the HVAC project was missing;
− Incorrectness: fake attributes existed;
− Irrelevant information: informal descriptions had been recorded;
− Ambiguous information: the initial and basic knowledge requirements were not well described; and
− Time lost searching, as often as necessary, for missing or poorly recorded information.
The analysis allowed the knowledge to be obtained, the empirical DNA domain to be formalized, and the document to be transformed into a new and formal solution.
• Specialized Explicit Training. Before applying the KMoS-REload process, it was difficult for FLUTEC engineers to understand the importance of a formal process for obtaining knowledge requirements and, consequently, there was great ignorance about certain elements or concepts belonging to the domain of the project. Once the process was used in the project, FLUTEC’s specialists were trained during the domain modelling phase and were able to assimilate (make tacit) new explicit knowledge, reduce their own ignorance and ambiguity, and, as a result, improve the quality of work in the ahCN, learning that:
− Knowledge-requirements elicitation could be carried out systematically;
− The CgI-M model transfers knowledge; and
− FLUTEC has preconceived, tacit ideas or expectations of the project, and when these are made explicit, post-delivery redesigns of the project are usually avoided.
• Improvement of the HVAC-DNA Process. The CgI-M model was carried out through the KMoS-REload process; as a result, the models were established and the HVAC-DNA process was renewed:
− HVAC project concepts, attributes, relationships between concepts, and basic integrity restrictions were formalized, e.g., HVAC design and budget project properties. Externalization, transfer, and consensus are activities carried out within the ahCN with its knowledge in order to integrate a set of pieces of explicit knowledge that minimizes the symmetry of ignorance. Thus, the learning curve for the HVAC domain was reduced from a couple of months to a couple of weeks. In addition, the CgI-SAP team noticed that the DNA document was not useful during the project process, especially since it requires a lot of time to fill in and does not meet the goal it is supposed to achieve.
− Viewing the process as a stream of decisions from the ahCN allowed the CgI-SAP team to obtain a Cognitive Architecture with the support of the KMoS-REload process.
− The set of knowledge requirements was derived and integrated into the CgI-S specification document.
A CBR to support fast delivery of proposals. The cognitive architecture built from the acquired and managed knowledge also allowed constituting, as part of the CgI-S, a robust case base, textual files, and everything necessary to implement a CBR prototype in the jCOLIBRI tool [7, 8]. This tool provides a standard platform for developing CBR applications through specialized methods using several Information Retrieval or Information Extraction libraries, such as Apache Lucene, GATE, etc. Thus, an important goal achieved by the CBR was to demonstrate whether FLUTEC could reduce the time needed to match client expectations with HVAC project design blueprints and, consequently, deliver budget proposals faster.
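jCOLIBRI is a Java framework, so the following language-neutral sketch only illustrates the retrieval step at the heart of such a CBR prototype: matching a new client query against the case base with a weighted nearest-neighbour similarity. The HVAC attribute names and weights are hypothetical, not FLUTEC's actual schema.

```python
# Minimal sketch of CBR case retrieval: rank past HVAC projects by a
# weighted average of per-attribute similarities. Attribute names and
# weights are hypothetical, for illustration only.
def similarity(query, case, weights):
    """Weighted average of per-attribute similarities in [0, 1]."""
    total = 0.0
    for attr, weight in weights.items():
        q, c = query[attr], case[attr]
        if isinstance(q, (int, float)):
            scale = max(abs(q), abs(c), 1)       # normalize numeric distance
            total += weight * (1 - abs(q - c) / scale)
        else:
            total += weight * (1.0 if q == c else 0.0)  # exact match for symbols
    return total / sum(weights.values())

def retrieve(query, case_base, weights, k=1):
    """Return the k past cases most similar to the query, best first."""
    ranked = sorted(case_base, key=lambda c: similarity(query, c, weights), reverse=True)
    return ranked[:k]
```

A retrieved case would then carry its past design blueprints and budget, which is what lets a CBR shorten the time from client expectations to a budget proposal.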
In summary, the establishment of an adequate cognitive architecture through the KMoS-REload process manages to capitalize the knowledge and expertise of the ahCN, explicitly and formally, allowing a clear understanding of the project’s ISD, its assimilation by the CgI-SAP team, the delivery of a CgI-S, and the characterization of the CgI-M model as a whole, to total customer satisfaction; its remarkable products were a new DNA guide and the CBR prototype.
5. Discussion
We live in a world that changes minute by minute, for better or for worse, due to advances in science and technology, strongly framed in artificial intelligence, machine learning, and cognitive computing. The assimilation of these advances is not a trivial issue; consequently, companies, individuals, and society must find a way to keep up with the great speed at which “the future and the present are amalgamated”. This issue is not trivial because changes that happen too quickly can produce a disconnection between a scientific or technological advance and the understanding of its potential by the providers of technological solutions. Scientifically and
technologically speaking, there are many examples throughout history of wrong judgments about
what the future holds. For example, when business owners introduced electricity into their factories,
they stayed with older models of how to organize the use of their machines and industrial processes
and lost some of the productivity improvements that electricity enabled. In 1977, the president of
Digital Equipment Corporation, the largest computer company of that time, saw no market for home
computers. Thirty years later, in 2007, Steve Ballmer, CEO of Microsoft, predicted that the iPhone
would not take off [9]. From these examples, it is possible to infer that there are essential reasons
for justifying the investment of time, money and effort required to develop a successful bridge to
knowledge and technology.
In addition, problems or needs that belong to an Informally Structured Domain must be solved
by a solution that innovates, either because it modifies the procedure used to address the problem
or because it is, by itself, a new solution. Such an innovative solution will be cognitive because
obtaining it requires extracting persistent knowledge from the existing cognitive process, or because
the solution itself belongs to the scope of Artificial Intelligence, Machine Learning or Cognitive
Computing.
There will be occasions when the problem or need can be addressed directly by an existing
product or tool of Artificial Intelligence, Machine Learning or Cognitive Computing; in most
cases, however, the cognitive solution will be tailored to the specific problem or need. Trivializing the
modeling and analysis of problems because of "time pressure", or the well-known phrase "we no
longer have time", can translate into a real loss of time, bad decisions, and opportunities that slip
away or never arrive.
Who can identify the right type of solution for each real situation that arises?
In Cognitive Architecture: Designing for How We Respond to the Built Environment (2014), Ann Sussman
and Justin B. Hollander review recent trends in psychology and neuroscience to help architects
and planners in today's construction world better understand their clients as the sophisticated
mammals they are, responding to a constantly evolving environment. In particular, they describe
four main principles related to human cognitive processes: people are a thigmotactic species, that
is, they respond to touch or to surfaces of external contact; visual orientation; a preference for
bilaterally symmetrical forms; and, finally, narrative inclinations, unique to the human being. The
authors emphasize that the better we understand human behaviour, the better we can design for it,
and they suggest the obligation to carry out analysis activities, "preparation of the cognitive
scaffolding", before construction begins, anticipating the future experience of the client [10].
Similarly, Portillo-Pizaña et al. suggest the importance of considering four stages for the
implementation of a process of conscious innovation in an organization: consciousness, choice,
action, and evolution. This conscious process of corporate innovation initially implies that every
human being who joins an innovation effort within an organization understands that any process
of change or transformation begins with a state of consciousness in which the gaps between the
current situation and the desired situation are identified from the perspective of a user;
subsequently, a decision-making process must be faced that allows agile iteration, in order to move
on to a real commitment to innovation and entrepreneurship (with all the characteristics required
of a true entrepreneur) and to conclude with a disposition towards agile evolution, without
sticking to ideas that are not well received by the market [11]. Thus, to identify the right kind of
cognitive solution in a real situation, it is highly convenient to have a cognitive architect. A Cg.Ar
is a role with multidisciplinary knowledge in areas such as Artificial Intelligence, Machine
Learning, Cognitive Computing, logic, cognitive processes, psychology, sociology, and philosophy,
to mention a few.
Cognitive Innovation Archetype for Smart Cities Applications 103
Using a model becomes more important for a simple reason: it seems that when a model is proposed
for complex situations, for example, to build the scaffolding of a cognitive architecture, it is
because of our inability to deal with complexity in its entirety.
In the meantime, CgI-M reduces the complexity of what is being addressed by focusing on
only one aspect at a time. It is important to highlight that CgI-M intends to formalize, because
allowing informality would generate products that cannot clearly address the domain of the
solution to be implemented.
In conclusion, and despite what some solution providers may think, the more complex the domain
and the problem to be addressed, the more imperative it is to use a model. Solution providers have
already faced situations where they developed simple CgI-S without starting from any model and,
after a short time, noticed that the domain grew in complexity, nullifying the effectiveness of the
solution to the detriment of the quality of their service and at the cost of losing the client.
Finally, the CgI-M model, after being used in real cases, has shown that its components as a whole
can, de facto, respond through a cognitive collision to situations that occur within informally
structured domains. There is still much work to be done on obtaining and representing common-
sense information; existing representation frameworks must evolve and be integrated with other
frameworks in order to enhance representation and, consequently, reasoning with common-sense
information. In general, the results obtained by CgI-M suggest that the knowledge obtained from it
is highly congruent with that expressed by the ahCN, as validated by the client and by the results
of the solutions provided.
However, it is also clear that it is not possible to explain the complete cognitive process of ahCN
exclusively in the current terms of the CgI-M model. Consequently, the model is open and dynamic
for the improvement of its components and to better explain the harmonization and integration of
different types of cognitive processes that are supposed to coexist in a perspective of heterogeneous
representation, for which additional research and collaboration among the communities involved
are needed. In particular, in our opinion, such improvements should be oriented to identifying
(i) in which cases the components of the CgI-M model play a more relevant role in establishing the
scaffolding necessary to develop a particular cognitive solution; (ii) in which cases they are not
evoked at all by a cognitive system, because the need to react in real time is more urgent; and,
therefore, (iii) how to accelerate the activities proposed by the model. Since there is no clear
answer to these questions, these aspects will shape, in our opinion and in congruence with [12],
the future research agenda of cognitive psychology and of cognitive (artificial) systems research.
Although the knowledge-management process KMoS-REload provided by the CgI-M represents an
adequate way to integrate different knowledge acquisition and representation mechanisms, it is
still not clear whether these mechanisms are sufficient and robust. Therefore, it remains an open
question which processes, techniques or elements should be part of a general architectural
mechanism, and whether it is worth implementing them in the processes of the model to operate their
conceptual structures. As mentioned above, answering these questions will require a joint research
effort on the part of cognitive psychology and the communities of cognitive models and processes,
cognitive computation, machine learning, and artificial intelligence.
References
[1] Kamsu-Foguem, B. and Noyes, D. 2013. Graph-based reasoning in collaborative knowledge management for industrial
maintenance, in: Computers in Industry, pp. 998–1013.
[2] Santa, M. and Selmin, N. 2016. Learning organization modelling patterns. Knowledge Management Research &
Practice, 14(1): 106–125.
[3] Camarinha-Matos, L. and Afsarmanesh, H. 2006. Collaborative networks value creation in a knowledge society. In:
Proceedings of PROLAMAT’06, Springer, pp. 15–17.
[4] Rosenbloom, P., Demski, A. and Ustun, V. 2015. The sigma cognitive architecture and system: Towards functionally
elegant grand unification. Journal of Artificial General Intelligence, 7(1).
[5] Rodas-Osollo, J. and Olmos-Sánchez, K. 2017. Knowledge management for informally structured domains: Challenges
and proposals. In: Mohiuddin, M. (Ed.). Knowledge Management Strategies and Applications, InTech, Rijeka, 2017,
Ch. 5. doi:10.5772/intechopen.70071. URL https://doi.org/10.5772/intechopen.70071.
[6] Bjørner, D. 2011. Domains: Their Simulation, Monitoring and Control—A Divertimento of Ideas and Suggestions.
Vol. 6570 of Lecture Notes in Computer Science, Springer, Berlin, Heidelberg.
[7] Finnie, G. and Sun, Z. 2003. R5 model for case-based reasoning, Knowledge-Based Systems, 16: 59–65.
[8] Recio-García, J., González, C. and Díaz-Agudo, B. 2014. jcolibri2: A framework for building case-based reasoning
systems, Science of Computer Programming, 79(1): 126–145.
[9] Ito, J. and Howe, J. 2016. Whiplash: How to Survive Our Faster Future. Hachette Book Group USA. URL https://books.
google.com.mx/books?id=HtC6jwEACAAJ.
[10] Sussman, A. and Hollander, J. 2014. Cognitive Architecture: Designing for How We Respond to the Built Environment,
Routledge. URL https://books.google.com.mx/books?id=3TV9oAEACAAJ.
[11] Portillo-Pizaña, J., Ortíz-Valdes, S. and Beristain-Hernández, L. 2018. Applications of Conscious Innovation in
Organizations, IGI Global. URL https://www.igi-global.com/book/appli...organizations/182358.
[12] Lieto, A., Lebiere, C. and Oltramari, A. 2018. The knowledge level in cognitive architectures: Current limitations
and possible developments. Cognitive Systems Research, 48: 39–55 (special issue on Cognitive Architectures for
Artificial Minds). doi:10.1016/j.cogsys.2017.05.001. URL http://www.sciencedirect.com/science/article/pii/S1389041716302121.
PART II
Applications to Improve a
Smart City
CHAPTER-8
From Data Harvesting to Querying for Making Urban Territories Smart
This chapter provides a summarized, critical and analytical point of view of the data-centric solutions
that are currently applied for addressing urban problems in cities. These solutions lead to the use of
urban computing techniques to address cities' daily-life issues. Data-centric solutions have become
popular due to the emergence of data science. The chapter describes and discusses the types of urban
challenges and how data science in urban computing can face them. Current solutions address a
spectrum that goes from data harvesting techniques to decision making support. Finally, the chapter
also puts in perspective families of strategies developed in the state of the art for addressing urban
problems and exhibits guidelines that can lead to a methodological understanding of these strategies.
1. Introduction
The development of digital technologies in the different disciplines in which cities operate, directly
or indirectly, is altering the expectations of those in charge of local administration.
Every city is a complex ecosystem with subsystems to make it work such as work, food, clothes,
residence, offices, entertainment, transport, water, energy, etc. As cities grow, there is more chaos:
most decisions are politicized, there are no common standards, and the data is overwhelming.
The intelligence is sometimes digital, often analogue, and almost inevitably human.
1 University Grenoble Alpes, CNRS, Grenoble INP, LIG, France.
2 Universidad Nacional Autónoma de México, Mexico.
3 Fundación Universidad de las Américas Puebla, Mexico.
4 University of Lyon, LIRIS, France.
5 French Mexican Laboratory of Informatics and Automatic Control.
Emails: Genoveva.vargas-solar@liris.cnrs.fr, sagrariocastillo@comunidad.unam.mx
* Corresponding author: sagrariocastillo@hotmail.com
Urban computing [36] is a worldwide initiative to better exploit the resources of a city in order to
offer higher-level services to people. It is related to sensing the city's status and acting in new,
intelligent ways at different levels: people, government, cars, transport, communications, energy,
buildings, neighbourhoods, resource storage, etc. A vision of the city of the "future", or even of the
city of the present, rests on the integration of science and technology through information systems.
Data-centric solutions are at the core of urban computing, which aims at understanding events
and phenomena emerging in urban territories, predicting their behaviour, and then using these
insights and foresight to make decisions. Data analytics and exploitation techniques are applied
under different conditions and with ad hoc methodologies, using data collections of different types.
Today, important urban computing centres in major metropolises have proposed and applied these
techniques for studying real estate, tourism, transport, energy, air, happiness, security and wellbeing.
The adopted strategies depend on the type of context in which they work.
This chapter provides a summarized, critical and analytical point of view of the data-centric
solutions that are currently applied for addressing urban problems in cities, leading to the use of
urban computing techniques to address daily-life issues. The chapter puts in perspective families
of strategies developed in the state of the art for addressing given urban problems and exhibits
guidelines that can lead to a methodological understanding of these strategies. Current solutions
address a spectrum that goes from data harvesting techniques to decision making support. The
chapter describes them and discusses their main characteristics.
Accordingly, the chapter is organised as follows. Section 2 characterises urban data and
introduces data harvesting techniques used for collecting urban data. Section 3 discusses approaches
and strategies for indexing urban data. Section 4 describes urban data querying. Section 5
summarizes data and knowledge fusion techniques. Finally, Section 6 discusses the research and
applied perspectives of urban computing.
distance to certain reference points or axes, division-based models using a geometric or semantic-
based division of space, and linear models with relative positions along with linear reference
elements, such as streets, rivers and trajectories. Finally, the third urban data property, object,
refers to physical and abstract entities that may have spatial properties (a certain position in space,
e.g., vehicles, persons and facilities), temporal properties (objects existing in a certain period,
i.e., events), or spatio-temporal properties (objects with a specific position in both space and time).
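The three properties just discussed (object, space, time) can be captured in a minimal data model. The sketch below is illustrative only; the class and function names (`STObservation`, `make_trajectory`) are our own and do not come from any particular urban-computing library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class STObservation:
    """A spatio-temporal observation of an urban object.

    Combines the three urban data properties discussed above:
    an object identifier, a position in space, and a timestamp.
    """
    object_id: str      # physical or abstract entity (vehicle, person, facility)
    lat: float          # spatial property: latitude
    lon: float          # spatial property: longitude
    timestamp: float    # temporal property: seconds since some epoch

def make_trajectory(observations):
    """A trajectory is simply a time-ordered sequence of observations."""
    return sorted(observations, key=lambda o: o.timestamp)
```

A spatio-temporal object is then a trajectory of such observations; a purely temporal object (an event) would carry only the timestamp, and a static facility only the position.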
Besides time, space and object properties, Yixian Zheng et al. [25] identify six types of data
that can be harvested and represent the types of entities that can be observed within urban territories
according to the urban context they refer to, i.e., human mobility, social network, geographical,
environmental, health care, and diverse data.
Human mobility data enables the study of social and community dynamics based on different
data sources like traffic, commuting media, mobile devices and geotagged social media data.
Traffic data are produced by sensors installed in vehicles or at specific spots around the city (e.g.,
loop sensors, cameras). These data can include vehicles' positions observed recurrently at given
intervals. From these points (positions), it is then possible to compute trajectories that are spatio-
temporally time-stamped and can be associated with instant speed and heading direction. Road
occupancy can be measured with loops that detect, within given time intervals, which vehicles travel
across two consecutive loops. With this information, it is possible to compute travel speed and
traffic volume on roads. Ground-truth traffic conditions are observed using surveillance cameras,
which generate a huge volume of images and videos. Extracting information such as traffic volume
and flow rate from these images and videos is still challenging; therefore, in general, these data only
provide a way to monitor citywide traffic conditions manually.
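The trajectory-and-speed computation described above can be sketched in a few lines: given recurrently sampled (timestamp, position) points, each consecutive pair yields a distance (here via the haversine formula) and hence an instantaneous speed. The function names are illustrative, not taken from any specific traffic toolkit:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6_371_000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def segment_speeds(points):
    """points: time-ordered list of (timestamp_s, lat, lon) samples.

    Returns the instantaneous speed (m/s) for each consecutive pair,
    i.e., the per-segment speeds along the reconstructed trajectory.
    """
    speeds = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(points, points[1:]):
        dt = t2 - t1
        if dt > 0:  # skip duplicate or out-of-order timestamps
            speeds.append(haversine_m(la1, lo1, la2, lo2) / dt)
    return speeds
```

For example, two samples 0.001 degrees of latitude apart (roughly 111 m) taken 10 seconds apart give a speed of about 11 m/s.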
People’s regular movement data are produced by personalized RFID transportation cards for
buses or metro that they tap in station entries to enter/exit the public transportation system. This
generates a huge amount of records of passenger trips, where each record includes an anonymous card
ID, tap-in/out stops, time, fares for this trip and transportation type (i.e., bus or metro). Commuting
data recording people’s regular movement in cities can be used to improve public transportation and
to analyze citywide human mobility patterns.
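A common first analysis over such smart-card trip records is an origin-destination (OD) matrix counting flows between stops. The sketch below assumes a simplified record layout (card ID, tap-in stop, tap-out stop, time, fare, mode); real smart-card systems use richer schemas:

```python
from collections import Counter

def od_matrix(trips):
    """Build an origin-destination flow count from trip records.

    trips: iterable of tuples (card_id, tap_in_stop, tap_out_stop,
    time, fare, mode), mirroring the record fields described above.
    Returns a Counter mapping (origin, destination) -> trip count.
    """
    return Counter((t[1], t[2]) for t in trips)

trips = [
    ("c1", "A", "B", "08:01", 2.5, "metro"),
    ("c2", "A", "B", "08:03", 2.5, "metro"),
    ("c1", "B", "C", "17:30", 1.8, "bus"),
]
flows = od_matrix(trips)  # 2 trips from A to B, 1 trip from B to C
```

Aggregating such flows by time of day is the usual starting point for improving public-transportation schedules and studying citywide mobility patterns.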
Records of exchanges between mobile phones and cell stations, such as phone calls, messages and
internet sessions, are collected by telecom operators; they contain communication information and
people's locations inferred from the cell stations. These data offer unprecedented opportunities to
study human mobility.
Social Networks Data. Social networks posts (e.g., blogs, tweets) are tagged with geo-information
that can help to better understand people’s activities, the relations among people and the social
structure of specific communities. User-generated texts, photos and videos, contain rich information
about people’s interests and characteristics, that can be studied from a social perspective. For
example, evolving public attention on topics and spreading of anomalous information. The major
challenges with geo-tagged social network data lie in their sparsity and uncertainty.
Geographical data. These data refer to points of interest (POI) that describe facilities, such as
restaurants, shopping malls, parks, airports, schools and hospitals, in urban spaces. Each facility is
usually described by a name, an address, a category and a set of geographical coordinates.
Environmental data. Modern urbanization based on technology has led to environmental problems
related to energy consumption and pollution. Data can be produced by monitoring systems observing
the environment through different variables and observations (e.g., temperature, humidity, sunshine
duration and weather conditions), air pollution data, water quality data and satellite remote sensing
data, electricity and energy consumption, CO2 footprints, gas. These data can help to provide insight
regarding consumption patterns, on correlations among actions and implications and foresight about
the environment.
Diverse data. Other data are complementary to urban data, particularly those concerning social and
human aspects, such as health care, public utility services, economy, education, manufacturing and
sports.
Figure 1 summarizes the urban data types considered in urban computing: environmental
monitoring data that concern meteorological data, mobile phone signals used for identifying
behaviours, citywide human mobility and commuting data for detecting urban anomalies, city’s
functional regions and urban planning, geographical data concerning points of interest (POI), land
use, traffic data, social networks data, energy data obtained from sensors, and economies regarding
city economic dynamics like transaction records of credit cards, stock prices, housing prices and
people’s income.
[Figure 1. Urban data types and their uses: commuting data and geographical data (POI, land use)
support traffic monitoring and prediction, urban planning, routing, and energy consumption
analysis; traffic data come from loop sensors, surveillance cameras and floating cars (floating car
data).]
Urban data can be harvested from different sources and using different techniques. These
aspects are discussed next.
1 https://www.geospatialworld.net/article/geo-life-health-smart-city-gis/
From Data Harvesting to Querying for Making Urban Territories Smart 111
social networking service which aims to understand trajectories, locations and users, and to mine
the correlation between users and locations from user-generated GPS trajectories. In [17], a new
vision has been proposed regarding the smart cities movement, under the hypothesis that there
is a need to study how people psychologically perceive the urban environment and to capture
that perception quantitatively. Happy Maps uses crowdsourcing, geo-tagged pictures and their
associated metadata to build an alternative cartography of a city weighted by human emotions.
People are more likely to take pictures of historical buildings, distinctive spots and pleasant streets
than of car-infested main roads. On top of that, Happy Maps adopts a routing algorithm that
suggests a path between two locations that is short while maximizing the emotional gain.
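A routing algorithm of the kind just described can be illustrated with a plain Dijkstra search whose edge cost blends street length with an "emotion" score. This is a hypothetical cost model for illustration; it is not the published Happy Maps algorithm, and the `alpha` blending parameter is our own assumption:

```python
import heapq

def pleasant_route(graph, start, goal, alpha=0.5):
    """Dijkstra search over a combined distance/emotion cost.

    graph maps node -> list of (neighbour, length_m, emotion), with
    emotion in [0, 1] (1 = most pleasant). Each edge costs
    alpha * length + (1 - alpha) * length * (1 - emotion), so alpha=1
    gives the pure shortest path and smaller alpha favours pleasant
    streets. Returns the node path, or None if goal is unreachable.
    """
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, emotion in graph.get(node, []):
            step = alpha * length + (1 - alpha) * length * (1 - emotion)
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None
```

With `alpha = 1.0` the search degenerates to ordinary shortest-path routing; lowering `alpha` lets a slightly longer but more pleasant street win, which is the trade-off Happy Maps aims for.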
different data sources, like a hybrid indexing structure, which combines a spatial index, hash
tables, sorted lists, and an adjacency list.
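The spatial component of such a hybrid structure can be illustrated with a minimal grid-based spatial hash. This is a toy stand-in for the production spatial indexes (e.g., R-tree variants) the literature uses; the class and method names are our own:

```python
from collections import defaultdict

class GridIndex:
    """Minimal grid-based spatial hash.

    Points are bucketed into square cells of a fixed size so that a
    rectangular range query only inspects the cells overlapping the
    query window, instead of scanning every point.
    """
    def __init__(self, cell_size):
        self.cell = cell_size
        self.buckets = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, x, y, payload):
        self.buckets[self._key(x, y)].append((x, y, payload))

    def query_rect(self, xmin, ymin, xmax, ymax):
        kx0, ky0 = self._key(xmin, ymin)
        kx1, ky1 = self._key(xmax, ymax)
        hits = []
        for kx in range(kx0, kx1 + 1):
            for ky in range(ky0, ky1 + 1):
                for x, y, p in self.buckets.get((kx, ky), []):
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append(p)
        return hits
```

In a hybrid structure, buckets like these would be combined with hash tables keyed by object ID and sorted lists over timestamps, so that spatial, identity and temporal lookups can each use the access path that suits them.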
flock [8,9], convoy [10,11], swarm [14], traveling companion [21,22], and gathering [34,35,36,25]. These
"group patterns" can be distinguished by how the "group" is defined and by whether they require
the time periods to be consecutive. For example, a flock is a group of objects that travel together
within a disc of some user-specified size for at least k consecutive timestamps [10]. Li et al. [14]
relaxed the strict requirement on consecutive periods and proposed the swarm pattern, which is a
cluster of objects lasting for at least k (possibly non-consecutive) timestamps.
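The flock definition above can be turned into a simple test over per-timestamp snapshots. The sketch below uses the pairwise-diameter relaxation (all pairwise distances at most twice the disc radius) instead of an exact smallest-enclosing-disc check, so it is an approximation of the definition in [10], not the algorithm of that paper:

```python
from itertools import combinations
from math import dist

def is_flock(positions, ids, radius, k):
    """Simplified flock test.

    positions: list with one entry per timestamp, each a dict mapping
    object_id -> (x, y). Checks whether all objects in `ids` stay
    "together" (pairwise distance <= 2 * radius, a relaxation of the
    exact disc-of-radius test) for at least k consecutive timestamps.
    """
    run = best = 0
    for snapshot in positions:
        pts = [snapshot[i] for i in ids if i in snapshot]
        together = (len(pts) == len(ids) and
                    all(dist(p, q) <= 2 * radius
                        for p, q in combinations(pts, 2)))
        run = run + 1 if together else 0   # consecutive-timestamp run
        best = max(best, run)
    return best >= k
```

Dropping the "consecutive" requirement, i.e., counting the total number of qualifying timestamps instead of the longest run, turns this flock-style test into a swarm-style one, which is exactly the relaxation of Li et al. [14].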
Bibliography
[1] Aigner, W., Miksch, S., Schumann, H. and Tominski, C. 2011. Visualization of time-oriented data. Springer Science &
Business Media.
[2] Andrienko, G. and Andrienko, N. 2008. Spatio-temporal aggregation for visual analysis of movements. In Visual
Analytics Science and Technology, 2008. VAST’08. IEEE Symposium on pages 51–58. IEEE.
[3] Andrienko, G., Andrienko, N., Bak, P., Keim, D. and Wrobel, S. 2013. Visual analytics of movement. Springer Science
& Business Media.
[4] Andrienko, G., Andrienko, N., Hurter, C., Rinzivillo, S. and Wrobel, S. 2011. From movement tracks through events to
places: Extracting and characterizing significant places from mobility data. In Visual Analytics Science and Technology
(VAST), 2011 IEEE Conference on, pages 161–170. IEEE.
[5] Chen, Y., Jiang, K., Zheng, Y., Li, C. and Yu, N. 2009. Trajectory simplification method for location-based social
networking services. In Proceedings of the 1st ACM GIS Workshop on Location-based Social Networking Services.
ACM, 33–40.
[6] Chen, Z., Shen, H.T., Zhou, X., Zheng, Y. and Xie, X. 2010. Searching trajectories by locations: An efficiency study. In
ACM SIGMOD International Conference on Management of Data. ACM, 255–266.
[7] Douglas, D. and Peucker, T. 1973. Algorithms for the reduction of the number of points required to represent a line or
its caricature. Canadian Cartographer, 10(2): 112–122.
[8] Gudmundsson, J. and Kreveld, M.V. 2006. Computing longest duration flocks in trajectory data. In Proceedings of the
14th International Conference on Advances in Geographical Information Systems. ACM, 35–42.
[9] Gudmundsson, J., Kreveld, M.V. and Speckmann, B. 2004. Efficient detection of motion patterns in spatio-temporal
data sets. In the Proceedings of the 12th International Conference on Advances in Geographical Information Systems.
ACM, 250–257.
[10] Jeung, H., Yiu, M., Zhou, X., Jensen, C. and Shen, H. 2008a. Discovery of convoys in trajectory databases. Proceedings
of the VLDB Endowment, 1(1): 1068–1080.
[11] Jeung, H., Shen, H. and Zhou, X. 2008b. Convoy queries in spatio-temporal databases. In Proceedings of the 24th
International Conference on Data Engineering. IEEE, 1457–1459.
[12] Keogh, E., Chu, J., Hart, S.D. and Pazzani, M.J. 2001. An on-line algorithm for segmenting time series. In Proceedings
of the International Conference on Data Mining. IEEE, 289–296.
[13] Krumm, J. and Horvitz, E. 2006. Predestination: Inferring destinations from partial trajectories. In Proceedings of the
8th International Conference on Ubiquitous Computing. ACM, 243–260.
[14] Li, Z., Ding, B., Han, J. and Kays, R. 2010. Swarm: Mining relaxed temporal moving object clusters. Proceedings of
the VLDB Endowment, 3(1-2): 723–734.
[15] Lou, Y., Zhang, C., Zheng, Y., Xie, X., Wang, W. and Huang, Y. 2009. Map-matching for low-sampling-rate GPS
trajectories. In Proceedings of the 17th ACM SIGSPATIAL Conference on Geographical Information Systems. ACM,
352–361.
[16] Maratnia, N. and de By, R.A. 2004. Spatio-temporal compression techniques for moving point objects. In Proceedings
of the 9th International Conference on Extending Database Technology. IEEE, 7.
[17] Quercia, Daniele, Rossano Schifanella and Luca Maria Aiello. 2014. The shortest path to happiness: Recommending
beautiful, quiet, and happy routes in the city. Proceedings of the 25th ACM conference on Hypertext and social media.
ACM.
[18] Song, R., Sun, W., Zheng, B., Zheng, Y., Tu, C. and Li, S. 2014. PRESS: A novel framework of trajectory compression
in road networks. In Proceedings of 40th International Conference on Very Large Data Bases.
[19] Theodoridis, Y., Vazirgiannis, M. and Sellis, T.K. 1996. Spatio-temporal indexing for large multimedia applications. In
Proceedings of the 3rd International Conference on Multimedia Computing and Systems. IEEE, 441–448.
[20] Tang, L.A., Zheng, Y., Xie, X., Yuan, J., Yu, X. and Han, J. 2011. Retrieving k-nearest neighbouring trajectories by a set
of point locations. In Proceedings of the 12th Symposium on Spatial and Temporal Databases. Volume 6849, Springer,
223–241.
[21] Tang, L.A., Zheng, Y., Yuan, J., Han, J., Leung, A., Peng, W.-C., Porta, T.L. and Kaplan, L. 2013. A framework of
travelling companion discovery on trajectory data streams. ACM Transaction on Intelligent Systems and Technology.
[22] Tang, L.A., Zheng, Y., Yuan, J., Han, J., Leung, A., Hung, C.C. and Peng, W.C. 2012. Discovery of travelling companions
from streaming trajectories. In Proceedings of the 28th IEEE International Conference on Data Engineering. IEEE,
186–197.
[23] Wang, L., Zheng, Y., Xie, X. and Ma, W.Y. 2008. A flexible spatio-temporal indexing scheme for large-scale GPS track
retrieval. In Proceedings of the 9th International Conference on Mobile Data Management. IEEE, 1–8.
[24] Wang, F., Chen, W., Wu, F., Zhao, Y., Hong, H., Gu, T., Wang, L., Liang, R. and Bao, H. 2014. A visual reasoning
approach for data-driven transport assessment on urban roads. In Visual Analytics Science and Technology (VAST),
2014 IEEE Conference on, pages 103–112. IEEE.
[25] Wang, Z., Lu, M., Yuan, X., Zhang, J. and Wetering, H.v.d. 2013. Visual traffic jam analysis based on trajectory data.
Visualization and Computer Graphics, IEEE Transactions on, 19(12): 2159–2168.
[26] Wei, L.Y., Zheng, Y. and Peng, W.C. 2012. Constructing popular routes from uncertain trajectories. In Proceedings of
the 18th SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 195–203.
[27] Wu, Y., Liu, S., Yan, K., Liu, M. and Wu, F. 2014. Opinion flow: Visual analysis of opinion diffusion on social media.
Visualization and Computer Graphics, IEEE Transactions on, 20(12): 1763–1772.
[28] Xu, X., Han, J. and Lu, W. 1999. RT-tree: An improved R-tree index structure for spatio-temporal databases. In
Proceedings of the 4th International Symposium on Spatial Data Handling, 1040–1049.
[29] Xue, A.Y., Zhang, R., Zheng, Y., Xie, X., Huang, J. and Xu, Z. 2013. Destination prediction by sub-trajectory synthesis
and privacy protection against such prediction. In Proceedings of the 29th IEEE International Conference on Data
Engineering. IEEE, 254–265.
[30] Yuan, J., Zheng, Y., Zhang, C., Xie, W., Xie, X., Sun, G. and Huang, Y. 2010. T-Drive: Driving directions based on taxi
trajectories. In Proceedings of ACM SIGSPATIAL Conference on Advances in Geographical Information Systems.
ACM, 99–108.
[31] Zheng, Y., Xie, X. and Ma, W.Y. 2008. Search your life over maps. In Proceedings of the International Workshop on
Mobile Information Retrieval, 24–27.
[32] Zheng, Y. and Xie, X. 2010. GeoLife: A collaborative social networking service among user, location and trajectory.
IEEE Data Engineering Bulletin, 33(2): 32–40.
[33] Zheng, Y., Liu, Y., Yuan, J. and Xie, X. 2011. Urban computing with taxicabs. In Proceedings of the 13th International
Conference on Ubiquitous Computing. ACM, 89–98.
[34] Zheng, Y., Liu, F. and Hsieh, H.P. 2013. U-Air: When urban air quality inference meets big data. In Proceedings of 19th
SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 1436–1444.
[35] Zheng, K., Zheng, Y., Yuan, N.J., Shang, S. and Zhou, X. 2014. Online discovery of gathering patterns over
trajectories. IEEE Transactions on Knowledge and Data Engineering.
[36] Zheng, Y. et al. 2014. Urban computing: Concepts, methodologies, and applications. ACM Transactions on Intelligent
Systems and Technology (TIST), 5(3): 38.
CHAPTER-9
Detection Tools in a Human Avalanche
This chapter aims to build a simulation model of a human avalanche occurring at a rugby match,
triggered by the panic caused by riots between fanatical fans of the playing teams. To build this
model, the Menge crowd-simulation tool is used, which helps us to evaluate the behavior of people
who, consciously or unconsciously, disrupt the contingency procedures established at the venue,
so that these procedures can be defined preventively to reduce deaths and injuries. From the
definition of these factors, an algorithm is developed that combines Dijkstra's algorithm with the
simulation tool and allows us to find the route to the nearest emergency exit, as well as the number
of people who could transit safely. Additionally, Voronoi diagrams are used to define perimeter
adjacency between people.
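The route-to-nearest-exit step can be sketched as a Dijkstra search over a grid model of the venue. This is a simplified illustration under our own assumptions (a 4-connected unit-cost grid, 0 = walkable, 1 = obstacle); it is not the Menge tool or the chapter's actual integration:

```python
import heapq

def nearest_exit(grid, start, exits):
    """Dijkstra search from `start` to the closest cell in `exits`.

    grid: list of rows, where 0 marks a walkable cell and 1 an
    obstacle (walls, barriers). Because every step costs 1, Dijkstra
    here behaves like breadth-first search and returns a shortest
    path as a list of (row, col) cells, or None if no exit is
    reachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell in exits:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1, (nr, nc), path + [(nr, nc)]))
    return None
```

In a full simulation, edge costs would additionally reflect local crowd density (for instance, derived from the Voronoi cell area around each person), so congested corridors become more expensive than free ones.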
1. Introduction
Thousands of deaths have occurred in different parts of the world where football is like a religion.
Very serious disturbances after football matches, avalanches caused by panic, riots between
fanatical fans, stands in poor condition, and overcapacity are just a few examples of events that
generate deaths in stadiums. The tragedies have been numerous, and the main causes arise when
people panic, which unfortunately unbalances their thinking, making them lose control over their
actions and producing crowd crushes with catastrophic consequences.
Some historical events with the greatest consequence in deaths during football games are
described in Table 1:
As can be seen in Table 1, most of the events presented have their origin in disturbances incited
by the fans themselves, causing stampedes in which, because of closed doors, people become
pressed against bars or fences, producing human losses through severe blows and asphyxiation.
This type of crush is not exclusive to football. In the article "Innovative data visualization of
collisions in a human stampede occurred in a religious event using multiagent systems" [1], the
author analyzes this type of phenomenon focused on religious events, where large concentrations of
people come together. That work analyzes the tragedy that occurred in Mecca in 2015, where 2,717
people died and 863 were injured as a result of the largest human stampede ever recorded.
1 Universidad Autónoma de Ciudad Juárez, México.
2 The University of Adelaide, Australia.
* Corresponding author: al183244@alumnos.uacj.mx
In this study, the simulation exercise is carried out in the rugby stadium of Adelaide, Australia,
known as the Adelaide Oval. Its characteristics are described below.
The city of Adelaide is in southern Australia and is characterized as a peaceful city, where
incidents due to fighting or aggression are unusual. One recorded fight took place on August 25,
2018, when two fans started a brawl at an AFL match between Port Adelaide and Essendon. The
fans themselves tried to intervene to stop the quarrel. It is noted that the
Detection Tools in a Human Avalanche 119
actions of these two individuals was an isolated element among a crowd of more than 39,000 fans.
This exercise will simulate an avalanche in the Oval stadium, provoked by the panic caused by riots among fans. The results will help us define the best preventive alternatives to avoid possible catastrophes.
Figure 2 shows a distribution graph of the Adelaide Oval stadium.
Anthropometry
Anthropometry is the science in charge of studying the physical characteristics and functions of the human body, including linear dimensions, weight, volume, movements, etc., in order to establish differences between individuals, groups and races [2]. This science serves as a guideline in the design of the objects and spaces that surround the human body and that, therefore, must be determined by its dimensions [3]. Knowing these data reveals the minimum spaces a human needs to function daily, which must be considered in the design of his or her environment. Among the factors that define the physical build of a human being are race, sex, diet and age. The reference planes are three imaginary flat surfaces that cross the body and are used as references when taking body dimensions (see Figure 3). Sports fans have watched the evolution and development of professional players, and how striking it has been in recent years: a rugby defender of 80 kg (about 176 pounds), which was previously considered enough, now looks too light to fill the position.
Dimensional standards and spatial requirements must be kept constantly adequate. The absence of standards guaranteeing that interior spaces for sports practice are adapted to the human dimension and to the dynamics of people on the move constitutes, today, a potential threat to the safety of the participants. The lack of this kind of regulation not only poses a serious threat to the physical integrity of the users, but also makes the client and the designer potentially legally liable in the event of an accident with injury or death. The relationship between the human body and interior space influences not only comfort but also public safety. The size of the body is the fundamental reference for dimensioning the width of doors, corridors and stairs in any environment, whether public or private. Caution is warranted in the use and acceptance of existing methods or empirical rules that establish critical clearances without questioning their anthropometric validity, even for those likely to be part of the affected codes and ordinances. In short, certain dimensions and clearances that guarantee public safety must be defined. Public spaces must be designed so as not to hinder their use by people outside a standard, such as children, small people and overweight people. The designs of the different fittings and accessories, such as stairs, seats, hallways and open spaces, must also be within the reach of these people.
Horizontal space
Two measures are important to consider in a space for people's movement: (1) body dimensions and (2) the dimensions of larger people. Allowances should be considered for both measures. Figure 4 shows two fundamental projections of the human body, which include the critical dimensions of the 95th percentile. A tolerance of 7.6 cm (3 inches) has been included for width and depth; the final dimension with the tolerance included is 65.5 cm (25.8 inches). The critical anthropometric dimension to be used during a massive agglomeration is the body width. The diagram representing the body ellipse and Table 2 below have proven useful in the design of circulation spaces. The latter is an adaptation of a study of the movement and formation of pedestrian queues, prepared by Dr. John Fruin, whose purpose was to set relative levels of service based on the density of pedestrians. The basic unit is the human body, which is associated with an elliptical shape, or body ellipse, of 45.6 x 61 cm (18 x 24 inches).
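The body ellipse and Fruin's density-based levels of service lend themselves to a quick calculation. The sketch below is illustrative only: the level-of-service cut-offs are approximate values commonly quoted from adaptations of Fruin's queueing study, not figures taken from Table 2.

```python
import math

# Body ellipse from the text: 45.6 cm x 61 cm (18 x 24 inches).
ELLIPSE_AREA = math.pi * (0.456 / 2) * (0.61 / 2)  # ~0.218 m^2 per person

# Approximate queueing level-of-service thresholds in m^2 per person
# (assumed values, commonly cited from adaptations of Fruin's study).
LOS_THRESHOLDS = [(1.21, "A"), (0.93, "B"), (0.65, "C"), (0.28, "D"), (0.19, "E")]

def level_of_service(area_per_person: float) -> str:
    """Map the area available per pedestrian to a LOS letter."""
    for threshold, letter in LOS_THRESHOLDS:
        if area_per_person >= threshold:
            return letter
    return "F"  # below the area of the body ellipse itself: crush conditions

print(round(ELLIPSE_AREA, 3))          # area occupied by one body ellipse
print(level_of_service(ELLIPSE_AREA))  # a crowd packed to body-ellipse density
```

At body-ellipse density, each person has barely 0.22 m² of floor, which already falls in the most congested end of the scale.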
Panic
Panic attacks, also known as crises of distress, are usually accompanied by various somatic manifestations, such as tachycardia, sweating, tremor, a choking sensation, chest tightness, nausea, dizziness, fainting, hot flashes, a feeling of unreality and loss of control [4].
Table 2: Analysis of the circulation space for the human being (“density of queues”).
This can happen when the person experiences the sensation of imminent death and has an imperative need to escape from a feared place or situation (an aspect congruent with the emotion the subject feels in the face of the perceived imminent danger). Not being able to physically escape the situation of extreme fear greatly accentuates the symptoms of panic [5]. Taking this into consideration, the possible triggers of panic attacks can be classified as:
• Unexpected. The beginning of the episode does not match manifest triggers.
aspects in a MABS: (i) the agent behavior, which deals with modeling the agents' deliberative process (their minds); (ii) the environment, which defines the different physical objects in the simulated world (the situated environment and the physical bodies of the agents) and the endogenous dynamics of the environment; (iii) the scheduling, which deals with modeling the passage of time and defining the scheduling policies used to execute the agents' behaviors; and (iv) the interaction, which focuses on the result of modeling the actions and interactions between agents at a given time. Our approach broadens these different perspectives to integrate related multilevel aspects.
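These four aspects can be made concrete with a toy loop; every name below is hypothetical, and the sketch is far simpler than any real MABS engine, but it shows where each aspect lives.

```python
import random

class Agent:
    """(i) Agent behavior: each agent deliberates and proposes an action."""
    def __init__(self, name: str):
        self.name = name
        self.position = 0.0

    def deliberate(self, env: "Environment") -> float:
        # Toy deliberation: advance toward the exit if not there yet.
        return 1.0 if self.position < env.exit else 0.0

class Environment:
    """(ii) Environment: the physical world and its endogenous dynamics."""
    def __init__(self, exit_pos: float):
        self.exit = exit_pos

    def apply(self, agent: Agent, step: float) -> None:
        # (iv) Interaction: the environment resolves the proposed action.
        agent.position = min(agent.position + step, self.exit)

def run(agents: list, env: Environment, ticks: int) -> None:
    """(iii) Scheduling: discrete time steps; activation order is a policy."""
    for _ in range(ticks):
        random.shuffle(agents)  # one simple scheduling policy
        for a in agents:
            env.apply(a, a.deliberate(env))

env = Environment(exit_pos=5.0)
agents = [Agent(f"a{i}") for i in range(3)]
run(agents, env, ticks=10)
print(all(a.position == env.exit for a in agents))  # every agent reached the exit
```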
Table 3 shows a comparison between different methodologies for the use of multiagent systems with mathematical models:
Photographs 2: Access stairs to the Oval stadium and back side of the open stadium.
Photographs 3: Oval stadium exits. Photos 1 and 2: external part. Photos 3 and 4: internal central part. Photos 5 and 6: internal electrical second floor.
These two points are a focus of attention, since the possibility of a crush or an agglomeration of people is greater there than at the rest of the exits.
3. Methodological Approximation
For the development of this simulation exercise, two pedestrian equations developed in a study by Ochoa et al. (2019), “Innovative data visualization of collisions in a human stampede occurred in a religious event using multiagent systems”, will be used as a reference; that study considers a catastrophic incident with critical levels of concentration of people at a religious event in Mecca. These equations will be used to simulate the movement of people within a stampede and determine the probability of their survival. The equations are based on the BDI methodology, which involves three fundamental factors that define the result: (1) desires, (2) beliefs, (3) intentions.
Equipment
Equipment description used during the simulation trials:
Machine name: DESKTOP-G07PBE6
Operating System: Windows 10 Pro, 64-bit
Language: Spanish (Regional Setting: Spanish)
System Manufacturer: Lenovo
System Model: ThinkPad
Product ID: 8BF29E2C-5A1A-4CA2-92E8-BE228436613D
Processor: Intel (R) Core (TM) i5-2520M CPU @2.50 GHz.
Memory: 10.0 GB
Available OS Memory: 9.89 GB usable RAM
Disk Unit ID: ST320LT007-9ZV142
Hard Disk: 296 GB
Page File: 243 GB used; 53.5 GB available
Windows Dir: C:\WINDOWS
DirectX Version: 10.0.17134.1
Software
Menge: a framework for modular pedestrian simulation for research and development; open source.
Unity: a multiplatform video game engine created by Unity Technologies; a free personal account was used.
Sublime Text 3: a sophisticated text editor for coding; a free trial version was used.
Git Bash: a terminal emulator used to run Git from the command line.
Layout definition
Taking as a starting point the exit door shown in Photo no. 4, where the tunnel that leads to the exit has a width of 2.4 meters, the formalization of the layout begins. For the layout, the sets of seats located on the left and right sides of the tunnel will be considered, making an initial group of 304 people who could leave through this exit door.
The initial distribution is as follows:
1. Number of people located on the left side of the exit tunnel: 80 people.
2. Number of people located above the exit tunnel in three groups: 8, 32, 8 people.
3. Number of people located on the right side of the exit tunnel: 80 people.
4. Number of people located under the exit tunnel in two groups of 48 people each: 48 on the left side and 48 on the right side.
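As a quick consistency check, the group sizes listed above can be tallied; the grouping labels below are descriptive only, while the counts are those from the list.

```python
# Group sizes from the initial distribution list above.
groups = {
    "left side of tunnel": [80],
    "above tunnel": [8, 32, 8],
    "right side of tunnel": [80],
    "under tunnel": [48, 48],
}
total = sum(sum(sizes) for sizes in groups.values())
print(total)  # 304, matching the initial evacuation group
```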
To make the distribution of the layout, the coordinates of the dimensions of the seats as well as the stairs are considered; this distribution of coordinates was handled in Excel to define a preliminary space, as shown in Figure 6:
Figure 6: Layout of the scenario exit, taking into consideration 304 persons for this evacuation simulation.
Performing the first run using the coordinates of the defined scenario, as well as the pattern that people will follow during the evacuation, the simulation run in Menge [9] produces the results shown in Figures 8 and 9.
Figure 8: First trial simulation on menge for the 304 people evacuation.
Figure 9: First trial simulation on menge for the 304 people evacuation (simulation advance).
Figure 9 shows how the agglomeration of agents causes a bottleneck at the entrance to the
tunnel. This agglomeration is due to the narrow dimension of the roadway to the tunnel.
The elements used during this scenario run are shown in Table 4. The run achieved a total time of 762.994 seconds for the evacuation, a time considered very high for an evacuation process.
Table 4: Elements used during the first simulation evacuation trial.
Common parameters: max_angle_vel = 90; max_neighbors = 10; obstacleSet = 1; neighbor_dist = 5; r = 0.19; class = 1; pref_speed = 1.34; max_speed = 2; max_accel = 50.
Full frame (avg): 976.929 ms in 762 laps. Scene update (avg): 927.179 ms in 763 laps. Scene draw (avg): 47.0648 ms in 763 laps. Buffer swap (avg): 2.21781 ms in 763 laps. Simulation time: 762.994.
Table 5: Elements used during the experiment to define the best element conditions.
Trial 1: full frame (avg) 563.187 ms in 357 laps; scene update (avg) 514.112 ms in 358 laps; scene draw (avg) 48.0976 ms in 390 laps; buffer swap (avg) 2.25161 ms in 390 laps; simulation time 35.8.
Trial 2: full frame (avg) 577.609 ms in 359 laps; scene update (avg) 526.616 ms in 360 laps; scene draw (avg) 48.3162 ms in 360 laps; buffer swap (avg) 2.27213 ms in 360 laps; simulation time 36.
Trial 3: full frame (avg) 572.743 ms in 359 laps; scene update (avg) 522.278 ms in 360 laps; scene draw (avg) 47.8306 ms in 360 laps; buffer swap (avg) 2.3383 ms in 360 laps; simulation time 36.
Trial 4: full frame (avg) 551.692 ms in 368 laps; scene update (avg) 554.692 ms in 368 laps; scene draw (avg) 47.5738 ms in 369 laps; buffer swap (avg) 2.26544 ms in 369 laps; simulation time 36.9.
Trial 5: full frame (avg) 544.659 ms in 368 laps; scene update (avg) 495.334 ms in 369 laps; scene draw (avg) 46.699 ms in 369 laps; buffer swap (avg) 2.26265 ms in 369 laps; simulation time 36.9.
Trial 6: full frame (avg) 560.557 ms in 308 laps; scene update (avg) 510.916 ms in 309 laps; scene draw (avg) 46.8925 ms in 309 laps; buffer swap (avg) 2.29343 ms in 309 laps; simulation time 30.9001.
Figure 10: New improved layout considering the change in the tunnel entrance dimension.
Table 6: Elements defined to be used for the best simulation time (30.9).
Common parameters: max_angle_vel = 60; max_neighbors = 2; obstacleSet = 1; neighbor_dist = 3; r = 0.19; class = 1; pref_speed = 1.5; max_speed = 5; max_accel = 80.
Full frame (avg): 560.557 ms in 308 laps. Scene update (avg): 510.916 ms in 309 laps. Scene draw (avg): 46.8925 ms in 309 laps. Buffer swap (avg): 2.29343 ms in 309 laps. Simulation time: 30.9001.
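The tuning that produced the best time can be summarized by comparing the two parameter sets; the keys below mirror the Menge agent attributes as printed in Tables 4 and 6, and the comparison itself is just a convenience sketch.

```python
# Menge agent parameters as printed in Table 4 (first trial)
# and Table 6 (best trial, simulation time 30.9).
first_trial = {"max_angle_vel": 90, "max_neighbors": 10, "obstacleSet": 1,
               "neighbor_dist": 5, "r": 0.19, "class": 1,
               "pref_speed": 1.34, "max_speed": 2, "max_accel": 50}
best_trial = {"max_angle_vel": 60, "max_neighbors": 2, "obstacleSet": 1,
              "neighbor_dist": 3, "r": 0.19, "class": 1,
              "pref_speed": 1.5, "max_speed": 5, "max_accel": 80}

# Report which parameters were tuned between the two runs.
changed = {k: (first_trial[k], best_trial[k])
           for k in first_trial if first_trial[k] != best_trial[k]}
for name, (before, after) in sorted(changed.items()):
    print(f"{name}: {before} -> {after}")
```

Six of the nine parameters differ; the agent radius and class are the only behavioral values left untouched besides the obstacle set.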
Figure 11: Second trial simulation on menge for the 352 people evacuation.
Figure 12 shows the second simulation run with the number of agents increased to 352 and the proposed improvements included. The increase in the dimension of the tunnel entrance can be appreciated.
To make the movements of the agents easier to follow after the improvements were included, Figure 13 shows the agents separated into 7 groups, with a different color assigned to each.
Figure 12: Second trial simulation on menge for the 352 people evacuation (simulation advance).
Figure 13: Third trial simulation on menge for the 352 people evacuation defined on 7 groups.
Figure 14: Third trial simulation on menge for the 352 people evacuation for 7 groups (Tunnel view).
Figure 14 shows the increase in the number of agents, which rises from 304 to 352, separated into 7 groups identified by different colors. This allows us to see the improvement in terms of the decrease in the agglomeration of agents at the entrance of the tunnel.
Figure 15 shows the improvement in the width of the tunnel entrance, which greatly facilitates the exit of the agents and avoids collisions between them.
Figure 15: Third trial simulation on menge for the 352 people evacuation for 7 groups (simulation advance).
Figure 16: Third trial simulation on menge for the 352 people evacuation for 7 groups (Final simulation).
5. Conclusions
The use of the Menge tool for this simulation exercise allows us to run the simulation under different scenarios, considering changes in the factors that impact the outcome and facilitating the evaluation of alternatives, thereby seeking the preservation and safety of the agents involved. In this exercise, it was demonstrated that changing these elements during the development of the simulation gives us a clearer view of the potentially catastrophic results that may occur in a real eventuality. These results provide indicators for real decision making, which allows us to generate preventive actions.
For future studies, it is necessary to continue developing runs that consider changes in the elements affecting the agents' travel behavior, to improve travel at higher speed, without agglomerations or bottlenecks. The simulation should take advantage of the software and databases currently available, such as Unity and Visual Studio, among others, which will allow us to find and propose solutions to potential human avalanche problems.
Bibliography
[1] Alberto Ochoa-Zezzatti, Roberto Contreras-Massé, José Mejía. 2019. Innovative Data Visualization of Collisions in a
Human Stampede Occurred in a Religious Event Using Multiagent Systems.
[2] Antropometría, Facultad de Ingeniería Industrial, 2011-2. Escuela Colombiana de Ingeniería. https://www.escuelaing.edu.co/uploads/laboratorios/2956_antropometria.pdf.
[3] Rosmery Nariño Lescay, Alicia Alonso Becerra, Anaisa Hernández González, 2016. DOI: https://doi.org/10.24050/
reia.v13i26.799.
[4] Lic. Lorena Frangella/Lic. Monica Gramajo, MANUAL PSICOEDUCATIVO PARA EL CONSULTANTE, Fundacion
FORO; www.fundaciónforo.com Malasia 857 – CABA.
[5] Jorge Osma, Azucena García-Palacios y Cristina Botella, Anales de Psicología, 2014, vol. 30, no 2 (mayo), 381– 394
http://dx.doi.org/10.6018/analesps.30.2.150741.
[6] https://www.psicologosmadridcapital.com/blog/causas-ataques-panico/.
[7] https://confidencialhn.com/psicologo-explica-el-salvajismo-en-la-estampida-que-dejo-cinco-muertos-en-estadio-
capitalino/.
[8] Nicolas Gaud, Stéphane Galland, Franck Gechter, Vincent Hilaire, Abderrafiâa Kouka, 2008. 1569-190X/$ - see front
matter _ 2008 Elsevier B.V. All rights reserved; doi:10.1016/j.simpat.2008.08.015.
[9] Michel, F. 2004. Formalism, Tools and Methodological Elements for the Modeling and Simulation of Multi-Agents
Systems’, Ph.D. Thesis, Montpellier Laboratory of Informatics, Robotics and Microelectronics, Montpellier, France,
December 2004.
[10] Michela Milano and Andrea Roli, ‘MAGMA: A Multiagent Architecture for Metaheuristics’, IEEE TRANSACTIONS
ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 33, NO. 2, APRIL 2004.
[11] Olfa Beltaief, Sameh El Hadouaj, Khaled Ghedira, 2011; Psychophysical studies, DOI: 10.1109/
LOGISTIQUA.2011.5939418.
[12] Jan Dijkstra, Joran Jessurun, Bauke de Vries, Harry Timmermans, 2006, Agent Architecture for Simulating Pedestrians
in the Built Environment, International Joint Conference on Autonomous Agents and Multiagent Systems; 5 (Hakodate):
2006.05.08-12 (pp. 8-16). New York, NY.
CHAPTER-10
Humanitarian Logistics and the Problem of Floods in a Smart City
Floods are natural disasters resulting from various factors, such as poor urban planning, deforestation and climate change, to name just a couple of examples. The consequences of such disasters are often devastating and bring with them not only losses of millions of dollars but also of human lives. The purpose of this work is to offer a first approximation of people's reactions in the case of an evacuation due to a hypothetical flood in an area of the Colonia Bellavista of Ciudad Juárez adjacent to the Río Bravo, the Acequia Madre and the Díaz Ordaz viaduct, all of which are liable to overflow or flood after heavy torrential rains in a scenario where climate change has seriously affected the city's climate.
1. Introduction
According to [1], “A flood is referred to when usually dry areas are invaded by water; there are two possible causes of why this type of disaster occurs: the first is related to natural phenomena such as torrential rains and rainy seasons, while the second involves human actions that largely induce natural disasters; ...”. Among the factors associated with human intervention are deforestation, elimination of wetlands, high CO2 emissions that cause climate variations [2,3], bad urban planning, etc. [1]. On the other hand, according to [1], floods can be of two types: sudden/abrupt and progressive/slow. In addition, floods may occur in urban or rural areas.
City environments are greatly affected by flooding driven by climate change [4]. The authors point out that, in general, public spaces do not adapt well to abrupt changes in the environment, which is why their design must be well worked out to avoid problems in the event of a disaster. One of the main problems affecting urban and rural populations is flooding. Table 1 shows the greatest floods in Europe during the 1990s and their effects.
The characteristics of Ciudad Juárez, as well as its climate, make it a propitious setting for a study of sudden floods, since these are characterized by the precipitation of a large volume of water in a short time, causing a rapid accumulation of water in conurbation areas, whether through the rupture of dams, torrential rains or the overflowing of basins or rivers [1]. In addition, according to [2], an increase in torrential rainfall is expected, which can cause the type of floods mentioned above. Ciudad Juárez is characterized by the existence of the Río Bravo as well as a desert climate with torrential rains, which have caused severe flooding, as in 2013 [6]; moreover, the city's infrastructure and urban planning are additional factors that lead to flooding in the rainy season. Under this scheme, it is imperative to create scenarios of possible evacuations in case torrential rains cause flooding in the areas most prone to overflows and water stagnation.
Universidad Autónoma de Ciudad Juárez, Av. Hermanos Escobar, Omega, 32410 Cd Juárez, Chihuahua, México.
* Corresponding author: aztlan.ba@cdjuarez.tecnm.mx
Table 1: Heavy Floods in the EU and Neighboring Countries, 1991–2000, and Their Effects on the Population [5].
The objective of this work is to make a first approximation, by means of a simulation, of the behavior of people who live in an area susceptible to flooding, and to analyze two possible scenarios in which people may find themselves during the incident. All this assumes that climate change could alter the amount of water that falls in the rainy season and cause an overflow of the Río Bravo as well as floods in the Díaz Ordaz viaduct and the Acequia Madre.
2. Mathematical Models
According to [7], there is a way to estimate the velocity of pedestrians with an equation: “The pedestrian equation is based on the BDI methodology where the factors used are affected by the desires, beliefs, and intentions of the individuals” [7]. The velocity of an agent is dictated by Equation 1:

Vi(t) = [(v + h + nc)/(a + d)] * f * imc * s (1)

Where:
• Vi(t) is the velocity of agent i at time t.
• Solving for Vi(t) determines the position of an agent with respect to time.
• v is the average pedestrian speed for all agents.
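A direct transcription of Equation 1 is given below, assuming the grouping ((v + h + nc) / (a + d)). Only v (the average pedestrian speed) is defined in this excerpt; the other arguments follow the equation's symbols and are treated as opaque coefficients, and the sample values are purely illustrative, not taken from the study.

```python
def pedestrian_velocity(v, h, nc, a, d, f, imc, s):
    """Equation 1: Vi(t) = ((v + h + nc) / (a + d)) * f * imc * s.

    v is the average pedestrian speed; the remaining factors are the
    BDI-derived coefficients of the model, treated here as plain numbers.
    """
    return ((v + h + nc) / (a + d)) * f * imc * s

# Illustrative call with hypothetical coefficient values:
print(round(pedestrian_velocity(v=1.34, h=0.2, nc=0.1,
                                a=1.0, d=0.5, f=1.0, imc=1.0, s=1.0), 3))
```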
Humanitarian Logistics and the Problem of Floods in a Smart City 137
3. Materials
The following are the specifications of the equipment, software and materials used for the
implementation of the simulations.
Computer equipment to run the simulation
System information
Machine name: DESKTOP-FJF469O
Operating System: Windows 10 Home Single Language 64-bit
Language: Spanish (Regional Setting: Spanish)
System Manufacturer: Dell Inc.
System Model: Inspiron 15 7000 Gaming
Software
Menge: a framework for modular pedestrian simulation for research and development; open source [9].
4. Scenarios
The scenario is located within the red polygon shown in Figure 1. These are houses adjacent to the Río Bravo in Ciudad Juárez's downtown neighborhood. The chosen scenario is interesting because the area is trapped between possible sources of flooding: the Río Bravo is located to the northeast, the Acequia Madre, a natural stream, to the southwest, and the Díaz Ordaz viaduct to the northwest. A satellite view of the scenario can be seen in Figure 2.
It is estimated that the people closest to the edges along which the water flows, and therefore the first to experience flooding when there is torrential rain, will be the first to react and try to evacuate the area, while people farther away will do so with some delay; this is why the model estimates that there will be an agglomeration of individuals trying to get out, causing congestion at the exit routes.
Figure 1: View of the stage in the Colonia Bellavista located in the center of Ciudad Juárez [10]. It can be observed that the
group of houses is located between the Río Bravo, the Díaz Ordaz Viaduct and the Acequia Madre.
Figure 2: Satellite view of the stage in the Colonia Bellavista located in the center of Ciudad Juárez. It can be observed that
the group of houses is located between the Río Bravo, the Díaz Ordaz Viaduct and the Acequia Madre [10].
5. Simulation
The simulation was performed using the Menge software, modifying an open-source example called 4square.xml [9] and adapting it to the conditions of the scenario as well as to the objectives or goals to be achieved during the simulation of a flood evacuation along the borders surrounding the scenario. The stage was set with a slight rotation of about 16º, as shown in Figure 3, to make the layout of the streets easier to draw; for practical purposes, this does not affect the final results.
One of the scenarios contemplated an evacuation of people only, in which 610 people intervened, distributed in groups of 10 at different locations of the scenario as shown in Figure 3. An estimate of 122 homes with families of approximately 5 people, all in equal conditions to mobilize during the disaster, was used. In addition, it was assumed that all the people were the same size, with a radius of 1 m each.
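The bookkeeping for this scenario can be checked as follows; the split into 59 ten-person groups plus the 20-person group noted in Figure 3 is inferred from the totals, not stated explicitly in the text.

```python
# Population estimate from the text: 122 homes, ~5 people per family.
homes, family_size = 122, 5
population = homes * family_size
print(population)  # 610 people

# Figure 3 notes groups of 10, except one 20-person group at the bottom
# of the stage; 59 ten-person groups is an inference from the total.
groups = [10] * 59 + [20]
assert sum(groups) == population

agent_radius_m = 1.0  # the identical radius assumed for every person
```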
Figure 3: The image shows the distribution of people in groups of 10. Each red dot in the image represents an individual.
Only one group of them, the one at the bottom of the stage, has 20 people.
In this part of the simulation, it was contemplated that the only objective of the people was to
move from their initial location to a point located in the coordinate (250,0) as shown in Figure 4.
The simulation considers an objective or goal which must be reached by the people. This objective is declared as a point from which displacement vectors are traced, serving as a reference for the direction of the speed to be maintained by the pedestrians. This type of goal is simple, but in terms of simulation it makes mobility difficult when the pedestrians encounter an obstacle that blocks the direction of the displacement vector. That is why, during the simulation, they advance slowly in the “y” direction when there is an obstacle that prevents them from advancing in the “x” direction. Figure 5 shows the evacuation of the pedestrians towards the goal. The simulation takes place over a time of about 200 seconds (the program time is marked as 400 cycles), which is the maximum simulation time, so to ease the simulation, the movement speed of most pedestrians was increased to 8 m/s, almost 5 times faster than the normal speed of a person who is moving freely [11].
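The goal-seeking movement described above can be sketched as a single steering step; `step_toward` and its obstacle flag are illustrative simplifications of this behavior, not Menge's actual steering code.

```python
import math

def step_toward(pos, goal, speed, dt, blocked_x=False):
    """Advance one step along the displacement vector toward the goal.

    When an obstacle blocks the x direction, only the (smaller) y component
    advances, mimicking the slow sideways progress described in the text.
    """
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return pos
    ux, uy = dx / dist, dy / dist
    if blocked_x:
        ux = 0.0  # obstacle: no progress along x, keep creeping along y
    return (pos[0] + ux * speed * dt, pos[1] + uy * speed * dt)

# Scaled speed from the text: 8 m/s instead of a free speed of ~1.5 m/s.
pos = step_toward((0.0, 10.0), goal=(250.0, 0.0), speed=8.0, dt=0.5)
print(pos[0] > 0.0 and pos[1] < 10.0)  # the agent gained ground toward (250, 0)
```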
According to the actual dimensions and without considering obstacles in the path, a person moving at an average speed of 1.5 m/s should complete the path indicated in Figure 6, with a length of 375 m, in 250 seconds, or about 4.17 min. However, it must be considered that the speed of the pedestrians will be affected by the environment, which will surely be flooded; a reduction of their speed to about 0.5 m/s would imply that, moving in a straight line, the journey would take not about 4 minutes but a little more than 12 minutes.
Figure 4: The blue dot represents the coordinates of the goal that people have to reach during the evacuation, located at the coordinate (250, 0) according to the frame of reference.
Figure 5: Displacement of the pedestrians who congregate at the evacuation point at (250, 0).
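The travel-time arithmetic above is easy to reproduce; the 375 m route length and the two speeds are the values given in the text.

```python
# Route length from Figure 6 and the two speeds discussed in the text.
ROUTE_M = 375.0

def travel_minutes(distance_m: float, speed_ms: float) -> float:
    """Straight-line travel time in minutes at a constant speed."""
    return distance_m / speed_ms / 60.0

print(round(travel_minutes(ROUTE_M, 1.5), 2))  # unobstructed, ~4.17 min
print(round(travel_minutes(ROUTE_M, 0.5), 2))  # flooded terrain, 12.5 min
```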
Another interesting situation to analyze is one in which people do not escape to an evacuation point but gradually gather at the geometric center of the stage. Figure 7 shows this point in blue, located at the coordinate (41, -2). This situation is less realistic than the previous one, because one would expect people to escape from the flood sources located at the top, bottom and left of the scenario; however, it can be thought that, in the confusion, people decide to go in the opposite direction from the nearest flood source. Even so, the people located on the right of the scenario would have no reason to move and agglomerate with all the others.
Figure 7: The pedestrians move from their respective locations to the center of the stage at point (41, -2).
Figure 9: Evacuation simulation with vehicles and people. Vehicles are marked as blue dots while people are red dots.
Figure 10: Closer view of the simulation. Vehicles (blue dots) interact with people (red dots) and serve as obstacles during
displacement.
Figure 11: The pedestrians move from their respective locations to the center of the stage at point (41, -2), blue dots
represent vehicles while red dots represent people.
Figure 12: Interactions between pedestrians and vehicles cause congestions that block movement.
It is important to indicate that the CgI-M model is open, constantly revised, enriched, updated,
and that it is currently implemented as a modus operandi of a Cognitive Architects team to build
Cognitive Solutions. Subsequent subsections give a review of the parts of the model.
Besides the above, these types of models can help in urban planning when building housing developments, especially in areas susceptible to floods or overflows. In this sense, as in this hypothetical case, it could be observed that, in the face of the flooding of the Río Bravo, the Acequia Madre and the Díaz Ordaz viaduct, there are only a few evacuation routes for the people who live in the area. Also, the evacuation time will depend on several factors; in the best-case scenario, as analyzed in Figure 6, it would take just over 4 minutes without considering severe obstacles or flooding that impede mobility, i.e., moving at a constant speed of approximately 1.5 m/s. However, this speed could be reduced by two-thirds due to terrain conditions and obstacles, so that time could easily be tripled to over 12 minutes, which would put people's lives at risk. In the more complex scheme, which involves the presence of vehicles, these play an important role in the mobility of individuals, since they function as obstacles on the stage; being less maneuverable than people, they will result in a much longer evacuation time than if only people were present. That is why this simulation represents a first step in the elaboration of protocols and evacuation routes in case this type of flooding occurs in the future.
As future work, the idea is to modify the simulation to establish possible emergency routes in case of floods, which would be used by people depending on their location in the area. In addition, the simulation can be improved by adapting Menge to new platforms, such as Unity, which can be used to create more realistic scenarios and objects.
References
[1] Reyes Rubiano, L. 2015. Localización de instalaciones y ruteo de personal especializado en logística humanitaria post-
desastre - caso inundaciones. Univ. La Sabana.
[2] Climate Change 2001: Impacts, Adaptation, and Vulnerability. 2002, 39(6).
[3] Schaller, N. et al. 2016. Human influence on climate in the 2014 southern England winter floods and their impacts. Nat.
Clim. Chang., 6(6): 627–634.
[4] Silva, M.M. and Costa, J.P. 2018. Urban floods and climate change adaptation: The potential of public space design
when accommodating natural processes. Water (Switzerland), 10(2).
[5] Bronstert, A. 2003. Floods and climate change: interactions and impacts. Risk Anal., 23(3): 545–557(13) ST-Floods
and Climate Change: Inter.
[6] González Herrera, M.R. and Lerma Legarreta, J.M. 2016. Planificación Y Preparación Para La Gestión Sustentable
De Riesgos Y Crisis En El Turismo Mexicano. Estudio Piloto En Ciudad Juárez, Chihuahua. Eur. Sci. Journal, ESJ,
12(5): 42.
[7] Ochoa Zezzatti, A., Contreras-Masse, R. and Mejia, J. 2019. Innovative Data Visualization of Collisions in a Human
Stampede Occurred in a Religious Event using Multiagent Systems, no. Figure 1, pp. 62–67.
[8] Curtis, S., Best, A. and Manocha, D. Menge: a modular framework for simulating crowd movement. Collect. Dyn.,
1: 1–40.
[9] Curtis, S., Best, A. and Manocha, D. MENGE, 2013. [Online]. Available: http://gamma.cs.unc.edu/Menge/developers.
html. [Accessed: 29-Oct-2019].
[10] Google Maps. [Online]. Available: https://www.google.com.mx/maps/@31.7483341,-106.4920319,17.66z. [Accessed:
28-Oct-2019].
[11] Fruin, J.J. 1971. Designing for pedestrians: A level of service concept. Highw. Res. Rec., 355: 1–15.
CHAPTER-11
1. Introduction
Due to the frequency of natural disasters and political problems, interest in humanitarian logistics
among academics and politicians has been increasing. In the literature, studies that analyze trends in
humanitarian logistics were found to focus more on how to deal with the consequences of a disaster
than on its prevention [1]. Simulation can be useful to pose different scenarios and be able to make
decisions about strategies to help avoid stampedes when a natural or man-made disaster occurs. This
helps to define a preventive strategy in the eventuality of a disaster. As a case study, we present a
model to simulate crowds, based on a building of a college in Ciudad Juárez, Mexico: the Instituto
Tecnológico de Ciudad Juárez (ITCJ).
Ciudad Juárez is in the northern border area of Mexico. It is a city that has had a population
growth due to a migratory process of great impact, receiving a significant number of people from the
center and south of the country in search of better opportunities, which has resulted in many cases in
the settlement of areas not appropriate for urban development, a situation that has been aggravated
as the natural environment has changed negatively [2]. Recently, the migratory flow has also come
from countries in Central and South America in the form of caravans seeking asylum in the United
States.
In 2016, the municipal government of Ciudad Juárez compiled an atlas of natural and anthropogenic risks. As for geological risks, the document mentions that in 2014 there were earthquakes measuring up to 5.3 on the Richter scale, and notes that the province has tectonic activity, is an internally active zone, and will experience seismic activity sooner or later [2].
The ITCJ, founded on October 3rd 1964, is ranked number 11 among the National System of
Technological Institutes [3]. The institution is located at 1340 Tecnológico Ave, in the northern part
of the city. Figure 1 shows the satellite location obtained through Google Maps. Ciudad Juárez has a
total of 27 higher education schools, and the ITCJ ranks third with a total of 6510 students enrolled
[2]. To date, the institution offers 12 bachelor’s degrees, three master’s degrees, a doctorate and an
open and distance education program [3].
1 Universidad Autónoma de Ciudad Juárez.
2 Instituto Tecnológico de Cd. Juárez.
* Corresponding author: al183255@alumnos.uacj.mx
146 Innovative Applications in Smart Cities
Figure 1: Satellite view of the ITCJ obtained through Google Maps [4].
Over the years, the institution has grown and new buildings have been constructed, the Ramón Rivera Lara building being the oldest and most emblematic. Figure 2 shows photographs of the building. To model the simulation, classrooms in this building were taken into account, since it is the oldest building of the institute and where the majority of students are concentrated.
This work presents a model and simulation based on the Ramón Rivera Lara building of
the ITCJ. The objective is to evaluate the feasibility of using the Menge framework to simulate
the evacuation of students and teachers in the event of a disaster. In the context of humanitarian
logistics, simulations help to plan strategies before the occurrence of a natural or anthropogenic
disaster. As future work, the aim is to model the whole institution and compare the simulation against the evacuation drills carried out at the school and, finally, to develop a software tool based on Menge and Unity so that decision-makers can evaluate different classroom layouts and determine, through simulation, whether a more efficient layout can be found that minimizes risk in case of a disaster.
Simulating Crowds at a College School in Juarez, Mexico: A Humanitarian Logistics Approach 147
2. Related Work
This section presents a brief review of the literature from previous work related to the topic presented.
It is divided into three subsections: humanitarian logistics, crowd simulation, and mathematical
models.
3. Materials
The following describes the hardware and software used to perform the simulation.
3.2 Software
As far as software is concerned, the materials used are listed below.
• Operating System. Windows 10 Home.
• Operating System type. 64-bit operating system, x64-based processor.
• IDE. Microsoft Visual Studio Community 2019, used to compile and generate the menge.exe application.
• Text Editor. Visual Studio Code, version 1.38.1.
• Menge software. A framework for modular pedestrian simulation for research and development,
free code [13].
• Windows Command Prompt. Used to run simulations.
4. Methodology
To model the rooms and the section of the Ramón Rivera Lara building, first, the architectural plans
of the building were analyzed. Figure 4 shows the upper view of the ground floor of the building.
To establish the coordinates of the agents and the obstacles, measurements were taken of four adjacent classrooms on the ground floor. First, a single room was simulated and, later, the simulation was extended to the four rooms. To establish the speed of the pedestrians, the characteristics of the morning-shift students in the 8:00 to 9:00 AM classes were analyzed and Equation 1, shown in the previous section, was applied.
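Equation 1 itself is given in the previous section. As a purely illustrative sketch of the kind of field estimate involved (the corridor length, stopwatch times, and function below are assumptions, not the chapter's data), the mean walking speed of observed students can be computed as distance over time:

```python
# Illustrative sketch (not the chapter's Equation 1): estimating an
# average walking speed from timed observations of students covering
# a known distance.

def mean_walking_speed(distance_m, times_s):
    """Return the mean speed (m/s) over several timed walks."""
    speeds = [distance_m / t for t in times_s]
    return sum(speeds) / len(speeds)

# Hypothetical measurements: a 10 m corridor, stopwatch times in seconds.
observations = [7.4, 8.1, 6.9, 7.8, 7.5]
v = mean_walking_speed(10.0, observations)
print(round(v, 2))  # 1.33 (m/s)
```

A value in this range could then be assigned as the preferred speed of the simulated agents.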
5. Simulation
The simulation was divided into two stages. First, a single classroom was simulated using the
methodology described in the previous section. Subsequently, four adjacent classrooms were used.
Figure 6: Project folder with scene, behavior and view XML files, as well as the graph file.
In the graph.txt file, the paths of the different agents were defined; Figure 7 shows some of the
paths defined in that file. It is worth mentioning that the darkest blue agent represents the teacher,
while the students are represented in light blue.
One of the files that requires the most configuration is the scene file, since agents and obstacles are declared there. As the number of agents increases, this file grows proportionally. Figures 7 and 8 show some sections of this file.
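Because the scene file grows with every agent, generating its agent entries with a short script can be less error-prone than hand-editing. The sketch below is only an approximation: the element and attribute names are assumptions and may not match Menge's actual scene-file schema, which should be checked against the framework's documentation [13].

```python
# Generating per-seat agent entries for a Menge-style scene file.
# The <Agent> element and p_x/p_y attribute names are assumed for
# illustration; consult the Menge documentation for the exact schema.

def classroom_agents(rows, cols, x0=0.0, y0=0.0, spacing=1.2):
    """Generate one <Agent> line per seat in a rows x cols grid."""
    lines = []
    for r in range(rows):
        for c in range(cols):
            x = x0 + c * spacing
            y = y0 + r * spacing
            lines.append('  <Agent p_x="%.2f" p_y="%.2f" />' % (x, y))
    return "\n".join(lines)

# A 5 x 6 classroom: 30 student agents.
xml = classroom_agents(5, 6)
print(xml.count("<Agent"))  # 30
```

The generated lines can be pasted into the scene XML alongside the obstacle declarations.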
To run the simulation, we must execute menge.exe, passing as a parameter the project we want to run (the project's XML file), which in this case is Salon5.xml. Figure 9 shows an example of how to run the simulation. Figure 10 shows the simulation in its initial, intermediate, and final stages.
References
[1] Chiappetta Jabbour, C.J., Sobreiro, V.A., Lopes de Sousa Jabbour, A.B., de Souza Campos, L.M., Mariano, E.B. and
Renwick, D.W.S. 2017. An analysis of the literature on humanitarian logistics and supply chain management: paving
the way for future studies. Ann. Oper. Res., pp. 1–19.
[2] Instituto Municipal de Investigación y Planeación. Atlas de riesgos naturales y atlas de riesgos antropogénicos. Ciudad
Juárez, Chihuahua. 2016. [Online]. Available: https://www.imip.org.mx/atlasderiesgos/.
[3] ITCJ - Nosotros. 2019. [Online]. Available: http://www.itcj.edu.mx/nosotros.
[4] Google Maps. 2019. [Online]. Available: https://www.google.com/maps/place/Instituto+Tecnológico+de+Ciudad+Ju
árez/@31.7211545,-106.4251575,1122m/data=!3m1!1e3!4m5!3m4!1s0x86e75dc249fd3e4b:0x58a769357165487b!8
m2!3d31.7213256!4d-106.4238612.
[5] Aguilar, F. 2017. ‘Liebres’, de fiesta, El Diario de Juárez.
[6] van der Laan, E., van Dalen, J., Rohrmoser, M. and Simpson, R. 2016. Demand forecasting and order planning for
humanitarian logistics: An empirical assessment. J. Oper. Manag., 45: 114–122.
[7] Özdamar, L. and Ertem, M.A. 2015. Models, solutions and enabling technologies in humanitarian logistics. Eur. J.
Oper. Res., 244(1): 55–65.
[8] Souza, J.C. and de C. Brombilla, D. 2014. Humanitarian logistics principles for emergency evacuation of places with many people. Procedia - Soc. Behav. Sci., 162: 24–33.
[9] Van Toll, W., Jaklin, N. and Geraerts, R. 2015. Towards Believable Crowds: A Generic Multi-Level Framework for
Agent Navigation. Ict.Open.
[10] Ochoa, A., Rudomin, I., Vargas-Solar, G., Espinosa-Oviedo, J.A., Pérez, H. and Zechinelli-Martini, J.L. 2017.
Humanitarian logistics and cultural diversity within crowd simulation. Comput. y Sist., 21(1): 7–21.
[11] Simonov, A., Lebin, A., Shcherbak, B., Zagarskikh, A. and Karsakov, A. 2018. Multi-agent crowd simulation on large
areas with utility-based behavior models: Sochi Olympic Park Station use case. Procedia Comput. Sci., 136: 453–462.
[12] Curtis, S., Best, A. and Manocha, D. 2016. Menge: A modular framework for simulating crowd movement. Collect.
Dyn., 1: 1–40.
[13] Curtis, S., Best, A. and Manocha, D. 2013. MENGE.
[14] Ochoa Zezzatti, A., Contreras-Masse, R. and Mejia, J. 2019. Innovative Data Visualization of Collisions in a Human Stampede Occurred in a Religious Event using Multiagent Systems, pp. 62–67.
CHAPTER-12
1. Introduction
The development of technology in public management will be the first step toward the transformation of large cities: the use of big data, industrial-internet technologies in the cloud, new accounting tools, budget management, and more. The transformation toward "Smart Cities" in countries such as the Russian Federation, the People's Republic of China, and Mexico, as well as two emerging African powers, is a first-level priority. These countries are usually committed to developing innovations in artificial intelligence, mass data processing, intranets, and computer security. For this reason, great legislative efforts are being made to prioritize laws whose main objective is the inclusion of a digital economy through the use of technologies, since digitalization will reach practically everything: trade, infrastructure, urban development, public transport, payment of taxes, etc.
In the case of the Russian Federation, cities in full industrial development have an advantage over larger metropolitan areas such as Moscow and St. Petersburg. Although these cities are large metropolises, their congestion and limited growth space could hinder the adoption of new state management systems, for example new public transport systems and urban development, compared to emerging cities such as Kazan, Ekaterinburg, Rostov-on-Don, or Sochi, the best-planned city in the federation. Introducing "intelligent transport" into the transport systems of these cities and future metropolises will make it possible to minimize potential problems well in advance. In the particular case of Kazan, in the Republic of Tatarstan, the creation of a public transport development model is contemplated, bringing together public and private specialists from the construction, insurance, civil protection, transport and communications, and automotive sectors. As for improving the quality of transport and road services, one vital point cannot be forgotten: road safety. Road accidents in these big cities increase year by year, mainly due to an imbalance between their infrastructure and the needs of citizens and the state; strong investment is needed in the construction of new avenues, the maintenance of existing ones, pedestrian crossings, and ring roads around metropolitan areas to avoid traffic congestion [1]. In implementing such measures, a special role is played by the introduction of technical means of regulation: electronic control systems, automation, telemetry, traffic control, and television to monitor roads over a large area or throughout the city. The construction of new communication and road transport centers is not enough in itself; the role of the intelligent component in the operation of street and road networks is increasing.
The concept of "Smart Cities" has, in recent years, ceased to be considered a fantasy in Mexico and Latin America, due to the rise of interest in this topic. Unlike the large metropolises of the Russian Federation mentioned above, the large cities in Mexico do have the vital space to develop even faster and increase the quality of life of their citizens. However, the challenge in Mexican cities is not the budget or government development plans; it is the absence of political projects and a marked lack of automation strategies and legislation. In Mexico City, the main problem is transportation, since mobility in a city of more than 20 million inhabitants, in addition to the people who commute every day from the State of Mexico, is an alarming priority, as can be seen in Figure 1.
Figure 1: Data based on the Cities in Motion Index portal of the Escuela de Negocios IESE [2].
Mexico City is ranked 118th in the world in the use of technology in state management, and it is clear that there has been little development in mobility and transport in step with the use of technology, and little legislation in this area. However, the combination of public and private initiatives is growing day by day, while the population spends an average of 45 days a year using transportation [3]. Querétaro, on the other hand, has had legislation since 2013 focused mainly on online tools: all public service information in the city will be managed and connected to the internet by 2021, covering services such as garbage collection; payment of electricity, gas, and water; transportation services; and traffic reports, while the industrial sector will promote sustainable development.
In these times of innovation, humanity has entered an urban era: never before in the history of mankind has half the population of the planet lived in cities. Life is more connected than ever; connectivity is no longer measured in distance but in data consumption, data that is exploited as big data, cloud storage, etc. Better information management improves the performance of government institutions, later enabling regional and municipal governance. There is much discussion about how, and how much, information should be collected from citizens. Intelligent cities are now an experiment in new public management to ensure the proper use of data, the quality of life of citizens, and their rights, seeking a rapprochement between citizen and state. Other possible risks derived from the management of information collected in the intelligent city are, on the one hand, the generation of information bubbles that, thanks to big data and algorithms, deform reality and only show us information according to our respective preferences and, on the other hand, the consolidation of the phenomenon of so-called "post-truth", which consists of the possibility of lying in public debate, especially on the Internet, without relevant consequences and without the supporters of those who have lied reacting even when the lie is judicially proven. "Post-truth", in short, is built on a "certain indifference to the facts" [4].
Perspectives of State Management in Smart Cities 157
In countries where corruption indexes are high, the misuse of citizens' information is one of the biggest challenges for the conception of smart cities. That is why emphasis is placed on the development of general public law, and why transparency systems must have legal tools over the data collection and storage sets opened by state administrations. Over time there will be a serious problem of duplication of data and many other issues associated with digitizing documents that are currently written on paper, which will complicate the digital transformation of state institutions. The participation of the general population and of organizations will be necessary to solve this problem: citizens will have to take responsibility for uploading their data legally if they want to be part of the process. For example, when a child is born, its parents are obliged to register it at the local civil registry office to obtain a birth certificate; the secretary of foreign affairs issues a passport, the institute of social security a certificate of insurance, and the institute of education a certificate of preschool education. At the time of the child's birth, all this information will be captured in state databases in a digital "cloud" where citizens' data will be stored. The interaction between citizens and the state will change forever: the state will no longer be limited to providing basic services but will also manage all these data and more complex, yet more dynamic and faster, life situations through tools and algorithms developed with high quality. The direction and priority of the development of these tools is focused directly on improving bureaucratic, economic, and social processes and the quality of life in cities, with all state information managed from a centralized system whose objective is to improve governance, also giving rise to new electronic public services and forming an intelligent system of "self-government". Not only is the use of citizen data proposed in "intelligent state management"; the systems will also contain information on population, territories, properties, urban planning, social development, public space, budgeting, etc. This will mean a considerable increase in state income in a faster and more reliable way. For this reason, the conceptual diagram in Figure 2 is proposed, which explains in a general way the potential of digitalization in the management of public information in the framework of state management.
The evolution of state management raises many questions about the automation of processes. As is well known, the fear of the disappearance of traditional jobs is not welcomed by anybody; yet one should think not of the disappearance of state jobs but of their evolution. Will jobs disappear? Of course they will, but at the same time new ones will arise, as has always happened throughout history. Rational management of resources and automation in public management will seek to eliminate excessive costs, diversion of public resources, duplication of jobs, money laundering, etc. It will free up personnel resources that can be used to improve other services and make the bureaucratic system more efficient. Constant monitoring, 24 hours a day, 7 days a week, throughout the year, will provide an audit of resources to detect corrupt processes, identifying possible irregularities thanks to new algorithms applied when creating public contracts, state concessions, and personnel recruitment, in order to avoid conflicts of interest.
the concentration of larger population centers and improving public services in rural areas, together with urban planning concerns. This will promote cooperation among different levels of government and the participation of civil society in the organization of the city, taking care of the economic and environmental order through the construction of buildings and urban settlements near major centers, to generate jobs and avoid unnecessary expenses. Whenever an industrial, housing, or other construction project is undertaken, it is important to take into account the environmental factor of the context in which it is situated, so as to produce a more sustainable building; in this way it is possible to plan, organize, and use the resources of each space and time.
top-down promotion, so it is very important that the state be the first to make use of the new technological tools available, since the main beneficiaries will be the citizens, who in turn will provide large corporations with better-qualified human capital able to understand and make optimal use of new technologies regardless of age. The objective of the digitalization of government affairs is to accelerate the intelligent transformation of government. The requirements for building digital government in different places can be very diverse, so companies must provide customized solutions in view of the country's cultural diversity: the technology requirements in the country's major metropolises will not be the same as those in rural areas or in small or developing cities in the west of the nation. Barriers to entry in the field of public safety have been lifted.
The automotive industry, dominated by driverless technology, will mark the beginning of innovation in the industrial chain. The production, distribution, and sales models of traditional automotive companies will be replaced by emerging business models, and the boundaries between emerging driverless-technology companies and traditional automotive companies will be broken. With the rise of the car-sharing concept, driverless carpools will replace the traditional concept of private cars. With the development of specifications and standards for the unmanned industry, emerging industries such as safer and faster cars will be able to address two of the most serious problems of large cities in China and the world: traffic congestion caused by an excessive vehicle fleet, and the pollution those vehicles emit. This will significantly lower travel times and carbon dioxide rates in cities, in addition to reducing health problems caused by pollution, which in the end also represent a high cost to the state. For this reason, the potential of applying artificial intelligence to intelligent car manufacturing in large cities should not be underestimated. At present, the decrease in costs is greater than ever, and it is therefore possible to invest in this area as a guarantee of future success, even though high-quality data resources are not yet fully available or fully developed.
Through algorithms that allow devices to connect to an internet network, decision-support systems can be built to process large amounts of data for user support, along with control systems that also process data and allow management in real time, such as intelligent lighting for energy saving or a traffic-light network that keeps traffic flowing while producing real-time data. The development and use of intelligent vehicle traffic management is an obligatory aspect of Smart Cities, and it is not limited to vehicle data: it also uses data obtained from infrastructure connected to the internet. The most common sources are video cameras and different types of sensors (magnetic, infrared, radar, acoustic) and, of course, the devices traveling inside the circulating vehicles. Real-time simulation makes it possible to predict traffic at a given time, although accuracy will depend on the quality of the tools and their use; with simulators it is possible to learn and understand traffic in Smart Cities and to plan the maintenance of public roads, pedestrianization, intelligent traffic lights, etc. As previously mentioned, logistics companies will benefit and grow due to the demand for these intelligent systems. In the area of vehicle safety, there is the possibility of issuing fines in real time for violations of traffic laws, such as ignoring traffic signs or parking in prohibited places, detected by video-surveillance systems that identify the vehicle by recording its license plates, as well as dispatching emergency vehicles when needed in the event of a vehicle breakdown that could compromise the flow of traffic.
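As a toy illustration of the kind of real-time, sensor-driven control mentioned above (a sketch under simplifying assumptions, not a description of any deployed system), a queue-based traffic-light controller might grant the next green phase to the approach with the longest sensed vehicle queue:

```python
# Toy sketch of queue-based traffic-light control: the approach with the
# longest sensed vehicle queue receives the next green phase. Real
# systems use far richer sensor data and coordination between junctions;
# the approach names and service rate here are illustrative assumptions.

def next_green(queues):
    """queues maps approach name -> sensed queue length (vehicles)."""
    return max(queues, key=queues.get)

def serve(queues, rate=4):
    """Let up to `rate` vehicles through on the chosen approach."""
    g = next_green(queues)
    queues[g] = max(0, queues[g] - rate)
    return g, queues

state = {"north": 7, "south": 2, "east": 9, "west": 1}
green, state = serve(state)
print(green, state["east"])  # east 5
```

Repeating this decision every cycle, fed by the sensor types listed above, is the essence of the "manage in real time" loop.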
of financing than other industries. From a company perspective, firms with world-class equipment, financial strength, and technology genes are more likely to be favored by investors in the secondary market. From the industry perspective, new retail, driverless vehicles, medical care, and adaptive education, which are easy to bring to market, indicate more opportunities, so companies in these areas have more investment opportunities. The investment market has begun to favor companies with underlying new technologies: unlike the previous investment preference for applied artificial-intelligence companies, it has gradually started to focus on start-ups with underlying artificial-intelligence technologies. The underlying technology is more popular and, due to its high ceiling, these companies are more competitive in the market. The development of underlying artificial-intelligence technology in China continues to lag behind that of the United States; since the underlying technology is an important support for the development of artificial intelligence, investment in it will continue to grow as artificial intelligence in China develops further.
- The proportion of companies that have won rounds A and B remains the highest, and strategic investments have gradually increased. Currently, more than 1,300 AI companies across the country have received venture capital. Among them, the frequency of A-round investment has started to decrease gradually, although investors remain very enthusiastic about round A, currently the most frequent round of investment. Strategic investments started to explode in 2017: with the gradual maturity of the artificial-intelligence market segment, leading companies, mainly the Internet giants, have turned their attention to strategic investments that seek long-term cooperation and development. This also indicates that strategic cooperation between the artificial-intelligence industry and capital markets has started to increase. The giants are investing in artificial intelligence upstream and downstream of business-related industries. At the height of the development of artificial intelligence, Internet giants with a keen sense of opportunity have also initiated their strategic designs. Technology giants like Alibaba, Tencent, Baidu, and JD.com have invested in various sectors of artificial intelligence, supported by technology investment funds backed by the Ministry of Science and Technology, the National Science Holding of the Chinese Academy of Sciences, the Local Finance Bureau, and the Economic and Information Commission. In terms of fields, the projects in which investment institutions decide to invest all align with their future strategic industrial designs, and these investment projects also promote the implementation of national strategies for the development of artificial intelligence. For example, Alibaba's investment is mainly focused on security and basic components; representative companies that have won investment include Shangtang, MegTV, and Cambrian Technology. Tencent's investment focus is mainly in the areas of health, education, and intelligent cars; representative companies include Weilai Automobile and Carbon Cloud Smart. Baidu's investment focus is primarily in the areas of automotive, retail, and smart homes, while JD.com's is in automotive, finance, and smart homes. An example is the customer transformation and market strategy of the new retail platform Tmall.com, an online sales platform operated by the Alibaba Group: in the internet age, as traditional retail models struggle to find sustainability, artificial-intelligence technologies have been gaining popularity in the Chinese retail market. In addition to unmanned stores, new innovations such as unmanned delivery vehicles and artificial-intelligence customer support have also been launched or planned in China. The National Science Department, based on the Chinese Academy of Sciences system, is involved in artificial-intelligence technologies and applications such as chips, medical treatment, and education. With the transformation and integration of digitization in various industries, artificial intelligence will become a necessity for giants in many fields such as automotive, medical and health care, education, finance, and intelligent manufacturing.
References
[1] Mikhailova, N. Innovative forms and mechanisms of forming the concept of efficient municipal management. Bulletin of Volgograd State University, 3: 127–134.
[2] Cities in Motion Index, Escuela de Negocios IESE. 2018. [Online]. Available: https://citiesinmotion.iese.edu/indicecim/.
[3] Moovit Insights. 2019. Data and statistics on the use of public transport in Mexico City. [Online]. Available: https://moovitapp.com/insights/es/Moovit_Insights_%C3%8Dndice_de_Transporte_P%C3%BAblico-822.
[4] Rosado, J. and Diaz, R. 2017. Latin America facing the challenge of the Smart Cities. d+i desarrollando ideas, pp. 1–4.
[5] Li, K. 2019. Global Artificial Intelligence Industry Development. [Online]. Available: https://xueqiu.com/9508834377/137204731.
[6] How does artificial intelligence develop in various fields? 2018. [Online]. Available: https://www.ofweek.com/ai/2018-10/ART-201700-8470-30276953.html.
[7] China Artificial Intelligence Industry White Paper. 2019. [Online]. Available: https://www2.deloitte.com/cn/en/pages/technology-media-and-telecommunications/articles/global-ai-development-white-paper.html.
[8] Korshunova, E. 2017. Educational Potential of Smart City Management: Analysis of Civil Service Training Standards. Business Community Power.
[9] Toppeta, D. 2010. The Smart City Vision: How Innovation and ICT Can Build Smart, "Livable", Sustainable Cities. [Online]. Available: http://www.inta-aivn.org/images/cc/Urbanism/background%20documents/Toppeta_Report_005_2010.pdf.
PART III
This chapter explores the relationship between different routing policies for order picking and the features of the problem (describing both the warehouse layout and the orders). The results obtained by simulation show that some policies are especially sensitive to the presence of certain conditions that are likely to appear in real-world cases.
Moreover, the routing policies are represented, for the first time in the literature as far as we know, as structured algorithms. This contribution can facilitate their implementation because the features of the policies are modeled by formal mathematical structures, laying the foundations to standardize the way they operate.
1. Introduction
A warehouse is a fundamental part of a company, and its performance can impact the entire supply
chain [1].
Order picking is a problem that is present in all companies. It has received special focus from
research areas related to planning and logistics. This fact is a consequence of several studies that
identify order picking as the activity that demands more resources inside the warehouses, reaching
up to 55% of the operational cost of the entire warehouse [2].
This activity has a strong impact on production lines, so companies with complex warehouses
have areas dedicated to improving their product collection processes.
Universidad Autónoma de Ciudad Juárez, Av. Hermanos Escobar, Omega, 32410 Cd Juárez, Chihuahua, México.
* Corresponding author: gilberto.rivera@uacj.mx
There are optimization models to support the resolution of this problem by generating product-picking routes; however, since order picking is an NP-complete problem, solving these models at medium and large scales is not feasible due to the high computational cost. Thus, heuristics can be applied to obtain approximate solutions in real cases.
Although several studies in the literature (e.g., [3]) show that these procedures are far from finding solutions close to the optimal one, these heuristics are still applied to real problems due to their simplicity and the way they relax the problem, granting a good balance between the quality of the solution and the ease of implementation.
The picking routes are dependent on the structure of the warehouse and the properties of the
orders, so studies have stated [4] that the input elements and the performance of the routes obtained
are highly related. For example, a greater number of cross aisles facilitates movements inside the
warehouse, so that the distance of the route tends to decrease.
Throughout this chapter, we define the algorithms for five of these heuristics and study which of them are more sensitive to the characteristics describing the layout of a warehouse.
2. Background
Warehouses are an important part of a factory's supply chain. The main activities inside a warehouse are reception (receiving and collecting all product data), storage (moving products to their locations), picking (picking products from their storage locations), packing (preparing products to be transported), and shipping (placing products in the transport medium). With this last step, the warehouse operation ends.
It is necessary to start from a solution and, by applying operators, calculate solutions better than the initial one. Normally, this strategy is applied to NP-hard problems, where heuristic functions are used to discard non-promising routes [6].
The problem addressed in this project is the SPRP (Single-Picker Routing Problem), which consists of finding the shortest path that includes all the products to pick up [7]. Since the objective is minimizing the distance and travel time of the picker, either human or machine, the SPRP can be treated as a special case of the TSP, and TSP techniques can be applied to solve it. The TSP consists of a salesman and a set of cities: the salesman must visit each of the cities, starting from a specific location (for example, the native city), and come back to the same city. The challenge of this problem is that the salesman wishes to minimize the total duration of the trip.
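To make the cost of exact solving concrete, here is a brute-force TSP sketch (not from the chapter; the distance matrix and function names are illustrative). It enumerates all (n−1)! tours from the depot, which is feasible only for tiny instances:

```python
from itertools import permutations

def tour_length(tour, dist):
    """Total length of a closed tour that starts and ends at tour[0]."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def brute_force_tsp(dist):
    """Try every ordering of the remaining cities; O(n!) time."""
    n = len(dist)
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length([0] + list(p), dist))
    best_tour = [0] + list(best)
    return best_tour, tour_length(best_tour, dist)

# Hypothetical 4-city symmetric distance matrix; city 0 is the depot
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
tour, length = brute_force_tsp(D)
```

Already at a few dozen picking locations, this enumeration becomes hopeless, which is why the routing heuristics of Section 3 are used in practice.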
SPRP could be modeled as TSP, where the vertices of the correspondent graph are defined
by the location of the available products inside of the warehouse and the location of the depot, as
presented in Figure 2.
This graph shows all the vertices and not only the picking ones, so the SPRP was modelled as a
Steiner TSP (which is a variant of the classical TSP) that is defined as follows:
Let G = (V, E) be a graph with a set of vertices V and a set of edges E. Let P be a subset of V. The elements of V \ P are called Steiner points. On a Steiner route, each vertex of P must be visited at least once, while Steiner points need not be visited at all. However, a Steiner route may travel through some vertices and edges more than once. In conclusion, the Steiner TSP consists of finding a Steiner route with the minimum distance [8].
Figure 3 shows an example of a warehouse layout with different parameters and variables.
In Figure 4, the black-filled vertices are the picking vertices and the initial vertex, also known as the depot. This set of vertices is the set P, a subset of all the vertices V. This subset forms a Steiner graph, and we call the vertices formed at the intersections of the cross aisles and the picking aisles Steiner points.
Once the graph is obtained, the objective is to find a Hamiltonian circuit with the minimum
cost. The initial and finish point of this circuit will always be the depot.
Also, it is important to know that there are six different ways to travel through a picking aisle [9].
Figure 5 describes each one over one example of a unique block, one front cross aisle, and one
rear cross aisle.
On the Order Picking Policies in Warehouses: Algorithms and their Behavior 169
1. The picker enters by the front cross aisle, traverses the aisle completely, picks up all required products, and leaves the aisle by the rear cross aisle.
2. The picker enters by the rear cross aisle, traverses the aisle completely, picks up all required products, and leaves the aisle by the front cross aisle.
3. The picker enters and leaves the aisle twice: once through the front cross aisle and once through the rear cross aisle, entering and leaving by the same place each time. The return point is defined by the largest gap, which is the largest distance between two adjacent picking vertices or between a picking vertex and a cross aisle.
4. The picker enters through the front cross aisle, and the return point is the picking vertex farthest from the front aisle.
5. The picker enters through the rear cross aisle, and the return point is the picking vertex farthest from the rear aisle.
6. The picker does not need to travel through the aisle because there are no picking vertices inside.
These ways to travel are combined, generating different routing policies, which are highly
popular in practice.
3. Routing Policies
The routing policies determine the collection sequence of the SKUs (stock-keeping units) [10]. The objective of the routing policies is to minimize the distance traveled by the picker using simple heuristics [3].
To achieve this, it is necessary to consider the following features of the warehouse layout and
the product orders, which can influence the final performance of each policy: quantity of products in
the order, picker capacity, aisle length, and the number of aisles.
Five of these heuristics are described below.
3.1 S-Shape
The picker must start by entirely crossing the aisle (with at least one product) that is at the left or
right end (depending on which is the closest one to the depot) until reaching the rear cross aisle of
the warehouse. Then, the sub-aisles that belong to the farthest block of the depot are visited one by
one until they end up at the opposite end of the warehouse. The only case where it is not necessary
to cross a sub-aisle completely is when it is the last one in the block. In this case, after picking up the
last product, the picker returns to the cross aisle from where it entered the sub-aisle. When changing
blocks, the picker visits the closest sub-aisle to the last visited sub-aisle of the previous block. After
picking up all the products, the picker must return to the depot [11]. Figure 6 shows an example.
3.3 Midpoint
This routing policy is similar to Largest Gap; the main difference is that the picker identifies the product to pick closest to the center of each sub-aisle, which is considered the travel limit [13]. At first, the products in the upper half of each sub-aisle are picked up from the rear cross aisle; then, after picking up all the upper-half products of the entire block, the picker continues picking up the remaining products from the front cross aisle. If a product is exactly in the center, the picker takes it from either of the two cross aisles. In the end, the picker must return to the depot. An example is represented in Figure 8.
3.4 Return
When applying this routing policy, the picker enters and leaves the sub-aisle from the same cross
aisle; this means that, after picking up the last product of the sub-aisle, the picker must return to the
cross aisle [14].
In the case that the warehouse configuration contains more than one block, the picker visits all
the sub-aisles of the two adjacent blocks to the cross aisle alternately. After that, the picker moves
to the next cross aisle that is adjacent to two unexplored blocks. The picker must return to the depot
once all the products have been picked. This route is shown in Figure 9.
3.5 Combined
This routing policy is considered a combination of the S-Shape and Largest Gap policies. After picking up all the products in a sub-aisle, the picker must decide between (1) continuing through the front cross aisle, or (2) returning to the rear cross aisle [14]. This decision is made according to the shortest distance to the next product to pick up. An example of this route is shown in Figure 10.
4.1 S-Shape
A fundamental part of the implementation of this heuristic is to define the order in which the picker will visit the sub-aisles; an example of the correct order according to its characteristics is shown in Figure 11.
Also, to help obtain the order, it is necessary to assign “auxiliary coordinates” to each sub-aisle.
Figure 12 represents an example.
In the example of Figure 12, xmax = 4 and ymax = 3.
The following equation returns the picking order of each sub-aisle, given its x and y coordinates:

f(x, y) = y                                                     if x = 0,
          ymax + (xmax − 1)(ymax − (y + 1)) + x − 1             if (ymax − (y + 1)) mod 2 = 0,   (1)
          ymax + (xmax − 1)(ymax − (y + 1)) + (xmax − (x + 1))  otherwise.
where:
ymax is the quantity of sub-aisles per aisle,
xmax is the quantity of sub-aisles per block,
x is the current picking aisle, and
y is the current block.
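Equation (1) can be sketched directly in Python (the function name and the example dimensions xmax = 4, ymax = 3 are illustrative, matching the Figure 12 example):

```python
def s_shape_order(x, y, x_max, y_max):
    """Visit order of the sub-aisle at picking aisle x, block y (Eq. 1)."""
    if x == 0:
        return y
    if (y_max - (y + 1)) % 2 == 0:
        return y_max + (x_max - 1) * (y_max - (y + 1)) + x - 1
    return y_max + (x_max - 1) * (y_max - (y + 1)) + (x_max - (x + 1))

# With x_max = 4 picking aisles and y_max = 3 blocks, the function assigns
# each of the 12 sub-aisles a unique position in 0..11.
orders = sorted(s_shape_order(x, y, 4, 3) for x in range(4) for y in range(3))
```

A quick check that the twelve sub-aisles receive twelve distinct positions confirms the equation defines a valid visit permutation.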
Once the order is defined, the next step is to obtain the direction in which the picker will pick
products from each sub-aisle.
Algorithm 1 describes the procedure used to obtain the final path.
Algorithm 1. S Shape
Input: Sub-aisles s with at least one product to pick up, visit order of sub-aisles
Output: Final path C
1 Begin
2 While there is unexplored s with product Do
3 Select next s according to the visit order
4 If s is part of the first picking aisle
5 Add products in s in ascending order to C
6 Else if s-1 was explored in an ascending direction
7 Add products in s in descending order to C
8 Else if s-1 was explored in a descending direction
9 Add products in s in ascending order to C
10 If the current block was explored completely
11 Add products in s in descending order to C
12 While end
13 Return C
14 End
In Algorithm 1, all the sub-aisles that are part of the first picking aisle are explored and added to the final path in ascending order (lines 4–5); then, the direction alternates between sub-aisles (lines 6–9) until the picker has visited the block completely; in that case, the first sub-aisle the picker visits in the new block is always traversed in a descending direction (lines 10–11).
4.2 Largest Gap
The order in which the picker visits the sub-aisles only depends on whether the sub-aisle is on the first picking aisle; in any other case, the blocks are explored from left to right. The following equation represents this:

f(x, y) = y                                          if x = 0,
          ymax + (xmax − 1)(ymax − (y + 1)) + x − 1  otherwise.   (2)
Once the order in which the picker will visit the sub-blocks is defined, the generation of the final
route begins. This method is described in Algorithm 2.
Algorithm 2. Largest Gap
Input: Sub-aisles s with at least one product to pick up, visit order of sub-aisles
Output: Final path C
1 Begin
2 While there is unexplored s with product Do
3 Select next s according to the visit order
4 If s is part of the first picking aisle
5 Add products to C in an ascending direction
6 Else
7 Evaluate elements in s
8 Calculate the distance between the current element and the next one
9 If the current distance is higher than the current limit
10 The new limit is the current element.
11 Traverse elements in s
12 If the current element is higher than the limit
13 Add the current element to C
14 Else
15 Add the current element to pending
16 If the current block was explored completely
17 Add elements in pending to C.
18 While end
19 Return C
20 End
Here, the picker traverses the sub-aisles in the first picking aisle in an ascending direction, adding the elements to the final path (lines 4–5). Then, the largest gap of each sub-aisle is calculated from the distances between the elements it contains; the new limit is defined by the element where the largest travel distance is detected (lines 8–10). The next step is to add the elements that are above this limit to the final path (lines 11–13), while the elements that are below are stored in a stack (lines 14–15); once all the elements of the block that are above this limit have been added to the final path, lines 16–17 insert the pending elements in LIFO (Last In, First Out) order.
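The largest-gap rule (lines 8–10 of Algorithm 2) can be sketched as follows; positions, function names, and the example sub-aisle are illustrative assumptions, with the front cross aisle at position 0 and the rear cross aisle at the aisle length:

```python
def largest_gap_limit(picks, aisle_length):
    """Return the pair of positions bounding the largest gap in a sub-aisle.

    `picks` are the positions of required products inside the sub-aisle.
    The picker never crosses the largest gap, serving the lower items from
    the front cross aisle and the upper ones from the rear cross aisle.
    """
    points = [0.0] + sorted(picks) + [float(aisle_length)]
    gaps = [(points[i + 1] - points[i], points[i], points[i + 1])
            for i in range(len(points) - 1)]
    width, lo, hi = max(gaps)
    return lo, hi

# Picks at 1, 2 and 7 in a 10-unit sub-aisle: the largest gap lies between
# positions 2 and 7, so items 1 and 2 are served from the front cross aisle
# and item 7 from the rear one.
lo, hi = largest_gap_limit([7, 2, 1], 10)
```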
4.3 Midpoint
This heuristic is similar to Largest Gap; they share the order in which the picker traverses the sub-aisles (Figure 13).
Hereunder is the algorithm designed for the generation of the final path for Midpoint:
Algorithm 3. Midpoint
Input: Sub-aisles s with at least one product to pick up, visit order of sub-aisles, locations per sub-block LS, locations
per aisle LA.
Output: Final path C
1 Start
2 While there is unexplored s with product Do
3 Select next s according to the visit order
4 If s is part of the first picking aisle
5 Add products to C in an ascending direction
6 Define the Midpoint value of the current block: LA − (LS/2)
7 Traverse elements on s
8 If the current element is higher or equal to Midpoint
9 Add element to C
10 Else
11 Add element to pending
12 If the current block was explored completely
13 Add elements on pending to C
14 Update Midpoint value of the next block: Midpoint-LS
15 While end
16 Return C
17 End
Here, the sub-aisles in the first picking aisle are traversed in an ascending direction while adding all their elements to the final path (lines 4–5). Afterward, the midpoint is obtained and taken as a limit (line 6), starting from the block farthest from the depot; this value is obtained as follows:

Mp = LA − LS/2   (3)
Where:
Mp is the midpoint,
LA represents the locations per picking aisle, and
LS represents the locations per sub-aisle.
This value is defined as a limit until there is a block change (lines 12 and 14), where it must be
updated as follows:
Mp = Mp – LS (4)
If an element is above the midpoint, it is added directly to the final path, and the rest are stored as pending elements (lines 8–11). The elements of the last sub-aisle of the block are added completely in a descending direction; the picker then passes to the lower cross aisle and starts adding the pending elements to the final path (line 13).
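The midpoint split of equations (3)–(4) can be sketched as follows (hypothetical names and example values; locations are indexed within the picking aisle):

```python
def midpoint_split(picks, la, ls):
    """Split the picks of the farthest block using Mp = LA - LS/2 (Eqs. 3-4).

    `la` is the number of locations per picking aisle, `ls` per sub-aisle.
    Elements at or above the midpoint are served from the rear cross aisle;
    the rest remain pending and are served from the front cross aisle.
    """
    mp = la - ls / 2
    served_from_rear = [p for p in picks if p >= mp]
    pending = [p for p in picks if p < mp]
    return mp, served_from_rear, pending

# 20 locations per aisle and 10 per sub-aisle: the midpoint of the farthest
# block is 20 - 10/2 = 15; for the next block it drops by LS (Eq. 4).
mp, upper, lower = midpoint_split([11, 16, 19], la=20, ls=10)
```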
4.4 Return
The first step in the implementation of this routing policy is to define the order in which the picker
will visit the sub-aisles; Figure 14 shows an example of the correct order according to the previously
described properties.
g(x, y) = y          if x = 0,
          h1(x, y)   if (ymax − (y + 1)) mod 4 < 2,   (5)
          h2(x, y)   otherwise.
Algorithm 4. Return
Input: Sub-aisles s with at least one product to pick up, visit order of sub-aisles, locations per sub-block LS, locations
per aisle LA.
Output: Final path C
1 Start
2 While there is unexplored s with product Do
3 Select next s according to the visit order
4 If s is part of the first picking aisle
5 Add products to C in an ascending direction
6 If the quantity of the blocks is an even number
7 If s is part of a block with an even coordinate
8 Add elements in s to C in an ascending direction
9 Else
10 Add elements in s to C in a descending direction
11 Else
12 If s is part of a block with an even coordinate
13 Add elements in s to C in a descending direction
14 Else
15 Add elements in s to C in an ascending direction
16 If s is part of the block on y=0
17 Add elements on s to C in an ascending direction
18 If the current block was explored completely
19 Add elements in s+1 to C in an ascending direction
20 While end
21 Return C
22 End
The elements found in the first picking aisle are added to the final path in an ascending direction
(lines 4–5). Subsequently, it is important to define whether the number of blocks in the warehouse is
even or odd (line 6). This matters because the routing policy states that the picker must alternate
between the sub-aisles of two blocks; this implies that, in cases where the warehouse configuration
has an odd number of blocks, the sub-aisles belonging to the last block to explore (the one closest to
the depot) must be explored continuously, without alternating (line 16). When the total number of
blocks is even, the sub-aisles belonging to even-coordinate blocks are traversed in an ascending
direction, and those belonging to odd-coordinate blocks in a descending direction (lines 6–10). In
the opposite case (warehouses with an odd number of blocks), the sub-aisles belonging to odd-coordinate
blocks are traversed in an ascending direction, and those belonging to even-coordinate blocks in a
descending direction (lines 11–15). On the last block to be explored, all sub-aisles are visited
ascendingly (line 16). In both cases, at the end of every block, the elements of the first sub-aisle of
the next block (lines 18–19) are added to the final path in a descending direction.
4.5 Combined
The order in which the picker traverses the sub-aisles is similar to S-Shape (represented in Figure 11), but there are cases where it can vary because of the characteristics of this routing policy. When picking up the last element of every block, it is important to define which sub-aisle of the next block with product is at the smaller distance: if this sub-aisle is on the left end, the block is traversed from left to right; otherwise (if the sub-aisle with the nearest product is the rightmost one), the opposite direction is taken. The following equation represents this behavior:
f(x, y) = y                                                     if x = 0,
          ymax + (xmax − 1)(ymax − (y + 1)) + x − 1             if d1 < d2,   (8)
          ymax + (xmax − 1)(ymax − (y + 1)) + (xmax − (x + 1))  otherwise.
Here, d1 is the distance between the last element of the block and the sub-aisle with the leftmost product, and d2 is the distance between the last element of the block and the sub-aisle with the product in the rightmost location. Once the order in which the picker will visit the sub-aisles is defined, the final route is generated as presented in Algorithm 5.
Algorithm 5. Combined
Input: Sub-aisles s with at least one product to pick up, visit order of sub-aisles.
Output: Final path C
1 Start
2 While there is unexplored s with product Do
3 Select next s according to the visit order
4 If s is part of the first picking aisle
6 Add elements on s to C in an ascending direction
7 Capture the last element on s
8 Calculate d1: the difference between the last element on s and the first one on s+1 from the rear cross aisle.
9 Calculate d2: the difference between the last element on s and the first one on s+1 from the front cross aisle.
10 If d2 is greater than d1
11 Add elements on s to C in an ascending direction.
12 Else
13 Add elements on s to C in a descending direction.
14 If the current block was completely explored
15 Add elements in s+1 to C in an ascending direction
16 While end
17 Return C
18 End
Here, the elements that belong to the first picking aisle are added to the final path in an ascending direction (lines 4–5). From this point, the distance from the last element of each sub-aisle to the first element of the next sub-aisle, accessed from the front and from the rear cross aisle, must be evaluated (lines 7–9). If the distance to the first element from the rear cross aisle is longer, the elements of the next sub-aisle are added to the final path in a descending direction; in the opposite case, they are added in an ascending direction (lines 10–13).
The total distance of a candidate route is evaluated over the resulting circuit, where:
C = <x1, x2, x3, …, xp> is the sequence of elements to evaluate,
p is the number of vertices that form the circuit, and
D is the distance matrix.
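Given these definitions, a route's total length can be evaluated as follows (a minimal sketch; the helper name and the sample matrix are assumptions):

```python
def circuit_distance(C, D):
    """Length of the closed circuit C = <x1, ..., xp> under distance matrix D."""
    p = len(C)
    return sum(D[C[i]][C[(i + 1) % p]] for i in range(p))

# Hypothetical 3-vertex circuit over a symmetric distance matrix
D = [[0, 5, 3],
     [5, 0, 4],
     [3, 4, 0]]
total = circuit_distance([0, 1, 2], D)
```

The modulo index closes the circuit, adding the edge from the last vertex back to the depot.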
5. Experimental Results
In this section, the results obtained by this project are shown and interpreted.
A total of 125 different warehouse layouts were processed and defined according to the
combination of the following values:
5.1 Insights
Let us remember that, as the correlation gets closer to 1 or –1, the relationship is stronger. Because this is a minimization problem, a correlation with the performance measure is better if it is negative. So, the main insights are:
• S Shape tends to be sensitive to the number of locations by sub-aisles (negative correlation
of 0.41824) and the number of products in the warehouse (negative correlation of 0.24840)
(Figures 15a and 16a).
• For Largest Gap, the correlation coefficient that stands out is the number of picking aisles, with a positive value of 0.37988 (Figures 15b and 16b). Figure 15b shows the tendency behind this result: the efficiency of this policy relative to the others decreases as this value grows.
• For Midpoint, the two most relevant variables are the number of aisles and the number of products to pick up, with positive correlations of 0.33772 and 0.26812, respectively (Figures 15c and 16c).
• The variable with the most effect over the performance of Return is the number of locations
by aisle, where it gets a positive correlation of 0.58660. The more locations by aisle, the more
competitiveness Return obtains. Figures 15d and 16d show that this policy gets better results as
the number of aisles increases.
• Regarding the results of Combined, the number of locations by sub-aisle seems to be an important feature, with a negative coefficient of 0.48762, considerably higher in magnitude than the other variables (Figures 15e and 16e).
• Combined has better results in warehouses where the number of locations by aisle is the greatest
variable, while the most degraded is Return.
• A high number of aisles tends to affect the performance of all five policies, but the policy with the most unfavorable results is Largest Gap; S-Shape and Combined are the least affected.
• In the case of cross aisles, there was no improvement in the performance of any studied policy. The policies whose effectiveness decreases the most are Return and Combined, while S-Shape is only slightly affected.
• S-Shape is the most benefited policy in warehouses where the number of products to be picked
up increases.
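The coefficients discussed above are plain Pearson correlations; a minimal sketch with toy data (the samples are hypothetical, not the chapter's results):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: a layout feature perfectly anti-correlated with route distance,
# i.e., the "good" sign for a minimization problem
r = pearson([1, 2, 3, 4], [8, 6, 4, 2])
```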
To develop this project, it was necessary to process orders, obtain five different results, and compare them across different methods. A benchmark of instances was synthetically created, and the performance under this wide range of conditions was measured.
Figure 15: (a) S-Shape, (b) Largest Gap, (c) Midpoint, (d) Return, (e) Combined.
The main purpose of this project is to gain knowledge on these policies and to reduce traveled distances in order-picking processes in warehouses, offering an encouraging panorama for the construction of more complex routing policies.
Figure 16: Pearson correlation coefficient scatter plot matrix: (a) S-Shape, (b) Largest Gap, (c) Midpoint, (d) Return, (e) Combined.
References
[1] Ochoa Ortiz-Zezzatti, A., Rivera, G., Gómez-Santillán, C., Sánchez–Lara., B. Handbook of Research on Metaheuristics
for Order Picking Optimization in Warehouses to Smart Cities. Hershey, PA: IGI Global, 2019. doi.org/10.4018/978-1-
5225-8131-4.
[2] Tompkins, J.A., White, J.A., Bozer, Y.A. and Tanchoco, J.M.A. 2010. Facilities planning. New York, John Wiley &
Sons.
[3] Petersen, C.G. and Aase, G. 2004. A comparison of picking, storage, and routing policies in manual order picking. Int.
J. Production Economics, 92: 11–19.
[4] De Koster, R., Le-Duc, T. and Roodbergen, K. 2007. Design and control of warehouse order picking: A literature
review. European Journal of Operational Research, 182: 481–501.
[5] Theys, C., Bräysy, O., Dullaert, W. and Raa, B. 2010. Using a TSP heuristic for routing order pickers in warehouses.
European Journal of Operational Research, 200(3): 755–763.
[6] Pansart, L., Nicolas, C. and Cambazard, H. 2018. Exact algorithms for the order picking problem. Computers and
Operations Research, 100: 117–127.
[7] Scholz, A. 2016. An exact solution approach to the single-picker routing problem in warehouses with an arbitrary
block layout. Working Paper Series, 6.
[8] Henn, S., Scholz, A., Stuhlmann, M. and Wäscher, G. 2015. A new mathematical programming formulation for the
single-picker routing problem in a single-block layout, 5: 1–32.
[9] Ratliff, H.D. and Rosenthal, A. 1983. Order-picking in a rectangular warehouse: a solvable case of the traveling
salesman problem. Operations Research, 31(3): 507–521.
[10] Gu, J., Goetschalckx, M. and McGinnis, L.F. 2007. Research on warehouse operation: A comprehensive review.
European Journal of Operational Research, 177(1): 1–21. doi.org/10.1016/j.ejor.2006.02.025.
[11] Hong, S. and Youngjoo, K. 2017. A route-selecting order batching model with the S-shape routes in a parallel-aisle
order picking system. European Journal of Operational Research, 257: 185–196.
[12] Cano, J.A., Correa-Espinal, A.A. and Gomez-Montoya, R.A. 2017. An evaluation of picking routing policies to improve
warehouse. International Journal of Industrial Engineering and Management, 8(4): 229–238.
CHAPTER 14
Color, Value, Price and type Koi Variant Species for the Aquaculture Industry
1. Introduction
A fish tank can be installed in various spaces: the living room of a home, a consulting room, a restaurant, an aquarium, or a hotel. There are more than 400 ornamental species with commercial relevance, such as zebrafish, angel, Japanese, molly, or sword. So, the possibilities of this agro-business, whose demand is growing in the Mexican market, are numerous. Commissioner Mario Aguilar Sánchez, during the closing of the First National Watercolor Expo in the Federal District [1], noted that 60 million organisms are produced each year, worth 4.5 billion MXN, from about 700 productive units. He affirmed that the national production is developed in 23 entities, where 160 species and varieties are cultivated, such as koi carp, guppy, molly, angelfish, platy, danio zebra, tetra, cichlid, betta, gurami, sword, nun, oscar, plecos, catfish, shark, sumatra, dragon and red seal.
The national production of ornamental fish is thus a business with prospects of social and economic growth.
However, according to various groups of breeders and authorities of the Federal Government, the great challenge for this segment to take off and generate wealth at the local level consists of strengthening the breeding, sale, and distribution of fish, since currently the demand can only be satisfied by importing animals in large volumes.
Among the existing varieties, one of the most popular is the so-called Goldfish or Japanese fish. The conservation and breeding of cold-water fish are not new concepts: since ancient times on the Asian continent, particularly in China, people have selected beautiful specimens. In this research, we focus on colorful koi carp, which were often introduced into small outdoor ponds or ceramic pots. These animals were not only raised for ornamental purposes but also had a practical purpose, since their conservation in captivity made it possible to eat fresh fish at any time without the difficulty of capture in the wild. As aquarium keeping (aquariofilia) boomed over time, it gave way to the selective breeding of specimens, producing a great variety of fish, both in colors and in certain peculiar characteristics of their phenotype.
1 Juarez City University.
2 Universidad Politécnica de Aguascalientes.
* Corresponding author: alberto.ochoa@uacj.mx
Color, Value, Price and type Koi Variant Species for the Aquaculture Industry 187
The state of Morelos is an ideal setting for the breeding of Japanese ornamental fish. Many of these businesses are family-run and small, and those who launch into this world of aquaculture often do so without any decision-making system that would allow them to follow a good path toward obtaining greater profits in the shortest possible time [2].
Other states could face several problems implementing fish breeding, even though it could benefit the economy of several places; this is why the cultivation of ornamental species has increased considerably in Mexico. One of the states where this economic activity is emerging is Chihuahua, which, despite not having the most appropriate weather conditions, does have the physical spaces for it. Considering the costs of implementing a tank in different states and the required technologies, this research aims to determine the ideal model to optimize the breeding and development processes of the different species of koi fish. Determining the ideal and optimal value of a koi fish tank is essential to specify the marginal gain of this type of project in aquaculture, but the adaptation of the tank depends on several factors, such as the size of the carps in the tank and their quantity.
Problem Statement
A project is a temporary effort, with a variety of resources, that seeks to satisfy several specific objectives in a given time. Innovation is the creation and use of new ideas that give value to the client or business. Proper planning of a Japanese fish breeding project depends on many factors, so detailed planning should be done, foreseeing risks that may arise. Technological Innovation Project Scheduling Problems (TI-PSP) are a variant of Project Scheduling Problems (PSP). PSP is a generic name given to a whole class of problems in which the best form, time, resources, and costs for the scheduling of projects are sought. The problem studied in the present investigation corresponds to a PSP because it involves variables of resource allocation to tasks and processes (computation).
determine. While in discriminant analysis, the groups are known and what we want to know is the
extent to which the available variables discriminate against these groups and can help us to classify
or assign the individuals to the given groups.
Observations in the same group are similar (in a sense) to each other and different from those in other groups. Clustering methods can be divided into two basic types: hierarchical and partitional clustering. Hierarchical clustering can be achieved with the agglomerative algorithm. This algorithm starts with n disjoint clusters (each object in its own group) and gradually merges the most similar objects or clusters into larger clusters.
Algorithm 1. Basic agglomerative hierarchical clustering algorithm
For decision-making, a cluster analysis using hierarchical clustering with the agglomerative algorithm was employed. The initial budget and the space available to mount the Japanese fish farm (m²) were used as input variables. The hierarchical clustering generates a dendrogram, a tree diagram frequently used to illustrate the arrangement of the clusters produced. A dendrogram shows the attribute distances between each pair of merged classes in a sequential fashion. To avoid crossing lines, the diagram is drawn so that the members of each pair of classes that merge are close elements. Dendrograms are often used in computational biology to illustrate the grouping of genes or samples, sometimes on top of heatmaps. After obtaining the dendrogram, a mathematical model is applied to each element of the dendrogram, revealing the optimal values for the implementation of the project according to the input values of budget and quantity of square meters.
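A minimal sketch of the agglomerative procedure described above, using single-linkage merging on hypothetical (budget, m²) pairs; the function and variable names are assumptions:

```python
def agglomerate(points, k):
    """Naive single-linkage agglomerative clustering down to k clusters.

    `points` are (budget, square_meters) pairs. Repeatedly merging the
    closest pair of clusters mirrors how the dendrogram is built bottom-up.
    """
    clusters = [[p] for p in points]

    def dist(a, b):
        # Single linkage: minimum Euclidean distance between members
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# Hypothetical input: two cheap/small farms and two expensive/large ones
farms = [(10000, 20), (12000, 25), (90000, 200), (95000, 210)]
groups = agglomerate(farms, k=2)
```

With these toy inputs, the two low-budget farms end up in one cluster and the two high-budget farms in the other, which is the grouping the dendrogram would expose.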
ANNs are mathematical models for representing biological neurons that were proposed in the 1950s; their applications cover several areas, including regression models [4]. The activation function is the hyperbolic tangent sigmoid for the hidden layers, and the output layer uses a linear activation function, which builds a good approximator for functions with finite discontinuities [7]. In this research, we train the neural network with Scaled Conjugate Gradient (SCG) backpropagation and the Mean Square Error (MSE) as the expectation function in equation (3). SCG backpropagation is a modified backpropagation, proposed by Møller in 1993, that calculates the gradient in specific conjugate directions, increasing convergence speed. Here n is the number of samples, t is the target, and y is the computed output of the FNNs.
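The MSE expectation function and the tanh-hidden/linear-output architecture mentioned above can be sketched as follows (a toy single-input network; the names and weights are illustrative, and SCG training itself is not shown):

```python
from math import tanh

def mse(targets, outputs):
    """Mean Square Error between targets t_i and network outputs y_i."""
    n = len(targets)
    return sum((t - y) ** 2 for t, y in zip(targets, outputs)) / n

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One hidden layer with tanh activation and a linear output unit."""
    hidden = [tanh(w * x + b) for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

error = mse([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])
```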
3. Mathematical Model
In this section, the mathematical model that was used to optimize the costs of the initial investments
for breeding goldfish is addressed. The model analyzes the main elements necessary to start a
business of Japanese fish farming.
Each investment project includes material resources that are divided into infrastructure
elements, equipment needed for cultivation, cost of young small fish (approximately 2 months old)
and cost of fish feed. Most of the costs were obtained from the website Mercado Libre [5], except
for costs for the construction of tanks [6].
The Objective Function corresponds to the budget necessary for the cultivation of Japanese fish
and is formulated as follows:
Rb = Af * Cf + Cff + I (1)
Color, Value, Price and type Koi Variant Species for the Aquaculture Industry 189
Acronym Concept
Ib Initial budget
Rb Real budget
Nml Number of meters long
Nmw Number of meters width
Nma Number of square meters available
Nms Number of square meters suggested by the model
Af Amount of fish to buy
Nft Number of fish per tank 3 m x 2 m x 0.5 m
Lf Quantity of liters needed by a fish of 10 or more centimeters (constant value)
Cf Cost of small fish (3-4 cm)
Cff Cost of food for all fish
I Infrastructure cost
Tc Tank cost of 3 m x 2 m x 0.70 m
Ce Cost per equipment
Nt Number of tanks
Mt Square meters of a tank (constant value 3m x 2m)
Here Af corresponds to the quantity of fish, Cff is the general cost of food for the fish, I corresponds to the estimated cost of infrastructure for the crop, and Cf is the cost per Japanese fish, whose average value is 10 MXN per fish.
Model Restrictions:
The budget Rb cannot exceed the initial budget, denoted by Ib:
Ib ≥ Rb (2)
Similarly, the number of square meters used Nms cannot exceed the number available Nma:
Nms ≤ Nma (3)
For this, first determine the amount of m² available, Nma:

Nft = 3000 / Lf (8)

where Nft is the number of fish to be placed per tank.
Tc = 2((l + a) * h) * 208.41 + ((2(l + a) * h) + l * a) * 125.61 (9)

Here Tc is the cost per tank, where l is the length, a the width and h the height of the tank; in this case l = 3 m, a = 2 m and h = 0.7 m (see Table 2).
Cff = (Af * 0.408 / 1.5) * 170 (10)
Table 2: Description of the elements for the construction of the tanks.

Concept | Cost (MXN per m²)
Annealed wall, 5.5 cm thick, common finish in flat areas | 208.41
Polished flat finish with wood float on walls, cement-sand mortar in 1:6 proportion, 2.0 cm thick, roughcast included | 125.61
Here Cff is the cost of food according to Af, the quantity of fish, over a period of 4 months, the period in which they must reach 10 or more centimeters (see Table 3). For this computation, [5,6] were used as references; in that research, the best results were obtained by feeding the goldfish twice a day, with the amount calculated as 2% of body mass.
Ce = Mph + Wt + Ctf + Wp (11)
Table 3: Food cost elements.

Concept | Amount
1.5 kg food package cost | 170 MXN
Amount of food per fish in 4 months | 0.408 kg
The following equation calculates the cost of infrastructure (I):
I = Ce + Tc * Nt (13)
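Equations (1), (8), (9), (10) and (13) combine into a single budget computation, sketched below. Since the text gives no numeric value for Lf (litres per fish) or for the equipment-cost components Mph, Wt, Ctf and Wp of equation (11), the `lf` and `ce` defaults here are assumptions for illustration only.

```python
import math

def real_budget(af, lf=3.0, ce=0.0, cf=10.0, l=3.0, a=2.0, h=0.7):
    """Rb of equation (1). lf (litres per fish, Lf) and ce (equipment
    cost Ce, the sum Mph + Wt + Ctf + Wp of eq. 11) are placeholders."""
    nft = 3000 / lf                      # eq. (8): fish per 3000 L tank
    nt = math.ceil(af / nft)             # tanks Nt needed for af fish
    tc = (2 * (l + a) * h) * 208.41 \
        + ((2 * (l + a) * h) + l * a) * 125.61   # eq. (9): tank cost Tc
    cff = (af * 0.408 / 1.5) * 170       # eq. (10): food cost Cff
    i = ce + tc * nt                     # eq. (13): infrastructure I
    return af * cf + cff + i             # eq. (1): Rb = Af*Cf + Cff + I

print(real_budget(1000))
```

The restriction of equation (2) is then simply the check `real_budget(af) <= ib` for a given initial budget `ib`.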
Model for determining the location of a koi carp to determine its size underwater
Based on the method described in [8], suppose an observer at the edge of a pool perceiving an object immersed in water at some distance and depth; the apparent position and depth are determined using the law of refraction.
The apparent position of the object seen by the observer lies in the direction of the refracted ray: the object is located at the origin at a given depth, but the observer perceives it at a depth associated with the incidence of the light on the water. A ray (red) starts from the object and forms an angle of incidence θi; the refracted ray forms an angle θr with the normal (Figure 1). According to the law of refraction, as visualized in Figure 1:
n sin θi = sin θr (14)
where n = 1.33 is the refractive coefficient of water and the angle of refraction θr is greater than the angle of incidence θi.
Figure 1: Apparent position of koi carp under water based on refraction incidence.
The direction of the refracted ray and its extension passes through the point (xs, h) with slope 1/tan θr, knowing that xs = h tan θi. The equation of this line is

y − h = (x − xs) / tan θr
From the object, a ray (blue) forms an angle of incidence θ'i, and its refracted ray forms an angle θ'r with the normal. The direction of this refracted ray and its extension passes through the point (x's, h) with slope 1/tan θ'r, knowing that x's = h tan θ'i. The equation of this line is

y − h = (x − x's) / tan θ'r
The extensions of the refracted rays intersect at the point

xa = h (tan θi tan θ'r − tan θr tan θ'i) / (tan θ'r − tan θr)

This is the apparent position (xa, ya) of the object as seen by an observer in the direction of the refracted beam, where θ'i = θi + δ and δ is a small angle increment.
We represent the apparent position (xa, ya) of an object located at the origin, for various angles
of incidence.
192 Innovative Applications in Smart Cities
The depth of the object is h = 1 m and the angle increment is δ = 0.01 degrees. The arrows indicate the direction of the refracted beam, that is, the direction of observation of the object, as can be seen in Figure 2.
Figure 2: Apparent positions of koi carp under water as observed position varies.
Figure 3: Perceived position of koi carp under water by an observer at the edge.
Knowing the position of the object (xb, yb) and that of the swimmer's eyes (0, y0), from the law of refraction, n sin θi = sin θr, we will calculate the position xs where the ray of light coming from the object is refracted, together with the angle of incidence θi and the angle of refraction θr. In the figure, we see that

tan θi = (xb − xs) / (−yb),  tan θr = xs / y0

Eliminating xs and using the law of refraction:

xb − y0 tan θr + yb sin θi / √(1 − sin²θi) = 0

xb − y0 sin θr / √(1 − sin²θr) + yb sin θr / √(n² − sin²θr) = 0
We solve this transcendental equation to calculate θr, and then xs and θi. The equation of the direction of the refracted beam and its extension is

y = −(x − xs) / tan θr

which, using xs = xb + yb tan θi, can be rewritten as

xb − x = y tan θr − yb tan θi
To determine the apparent position of the object, we need to draw one more ray with refraction angle θ'r = θr + δ (δ being a very small, infinitesimal angle) and find the intersection of the extensions of the two refracted rays, as shown exaggeratedly in the figure. The equations of the red and blue lines are, respectively:

xb − x = y tan θr − yb tan θi
xb − x = y tan θ'r − yb tan θ'i
We solve for the apparent position (xa, ya) as the point of intersection of the two lines:

xa = xb − yb (tan θ'i tan θr − tan θ'r tan θi) / (tan θ'r − tan θr)

ya = yb (tan θ'i − tan θi) / (tan θ'r − tan θr)
Using the law of refraction, tan θ'i = sin θ'r / √(n² − sin²θ'r); expanding to first order in δ, f(θr + δ) ≈ f(θr) + (df/dθr) δ, so that

tan(θr + δ) ≈ tan θr + δ / cos²θr

sin(θr + δ) / √(n² − sin²(θr + δ)) ≈ sin θr / √(n² − sin²θr) + [n² cos θr / (n² − sin²θr)^(3/2)] δ
Substituting these expansions into the expression for ya:

ya = yb { [sin θr/√(n² − sin²θr) + n² cos θr/(n² − sin²θr)^(3/2) δ] − sin θr/√(n² − sin²θr) } / { [tan θr + δ/cos²θr] − tan θr }

ya = yb n² cos³θr / (n² − sin²θr)^(3/2)
We calculate the abscissa xa in the same way:

(tan θ'i tan θr − tan θ'r tan θi) / (tan θ'r − tan θr)
= { [sin θr/√(n² − sin²θr) + n² cos θr/(n² − sin²θr)^(3/2) δ] tan θr − [tan θr + δ/cos²θr] sin θr/√(n² − sin²θr) } / { [tan θr + δ/cos²θr] − tan θr }
= −(n² − 1) sin³θr / (n² − sin²θr)^(3/2)

so that

xa = xb + yb (n² − 1) sin³θr / (n² − sin²θr)^(3/2)
We now calculate the apparent position of a submerged object for koi carp at diverse distances within the water tank, as in Figure 4. Let y0 = 1.5 m be the height of the swimmer's eyes, and let the position of the object (xb, yb) be (5, −2) m. We solve the transcendental equation to calculate the angle of the refracted beam θr and the position xs on the water surface where the incident beam is refracted:
xb − y0 sin θr / √(1 − sin²θr) + yb sin θr / √(n² − sin²θr) = 0
and then the apparent position (xa, ya) of an object in the position (xb, yb):

xa = xb + yb (n² − 1) sin³θr / (n² − sin²θr)^(3/2)

ya = yb n² cos³θr / (n² − sin²θr)^(3/2)
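The transcendental equation has no closed-form solution for θr, but a root is bracketed between 0 and π/2 and can be found numerically. A sketch with SciPy's `brentq`, using the worked values y0 = 1.5 m and (xb, yb) = (5, −2) m:

```python
import numpy as np
from scipy.optimize import brentq

n = 1.33             # refractive coefficient of water
y0 = 1.5             # height of the swimmer's eyes (m)
xb, yb = 5.0, -2.0   # real position of the object (m)

def f(theta_r):
    """Left-hand side of the transcendental equation above."""
    s = np.sin(theta_r)
    return xb - y0 * s / np.sqrt(1 - s**2) + yb * s / np.sqrt(n**2 - s**2)

# f(0+) = xb > 0 and f becomes large and negative near pi/2,
# so the root is bracketed on (0, 1.55).
theta_r = brentq(f, 1e-9, 1.55)
s = np.sin(theta_r)

# Closed-form apparent position from the two final equations above.
xa = xb + yb * (n**2 - 1) * s**3 / (n**2 - s**2) ** 1.5
ya = yb * n**2 * np.cos(theta_r) ** 3 / (n**2 - s**2) ** 1.5
xs = y0 * np.tan(theta_r)   # surface point where the ray refracts
print(theta_r, xa, ya)
```

As expected from the derivation, the apparent depth |ya| is smaller than the real depth |yb|: the carp appears shallower and nearer than it is.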
We trace the incident beam, the refracted beam and its extension to the apparent position of the object (koi carp) in Figure 6. After obtaining the last equation, we have a mathematical model for determining the apparent position of the koi carp; however, we perceive images of the apparent position through the camera, so we could invert the equations to obtain the real position of the carp from the apparent position, but in real life this identification is a difficult task.
Alternatively, we propose to generate a dataset using a simulation with equation (28), varying its parameters, and then train an ANN with the structure of Section 2 to determine the real position from the apparent position of the carp; that information is then used to determine the size of the carp.
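The proposed dataset-plus-ANN pipeline can be sketched as follows: sample refraction angles and real depths, generate (apparent, real) position pairs with the closed-form equations above, and fit a small network mapping apparent to real coordinates. scikit-learn's `MLPRegressor` is used here as a stand-in for the SCG-trained network of Section 2, so the trainer, layer sizes and parameter ranges are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

n, y0 = 1.33, 1.5
rng = np.random.default_rng(1)

# Sample refraction angles and real depths (ranges are illustrative).
theta_r = rng.uniform(0.1, 1.4, 2000)
yb = rng.uniform(-3.0, -0.5, 2000)
s, c = np.sin(theta_r), np.cos(theta_r)

# Real and apparent positions from the closed-form model above;
# xb = xs - yb*tan(theta_i) with tan(theta_i) = s / sqrt(n^2 - s^2).
xb = y0 * np.tan(theta_r) - yb * s / np.sqrt(n**2 - s**2)
xa = xb + yb * (n**2 - 1) * s**3 / (n**2 - s**2) ** 1.5
ya = yb * n**2 * c**3 / (n**2 - s**2) ** 1.5

X = np.column_stack([xa, ya])   # apparent position: network input
T = np.column_stack([xb, yb])   # real position: network target

net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(50,), activation="tanh",
                 max_iter=2000, random_state=0),
).fit(X, T)
print(net.score(X, T))   # R^2 on the training data
```

Once trained, `net.predict` recovers real coordinates from observed apparent ones; carp size then follows from the distance between two recovered points.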
Figure 4: Variations of position of koi carp underwater for generating the dataset.
Finally, we drew the apparent shape of the bottom of a pool as seen by the swimmer on the edge.
The real shape is described by the function
yb = −0.9 − 0.5 xb / 15.5        for 0 ≤ xb < 15.5
yb = (−1.6 xb + 21.02) / 2.7     for 15.5 ≤ xb < 18.2
yb = −3                          for 18.2 ≤ xb < 25
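This piecewise depth profile translates directly into code; the branches meet continuously, giving −1.4 m at xb = 15.5 and −3 m at xb = 18.2.

```python
def pool_bottom(xb):
    """Real depth yb (m) of the pool bottom as a function of xb (m)."""
    if 0 <= xb < 15.5:
        return -0.9 - 0.5 * xb / 15.5
    if 15.5 <= xb < 18.2:
        return (-1.6 * xb + 21.02) / 2.7
    if 18.2 <= xb < 25:
        return -3.0
    raise ValueError("xb outside the modelled range [0, 25) m")
```

Evaluating this profile at each xb and mapping (xb, yb) to (xa, ya) with the equations above yields the apparent bottom shape seen by the swimmer.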
The histogram comparing the performance in training, which also supports the suppression of overfitting, is shown in Figure 7.
The aquaculturist does not perceive the bottom of the freshwater tank beyond a certain distance xb of about 6 m, for which ya is already almost zero (see Figure 9).
Determining the value of each specimen is a complicated task, mainly because many of the specimens are subspecies of other species and the valuation models are different for each species, as shown in Figure 10.
5. Experimentation
In order to simulate the most efficient arrangement of individuals in a social network, we developed an environment able to store the data of each one of them, representing the individuals of each society, with the purpose of optimally distributing each one of the evaluated societies. One of the most interesting characteristics observed in this experiment was the diversity of the cultural patterns established by each community. After identifying the best architecture, we trained the neural network with 80% of the data, dividing it into 70% for training, 15% for cross-validation during training and 15% for testing; finally, with the trained model, we tested the performance on the 20% reserved at the beginning as the test set. The training results comparing the training, cross-validation and test performance are shown in Figure 6, showing that there is no overfitting in the ratio between train, validation and test responses. The generated configurations can be metaphorically related to the knowledge of the behavior of the community with respect to an optimization problem (selecting aquaculture societies without being of the same quadrant [3]). The main experiment consisted of detailing each of the 21 koi carp variants. This allowed us to generate the best selection for each quadrant and their possible location in a koi fish pond, which was obtained after comparing the different cultural and social similarities of each community and evaluating each of them with the Multiple Matching Model. Using ANNs we determine the correct species, size and, approximately, the possible weight of a specimen, as can be seen in Figure 11.
Figure 11: Intelligent application to determine the correct parameters associated with the final price of a specimen of the identified koi fish species.
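The nested split described above (20% held out from the beginning, then 70/15/15 within the remaining 80%) can be sketched with scikit-learn; the data here are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))   # placeholder features
y = X.sum(axis=1)                # placeholder target

# 20% reserved from the beginning as the final test set.
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0)

# Remaining 80%: 70% training, 15% cross-validation, 15% inner testing.
X_train, X_hold, y_train, y_hold = train_test_split(
    X_dev, y_dev, test_size=0.30, random_state=0)
X_val, X_inner, y_val, y_inner = train_test_split(
    X_hold, y_hold, test_size=0.50, random_state=0)

print(len(X_train), len(X_val), len(X_inner), len(X_test))
```

Keeping the final 20% untouched until after model selection is what makes the reported test performance an honest check against overfitting.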
The developed tool classified each one of the societies pertaining to each quadrant; obtaining the real position of the carps from the apparent position is the basis of the future research described below.
The design of the experiment consists of an orthogonal array test with the interactions between the variables: socialization, required temperature, adult size, cost of food, maintenance in a freshwater tank, growing time, fertility rate and valuation in a sale. These variables are studied in a range of colors (1 to 64). The orthogonal array is L-N (2^8), in other words, 8 factors in N executions, where N
No. | Factors: A(1) B(2) AB(3) C(4) D(5) BC(6) E(7) | Weather measurements: 1, 2
1 | 1 1 1 1 1 1 1 | 26, 38
2 | 1 1 1 2 2 2 2 | 16, 6
3 | 1 2 2 1 1 2 2 | 3, 17
4 | 1 2 2 2 2 1 1 | 18, 16
5 | 2 1 2 1 2 1 2 | 0, 5
6 | 2 1 2 2 1 2 1 | 0, 1
7 | 2 2 1 1 2 2 1 | 4, 5
8 | 2 2 1 2 1 1 2 | 5, 3
Instance | Variety | Socialization | Temperature | Size (cm) | Cost of food | Maintenance | Growing time | Fertility rate | Valuation
1 | Doitsu | 4 | 21–31 °C | 13 | 0.72 | 0.23 | 0.14 | 0.72 | 10
2 | Bekko | 3 | 24–27 °C | 6.5 | 0.91 | 0.92 | 0.92 | 0.77 | 10
3 | Asagi | 5 | 25.5 °C | 10–13 | 0.43 | 0.94 | 0.33 | 0.98 | 15
4 | GinRin Kohaku | 6 | 26.5 °C | 5 | 0.18 | 0.67 | 0.79 | 0.74 | 15
5 | Kawarimono | 5 | 27–31 °C | 7.5 | 0.85 | 0.52 | 0.74 | 0.27 | 20
6 | Hikari | 3 | 25–31 °C | 10–15 | 0.32 | 0.47 | 0.71 | 0.96 | 20
7 | Goshiki | 5 | 22–30 °C | 10–13 | 0.66 | 0.82 | 0.36 | 0.17 | 15
8 | Kohaku | 6 | 15–32 °C | 4–7 | 0.33 | 0.47 | 0.54 | 0.24 | 10
9 | Kumonryu | 7 | 21–27 °C | 5–7.5 | 0.55 | 0.89 | 0.43 | 0.48 | 10
10 | Kujaku | 5 | 13–27 °C | 10 | 0.44 | 0.87 | 0.47 | 0.26 | 25
11 | Goromo | 6 | 20–30 °C | 10–13 | 0.88 | 0.27 | 0.22 | 0.42 | 20
12 | Gin Matsuba | 6 | 24–28 °C | 25–60 | 0.72 | 0.23 | 0.19 | 0.44 | 20
13 | Sanke | 7 | 22–28 °C | 6 | 0.91 | 0.92 | 0.47 | 0.71 | 20
14 | Orenji Ogon | 2 | 22–28 °C | 5 | 0.43 | 0.94 | 0.23 | 0.68 | 20
15 | Platinum Ogon | 7 | 24–26.5 °C | 4 | 0.18 | 0.67 | 0.58 | 0.27 | 20
16 | Ochiba | 6 | 26.5 °C | 5 | 0.85 | 0.52 | 0.38 | 29 | 20
17 | Tancho | 5 | 20–30 °C | 27 | 0.32 | 0.47 | 0.51 | 12 | 20
18 | Tancho Sanke | 3 | 20–25 °C | 15–20 | 0.66 | 0.82 | 0.18 | 34 | 30
19 | Showa | 5 | 18–25 °C | 7 | 0.33 | 0.47 | 0.84 | 14 | 50
20 | Shisui | 5 | 20–30 °C | 40–50 | 0.55 | 0.89 | 0.18 | 79 | 50
21 | Utzuri | 4 | 24–28 °C | 25–30 | 0.44 | 0.87 | 0.86 | 60 | 35
22 | Yamabuki Ogon | 3 | 22–28 °C | 14 | 0.88 | 0.27 | 0.64 | 64 | 40
is defined by the combination of possible values of the 8 variables and the possible range of color (see Table A; the importance of each specimen is given in Table B; the socialization attribute is a Likert scale, with 7 for the best and 1 for poor socialization), considering features of porosity in a freshwater tank with many koi fish and the weather conditions that affect them.
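A minimal Taguchi-style analysis of the L8 array above estimates the main effect of each column as the mean response at level 2 minus the mean at level 1, averaging the two weather measurements of each run. This analysis step is our illustration; the chapter does not specify how the array was analyzed.

```python
import numpy as np

# L8 (2^7) orthogonal array: columns A, B, AB, C, D, BC, E.
L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])

# Mean of the two weather measurements of each run.
y = np.array([[26, 38], [16, 6], [3, 17], [18, 16],
              [0, 5], [0, 1], [4, 5], [5, 3]]).mean(axis=1)

# Main effect of each column: mean response at level 2 minus level 1.
effects = np.array([y[L8[:, j] == 2].mean() - y[L8[:, j] == 1].mean()
                    for j in range(7)])
print(dict(zip(["A", "B", "AB", "C", "D", "BC", "E"], np.round(effects, 3))))
```

The column with the largest absolute effect identifies the factor whose level change most influences the measured response.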
Future Research
With the proposed result of obtaining the real position of the carps based on the apparent position, the future research for this work is to extract the apparent positions from images captured with a digital camera or a cellphone, using Deep Learning, specifically Convolutional Neural Networks (CNNs), because they are good classifiers in object recognition. CNNs could identify the koi carps in images and their positions; the obtained bounding-box locations could then be sent to our ANN trained to obtain the real positions of the box coordinates, and the sizes of the carps could be calculated from the corners of each box transformed into real positions. Deep Learning offers a powerful alternative for object recognition. On the other hand, with the sizes identified in the tank, it is possible to send koi carps to different tanks when their size is not over 10 cm. The general description of the future research is shown in Figure 11.
References
[1] Conapesca. Nuestros mares, sinónimo de abundancia y diversidad de alimentos. Rev. Divulg. Acuícola, vol. 4, no.
38, p. 8, 2017, [Online]. Available: http://divulgacionacuicola.com.mx/revistas/36-Revista Divulgación Acuícola
Julio2017.pdf.
[2] Hernández-Pérez, E., Gónzalez-Espinosa, M., Trejo, I. and Bonfil, C. 2011. Distribución del género Bursera en el
estado de Morelos, México y su relación con el clima. Rev. Mex. Biodivers., 82(3). [Online]. Available:
[3] Salam, H.J., Hamindon, W. and Badaruzzaman, W. 2011. Cost Optimization of Water Tanks Designed according to the
ACI and EURO Codes. doi: 10.13140/RG.2.1.2102.8329.
[4] Goodfellow, I., Bengio, Y. and Courville, A. 2016. Deep Learning. The MIT Press.
[5] Hsu, W.C., Chao, P.Y., Wang, C.S., Hsieh, J.C. and Huang, W. 2020. Application of regression analysis to achieve a
smart monitoring system for aquaculture. Inf., doi: 10.3390/INFO11080387.
[6] Yang, X., Ramezani, R., Utne, I.B., Mosleh, A. and Lader, P.F. 2020. Operational limits for aquaculture operations from
a risk and safety perspective. Reliab. Eng. Syst. Saf., doi: 10.1016/j.ress.2020.107208.
[7] Hagan, M.T., Demuth, H.B., Beale, M.H. and De Jesús, O. 1996. Neural Network Design, 2nd ed. 1996.
[8] Suresh, S., Westman, E. and Kaess, M. 2019. Through-water stereo slam with refraction correction for AUV
Localization. IEEE Robot. Autom. Lett., doi: 10.1109/LRA.2019.2891486.
[9] Berrar, D. 2018. Cross-validation. In Encyclopedia of Bioinformatics and Computational Biology: ABC of
Bioinformatics.
CHAPTER-15
This chapter presents the design and validation of a measuring instrument using the digital questionnaire evaluation technique, oriented to the self-perception of business leaders, to diagnose the current state of companies' work dynamics regarding the use, incorporation, learning and appropriation of technology. From the study carried out, a theoretical model capable of measuring the technological competencies of business leaders is obtained.
1. Introduction
Industry 4.0 was defined due to the growing trends in the use of ICT for industrial production, based
on three main components: the internet of things (IoT), cyber-physical systems (CPS) and smart
factories [1]. Industry 4.0 undoubtedly generates numerous new opportunities for companies, but
several automation and digitalization challenges arise simultaneously [2]. Therefore, management,
as well as employees, must not only acquire specific technical skills but appropriate them [3].
Multiple studies have been developed around the growth of industries generated by the technological factor [4,5,6,7,8,9,10,11,12,13,14,15], pointing to the challenges faced by underdeveloped countries in achieving high levels of competitiveness, industrial scaling and scopes similar to those registered by developed countries; these refer to a vision of technological appropriation in the productive processes in which the context is prioritized [16,17]. The research highlights the importance of clear top-down governance to succeed in the appropriate use of technologies, since an "uncoordinated bottom-up series" would block the path to Industry 4.0.
The following chapter shows the design and validation of a measuring instrument, using the digital questionnaire evaluation technique, oriented to the self-perception of business leaders, to diagnose the current state of the companies' work dynamics regarding the use, incorporation, learning, and technological appropriation.
1 Autonomous University of Baja California, Blvd. University, Valle de las Palmas, 1000 Tijuana, Baja California, México.
2 Department of Social Studies at COLEF, México.
3 Industrial Processes Research Group.
4 Distance Continuing Education Research Group.
* Corresponding author: rodriguez.bernabe@uabc.edu.mx
analysis approach. In [11], human capital skills and labor abilities related to a particular sector are identified; in [12], a diagnosis of the aerospace industry is made; and in [10], the policies for business development in the state are reviewed. On the other hand, at the Autonomous University of Baja California, [5] have worked on models of competitiveness based on knowledge of information and communication technologies; [6] applied the matrix of technological capabilities to the industry of Baja California; and [37] described a systematic review of the literature on the concept of technological competence in industry, rethinking the meaning of the term in knowledge areas seldom explored.
4. Evaluation Delimitations
The study is aimed at the Renewable Energy Sector industry in the state of Baja California, to which belongs a group of companies identified as a new and promising national investment strategy. The population of interest is professionals within the Renewable Energy Sector, named business leaders for this project, who currently hold a managing responsibility position in Small and Medium Enterprises registered in the state.
The purpose of the evaluation focuses on describing the behavior of an industrial sector in the state regarding the technological competencies that its leaders demonstrate, without emphasizing the particularities of each company. That is to say, it is not intended to evaluate each leader individually and point out differences in performance and levels of knowledge among the companies analyzed; on the contrary, the proposal corresponds to a comprehensive evaluation of the sector, demonstrating as final evidence a situational and behavioral analysis in a diagnostic manner.
and internal communication, where the internal structure of the company is evaluated: human relations and learning are analyzed in terms of communication, knowledge acquisition, department analysis, training and promotion of human capital, professional degrees, and the impulse and technological vision of the company leaders. In the environment and external communication dimension, linkage and the means and devices that make communication effective are assessed, together with collaboration between companies or sectors and working in networks for the company's growth and recognition. The training and updating dimension is based on the follow-up of the leaders on issues of updating technological knowledge, the preferred training modality, the training institutions, the periodicity of the training and the immediate application of the knowledge acquired in the training. The company innovation factors dimension is analyzed in terms of company innovation, from product construction proposals and their impact on the global market, patent registration and certification acquisition, to the administrative flexibility of the company's structure that allows the incorporation of an innovation and development group or department for the creation of new products or, if such a group already exists, the analysis of its conditions and context, as well as its impact on the company's objectives. In Figure 1, the theoretical model of the evaluation is represented graphically.
Regarding the measurement variables defined in the dimension of technological knowledge: in the software and hardware update, the version control of programs and equipment is reviewed, as well as the constant revision of market proposals; in interoperability and security, the integrity and transfer of information; in collaboration and mobile applications, the technological tools that are used.

In the environment and internal communication dimension, the internal structure organization variable reviews what is related to the strategic planning of the company and its administrative conditions based on technological elements; the technological culture refers to the behavior of the leader within his work community, his relationship with technology and the diffusion of its use; digital resilience considers the possibilities that the company shows to face computer problems, solve them and quickly reorganize, without affecting the processes and projects in execution.

In the environment and external communication dimension, client tracking analyzes the structure defined for the acquisition and growth of the client portfolio; in the distribution strategies, the communication and collaboration methods with distributors or possible distributors are reviewed; in digital marketing, the promotion of the company through social media and the market strategies used are analyzed; in group participation, the collaboration and contributions of the leader in governmental, academic and industrial groups are investigated.

In the training and updating dimension, the questions on training strategies are oriented to knowing how much the company leader promotes and receives updates; in innovative training practices, the modality in which the update courses are promoted and whether mixed programs are considered is reviewed.

Finally, in the company innovation factors dimension, the patents and new products/services variable is defined to review the results of the company on the design and registration of new products, models and/or services; innovation and development covers the organizational conditions for the creation of spaces for development; certification and regulation covers the attention of human capital to the validation of knowledge by means of certifications and their knowledge about pre-established mechanisms in the sector for the regulation of processes.
A quantitative measurement approach is proposed, and an evaluation instrument oriented towards the self-perception of leaders is developed, using the digital questionnaire evaluation technique with a nominal response scale. The evaluation instrument is constructed to diagnose the current state of work dynamics in the state's Small and Medium Enterprises of the Renewable Energy Sector, regarding the use, incorporation, learning and appropriation of technology. Table 1 shows the structure of the measuring instrument.
expected; otherwise, the item must be removed from the instrument [64]. The operational variables are shown in Equations 1 and 2.

CVR = ne / N (1)

Equation (1): ne = number of expert judges who rated the item as essential (among the categories essential, useful and useless); N = total number of judges.

CVI = (Σ CVRi) / M, i = 1, …, M (2)

Equation (2): CVRi = content validity ratio of the acceptable items; M = total number of acceptable items on the instrument.
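Equations (1) and (2) can be computed directly; the judge counts and item values below are illustrative, not the study's actual panel data.

```python
def cvr(ne, N):
    """Equation (1): content validity ratio, with ne the number of
    judges rating the item essential out of N judges in total."""
    return ne / N

def cvi(cvr_values):
    """Equation (2): content validity index, the mean CVR over the
    M acceptable items of the instrument."""
    return sum(cvr_values) / len(cvr_values)

# Illustrative values: six judges, three acceptable items.
print(cvr(6, 6), round(cvi([1.0, 0.83, 0.96]), 2))
```

An item endorsed as essential by all six judges reaches the maximum CVR of 1, and the instrument-level CVI is simply the average over the retained items.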
6. Results
6.1 Validation of the measuring instrument results
The global CVI value was calculated at 0.93; based on Tristán's proposal [64], the result is catalogued as acceptable. Results show that 48 of the total 80 indicators obtained a CVR value of 1, the maximum scale score. Table 2 shows the average CVR per dimension in the validation of the evaluation instrument by the six expert judges; likewise, the items that were suggested for modification due to a lack of legibility are indicated.
Good results were obtained in the Technological competences and Training and updating dimensions (0.896 and 0.868), and an acceptable result in the company's innovation factors dimension (0.745). For this last case, the authors point out that results below 0.8 require reviewing the wording of the items [68], since they may not be understandable to the respondent.
of the company. These dimensions have operational variables, such as software and hardware update; interoperability and security; collaboration and mobile applications; internal structure organization; technological culture; digital resilience; customer tracking; distribution strategies; digital marketing; group participation; strategies for training; innovative training practices; patents and new products/services; innovation and technological development; and certification and regulation. The final product is a measuring instrument using the 79-item digital questionnaire evaluation technique.
The theoretical conceptual model shown in Figure 1 was duly validated by expert judgment employing the Content Validity Ratio (CVR) and the Content Validity Index (CVI), obtaining a value of 1 for the CVR and 0.93 for the CVI. The items of the measuring instrument that lacked readability were corrected, and the instrument was then validated with Cronbach's alpha. It is important to note that the global Cronbach's alpha was 0.972, which according to the acceptable values is considered excellent.
As future work, it is necessary to validate the model by applying the measurement instrument
to a considerable sample and thus be able to define a useful organizational diagnostic methodology
to define a technological profile of a business leader concerning the incorporation, learning, and
technological appropriation.
References
[1] Ghadimi, P., Wang, C., Lim, M.K. and Heavey, C. 2019. Intelligent sustainable supplier selection using multi-agent
technology: Theory and application for Industry 4.0 supply chains. Computers & Industrial Engineering, (127): 588–
600. https://doi.org/10.1016/j.cie.2018.10.050.
[2] Hecklau, F., Galeitzke, M., Flachs, S. and Kohl, H. 2016. Holistic Approach for Human Resource Management in
Industry 4.0. Procedia CIRP, (54): 1–6. https://doi.org/10.1016/j.procir.2016.05.102.
[3] Schneider, P. 2018. Managerial challenges of Industry 4.0: An empirically backed research agenda for a nascent field.
Review of Managerial Science, 12(3): 803–848. https://doi.org/10.1007/s11846-018-0283-2.
[4] Asociación Mexicana de la Industria de Tecnologías de la Información. Alianza Mundial de Servicios de Tecnologías
de la Información, 2011.
[5] Ahumada, E., Zárate, R., Plascencia, I. and Perusquia, J. 2012. Modelo de competitividad basado en el conocimiento:
El caso de las PyMEs del Sector de Tecnologías de la Información en Baja California. Revista Internacional
Administración & Finanzas, pp. 13–27.
[6] Brito, J., Garambullo, A. and Ferreiro, V. 2014. Aprendizaje y acumulación de capacidades tecnológicas en la industria
electrónica den Tijuana. Revista Global de Negocios, 2(2): 57–68.
[7] Buenrostro, M.E. 2013. Experiencias y desafíos en la apropiación de las TICs por las PyME Mexicanas - Colección de
Memorias de Seminarios, INFOTEC.
[8] Carrillo, J. and Hualde, A. 2000. El desarrollo regional y la maquiladora fronteriza: Las peculiaridades de un Cluster
Electrónico en Tijuana. Mercado de valores, (10): 45–56.
[9] Carrillo, J. and Gomis, R. 2003. Los retos de las maquilladoras ante la pérdida de competitividad. Comercio Exterior,
(53): 318–327.
[10] Fuentes, N. 2008. Elementos de la política de desarrollo empresarial: El caso de Baja California, México. Reglas,
Industria y Competitividad, pp. 152–172.
[11] Hualde, A. and Díaz, P.C. 2010. La Industria de software en Baja California y Jalisco: dos experiencias contrastantes.
Estrategias empresariales en la economía basada en el conocimiento, ISBN 978-607-95030-7-9.
[12] Hualde, A., Carrillo, J. and Dominguez, R. 2008. Diagnóstico de la industria Aeroespacial en Baja California.
Características productivas y requerimientos actuales y potenciales de capital humano. Tijuana: Colegio de la Frontera
Norte.
[13] Instituto Mexicano para la Competitividad. Visión México 2020. Políticas Públicas en materia de Tecnologías de la
Información y Comunicaciones para impulsar la Competitividad de México. México: Concepto Total S.A de C.V.,
2006.
[14] Marzo, N.M., Pedreja, I.M. and Rivera, T.P. 2006. Las competencias profesionales demandadas por las empresas: el
caso de los ingenieros, pp. 643–661.
[15] Núñez-Torrón, S.A. Las 7 competencias imprescindibles para la transformación digital, 2016.
[16] Sampedro, José Luis. Contribucción de capacidades de innovación en la industria de Software a través de la creación
de interfases: estudio de caso de empresas mexicanas. Economía y Sociedad (11)(17) (2006).
[17] Pérez-Jácome, D. and Aspe, M. Agenda Digital.mx México: Secretaría de Comunicaciones y Transportes (1)(2012).
[18] Porter, M. 2002. Ventaja Competitiva: creación y sostenimiento de un desempeño superior. Compañía Editorial
Continental, pp. 556.
[19] Porter, Michael E. Competing to Change the World: Creating Shared Value. Rotterdam School of Management,
Erasmus University, Rotterdam, The Netherlands, 2016.
[20] PYME. Fondo de apoyo para la Micro, Pequeña y Mediana Empresa, 2014. URL http://www.fondopyme.gob.mx/.
[21] INEA. Instituto Nacional de Educación para Adultos, 2010. URL http://www.inea.gob.mx/.
[22] Instituto PYME. Acceso a la Tecnología, 2015. URL: http://www.institutopyme.org.
[23] Lloréns, B.L., Espinosa, D.Y. and Castro, M.M. 2013. Criterios de un modelo de diseño instruccional y competencia
docente para la educación superior escolarizada a distancia apoyada en TICC. Sinéctica Revista Electrónica de
Educación.
[24] eCompetence. European e-Competence Framework, 2016. URL: www.ecompetence.eu.
[25] ITIL. Training Academy. The Knowledge Academy, 2017. URL: https://www.itil.org.uk/.
[26] Patel, P. and Pavitt, K. 1997. The technological competencies of the world’s largest firms: complex and path-dependent,
but not much variety. Research Policy, 26(2): 141–156.
[27] Renaud, A. 1990. Comprender la imagen hoy. Nuevas imágenes, nuevo régimen de lo visible, nuevo imaginario, en
A.A.V.V. Video culturas de Fin de Siglo, Madrid, Cátedra.
[28] Urraca, R.A. 2013. Especialización tecnológica, captura y formación de competencias bajo integración de mercados;
comparación entre Asia y América Latina. Economía y Sociedades, (22)(3): 641–673.
[29] Chomsky, N. 1965. Aspects of the Theory of Syntax, Cambridge, MA: MIT Press.
[30] Hymes, D. 1974. Pidginization and creolization of languages: Proceedings of a conference held at the University of the
West Indies Mona, Jamaica. Cambridge University Press.
[31] González, J.A. 1999. Tecnología y percepción social evaluar la competencia tecnológica. Estudios sobre las culturas
contemporáneas, (9): 155–165.
[32] Cabello, R. 2004. Aproximación al estudio de competencias tecnológicas. San Salvador de Jujuy, 2004.
[33] Motta, J.J., Zavaleta, L., Llinás, I. and Luque, L. 2013. Innovation processes and competences of human resources in
the software industry of Argentina. Revista CTS, (24): 147–175.
[34] Romijn, H. and Albadalejo, M. 2002. Determinants of Innovation capability in small electronics and software firms in
southeast England, Research Policy (31)(7): 1053–1067.
[35] García Alcaraz, J.L. and Romero González, J. 2011. Valoración subjetiva de los atributos que los ingenieros consideran
requerir para ocupar puestos administrativos: Un estudio en empresas maquiladoras de Ciudad Juárez. Revista mexicana
de investigación educativa, (16)(48): 195–219.
[36] Hernández, J. Sampedro and Vera-Cruz, A. 2003. Aprendizaje y acumulación de capacidades tecnológicas en la
industria maquiladora de exportación: El caso de Thomson-Multimedia de México, Espacios. Espacios (24).
[37] Candolfi Arballo, N., Chan Núñez, M. and Rodríguez Tapia, B. 2019. Technological Competences: A Systematic
Review of the Literature in 22 Years of Study. International Journal Of Emerging Technologies In Learning (14)(04):
pp. 4–30. http://dx.doi.org/10.3991/ijet.v14i04.9118.
[38] Colobrans, J. 2011. Tecno-Antropología, Etnografies de la Cultura Digital i Etnografies de la Innovación. Revista
d’Etnologia de Catalunya.
[39] Villanueva, G. and Casas, M.D. 2010. e-Competencias: nuevas habilidades del estudiante en la era de la educación, la
globalidad y la generación de conocimiento. Signo y pensamiento, pp. 124–138.
[40] Gil Gómez H. 2003. Aprendizaje Interorganizativo en el entorno de un Centro de Investigación Tecnológico. Aplicación
al sector textil de la Comunidad Valenciana. Universidad Politécnica de Valencia.
[41] Ordoñez, J.E., Gil-Gómez, H., Oltra, B.R. and González-Usach, R. 2015. Importancia de las competencias en
tecnologías de la información (e-skills) en sectores productivos. Propuesta de investigación en el sector transporte de la
comunidad Valenciana. 3Ciencias TIC, (4)(12): 87–99.
[42] Burillo, V., Dueñas, J. and Cuadrado, F. 2012. Competencias profesionales ETIC en mercados emergentes. Fundación
Tecnologías de la Información. Madrid: FTI-AMETIC.
[43] Crue-TIC Y Rebiun. Competencias informáticas e informacionales en los estudios de grado, 2009, España. URL:http://
www.rebiun.org/doc/documento_competencias_informaticas.pdf.
[44] Díaz, Y.E. and Báez, L.L. 2015. Exploración de la capacidad de liderazgo para la incorporación de TICC en educación:
validación de un instrumento/Exploring the leadership to incorporate TICC in education: validation of an instrument.
Revista Latinoamericana de Tecnología Educativa-RELATEC, (14)(3): 35–47.
[45] European Commission. e-Skills: The international dimension and the impact of globalisation. European Commission
DG Enterprise and Industry, 2014.
[46] ITE (2011). Competencia Digital. Instituto de Tecnologías Educativas. Departamento de Proyectos Europeos, 2011.
URL: http://recursostic.educacion.es/blogs/europa/.
[47] OCDE. Digital Economy Outlook 2017. Organización para la Cooperación y el Desarrollo Económicos, 2017.
[48] Ukces. Information and Communication Technologies: Sector Skills Assessment, 2012. UK Commission for
Employment and Skills.
[49] Urraca, R.A. 2007. Patrones de inserción de las empresas multinacionales en la formación de competencias tecnológicas
de países seguidores. Revista Brasileira de Innovación.
[50] Cabello, R. and Moyano, R. 2012. Tecnologías interactivas en la educación. Competencias tecnológicas y capacitación
para la apropiación de las tecnologías. Buenos Aires, Argentina: Universidad Nacional de General Sarmiento.
Measurement of Industry 4.0 Technological Competencies 215
[51] Barajas, M., Carrillo, J., Casalet, M., Corona, J., Dutrénit, G. and Hernández, C. 2000. Protocolo de Investigación
Aprendizaje Tecnológico y Escalamiento Industrial: Generación de Capacidades de Innovación en la Industria
Maquiladora de México.
[52] Barroso, R.S. and Morales, Z.D. 2012. Trayectoria de acumulación de competencias tecnológicas y procesos de
aprendizaje. Propuesta de un modelo analítico para agencia de viajes y operadoras turísticas. Estudios y perspectivas en
turismo, (21): 515–532.
[53] CEMIE. Centros Mexicanos de Innovación en Energía, 2015. URL: https://www.gob.mx/sener/articulos/centros-
mexicanos-de-innovacion-en-energia.
[54] CFE. Comisión Federal de Electricidad, 2017. URL: http://www.cfe.gob.mx/.
[55] Chan, M.E. 2016. Virtualization of Higher Education in Latin America: Between Trends and Paradigms, (48): 1–32.
[56] CONUEE. 2008. Comisión Nacional para el Uso Eficiente de la Energía, 2008. URL https://www.gob.mx/conuee.
[57] Dutrénit, G. 2000. Learning and knowledge management in the firm: from knowledge accumulation to strategic
capability. Edward Elgar.
[58] Dutrénit, G. 2004. Building technological capabilities in latecomer firms: a review essay. Science Technology Society,
9: 209–241.
[59] Gasca, L.K. 2015. Reforma Energética en México. México: SENER.
[60] Gobierno de Baja California. Programa Especial de Energía 2015–2019. Mexicali, BC, México, 2015.
[61] Instituto Federal de Telecomunicaciones. Adopción de las TIC y uso de internet en México, 2018.
[62] Secretaría de Energía. Comisiones Estatales de Energía. Secretaría de Energía, 2016. URL http://www.conuee.gob.mx/
wb/Conuee/comisiones_estatales_de_energia.
[63] Lawshe, C.H. 1975. A quantitative approach to content validity. (D. 10.1111/j.1744-6570.1975.tb01393.x, Ed.)
Personnel Psychology, pp. 563–575.
[64] Tristán, A. 2008. Modificación al modelo de Lawshe para el dictamen cuantitativo de la validez de contenido de un
instrumento objetivo, pp. 37–48.
[65] Valdivieso, C. 2013. Efecto de los métodos de estimación en las modelaciones de estructuras de covarianzas sobre un
modelo estructural de evaluación del servicio de clases. Comunicaciones en Estadística, (6)(1): 21–44.
[66] Hernández Sampieri, R., Fernández Collado, C. and Baptista Lucio, P. 2006. Capítulo 1, Similitudes y diferencias entre
los enfoques cuantitativo y cualitativo. En McGraw-Hill (Ed.), 2006. https://doi.org/10.6018/turismo.36.231041.
[67] Castillo-Sierra, D.M., González-Consuegra, R.V. and Olaya-Sánchez, A. 2018. Validity and reliability of the Spanish
version of the Florida Patient Acceptance Survey. Revista Colombiana de Cardiologia (25)(2): 131–137. https://doi.
org/10.1016/j.rccar.2017.12.018.
[68] Rositas, J., Badii, M.H. and Castillo, J. 2006. La confiabilidad de las evaluaciones del aprendizaje conceptual: Indice
Spearman-Brown del metodo split-halves (Reliability of the evaluation of conceptual learning: index of Spearman-
Brown and the split-halves method). Innovaciones de Negocios, (3)(2): 317–329.
[69] Morales, Pedro. 2012. Análisis de ítems en las pruebas objetivas. Madrid: Universidad Pontificia Comillas.
CHAPTER-16
Myoelectric Systems in the Era of Artificial Intelligence and Big Data
Technological progress, particularly in the implementation of biosignal acquisition systems, big data, and artificial intelligence algorithms, has enabled a gradual increase in the use of myoelectric
signals. Their applications range from monitoring and diagnosing neuromuscular diseases to myoelectric
control to assist the disabled. This chapter describes the proper treatment of EMG signals: detection, processing, characteristics extraction techniques, and classification algorithms.
1. Introduction
Technological progress has made it possible for intelligent devices such as smartphones,
tablets, and phablets to use sensors (such as triaxial accelerometers, gyroscopes, magnetometers, and
altimeters) to give the consumer a very intuitive sense of the virtual environment [1]. Beyond
the implementation of sensors in different devices, the digitization of health has begun. The article
Health and Healthcare in the Fourth Industrial Revolution, recently published by the World
Economic Forum, highlights that social networks, the internet of things (IoT), wearables, sensors, big
data, artificial intelligence (AI), augmented reality (AR), nanotechnology, and 3D printing are about
to drastically transform society and health systems.
Prominent leaders in health sciences and informatics have stated that AI could play an important
role in solving many of the challenges in the medical sector. [2] mentions that almost all clinicians,
from specialized physicians to paramedics, will use artificial intelligence technology in the future,
especially deep learning. A significant niche of this technological advance is the
development of portable systems that allow the monitoring of biosignals, and of devices that can assist
disabled people.
Biosignals have been used in healthcare and medical domains for more than 100 years; among
the most studied are electroencephalography (EEG) and electrocardiography (ECG). However,
with the development of commercial technologies for myoelectric (EMG) signal acquisition,
data storage, and management, monitoring and control based on EMG signals have increased [3].
Real-time evaluation of these signals may be essential for musculoskeletal rehabilitation or for
preventing muscle injury. On the other hand, muscle activation monitoring is useful for the diagnosis
of neuromuscular disorders.
1 Universidad Autónoma de Ciudad Juárez, Av. Hermanos Escobar, Omega, 32410 Cd Juárez, Chihuahua, México.
2 Universidad Autónoma de Baja California, Blvd. Universitario #100, Unidad Valle de las Palmas, 21500 Tijuana, México.
* Corresponding author: rodriguez.bernabe@uabc.edu.mx
Myoelectric Systems in the Era of Artificial Intelligence and Big Data 217
Recently, the human-machine interface (HMI) and IT communities have started using these
signals for a wide variety of applications, such as muscle-computer interfaces. Sensors on human
body extremities enable the use of exoskeletons, electric wheelchair control, prosthesis control, myoelectric
armbands, handwriting identification, and silent voice interpretation.
Characteristics of EMG signals
The EMG signal is known as the electrical manifestation of the neuromuscular activation associated
with a contracting muscle. [4] defines it as “the current produced by the ionic flow through the
membrane of muscle fibers that spread across intermediate tissues reaching the surface for
the detection of an electrode”, therefore, it is a signal which is affected by the anatomical and
physiological properties of muscles, the control scheme of the nervous system, as well as the
characteristics of the instrumentation that is used to detect and register it.
The EMG signal consists of the action potentials of groups of muscle fibers organized into
functional units called motor units (MUs). This signal can be detected with sensors placed on the
surface of the skin, or with needle or wire sensors inserted into muscle tissue. A graph of the decomposition of a surface EMG
signal into its motor unit action potentials is displayed in Figure 1. It is often desirable
to review the data gathered at the times of individual motor unit discharges, in order to assess the
degree of dysfunction in diseases such as cerebral palsy, Parkinson's disease, amyotrophic lateral
sclerosis (ALS), stroke, and others. Nonetheless, from a practical perspective, it is desirable
to obtain such data from a single sensor that is as unobtrusive as possible and that detects EMG signals rich in
MU activity, rather than from multiple sensors that each detect EMG signals with little MU activity [5].
On the other hand, [12] points out that the identity of the real EMG signal originating in the muscle
is lost due to two main effects: attributes of the EMG signal that depend on the individual's internal
structure, including skin formation, blood flow velocity, skin temperature, and tissue structure
(muscle, fat, etc.); and external contaminants in the EMG recordings, including inherent electrode noise,
device motion, power line interference, analog-to-digital conversion cutoff, quantization error,
amplifier saturation, and electrocardiographic (ECG) interference. The major external contaminants
are displayed in Table 2.
Contaminants                               Authors
Device motion
Line interference
Amplifier saturation                       [13–15]
Physiological interference (e.g., ECG)
Noise (additive white Gaussian noise)
Source: Author's own compilation
Due to the inherent characteristics of EMG signals, proper processing is necessary for their correct
interpretation.
An overall system based on pattern recognition consists of three stages:
(1) Processing stage: the signal is collected with electrodes, preprocessed with amplifiers and
filters, and then converted into digital data. The output is the raw signal divided into segments.
(2) Characteristics extraction and reduction stage: the raw signal is transformed into a
characteristics vector in order to highlight important information. The output is a reduced vector
of characteristics.
(3) Classification stage: classification algorithms distinguish the different categories within
the reduced vector of characteristics. The resulting categories are used for purposes
such as control commands or diagnostics.
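As a minimal sketch of these three stages, the following pure-Python pipeline segments a digitized signal, reduces each segment to a mean-absolute-value characteristic, and assigns it to the nearest class centroid. The function names, toy signals, and nearest-centroid classifier are illustrative assumptions, not the chapter's method.

```python
import math

def preprocess(raw, window=64):
    """Stage 1: segment the digitized raw signal into disjoint windows."""
    return [raw[i:i + window] for i in range(0, len(raw) - window + 1, window)]

def extract_features(segment):
    """Stage 2: reduce a segment to a small characteristics vector (here, just MAV)."""
    return [sum(abs(x) for x in segment) / len(segment)]

def classify(features, centroids):
    """Stage 3: assign the characteristics vector to the nearest class centroid."""
    return min(centroids,
               key=lambda c: sum((f - v) ** 2 for f, v in zip(features, centroids[c])))

# Toy usage: a low-amplitude "rest" signal vs a stronger "contraction" signal.
rest = [0.05 * math.sin(0.3 * i) for i in range(128)]
active = [0.8 * math.sin(0.3 * i) for i in range(128)]
centroids = {"rest": [0.03], "contraction": [0.5]}
decisions = [classify(extract_features(seg), centroids) for seg in preprocess(active)]
```

Each window thus yields one class decision, matching the segment-by-segment decision flow discussed later in the chapter.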
The following sections describe the main considerations for signal processing at each stage.
The disadvantages of surface electrodes lie in their restriction to surface muscles and in the fact that they
cannot be used to selectively detect signals from small or adjacent muscles. However, they
are useful in myoelectric control for the physically disabled population, in studies of motor behavior,
when the activation time and magnitude of the signal contain the required information, or in studies
with children or other people opposed to the insertion of needles [4].
Intramuscular technique
The most common electrode is the needle type, like the "concentric" electrode used by clinicians.
This monopolar configuration contains an insulated wire with a bare tip to detect the signal. The
bipolar configuration contains a second wire and provides a second detection surface.
[4] mentions that the needle electrode has two distinct advantages. One is that it allows the
electrode to detect individual MUAPs during relatively low force contractions. The other is that the
electrodes can be conveniently re-positioned within the muscle.
Configuration: Bipolar
Material: Ag/AgCl
Shape and size: Round, 8 to 10 mm
Electrode distance: 20 mm
Source: Author's own compilation
Placement procedure
The most commonly used skin preparation techniques include shaving, cleansing the skin with
alcohol, ethanol or acetone, and gel application [16].
Sensor placement
Three strategies can be identified for placement of a pair of electrodes [17].
● In the center or on the most prominent bulge of the muscle belly
● Someplace between the innervation zone and the distal tendon
● At the motor point
The reference electrode is placed over inactive tissue (tendons or osseous areas), often at a
certain distance from active muscles. The “popular” locations for placing the reference electrode
have been the wrist, waist, tibia, sternum, and spinal process [16].
Fixing of electrodes
The way the sensor is attached to the body is known as "fixation". Good fixation provides steady
contact between the electrode and the skin, a limited risk of the sensor moving over the skin, and a
minimal risk of pulling the wires. Methods include double-sided adhesive tape or collars,
elastic bands, and holding the sensor in the desired placement by hand [16].
Rectification: The rectification process is carried out before any relevant analysis method is
performed. It entails rendering only positive deflections of the signal, achieved either
by eliminating negative values (half-wave rectification) or by inverting negative values (full-wave
rectification); the latter is the preferable procedure, as it preserves all the energy of the signal [4].
Root mean square average (rms): An alternative way to capture the envelope is to calculate the value of
the root mean square (rms) within a window that "slides" along the signal [19]. This approach is
mathematically different from the rectification-and-filtering approach. [4] points out that, due to the
parameters of the mathematical operation, the rms value provides the most rigorous measure of the
information content of the signal, because it measures the energy of the signal.
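A minimal sketch of both operations, assuming a toy sine signal and an arbitrary 50-sample sliding window:

```python
import math

def full_wave_rectify(signal):
    """Invert negative values so all of the signal's energy is preserved."""
    return [abs(x) for x in signal]

def sliding_rms(signal, window=50):
    """RMS value computed in a window that slides one sample at a time."""
    return [math.sqrt(sum(x * x for x in signal[i:i + window]) / window)
            for i in range(len(signal) - window + 1)]

# Toy signal: a unit-amplitude sine whose period (50 samples) matches the window,
# so every window holds one full cycle and the envelope is flat at 1/sqrt(2).
signal = [math.sin(2 * math.pi * i / 50) for i in range(500)]
rectified = full_wave_rectify(signal)
envelope = sliding_rms(signal, window=50)
```

For a real EMG recording the envelope would of course vary with contraction intensity; the constant value here only reflects the stationary test tone.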
Figure 5: Adjacent window technique for an EMG signal channel. The data windows (W1, W2 and W3) are adjacent and
disjoint. For each data window a classification decision (D1, D2 and D3) is made in time τ, the processing time required
by the classifier [23].
Overlapping windows. In this technique, the new segment slides over the current segment, with an
increment shorter than the segment length.
Figure 6: Overlapping window technique for an EMG channel. This windowing scheme maximizes use of the computing
capacity and produces the densest possible decision stream [23].
According to research by [23] and [24] on the effect of both techniques,
overlapping segmentation increases processing time without producing a significant
improvement, while segmentation with adjacent windows appears to increase classification
performance. In this technique, a smaller segment increment produces a denser but semi-redundant
stream of class decisions that can improve response time and accuracy. [24] observed that a window
shorter than 125 ms produces high variation in the frequency-domain characteristics.
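The two segmentation schemes differ only in the window increment, as the following sketch shows (the window length and increment values are illustrative assumptions):

```python
def window_starts(n_samples, window, increment):
    """Start index of every analysis window that fits in the recording.
    Adjacent windowing: increment == window. Overlapping: increment < window."""
    return list(range(0, n_samples - window + 1, increment))

n = 1000  # samples in the recording
adjacent = window_starts(n, window=250, increment=250)    # disjoint segments
overlapping = window_starts(n, window=250, increment=50)  # denser decision stream
```

With the same recording length, the overlapping scheme yields four times as many class decisions here, illustrating the denser but semi-redundant decision flow described above.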
Mean absolute value (MAV): $\mathrm{MAV}_k = \frac{1}{N}\sum_{i=1}^{N}|x_i|$

Modified mean absolute value 1 (MMAV1): $\mathrm{MMAV1}_k = \frac{1}{N}\sum_{i=1}^{N} w_i |x_i|$, with
$w(i) = \begin{cases} 1, & 0.25N \le i \le 0.75N \\ 0.5, & \text{otherwise} \end{cases}$

Modified mean absolute value 2 (MMAV2): $\mathrm{MMAV2}_k = \frac{1}{N}\sum_{i=1}^{N} w_i |x_i|$, with
$w(i) = \begin{cases} 1, & 0.25N \le i \le 0.75N \\ 4i/N, & i < 0.25N \\ 4(i-N)/N, & i > 0.75N \end{cases}$

Root mean square (RMS): $\mathrm{RMS}_k = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}$

EMG variance (VAR): $\mathrm{VAR}_k = \frac{1}{N-1}\sum_{i=1}^{N} x_i^2$

Zero crossing (ZC): a crossing is counted when
$\{x_i > x_{i-1} \text{ and } x_i > x_{i+1}\}$ or $\{x_i < x_{i-1} \text{ and } x_i < x_{i+1}\}$, and
$|x_i - x_{i+1}| \ge \varepsilon$ or $|x_i - x_{i-1}| \ge \varepsilon$

Willison amplitude (WAMP): $\mathrm{WAMP} = \sum_{i=1}^{N} f(|x_i - x_{i+1}|)$, with
$f(x) = \begin{cases} 1, & x > \varepsilon \\ 0, & \text{otherwise} \end{cases}$

Frequency mean (FMN): $\mathrm{FMN} = \frac{\sum_{j=1}^{M} f_j\,\mathrm{PSD}_j}{\sum_{j=1}^{M} \mathrm{PSD}_j}$

Modified frequency mean (MFMN): $\mathrm{MFMN} = \frac{\sum_{j=1}^{M} f_j A_j}{\sum_{j=1}^{M} A_j}$

Frequency ratio (FR): $\mathrm{FR}_j = \frac{|F(\cdot)|_{j,\text{low freq}}}{|F(\cdot)|_{j,\text{high freq}}}$
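A few of the time-domain features above transcribe directly into code. The test window is an illustrative assumption, and the zero-crossing count is implemented in its common sign-change form rather than the exact condition printed in the table:

```python
def mav(x):
    """Mean absolute value."""
    return sum(abs(v) for v in x) / len(x)

def mmav1(x):
    """MAV with the piecewise weighting window (1 in the middle half, 0.5 outside)."""
    n = len(x)
    w = [1.0 if 0.25 * n <= i <= 0.75 * n else 0.5 for i in range(1, n + 1)]
    return sum(wi * abs(v) for wi, v in zip(w, x)) / n

def var(x):
    """EMG variance: mean power about zero with the N-1 denominator."""
    return sum(v * v for v in x) / (len(x) - 1)

def zero_crossings(x, eps=0.0):
    """Sign changes whose amplitude step also exceeds the threshold eps."""
    return sum(1 for a, b in zip(x, x[1:]) if a * b < 0 and abs(a - b) >= eps)

x = [1.0, -1.0, 2.0, -2.0, 1.0]  # toy analysis window
```

The threshold `eps` suppresses crossings caused by low-amplitude noise, which is why it appears in the table's ZC and WAMP definitions.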
The main difference between the STFT, WT and WPT is how each one divides the time-
frequency plane. The STFT has a fixed tiling in which every cell has an identical aspect ratio; the
WT has a variable tiling in which the cell aspect ratio varies so that the frequency resolution is
proportional to the center frequency. Lastly, the WPT has an adaptive tiling, which offers several
tiling alternatives [28].
Figure 7: Time-frequency pattern of (a) STFT, (b) WT and (c) WPT [28].
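A bare-bones STFT illustrates the fixed tiling: every time-frequency cell has the same width (one window) and height (one frequency bin). The windowed DFT below is a sketch with an illustrative window length and test tone, not a production implementation:

```python
import cmath, math

def stft(signal, window=64, hop=64):
    """Magnitude spectrogram via a windowed DFT (rectangular window).
    Every time-frequency cell has the same size: the fixed STFT tiling."""
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        seg = signal[start:start + window]
        frames.append([abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / window)
                               for n in range(window)))
                       for k in range(window // 2)])
    return frames

# Toy signal: a pure tone sitting exactly on DFT bin 8 of a 64-sample window.
sig = [math.sin(2 * math.pi * 8 * i / 64) for i in range(256)]
spec = stft(sig, window=64, hop=64)
```

A wavelet transform would instead use short windows for high frequencies and long windows for low frequencies, which is the variable tiling contrasted in Figure 7.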
4. Classification Algorithms
Once the characteristics of a recorded EMG signal have been retrieved and their dimensionality
has been reduced, a classification algorithm must be implemented. [22] advises that, due to
the nature of the myoelectric signal, it is reasonable to expect wide variation in the value of a
particular characteristic. In addition, external factors such as changes in electrode position,
fatigue, or sweating cause changes in a signal pattern over time. A classifier should therefore be
able to cope with such variable patterns, and it must be fast enough to meet real-time
constraints. There are several classifier approaches, such as neural networks, Bayes classifiers, fuzzy
logic, linear discriminant analysis, support vector machines, hidden Markov models and k-nearest
neighbors [3]. The summary of the main classification algorithms is displayed in Figure 8. Examples
of the uses of the different classifiers are shown in Table 7.
Classifier | Application
SVM, LDA and MLP | Evaluating upper limb motions using EMG
NN | EMG-based computer interface
FL | Control of a robotic arm for rehabilitation
SVM | Post-stroke robot-aided rehabilitation
LDA and SVM | Classification of muscle activity for robotic device control
NN | Hand motion detection from EMG
BN, and a hybrid of BN and NN | EMG-based human–robot interface
NN, BN and HMM | HCI system
FL | Classification of arm movements for rehabilitation
Source: [31]
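As a minimal illustration of the classification stage, a k-nearest-neighbors classifier over two-dimensional characteristic vectors can be sketched as follows; the training data, labels, and choice of k are illustrative assumptions:

```python
def knn_predict(train, query, k=3):
    """Majority label among the k training points nearest to the query
    (squared Euclidean distance over the characteristics vector)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Toy training set: two-element characteristic vectors for two hand states.
train = [([0.05, 1.0], "rest"), ([0.06, 1.2], "rest"), ([0.04, 0.9], "rest"),
         ([0.60, 9.0], "grip"), ([0.55, 8.5], "grip"), ([0.65, 9.5], "grip")]
decision = knn_predict(train, [0.58, 8.8], k=3)
```

In a real myoelectric controller the decision would be issued once per analysis window, so the classifier's speed directly bounds the achievable decision rate.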
5. Conclusion
The development of technology and of portable systems that monitor and control through
myoelectric signals has been made possible by acquisition systems, real-time processing, and classification
algorithms, combined with the analysis of large amounts of data. This has made it possible to
detect, process, analyze and control signals as small and complex as those generated by any muscle
contraction.
Knowing each stage in the processing of these signals allows us to identify criteria for the
design of new human-computer interfaces that are more efficient and useful for the user.
There is no doubt that proper detection and ergonomic systems are needed; despite the efforts
of communities such as SENIAM and ISEK, the mapping of sensor locations is still
being studied. On the other hand, portable acquisition systems must be developed with adequate
sampling frequencies, in order to decrease computational cost and processing time
without losing the frequency spectra that are vital for correct monitoring and interpretation
of patterns. Advances in statistical algorithms, data analysis, and artificial intelligence are
making it possible to optimize the characteristics that are relevant to pattern interpretation, reducing
the dimensionality of the raw sampled signals and thus facilitating their interpretation
by the different classification algorithms. The difficulty of building systems that interpret patterns from
myoelectric signals lies in the anatomical diversity of users, the placement of sensors, and
the choice of relevant characteristics, which is why machine learning and deep learning algorithms can
enable greater progress in dealing with each of these variables.
References
[1] Athavale, Y. and Krishnan, S. 2017. Biosignal monitoring using wearables: observations and opportunities. Biomedical
Signal Processing and Control, 38: 22–33. https://doi.org/10.1016/j.bspc.2017.03.011.
[2] Topol, E.J. 2019. High-performance medicine: the convergence of human and artificial intelligence. Nat Med, 25(1):
44–56. https://doi.org/10.1038/s41591-018-0300-7.
[3] Rechy-Ramirez, E.J. and Hu, H. 2015. Bio-signal based control in assistive robots: a survey. Digital Communications
and Networks, 1(2): 85–101. https://doi.org/10.1016/j.dcan.2015.02.004.
[4] De Luca, C.J. 2006. Electromyography. Encyclopedia of Medical Devices and Instrumentation. John Wiley Publisher,
98–109.
[5] De Luca, C.J., Adam, A., Wotiz, R., Gilmore, L.D. and Nawab, S.H. 2006. Decomposition of surface EMG signals.
Journal of Neurophysiology, 96(3): 1646–1657. https://doi.org/10.1152/jn.00009.2006.
[6] Betancourt, O., Gustavo, A., Suárez, G., Franco, B. and Fredy, J. 2004. Available at: http://www.redalyc.org/articulo.oa?id=84911640010.
[7] Raez, M.B.I., Hussain, M.S., Mohd-Yasin, F., Reaz, M., Hussain, M.S. and Mohd-Yasin, F. 2006. Techniques of EMG
signal analysis: detection, processing, classification and applications. Biological Procedures online, 8(1): 11–35. https://
doi.org/10.1251/bpo115.
[8] Supuk, T., Skelin, A. and Cic, M. 2014. Design, development and testing of a low-cost SEMG system and its use in
recording muscle activity in human gait. Sensors, 14(5): 8235–8258. https://doi.org/10.3390/s140508235.
[9] Fuketa, H., Yoshioka, K., Shinozuka, Y. and Ishida, K. 2014. Measurement sheet with 2 V organic transistors for
prosthetic hand control. IEEE Transactions on Biomedical Engineering, 8(6): 824–833. https://doi.org/10.1109/
TBCAS.2014.2314135.
[10] Prince, N., Nadar, S., Thakare, S., Thale, V. and Desai, J. Design of Front End Circuitry for Detection of Surface EMG
Using Bipolar Recording Technique. 2016 International Conference on Control Instrumentation Communication and
Computational Technologies, ICCICCT 2016, 2017, 594–599. https://doi.org/10.1109/ICCICCT.2016.7988019.
[11] Chen, H., Zhang, Y., Zhang, Z., Fang, Y., Liu, H. and Yao, C. Exploring the Relation between EMG Sampling Frequency
and Hand Motion Recognition Accuracy. In 2017 IEEE International Conference on Systems, Man, and Cybernetics
(SMC); IEEE: Banff, AB, 2017; pp 1139–1144. https://doi.org/10.1109/SMC.2017.8122765.
[12] Chowdhury, R., Reaz, M., Ali, M., Bakar, A., Chellappan, K. and Chang, T. 2013. Surface electromyography signal
processing and classification techniques. Sensors, 13(9): 12431–12466. https://doi.org/10.3390/s130912431.
[13] Chan, A. and MacIsaac, D. CleanEMG: Assessing the Quality of EMG Signals. 34th Conference of the Canadian
Medical & …, 2011, No. November, 17–20.
[14] McCool, P., Fraser, G.D., Chan, A.D.C., Petropoulakis, L. and Soraghan, J.J. 2014. Identification of Contaminant Type
in Surface Electromyography (EMG) Signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering,
22(4): 774–783. https://doi.org/10.1109/TNSRE.2014.2299573.
[15] Rosli, N.A.I.M., Rahman, M.A.A., Mazlan, S.A. and Zamzuri, H. 2014. Electrocardiographic (ECG) and
Electromyographic (EMG) Signals Fusion for Physiological Device in Rehab Application. 2014 IEEE Student
Conference on Research and Development, SCOReD 2014. https://doi.org/10.1109/SCORED.2014.7072965.
[16] Hermens, H.J. 2000. Development of Recommendations for SEMG Sensors and Sensor Placement Procedures, 10:
361–374.
[17] Mesin, L., Merletti, R. and Rainoldi, A. 2009. Surface EMG: The issue of electrode location. Journal of Electromyography
and Kinesiology, 19(5): 719–726. https://doi.org/10.1016/j.jelekin.2008.07.006.
[18] Wang, J., Tang, L. and Bronlund, J.E. 2013. Surface EMG signal amplification and filtering. International Journal of
Computer Applications, 82(1): 15–22. https://doi.org/10.5120/14079-2073.
[19] Rose, W. 2016. Electromyogram Analysis. Online course material. University of Delaware. Retrieved July, 2011, 5.
[20] Merletti, R. and Di Torino, P. 1999. Standards for Reporting EMG Data. J Electromyogr Kinesiol, 9(1): 3–4.
[21] Phinyomark, A., Khushaba, R.N. and Scheme, E. 2018. Feature extraction and selection for myoelectric control based
on wearable EMG sensors. Sensors, 18(5). https://doi.org/10.3390/s18051615.
[22] Asghari Oskoei, M. and Hu, H. 2007. Myoelectric control systems-a survey. Biomedical Signal Processing and Control,
2(4): 275–294. https://doi.org/10.1016/j.bspc.2007.07.009.
[23] Englehart, K. and Hudgins, B. 2003. A robust, real-time control scheme for multifunction myoelectric control. IEEE
Trans Biomed Eng, 50(7): 848–854. https://doi.org/10.1109/TBME.2003.813539.
[24] Rainoldi, A., Nazzaro, M., Merletti, R., Farina, D., Caruso, I. and Gaudenti, S. 2000. Geometrical factors in surface
EMG of the vastus medialis and lateralis muscles. Journal of Electromyography and Kinesiology, 10(5): 327–336.
https://doi.org/10.1016/S1050-6411(00)00024-9.
[25] Zecca, M., Micera, S., Carrozza, M.C. and Dario, P. 2002. Control of multifunctional prosthetic hands by processing
the electromyographic signal. Critical Reviews in Biomedical Engineering, 30(4–6).
[26] Veer, K. and Sharma, T. 2016. A Novel feature extraction for robust EMG pattern recognition. Journal of Medical
Engineering & Technology, 40(4): 149–154. https://doi.org/10.3109/03091902.2016.1153739.
[27] Oskoei, M.A. and Hu, H. 2006. GA-Based feature subset selection for myoelectric classification. In 2006 IEEE
International Conference on Robotics and Biomimetics; IEEE: Kunming, China, pp. 1465–1470. https://doi.
org/10.1109/ROBIO.2006.340145.
[28] Englehart, K., Hudgins, B., Parker, P.A. and Stevenson, M. 1999. Classification of the myoelectric signal using time-
frequency based representations. Medical Engineering & Physics, 21(6-7): 431–438. https://doi.org/10.1016/S1350-
4533(99)00066-1.
[29] Englehart, K. 1998. Signal Representation for Classification of the Transient Myoelectric Signal. Ph. D. thesis,
University of New Brunswick.
[30] Bishop, C.M. et al. 1995. Neural Networks for Pattern Recognition; Oxford university press.
[31] Spiewak, C. 2018. A Comprehensive Study on EMG Feature Extraction and Classifiers. OAJBEB, 1(1). https://doi.
org/10.32474/OAJBEB.2018.01.000104.
CHAPTER-17
Intelligent Model for the Car Assembly Industry
These days, we are living in the epitome of Industry 4.0 (I4.0), where each component is intelligent and
suitable for Smart Manufacturing users, which is why the specific use of Big Data is proposed
to drive the continuous improvement of the competitiveness of a car assembly industry.
The Boston Consulting Group [1] has identified nine pillars of I4.0: (i) Big Data and
Analytics, (ii) Autonomous Robots, (iii) Simulation, (iv) Vertical and Horizontal System
Integration, (v) the Industrial Internet of Things (IIoT), (vi) Cybersecurity,
(vii) the Cloud, (viii) Additive Manufacturing, including 3D printing, and (ix) Augmented
Reality. These pillars are components of Industry 4.0 that can be implemented as models of
continuous competitiveness. In Industry 4.0, the Industrial IoT is a fundamental component and its
market penetration is growing. Car manufacturers such as General Motors and Ford expect
that by 2020 there will be 50 billion connected devices; Ericsson Inc. estimates
18 billion. This growth in connected devices will be driven by advances in technology and
telecommunications and by the adoption of digital devices, and it will
invariably lead to an increase in the generation of data and digital transactions. This, in turn, demands
stronger regulations for security, privacy and informed consent in the integration of
the diverse entities that will be connected and interacting among themselves and with the users.
Finally, the use of Type-2 Fuzzy Logic is proposed to support correct decision making and achieve
the reduction of uncertainty in the car assembly industry in northeastern Mexico.
1. Introduction
Today, technology is an important part of everyday life, from the way we communicate to the many technologies that allow us to carry out processes of all kinds in different industries.
1 Universidad Autónoma de Ciudad Juárez, Av. Hermanos Escobar, Omega, 32410 Cd Juárez, Chihuahua, México.
2 Universidad de Portugal.
3 Centro CONACYT.
* Corresponding author: jose_peinado@utcj.edu.mx
230 Innovative Applications in Smart Cities
The Mexican industry, particularly the automotive industry, is not exempt from these technological advances, which are part of Industry 4.0 (I4.0), and it has countless technologies that make it competitive in the market. However, these technologies are not effective enough to meet the demands of today's world. Therefore, this chapter presents a literature review of the concepts that form the basis for the proposal of a new intelligent model able to combine cutting-edge technologies and optimize processes and resources within the automotive industry in northern Mexico.
2. Literature Review
This section presents the main concepts of this chapter and how they have been generated and have evolved over time. It gives an overview of the state of the art with respect to the technologies mentioned: Industry 4.0, Big Data, and Type-2 Fuzzy Logic.
As mentioned before, I4.0 is based on nine pillars, identified by [1]:
1. Big Data and Analytics
2. Autonomous Robots
3. Simulation
4. Horizontal and Vertical System Integration
5. The Industrial Internet of Things
6. Cybersecurity
7. The Cloud
8. Additive Manufacturing
9. Augmented Reality
Intelligent Model for the Car Assembly Industry 231
Some authors, like Zhang et al. [7], discuss the use of Big Data in the automobile industry. They propose that big data helps determine the characteristics a user looks for in a car, in addition to predicting how sales will behave in the coming months.
In turn, Kambatla et al. [8] discuss the future of big data. They outline what the use of big data implies, from the type of hardware that is needed to apply this technology, whether the amount of memory or the memory hierarchy it requires, to the types of networks and distributed systems that enable the application of big data in companies.
Furthermore, Philip Chen and Zhang [9] point out that, to remain competent, big data is a large part of innovation, competition, and production for any company, that its use should include cloud computing, quantum computing, and biological computing, and that, beyond this, the development of supporting tools is an important part of adopting these technologies.
Although ordinary fuzzy sets (FSs), the so-called type-1 FSs, are capable of handling input uncertainties, they are not adequate to handle all types of uncertainties associated with knowledge-based systems [10]. Type-2 fuzzy sets provide additional design degrees of freedom in fuzzy logic systems, which can be very useful when such systems are used in situations where many uncertainties are present. The resulting type-2 fuzzy logic systems (T2 FLS) have the potential to provide better performance than a type-1 (T1) FLS [11]. A type-2 fuzzy set is characterized by a fuzzy membership function, i.e., the membership value (or membership grade) for each element of the set is itself a fuzzy set in [0,1], unlike a type-1 fuzzy set, where the membership grade is a crisp number in [0,1] [12].
Membership functions of type-1 fuzzy sets are two-dimensional, whereas membership functions
of type-2 fuzzy sets are three-dimensional. It is the new third-dimension of type-2 fuzzy sets that
provides additional degrees of freedom that make it possible to directly model uncertainties [11].
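As a minimal sketch of this third dimension (an illustration of this edition, not code from the chapter; all function names are hypothetical), the following evaluates an interval type-2 Gaussian membership function whose mean is only known to lie in an interval. Instead of one crisp grade, each input receives a lower and an upper membership bound, which together delimit the footprint of uncertainty:

```python
import math

def gaussian(x, m, sigma):
    """Type-1 (crisp) Gaussian membership grade of x."""
    return math.exp(-0.5 * ((x - m) / sigma) ** 2)

def it2_membership(x, m1, m2, sigma):
    """Interval type-2 membership of x for a Gaussian primary membership
    function whose mean is uncertain in [m1, m2] (footprint of uncertainty).
    Returns (lower, upper) membership bounds instead of a single grade."""
    if m1 <= x <= m2:              # upper bound is 1 inside the uncertain-mean band
        upper = 1.0
    elif x < m1:
        upper = gaussian(x, m1, sigma)
    else:
        upper = gaussian(x, m2, sigma)
    # lower bound: the farther of the two extreme Gaussians
    lower = min(gaussian(x, m1, sigma), gaussian(x, m2, sigma))
    return lower, upper

lo, up = it2_membership(5.0, 4.0, 6.0, 1.0)
print(lo, up)   # grade is the whole interval [exp(-0.5), 1.0]
```

When m1 = m2 the interval collapses and the set degenerates to an ordinary type-1 fuzzy set, which is exactly the relationship between T1 and T2 FLSs described above.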
3. Discussion
The automobile assembly industry today faces multiple assembly options: different car models, different trims within those models, and even their color is an important factor in company decisions.
On the other hand, companies currently use different mathematical models to support decision making which, although useful and functional, achieve only between 60% and 65% success, leaving roughly 35% to 40% of decisions to fail within the company.
Consider a car assembled in 7 stages that passes through 4 work stations: the assembly of this single car already yields 28 critical points. If 3 different models are made at the same time, and 4 cars are made of each model, the number of variables and critical points of the process grows significantly (Figure 3), so mathematical and stochastic models are no longer practical enough for this type of company, representing 40% of losses or inefficiencies in the production of final products.
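One way to read this arithmetic is that critical points grow multiplicatively with stages, stations, models and units; the 336 figure for twelve simultaneous cars below is our extrapolation of the text's example, not a number given in the chapter:

```python
# Critical points grow multiplicatively with stages, stations, models and units.
stages, stations = 7, 4
critical_points_per_car = stages * stations      # 7 stages x 4 stations = 28
models, cars_per_model = 3, 4                    # 3 models, 4 cars of each model
total_cars = models * cars_per_model             # 12 cars in production at once
total_critical_points = total_cars * critical_points_per_car
print(critical_points_per_car, total_critical_points)  # 28 336
```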
Figure 3: Multiple production of cars with multiple variables produces multiple critical points within the company.
4. Proposed Methodology
The proposal for optimizing resources in the supply chain of a company is the construction of an intelligent model based on Big Data, the technology responsible for generating the best options to optimize the use of materials in the warehouse of a car assembly industry in north-eastern Mexico (Figure 4), as well as a great aid in decision making for the company. Once the Big Data analysis and the best generated options are available, Type-2 Fuzzy Logic will be integrated to determine the best way to use the company's resources or the best decision for the company.
The combination of these cutting-edge technologies would represent an improvement for many of the warehouses within the assembly industry in Mexico. The model can even be adapted to other industries, government agencies, or any business that operates a warehouse and makes decisions about it, since the goal of this intelligent model is to increase the optimization of resources and the effectiveness of the decisions made by the company by up to 85%.
Figure 5: Integration of Fuzzy Logic Type-2 for the choice of the best option.
References
[1] Rüßmann, M. et al. 2015. Industry 4.0: Future of productivity and growth in manufacturing. Bost. Consult. Gr., no.
April, p. 20.
[2] Govindarajan, U.H., Trappey, A.J.C. and Trappey, C.V. 2018. Immersive Technology for Human-Centric Cyberphysical Systems in Complex Manufacturing Processes: A Comprehensive Overview of the Global Patent Profile Using Collective Intelligence, vol. 2018.
[3] Sung, T.K. 2018. Industry 4.0: A Korea perspective. Technol. Forecast. Soc. Change, 132, no. October 2017, pp. 40–45.
[4] Khan, M., Jan, B. and Farman, H. 2019. Deep Learning: Convergence to Big Data Analytics. Springer Singapore.
[5] Kaur, N. and Sood, S.K. 2017. Efficient resource management system based on 4Vs of big data streams. Big Data Res.,
9, no. February, pp. 98–106.
[6] Wu, C., Buyya, R. and Ramamohanarao, K. 2016. Big Data Analytics = Machine Learning + Cloud Computing.
[7] Zhang, Q., Zhan, H. and Yu, J. 2017. Car sales analysis based on the application of big data. Procedia Comput. Sci.,
107, no. Icict, pp. 436–441.
[8] Kambatla, K., Kollias, G., Kumar, V. and Grama, A. 2014. Trends in big data analytics. J. Parallel Distrib. Comput.
[9] Philip Chen, C.L. and Zhang, C.Y. 2014. Data-intensive applications, challenges, techniques and technologies: A
survey on Big Data. Inf. Sci. (Ny).
[10] Zamani, M., Nejati, H., Jahromi, A.T., Partovi, A., Nobari, S.H. and Shirazi, G.N. 2008. Toolbox for Interval Type-2
Fuzzy Logic Systems.
[11] Mendel, J.M., John, R.I. and Liu, F. 2006. Interval type-2 fuzzy logic systems made simple. IEEE Trans. Fuzzy Syst.
[12] Hagras, H.A. 2004. A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots. IEEE Trans.
Fuzzy Syst., 2004.
CHAPTER-18
In this chapter, a practical and dynamic method to determine the reliability of a process (or product) is presented. The novelty of the proposed method is that it lets us use the Weibull distribution to determine the reliability index by using only the quadratic form of the analyzed process (or product) as an input. Since this polynomial can be fitted by using, e.g., simulation, mathematical and/or physical modeling, empirical experimentation, and/or any optimization algorithm, the proposed method can easily be implemented in several fields of the smart manufacturing environment. For example, in the Industry 4.0 framework, the proposed method can be used to determine, in dynamic form, the reliability of the analyzed product and to give instantaneous feedback to the process. Therefore, to show the efficiency of the proposed method in several fields, it is applied to the design, quality, and monitoring product phases, as well as to the fatigue (wearout and aging) phase. In order to let readers adapt the given theory to their fields and/or research projects, a detailed step-by-step method to determine the Weibull parameters directly from the addressed quadratic form is given for each one of the presented fields.
1. Introduction
Nowadays, smart manufacturing (SM) is empowering businesses and achieving significant value by leveraging the industrial internet of things. Because processes and products are now more complex and multifunctional, more accurate, flexible, and dynamic analysis tools are needed in the SM environment. For example, these technical tools are now being implemented into the Industry 4.0 framework to evaluate and to provide instantaneous feedback in the SM environment. Therefore, in this chapter a method to determine and/or design a product or process with high reliability (R(t)) is presented. More importantly, since the proposed method is based on the Weibull distribution [1], its Weibull shape parameter (β) allows us to evaluate the reliability of the process or product in any of their principal phases; namely, the design phase, which occurs for β < 1, the production phase, which occurs for β = 1, and the wearout and aging phase, which occurs for β > 1 [2]. Hence, due to the flexibility given by the β parameter, the proposed method can be used in the SM environment to evaluate, in dynamic form, the reliability of any SM process for which we know the optimal function.
1 Universidad Autónoma de Ciudad Juárez, Av. Hermanos Escobar, Omega, 32410 Cd Juárez, Chihuahua, México.
2 Salvador University (UNIFACS), Brazil.
* Corresponding author: manuel.pina@uacj.mx
The novelty of the proposed method is that it lets us determine the Weibull parameters directly from the quadratic form elements of the optimal polynomial function used to represent the analyzed process (or product). Thus, since the only inputs of the proposed reliability method are the elements of the quadratic form, its integration into the Industry 4.0 paradigm is direct, and it allows decision-making managers to continuously determine the reliability that their processes present.
On the other hand, it is important to highlight that, because the proposed method can be applied based only on the quadratic form of any optimization function, and since this optimization function can be determined by several mathematical, physical, statistical, empirical, and simulation tools, such as a genetic algorithm, mathematical and physical modeling, empirical experimentation [3], [4], finite element analysis, and so on, readers will easily be able to adapt the given method to determine the reliability in their field and/or their projects. Therefore, with the objective that everyone can adapt the given method, Sections 2 and 3 present the theoretical bases on which the proposed method was formulated, the references where a detailed explanation of the technical formulations can be found, and the formula to determine the Weibull scale value, which lets us determine the mean and the standard deviation of the input data. To show how the proposed method works in several different fields, its application is presented in Section 4 for the mechanical stress design field [5]. In Section 5, it is applied to the quality field analysis [6]. In Section 6, it is applied to the multivariate statistical process control field [7]. In Section 7, it is applied to the physical field by designing and performing a random vibration test analysis for both the normal and the accelerated conditions. Finally, in Section 8, it is applied to the fatigue (wearout and aging) field.
Additionally, to facilitate its application to the readers' fields or projects, in each one of the above-mentioned field applications a detailed step-by-step formulation is derived to fit (1) the Weibull parameters which represent the random behavior of the applied stress, and (2) the Weibull-q parameters from which we can validate that the estimated Weibull stress distribution accurately represents the random behavior of the applied stress. The validation is made by demonstrating that, by using the expected stress values given by the Weibull-q parameters, we can accurately derive both the mean and the standard deviation of the Q elements from which the Weibull parameters were determined.
2. Weibull Generalities
This section presents the characteristics of the Weibull distribution that we can use to determine its parameters directly from an observed set of lifetime data or from the known log-mean and log-standard deviation of the analyzed process. The main motivations to do this are: (1) the Weibull distribution is very flexible for modeling all life phases of products and processes; (2) in any phase of a process (or product), such as design, analysis, improvement, forecasting, or optimization, both the region that contains the optimum (minimum or maximum) and the variable levels (values) at which the process attains the optimum must be considered; and (3) it is always possible to model the optimal region by using a homogeneous second-order polynomial model of the form
Ŷ = b0 + b1X1 + b2X2 + b12X1X2 + b11X1² + b22X2²    (2.1)
Therefore, because from Equation (2.1), the optimum of the analyzed process is determined from
the quadratic form of the fitted optimal polynomial, we can use its quadratic form Q to determine the
Weibull parameters. The quadratic form Q in terms of the interaction (bij) and quadratic effects (bjj)
of the fitted polynomial [8] is given as
Q = [ b11    b12/2
      b21/2  b22 ]    (2.2)
Weibull Reliability Method for Several Fields Based Only on the Modeled Quadratic Form 237
Here, it is important to notice that, when the interaction effects of Q are zero (bij = 0), the optimum occurs in the normal plane (see Figure 1), and when they are not zero (bij ≠ 0), the optimum occurs in a rotated plane (see Figure 2).
Thus, because the rotated plane is represented by the eigenvalues (λ1 and λ2) of the Q matrix, in any optimization process analysis with bij ≠ 0, both λ1 and λ2 and the rotation angle θ (see Figure 2) must be estimated, and they are then used to determine the optimum of the process. Moreover, because the angle θ corresponding to the eigenvalues of Q is unique, the eigenvalues λ1 and λ2 are unique as well. Consequently, in this chapter, λ1, λ2 and θ are used to determine the corresponding Weibull shape β and scale η parameters. Therefore, because λ1, λ2 and θ are unique, β and η are unique also.
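For a symmetric 2×2 quadratic form such as Equation (2.2), λ1, λ2 and θ have a closed form, which the following sketch computes. The coefficients used are hypothetical, back-calculated only so that the eigenvalues match the λ1 = 127.72, λ2 = 42.28, θ ≈ 29.914° example used later in the chapter:

```python
import math

def analyze_quadratic_form(b11, b22, b12):
    """Eigenvalues and rotation angle (in degrees) of the symmetric 2x2
    quadratic form Q = [[b11, b12/2], [b21/2, b22]] of Equation (2.2)."""
    mu = (b11 + b22) / 2.0                         # arithmetic mean of the diagonal
    r = math.hypot((b11 - b22) / 2.0, b12 / 2.0)   # radius in the Mohr-like plane
    lam1, lam2 = mu + r, mu - r                    # lambda_max, lambda_min
    theta = 0.5 * math.degrees(math.atan2(b12, b11 - b22))
    return lam1, lam2, theta

# Hypothetical coefficients, chosen for this sketch:
lam1, lam2, theta = analyze_quadratic_form(106.47, 63.53, 73.86)
print(lam1, lam2, theta)
```

Note that the product λ1·λ2 equals the determinant of Q and the average (λ1 + λ2)/2 equals the mean of the diagonal, which is what Sections 3.1 and 3.2 later exploit.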
On the other hand, notice that because λ1 and λ2 are the axes of the rotated plane (see Figure 2), the forms of the analyzed system before and after the rotation are different. Thus, since the normal distribution does not have a shape parameter, the normal distribution should not be used to model the Q form when its interaction elements are not zero (bij ≠ 0). In contrast, also notice that because θ completely determines λ1 and λ2 (see Equation (3.4)), and they can also be determined from the logarithm of the collected data as in [9], the probabilistic behavior of Q can easily be modeled by using the Weibull distribution [1], given by
f(t) = (β/η) (t/η)^(β–1) exp[–(t/η)^β]    (2.3)
Moreover, since for different β values the Weibull distribution can be used to model the whole life of any product or process [2], the use of the Weibull distribution to model the quadratic form Q, fitted from data of several fields, is direct. And since β and η are both time- and stress-dependent parameters, the Weibull distribution is efficient for predicting, through time, the random behavior of the λ1 and λ2 values of Q. The analysis to estimate β and η directly from the λ1 and λ2 values of Q is as follows.
n = –1 / ln(R(t))    (2.7)
In contrast, if you are analyzing a set of n collected data, then the R(n) index which the set of the used n data represents is determined from Equation (2.7) by solving it for R(n).
Note 1. Here, notice that in Equation (2.7) n is not being used to determine whether or not the data follow a Weibull distribution. Instead, it is used only to collect the exact amount of data which lets us accurately fit the Weibull parameters [11].
Step 2. By using the n value estimated in step 1, determine the cumulated failure percentile by using the median rank approach [10] as
F(ti) = (i – 0.3)/(n + 0.4)    (2.8)
where F(ti) = 1 – R(ti) is the cumulated failure time percentile.
Step 3. By using the F(ti) elements from step 2, determine the corresponding Yi elements as
Yi = ln(–ln(1 – F(ti))) = b0 + β ln(ti)    (2.9)
Note 2. Equation (2.9) is the linear form of Equation (2.4), that was defined in Equation (2.5).
Step 4. From a regression between the Yi elements of step 3, and the logarithm of the collected
lifetimes Xi = ln(ti) elements, determine the Weibull-q time β and ηtq values. From Equation (2.9), β
is directly given by the slope, and the Weibull-q scale value is given as
ηtq = exp {–b0/β} (2.10)
The addressed β and ηtq parameters are the corresponding Weibull-q family W(β, ηtq) that
represents the collected data.
Step 5. From the Xi elements of step 4, determine its corresponding log-mean μx and log-standard
deviation σx values, and determine the Weibull scale parameter that represents Q(x)
ηt = exp{μx}    (2.11)
Thus, the addressed β and ηt parameters are the Weibull family W(β, ηt) that represents the
related quadratic form Q(x), as shown in Section 3. At this point, only notice that ηt ≠ ηtq because
while ηt is directly given by the μx value, ηtq is given by the collected data.
From this section the general conclusion is that by using Equations (2.9) and (2.10) the Weibull-q
time distribution which represents the collected lifetime data is determined, and by using Equations
(2.9) and (2.11) the Weibull time distribution that represents Q(x) is determined. Now let us present
the numerical application.
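The fitting pipeline of steps 1 to 5 can be sketched as follows (a minimal illustration, not the chapter's Minitab routine); applied to the tqi column of Table 1 below, it recovers β ≈ 1.985, ηtq ≈ 96.74 hrs and ηt ≈ 73.48 hrs:

```python
import math

def fit_weibull(lifetimes):
    """Median-rank Weibull fit of collected lifetime data (steps 1-5).
    Returns (beta, eta_tq, eta_t)."""
    n = len(lifetimes)
    xs = [math.log(t) for t in sorted(lifetimes)]           # Xi = ln(ti)
    fs = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]   # Eq. (2.8)
    ys = [math.log(-math.log(1.0 - f)) for f in fs]         # Eq. (2.9)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))               # slope of Eq. (2.9)
    b0 = my - beta * mx                                     # intercept
    eta_tq = math.exp(-b0 / beta)                           # Eq. (2.10)
    eta_t = math.exp(mx)                                    # Eq. (2.11)
    return beta, eta_tq, eta_t

# tqi column of Table 1 (hrs)
tq = [17.4114, 27.5652, 35.2525, 41.8781, 47.9144, 53.5945, 59.0583,
      64.4027, 69.7027, 75.0229, 80.4249, 85.9727, 91.7387, 97.8115,
      104.3064, 111.3853, 119.2925, 128.4338, 139.5757, 154.5059, 179.7497]
beta, eta_tq, eta_t = fit_weibull(tq)
print(beta, eta_tq, eta_t)   # close to 1.985, 96.74 and 73.48
```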
Table 1: Weibull analysis for collected lifetime data.

  i      F(ti)      Yi        tqi       Xi = ln(tqi)    ti
(2.7)   (2.8)     (2.9)     (2.12)                    (2.13)
  1    0.0327   -3.4034    17.4114      2.8571       13.2298
  2    0.0794   -2.4916    27.5652      3.3165       20.9435
  3    0.1261   -2.0034    35.2525      3.5625       26.7831
  4    0.1728   -1.6616    41.8781      3.7347       31.8160
  5    0.2196   -1.3943    47.9144      3.8694       36.4013
  6    0.2663   -1.1720    53.5945      3.9814       40.7158
  7    0.3130   -0.9793    59.0583      4.0785       44.8660
  8    0.3598   -0.8074    64.4027      4.1651       48.9254
  9    0.4065   -0.6504    69.7027      4.2442       52.9511
 10    0.4532   -0.5045    75.0229      4.3177       56.9920
 11    0.5000   -0.3665    80.4249      4.3873       61.0950
 12    0.5467   -0.2341    85.9727      4.4540       65.3088
 13    0.5934   -0.1052    91.7387      4.5189       69.6882
 14    0.6401    0.0219    97.8115      4.5830       74.3006
 15    0.6869    0.1495   104.3064      4.6473       79.2335
 16    0.7336    0.2798   111.3853      4.7129       84.6099
 17    0.7803    0.4159   119.2925      4.7815       90.6154
 18    0.8271    0.5625   128.4338      4.8554       97.5581
 19    0.8738    0.7276   139.5757      4.9386      106.0201
 20    0.9205    0.9293   154.5059      5.0402      117.3591
 21    0.9672    1.2296   179.7497      5.1915      136.5305
μy = -0.545624, σy = 1.175117; μ = 85.000, σ = 43.0950; μx = 4.297077, σx = 0.592090; exp(μx) = 73.4846
Step 4. By using the Minitab routine, the regression equation is Yi = –9.074 + 1.985Xi. Hence,
β = 1.985 and from Equation (2.10), ηtq =exp{–(–9.074/1.98469)} = 96.7372 hrs. Consequently, the
Weibull-q distribution that represents the life time data is W(β = 1.985, ηtq = 96.7372 hrs).
Step 5. Since from Equation (2.11) ηt = 73.4846 hrs, the Weibull distribution that represents the
related Q(x) form is W(β =1.985, ηt = 73.4846 hrs).
Finally, observe from Equations (2.4) or (2.9) that the lifetime which corresponds to the expected R(t) index is given as
tqi = [–ln(R(t))]^(1/β) · ηtq = exp{Yi/β + ln(ηtq)}    (2.12)
For R(t) = 0.9535, tq = 20.86 hrs. And the time that corresponds to the expected R(t) index of the related Q(x) form is given by
ti = [–ln(R(t))]^(1/β) · ηt = exp{Yi/β + ln(ηt)}    (2.13)
For R(t) = 0.9535, it is t = 15.85 hrs.
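Equations (2.12) and (2.13) can be sketched directly; with the chapter's β and scale values they reproduce the two lifetimes above to within rounding:

```python
import math

def time_for_reliability(r, beta, eta):
    """Lifetime t such that R(t) = r for a Weibull(beta, eta) distribution:
    t = (-ln r)^(1/beta) * eta, as in Equations (2.12)/(2.13)."""
    return (-math.log(r)) ** (1.0 / beta) * eta

tq = time_for_reliability(0.9535, 1.985, 96.7372)   # Weibull-q family
t = time_for_reliability(0.9535, 1.985, 73.4846)    # Weibull family of Q(x)
print(round(tq, 2), round(t, 2))   # approximately 20.87 and 15.85 hrs
```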
From Table 1, we observe that because the mean of the lifetime data, μ = 85 hrs, and the standard deviation, σ = 43.095 hrs, were both generated by the Weibull-q family W(β = 1.985, ηtq = 96.7372 hrs), its corresponding log-mean μx = 4.297077 was also generated by the Weibull-q family. Furthermore, since using μx in Equation (2.11) gives the Weibull scale parameter ηt = 73.4846 hrs of the related Weibull time distribution that represents Q(x), the Weibull-q family can always be used to validate the ηt parameter.
In Section 3, we will show the elements of the quadratic form Q which generate the ηt = 73.4846 hrs value (λ1 = 127.72 and λ2 = 42.28), corresponding to an angle of θ = 29.914°. However, let us first present how to estimate the Weibull time and the Weibull-q families when no experimental lifetime data is available.
Since Equation (2.1) is the optimal response surface polynomial model widely used in experiment design analysis, then based on the B canonical form of Equation (2.1) (see [4], Chapter 10), given by
Ŷ = Ys + λ1X1² + λ2X2²    (3.2)
the Q(s) matrix defined in Equation (3.1), in terms of the λ1 and λ2 values of Equation (3.2), is given as
Q(s) = Σj=1..k λj Xj²    (3.3)
Here, it is important to notice that (1) in this section Q(s) represents stress instead of time, and (2) in the case that Q(s) has several eigenvalues, then in the analysis we have to use only the maximum (λ1 = λmax) and minimum (λ2 = λmin) eigenvalues. Therefore, based on the λ1 and λ2 values of the stress Q(s) form, the corresponding Weibull stress β and ηs parameters that represent Q(s), and the ηsq parameter that represents the expected stress values, are determined as follows.
Step 2. By using the μy value of step 1 of Section 2.2 and the eigenvalues λ1 and λ2 of step 1,
determine the corresponding β value as
β = –4μy / (0.9947 · ln(λ1/λ2))    (3.7)
Note 3: From Equations (3.6) and (3.7), the estimated β and ηs values are the Weibull stress
distribution W(β, ηs) which represents the random expected stresses values.
Step 3. From Equation (3.6) (determinant of Q(s)), the expected log-mean μx value is given as
μx = ln(ηs) = ln(√(λ1λ2))    (3.8)
Step 4. By using the β value of step 2, the μy value of step 1 of Section 2.2 and the μx value of step 3 in Equation (2.15), determine the Weibull-q stress ηsq parameter, which can be used to validate the addressed Weibull stress family.
Note 4: The estimated β and ηsq values are the Weibull-q stress distribution W(β, ηsq). Here remember that, for n = 21, μy = –0.545624 and σy = 1.175117 are both constant.
Step 5. By using the β value of step 2 and the σy value of step 1 of Section 2.2, determine the expected log-standard deviation σx as
σx = σy/β    (3.9)
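The steps above can be sketched as follows. Equation (3.4) is not reproduced in this excerpt; the sketch assumes the relation λ1,2 = μ ± √(μ² − ηs²), which follows from Equations (3.5) and (3.8) (the eigenvalues average to μ and have product ηs²). With the lifetime example's μ = 85 and ηs = 73.4846, it recovers λ1 ≈ 127.72, λ2 ≈ 42.28, β ≈ 1.985 and σx ≈ 0.592:

```python
import math

MU_Y, SIGMA_Y = -0.545624, 1.175117   # constants for n = 21 (Section 2)

def weibull_from_quadratic(mu, eta_s):
    """Sketch of Section 3 steps: from the mean mu (Eq. 3.5) and the scale
    eta_s = sqrt(lambda1*lambda2) (Eq. 3.6), recover the eigenvalues
    (assumed lambda1,2 = mu +/- sqrt(mu^2 - eta_s^2), consistent with
    Eqs. 3.5/3.8), then beta (Eq. 3.7), mu_x (Eq. 3.8), sigma_x (Eq. 3.9).
    Requires mu > eta_s."""
    r = math.sqrt(mu * mu - eta_s * eta_s)
    lam1, lam2 = mu + r, mu - r
    beta = -4.0 * MU_Y / (0.9947 * math.log(lam1 / lam2))   # Eq. (3.7)
    mu_x = math.log(eta_s)                                  # Eq. (3.8)
    sigma_x = SIGMA_Y / beta                                # Eq. (3.9)
    return lam1, lam2, beta, mu_x, sigma_x

lam1, lam2, beta, mu_x, sigma_x = weibull_from_quadratic(85.0, 73.4846)
print(lam1, lam2, beta, mu_x, sigma_x)
```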
Finally, note that because both μx and σx let us determine the Weibull β and η parameters, and since they are given by the quadratic and interaction effects of the quadratic form Q(s), then in order to control μx and σx as in [12], the quadratic and interaction effects of Q(s) must be monitored. Equivalently, μx and σx can be used as the signal parameters in the corresponding dynamic Taguchi analysis [13] to determine the sensitivity of μx and σx to the variation of the Q(s) elements.
3.2 Validation that the estimated β and η parameters represent the used Q(s) matrix data
The validation is made in the sense that, by using the expected stress data given by the W(β, ηs) distribution, the eigenvalues λ1 and λ2 defined in Equation (3.4), the mean stress μ defined in Equation (3.5), and the ηs stress value defined in Equation (3.6) are all completely determined. This fact can also be seen from Equations (3.7) and (3.8) by noticing that the Weibull-q parameters are determined by using the λ1 and λ2 eigenvalues, and from Equation (3.4), that the λ1 and λ2 eigenvalues are determined by using the mean stress μ and the ηs stress value.
Therefore, in order to validate in each application that the addressed Weibull family W(β, ηsq) represents the stress data from which it was determined, the expected data which corresponds to the W(β, ηsq) family is also given in the table of each presented analysis. From this data, observe that the average of the given data is the mean stress μ value defined in Equation (3.5), and that the exponential of the average of the logarithm of these data is the ηs stress value.
Hence, it is clear that by using these μ and ηs values in Equation (3.4), the corresponding λ1 and λ2 eigenvalues are completely determined as well. Thus, the conclusion is that by using the expected data of the W(β, ηsq) family, the original μ, ηs, λ1 and λ2 parameters are all completely determined; hence, the W(β, ηsq) family can be used to validate the W(β, ηs) parameters which determine the random behavior of the applied stress values, given in this section as λ1 and λ2. In the next sections, λ1 and λ2 are known as the principal stresses σ1 and σ2 (λ1 = σ1, λ2 = σ2).
Step 1. From the stress analysis of the analyzed component, determine the normal σx and σy and the
shear τxy stresses values that are acting on the element, and then form the corresponding Qs matrix
as in Equation (4.1).
Step 2. By using σx and σy of step 1 in Equation (3.5), determine the arithmetic mean µ.
Step 3. By using the σx, σy and τxy values of step 1 in Equation (3.6), determine the Weibull stress
parameter.
Step 4. By using the µ value of step 2 and the ηs value of step 3 in Equation (3.4), determine the
principal stresses σ1 = λ1 and σ2 = λ2 values.
Step 5. By using the σx, σy and τxy values of step 1, determine the principal angle Ɵ as
θ = 0.5 · tan⁻¹(2τxy/(σx – σy))    (4.2)
Step 6. By using the principal stresses σ1 and σ2 values of step 4, the yield strength Sy value of the used material, and the desired safety factor SF, in the maximum distortion-energy (DE) (von Mises) criterion ([15], Section 5.4), given by
√(σ1² – σ1σ2 + σ2²) < Sy/SF    (4.3)
and the maximum-shear-stress (MSS) (Tresca) criterion ([15], Section 5.5), given by
σmax < Sy/SF    (4.4)
determine whether the designed element is safe or not.
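Steps 1 to 6 can be sketched as follows. The stress state (σx = 210, σy = 70, τxy = 63.25 MPa) is hypothetical, chosen only to reproduce principal stresses close to the chapter's example (σ1 ≈ 234.3 MPa, σ2 ≈ 45.7 MPa, Sy = 800 MPa, SF = 3):

```python
import math

def principal_stresses(sx, sy, txy):
    """Plane-stress principal stresses and principal angle (Eq. 4.2)."""
    mu = (sx + sy) / 2.0
    r = math.hypot((sx - sy) / 2.0, txy)
    s1, s2 = mu + r, mu - r
    theta = 0.5 * math.degrees(math.atan2(2.0 * txy, sx - sy))
    return s1, s2, theta

def is_safe(s1, s2, Sy, SF):
    """Von Mises (Eq. 4.3) and maximum-shear (Eq. 4.4) safety checks."""
    von_mises = math.sqrt(s1 * s1 - s1 * s2 + s2 * s2)
    return von_mises < Sy / SF and max(abs(s1), abs(s2)) < Sy / SF

# Hypothetical stress state (MPa), chosen for this sketch:
s1, s2, theta = principal_stresses(210.0, 70.0, 63.25)
print(round(s1, 1), round(s2, 1), is_safe(s1, s2, 800.0, 3.0))
```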
Step 7. Determine the desired R(t) index and, by using it in Equation (2.7), determine the
corresponding n value.
Step 8. By following steps 1 to 3 of Section 2.1, determine the Yi elements and from them determine their mean µy and standard deviation σy. (Remember that for n = 21, µy = –0.545624 and σy = 1.175117 are both constant.)
Step 9. By using the σ1 and σ2 values of step 4, and the µy value from step 8 in Equation (3.7),
determine the Weibull βs parameter.
Note 5: The ηs parameter of step 3 and the βs parameter of this step are the Weibull stress family
W(βs, ηs) which determines the random behavior of the applied stress.
Step 10. By using the ηs parameter of step 3, the µy value of step 8 and the βs parameter of step 9 in
Equation (2.15), determine the Weibull-q stress scale ηsq parameter.
Note 6: The ηsq parameter of this step and the βs parameter of step 9 are the Weibull-q stress family
W(βs, ηsq) which can be used to validate that the addressed W(βs, ηs) family completely represents
the applied stress values.
Step 11. Determine the R(t/s,S) index which corresponds to the yield strength value of the used material mentioned in step 6, as
R(t/s,S) = Sy^βs / (Sy^βs + ηs^βs)    (4.5)
Note 7: Equation (4.5) is the Weibull/Weibull stress/strength reliability function (see [9], Chapter 6), which is used to estimate the reliability of the analyzed component only when the Weibull shape parameter is the same for both the stress and strength distributions. Here, the Weibull stress distribution W(β, ηs) is given by Equations (3.6) and (3.7), and the Weibull strength distribution is given by using Sy as the Weibull strength scale parameter. Thus, the Weibull strength distribution is W(βs, Sy = ηS).
Here, remember that the R(t) = 0.9535 index used to estimate n in Equation (2.7) is the R(t) index of the analysis, and that the R(t/s,S) of Equation (4.5) is the reliability of the product. On the other hand, because in any Weibull analysis the σ1i values given by the W(β, ηs) family can be used as the Sy value, the steps to determine the σ1i values that correspond to a desired R(t/s,S) index are also given.
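Equation (4.5) can be sketched in a few lines; with the chapter's values the result agrees with the reported 93.84% to within rounding of the inputs:

```python
def stress_strength_reliability(Sy, beta_s, eta_s):
    """Weibull/Weibull stress-strength reliability, Equation (4.5):
    R = Sy^beta / (Sy^beta + eta_s^beta), valid only when stress and
    strength share the same shape parameter beta_s."""
    return Sy ** beta_s / (Sy ** beta_s + eta_s ** beta_s)

# Chapter's mechanical example: Sy = 800 MPa, beta_s = 1.336161, eta_s = 103.44 MPa
r = stress_strength_reliability(800.0, 1.336161, 103.44)
print(round(r, 4))   # close to the chapter's 93.84%
```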
Step 6. Suppose that after applying the modifier factors, the material's strength is Sy = 800 MPa and the safety factor is SF = 3. Hence, because Equation (4.3) gives 215.2 < 266.7 and Equation (4.4) gives 234.3 < 266.7, the designed element is considered to be safe. See Figure 4.
Step 7. Suppose a reliability analysis with R(t) = 0.9535 is desired; thus, from Equation (2.7), n = 21. (Remember that for n = 21, µy = –0.545624 and σy = 1.175117 are both constant.)
Step 8. The Yi elements and their corresponding µy and σy values are given in Table 1.
Step 9. From Equation (3.7), the Weibull shape parameter is βs = 1.336161. Therefore, the Weibull stress family is W(βs = 1.336161, ηs = 103.44 MPa).
Step 10. From Equation (2.15), the Weibull-q stress scale parameter is ηsq = 155.8244 MPa. Therefore, the Weibull-q stress family is W(βs = 1.336161, ηsq = 155.8244 MPa).
Step 11. From Equation (4.5), the designed reliability of the mechanical element is R(t/s,S) = 93.84%. Here, observe that the Weibull strength family is W(βs = 1.336161, Sy = 800 MPa).
Step 12. The basic Weibull tan(θi) values for each one of the Yi elements are given in Table 2.
Step 13. The expected pair of principal stresses σ1i and σ2i values for each one of the Yi elements are
given in Table 2.
Step 14. The reliability R(ti/s) values for each one of the Yi elements are given in Table 2.
Table 2: Weibull analysis for the mechanical field (for each Yi element: the basic Weibull tan(θi) value, the expected principal stresses σ1i and σ2i, the Weibull-q data, and the reliability R(ti/s); summary values μ = 140 MPa, σ = 100.83 MPa, μx = 4.639, σx = 0.882).
From Table 2, observe that because the average of the Weibull-q family is also μ = 140 MPa, and from its log-mean ηs is also obtained (ηs = exp{μx} = 103.4403 MPa), the addressed Weibull stress family completely represents the applied stresses.
On the other hand, notice from Table 2 that the σ1i value which corresponds to R(t/s) = 0.9365 is σ1i = 800 MPa, and that using Sy = 800 MPa in Equation (4.5) also gives R(t/s) = 0.9365; hence, we conclude that for R(t) > 0.90, the R(t/s) values of Table 2 and those given by Equation (4.5) are similar. Therefore, the σ1i column of Table 2 can be used as a guide to select the minimum yield strength scale Sy parameter which corresponds to any desired R(t/s,S) index.
Moreover, from the minimum applied stress σ2 value and the Weibull stress and Weibull
strength scale parameters, the minimum yield strength value which we must select from the material
engineering handbook, in order for the designed product to meet the desired reliability, is given as
Symin = σ2 ηS / ηs,    Symax = ηs ηS / σ2        (4.9)
For example, if the minimal applied stress is σ2 = 45.66 MPa, the Weibull stress parameter is ηs = 103.44 MPa and the Weibull strength parameter is Sy = 800 MPa, then from Equation (4.9) the minimum material's strength value to be selected from the material engineering handbook is Symin = 353.14 MPa. Similarly, the corresponding expected maximum value is Symax = 1812.36 MPa.
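Because Equation (4.9) is a pair of simple ratios, the handbook-selection bounds can be scripted directly; a minimal sketch using the chapter's values (σ2 = 45.66 MPa, ηs = 103.44 MPa, Sy = 800 MPa):

```python
# Equation (4.9): minimum and maximum yield-strength bounds from the
# minimal applied stress (sigma2), the Weibull stress scale (eta_s)
# and the Weibull strength scale (S_y).
def strength_bounds(sigma2, eta_s, S_y):
    S_y_min = sigma2 * S_y / eta_s   # lower bound for handbook selection
    S_y_max = eta_s * S_y / sigma2   # corresponding expected upper bound
    return S_y_min, S_y_max

S_y_min, S_y_max = strength_bounds(sigma2=45.66, eta_s=103.44, S_y=800.0)
print(round(S_y_min, 1), round(S_y_max, 1))  # ≈ 353.1 and 1812.4 MPa
```

The small differences from the printed 353.14 and 1812.36 MPa come from rounding of ηs = 103.4403 MPa.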
As a summary of this section we have that:
(1) The Weibull-q family W(βs = 1.336161, ηsq = 155.8244 MPa) allows us to validate that the Weibull stress family W(βs = 1.336161, ηs = 103.44 MPa) completely represents the quadratic form Qs elements.
(2) The expected σ1i elements given by the W(βs = 1.336161, ηs = 103.44 MPa) family can be used as the minimum Weibull strength eta value to formulate the corresponding minimum Weibull strength family W(βs = 1.336161, ηS = 800 MPa).
(3) The reliability R(t/s) indices given by the W(βs = 1.336161, ηs = 103.44 MPa) family and those given by the stress/strength function R(t/s,S) defined in Equation (4.5) are similar for higher reliability percentiles (say, higher than 0.90).
Now let us present the analysis for the quality field.
elements, the corresponding pdf parameters are determined. Here, the Weibull pdf is used to perform
the analysis, and the steps to fit the Weibull parameters from the Qq matrix elements are as follows.
Qq = | b11     b12/2 |
     | b21/2   b22   |        (5.3)
Step 6. By using the b11, b22 and b12/2 elements from step 5 in Equation (3.6), estimate the Weibull
stress quality ηs parameter.
Step 7. By using the b11 and b22 elements from step 5 in Equation (3.5), determine the arithmetic
mean µ.
Step 8. By using µ from step 7 and ηs from step 6 in Equation (3.4), determine the maximum λ1 and
the minimum λ2 eigenvalues.
Step 9. Determine the desired reliability R(t) index to perform the analysis, and by using it in
Equation (2.7), determine the corresponding sample size n value.
Step 10. Following steps 1 to 3 of Section 2.1, determine the corresponding Yi elements and their mean μy and standard deviation σy.
Step 11. By using the λ1 and λ2 values from step 8, and μy from step 10 in Equation (3.7), determine
the quality Weibull β parameter.
Step 12. By using ηs from step 6 and β from step 11, form the quality Weibull stress family W(β, ηs).
Step 13. By using the β value and the Yi elements of step 10, determine the basic Weibull values
tan(θi ) = exp {Yi / β } (5.4)
Step 14. By using the basic Weibull values from step 13 and the ηs value from step 6, determine the
expected eigenvalues λ1i and λ2i as
Weibull Reliability Method for Several Fields Based Only on the Modeled Quadratic Form 249
λ1i = ηs / tan(θi)   and   λ2i = ηs * tan(θi)        (5.5)
Step 15. By using β from step 11, μy from step 10 and ηs from step 6 in Equation (2.15), determine the Weibull-q parameter ηsq. The estimated β value in step 11 and the ηsq value of this step form the Weibull-q distribution W(β, ηsq).
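Steps 5 to 15 reduce to a few matrix quantities. The sketch below reproduces the numerical application that follows (Qq = [100 40; 40 70], n = 21, so μy = −0.545624 is constant); it assumes Equation (3.6) amounts to the square root of det(Qq) and that Equation (2.15) can be read as ηsq = ηs·exp(−μy/β), both of which reproduce the chapter's values:

```python
import math

# Quadratic-form matrix from the fitted second-order polynomial (step 5)
b11, b12_half, b22 = 100.0, 40.0, 70.0

# Step 6: Weibull stress scale from the determinant of Qq (assumed Equation (3.6))
eta_s = math.sqrt(b11 * b22 - b12_half**2)              # 73.4847

# Step 7: arithmetic mean of the diagonal (Equation (3.5))
mu = (b11 + b22) / 2.0                                  # 85

# Step 8: eigenvalues of Qq (Equation (3.4))
lam1 = mu + math.sqrt(mu**2 - eta_s**2)                 # 127.72
lam2 = mu - math.sqrt(mu**2 - eta_s**2)                 # 42.28

# Step 11: Weibull shape parameter (Equation (3.7)); mu_y is constant for n = 21
mu_y = -0.545624
beta = -4.0 * mu_y / (0.9947 * math.log(lam1 / lam2))   # ≈ 1.9847

# Step 15: Weibull-q scale (assumed reading of Equation (2.15))
eta_sq = eta_s * math.exp(-mu_y / beta)                 # ≈ 96.74
```

With Y2 = −2.4917 (the second Yi element for n = 21), ηs/exp(Y2/β) ≈ 257.9, the strength that the section below associates with R(t) = 0.9206.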
Now let us present the numerical application.
Step 2. By using in Minitab the signal-to-noise ratio nominal-the-best, given by
S/N = 10 log10( μ̂² / σ̂² )        (5.6)
the signal-to-noise response table and the mean response table are
Table 4: S/N ratio nominal-the-best.
Level    A       B       C       D       E       F       G
1        34.84   34.57   32.95   35.99   34.59   32.85   34.31
2        32.70   32.96   34.59   31.55   32.95   34.69   33.23
Delta    2.14    1.61    1.63    4.44    1.64    1.84    1.08
Rank     2       6       5       1       4       3       7
From Table 4 and Table 5, the factor levels which are closer to the weather strip requirement
of 350 ± 35 mm are: Setting 1, (A1 B1 C2 D1 E1 F1 G2). And Setting 2, (A1 B1 C1 D1 E1 F1 G2).
Therefore, by using the Taguchi polynomial model given by
T = Σ(i=1..k) μ̂i − (k − 1)μ        (5.7)
where μ̂i is the mean of the corresponding factor's levels and μ is the overall mean, the predicted mean and standard deviation of Setting 1 (A1 B1 C2 D1 E1 F1 G2) are µ = 353.415 and σ = 4.77323 mm, respectively; for Setting 2 (A1 B1 C1 D1 E1 F1 G2), they are µ = 349.135 and σ = 6.37765 mm. Thus, because for Setting 2 µ = 349.135 is closer to the nominal value of 350, and since from Equation (5.1) and Equation (5.2) its corresponding capability indices are cp = 1.83 and cpk = 1.98, which are close to six sigma performance (cp = 2, cpk = 1.67), Setting 2 (A1 B1 C1 D1 E1 F1 G2) is implemented.
Step 3. Suppose we found that two environmental noise factors (Z1 and Z2) affect the selected
Setting 2 process output.
Step 4. The central composite design and the corresponding experimented data for the environmental
factors are given in Table 6. By using Minitab, the Anova analysis is given in Table 7. The fitted
second order polynomial model is Dim = 349.20 + 15 Z1 + 10 Z2 + 100Z1 * Z1 + 70Z2 * Z2 + 80Z1 * Z2.
Step 5. From the fitted polynomial, the quadratic Qq matrix is Qq = [100 40; 40 70].
Step 6. From the determinant of Qq, the Weibull stress parameter is ηs = 73.4847 mm.
Step 7. From the Qq elements, the arithmetic mean is µ = 85 mm.
Step 8. From Equation (3.4), the eigenvalues of Qq are λ1 = 127.72 mm and λ2 = 42.28 mm.
Step 9. Suppose the desired reliability index is R(t) = 0.9535. Thus, from Equation (2.7), n = 21.
Step 10. The Yi elements, their mean μy and standard deviation σy are given in Table 8. Here, remember that, for n = 21, μy = –0.545624 and σy = 1.175117 are both constant.
Step 11. From Equation (3.7), the Weibull shape parameter is β = 1.984692.
Step 12. From steps 6 and 11, the Weibull stress family is W(β = 1.984692, ηs = 73.4847 mm).
Step 13. The basic Weibull elements are given in Table 8.
Step 14. The expected eigenvalues λ1i and λ2i elements are given in Table 8.
Step 15. From Equation (2.10), ηsq = 96.7367 mm. Therefore, the Weibull-q distribution is W(β =
1.984692, ηsq = 96.7367 mm).
On the other hand, notice that, as in Table 2, from Table 8 we have that a product with strength
of 257.89 presents a reliability of R(t) = 0.9206. If the product has a strength of 408.28, then it will
present a reliability of R(t) = 0.9673. Finally, it is important to mention that for several control factor
settings, the Weibull parameters can be directly determined from the Taguchi analysis, as in [17].
Now, let us present the principal components field analysis.
Qc = | σ1²    σ1σ2 |
     | σ2σ1   σ2²  |        (6.1)
On the other hand, in the case of the normal multivariate Hotelling T² chart, and in the case of the non-normal R-chart, all the analyzed output variables (Y1, Y2, …, Yk) must be correlated with
each other. Hence, in the multivariate control field, the Qc matrix always exists. Thus, the decision-
making process has to be performed based on the eigenvalues of the Qc matrix. However, first it
is important to mention that, in the multivariate control process field, the Qc matrix is determined
in such a way that it represents the process and customer requirements. Also, it is determined in
the phase 1 of the multivariate control process [7]. In practice, this phase 1 is performed by using
only conformant products. Therefore, the Qc matrix always represents the allowed variance and
covariance expected behavior. For details of phase 1, see [7] Section 2. However, because the
output process’ variables are random, the Weibull distribution is used here to determine the random
behavior of the eigenvalues of the Qc matrix. Now, let us give the steps to determine the Weibull
stress and the Weibull-q families from the Qc matrix.
Step 9. By using ηs from step 5 and βc from step 8, form the principal components Weibull stress
family W(βc, ηs).
Step 10. By using βc the parameter of step 8, and the Yi elements of step 7, determine the logarithm
of the basic Weibull values as
ln[tan(θi)] = Yi / βc        (6.5)
Step 11. From the logarithm basic Weibull values of step 10, determine the corresponding basic
Weibull values as
tan(θi ) = exp {Yi / β c } (6.6)
Step 12. By using the basic Weibull values of step 11 and the Weibull stress ηs parameter from step
5, determine the expected pair of eigenvalues λmax and λmin as
λmax = ηs / tan(θi)   and   λmin = ηs * tan(θi)        (6.7)
Step 13. By using ηs from step 5 and βc from step 8 in Equation (2.15), determine the ηsq parameter. Then, form the corresponding Weibull-q distribution W(βc, ηsq).
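For any eigenvalue pair, the steps above amount to a geometric-mean scale and the basic Weibull transform. A sketch assuming Equation (6.3) takes ηs as √(λmax·λmin), which reproduces both ηs = 261.1903 for (λ1, λ3) and ηs = 312.9956 for (λ1, λ2) in the application below:

```python
import math

lam_max, lam_min = 342.3, 199.3
beta_c = 4.128837  # shape parameter given by Equation (6.4) in the application

# Step 5 (assumed Equation (6.3)): scale as the geometric mean of the pair
eta_s = math.sqrt(lam_max * lam_min)        # ≈ 261.19

# Steps 10-12: basic Weibull value and expected eigenvalue pair for one Yi
Y1 = -3.403                                  # first Yi element of Table 10
tan_theta = math.exp(Y1 / beta_c)            # Equation (6.6): ≈ 0.4386
lam_max_1 = eta_s / tan_theta                # Equation (6.7): ≈ 595.6
lam_min_1 = eta_s * tan_theta                # Equation (6.7): ≈ 114.5
```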
Now let us present the application.
Step 4. From Matlab, the eigenvalues of Qc are λ1 = 342.3, λ2 = 286.2 and λ3 = 199.3.
Step 5. By using λ1 = 342.3 and λ3 = 199.3 in Equation (6.3), the Weibull scale parameter is
ηs = 261.1903.
Step 6. The desired reliability for the analysis is R(t) = 0.9535, hence, from Equation (2.7), n = 21.
Step 7. The Yi, μy and σy values are given in Table 10.
Step 8. From Equation (6.4), the Weibull shape parameter is βc = 4.128837.
Step 9. The principal component Weibull stress family is W(βc = 4.128837, ηs = 261.1903).
Step 10. The logarithm of the basic tangent Weibull values is given in Table 10.
Step 11. The basic Weibull values are given in Table 10.
Step 12. The expected pair of λmax and λmin eigenvalues are given in Table 10.
Step 13. From Equation (2.15), the Weibull time scale parameter is ηsq = 298.091. Therefore, the
Weibull-q stress distribution is W(βc = 4.128837, ηsq = 298.091).
As a summary of this section, we have that the random behavior of the λmax and λmin eigenvalues can be determined by using the Weibull distribution. Also, notice that the above Weibull analysis can be performed for any desired pair of eigenvalues of Table 10, or for any desired pair of
eigenvalues of the analyzed Qc matrix. For example, following the steps above, the Weibull stress
parameters to λ1 = 342.3 and λ2 = 286.2 are W(βc = 12.476153, ηs = 312.9956). And for λ2 = 286.2
and λ3 = 199.3, they are W(βc = 6.171085, ηs = 238.8298). However, notice that, by using λmax and λmin, we determine the maximum expected dispersion, as can be seen by observing that βc = 4.128837 < βc = 6.171085 < βc = 12.476153. Finally, notice from Table 10 that the higher eigenvalue of 595.6 has a cumulated probability of F(t) = 1 – 0.9673 = 0.0327 of being observed. Thus, the eigenvalues of the analyzed process should be monitored as in [21].
Now let us present the Vibration field analysis.
No Y1 Y2 Y3 No Y1 Y2 Y3 No Y1 Y2 Y3 No Y1 Y2 Y3
1 43.8640 10.0494 33.1157 26 32.4801 17.1588 46.6231 51 43.7319 3.6033 21.1823 76 67.3737 66.0977 31.0451
2 26.2650 27.9573 61.9268 27 36.3008 19.6027 7.1347 52 40.9684 54.8082 21.5374 77 17.0418 16.4991 51.4294
3 10.6819 40.9290 52.3804 28 55.3878 5.7238 69.6335 53 32.7180 9.7179 41.4377 78 22.0765 48.3536 29.1045
4 21.2870 16.6687 8.1463 29 38.4896 37.4908 9.7581 54 33.7553 10.3044 34.2749 79 41.2681 16.7305 14.4449
5 19.8809 33.2707 30.6555 30 26.2484 43.7533 35.3334 55 25.4028 10.9298 23.6971 80 24.1916 2.4764 71.3878
6 14.9761 17.7642 38.6981 31 46.0915 32.5348 53.2886 56 52.6536 53.4061 13.7265 81 43.1579 29.8683 47.5383
7 11.5074 8.4100 24.7898 32 55.9161 40.9254 49.6362 57 17.0959 70.3467 14.8262 82 9.7300 19.2400 5.5050
8 42.7372 55.3353 21.3860 33 24.7075 25.8449 15.4058 58 24.2714 35.3896 47.1922 83 5.0324 2.6088 47.9323
9 13.9595 73.1556 76.0348 34 27.7981 42.7312 71.3045 59 12.9169 17.8995 30.6870 84 21.4039 2.1993 48.3945
10 21.7754 51.9827 27.6237 35 17.3796 12.2502 18.3954 60 18.9455 28.2305 34.9089 85 36.8481 19.6955 15.8172
11 20.0170 37.6447 74.3218 36 9.8233 32.1899 36.2559 61 49.5829 52.7012 24.4299 86 24.9993 29.8108 65.8987
12 30.6683 18.1314 15.8034 37 17.8899 5.5475 41.3209 62 30.6876 15.0500 30.4575 87 8.1575 35.3204 19.9059
13 1.9585 33.2475 32.3899 38 29.0605 21.1432 8.7047 63 5.6582 16.3089 17.6452 88 29.0943 20.4490 39.4786
14 34.5505 18.4945 59.5922 39 23.6911 61.2302 25.8783 64 27.2570 63.1206 23.3912 89 29.2487 6.4393 43.5499
15 19.3629 24.3424 65.7042 40 42.1966 28.0071 26.0839 65 23.9339 20.1540 30.9628 90 50.7453 43.8424 7.5473
16 17.9148 8.6084 24.1414 41 36.4545 36.2384 33.1884 66 25.0717 11.6396 43.1879 91 35.9559 31.6888 76.9695
17 14.3565 26.5591 13.1767 42 27.4647 25.2031 6.5683 67 26.3030 10.1967 27.9118 92 21.4835 30.4616 43.6526
18 30.8967 20.2957 12.2357 43 11.2996 21.2611 31.2990 68 33.1910 8.2672 31.3328 93 45.1344 30.2467 38.5232
19 43.0528 29.1823 14.7479 44 68.6678 12.3167 29.7939 69 24.2425 37.4854 13.0109 94 31.0069 23.1948 45.0889
20 32.8374 7.4563 32.3237 45 46.4753 15.2157 44.6967 70 35.0409 32.6236 10.7002 95 3.6266 30.9451 61.7411
21 12.8864 57.8727 5.7250 46 17.0357 22.3405 55.3367 71 46.4863 31.6622 43.4032 96 29.8396 22.3479 65.3039
22 68.6678 57.8671 12.7594 47 30.7515 25.3002 41.9358 72 42.1082 23.1356 13.8176 97 4.3065 17.5413 24.7333
23 25.1271 2.8157 28.7178 48 5.1723 19.2992 36.3460 73 50.9111 34.8608 46.2676 98 15.9923 6.9033 9.7309
24 24.4596 20.2257 10.9644 49 27.6552 36.1258 43.3015 74 42.9994 22.2937 28.2687 99 10.4957 28.7806 33.2900
25 4.8921 16.2747 14.3619 50 11.1145 48.9268 43.8018 75 25.1423 16.5281 11.4357 100 41.5974 25.9443 24.0576
Table 10: Weibull analysis for the principal components field.
Equations        (2.7)    (2.9)    (6.5)    (6.6)    (6.7)    (4.8)
N    Yi    ln(tan(θi))    tan(θi)    λmax    λmin    R(ti)    N    Yi    ln(tan(θi))    tan(θi)    λmax    λmin    R(ti)
1 -3.403 -0.8243 0.4385 595.60 114.54 0.9673 11 -0.366 -0.0887 0.9150 285.43 239.00 0.5000
2 -2.491 -0.6034 0.5469 477.57 142.84 0.9206 12 -0.234 -0.0567 0.9448 276.42 246.79 0.4533
3 -2.003 -0.4852 0.6155 424.31 160.77 0.8738 13 -0.105 -0.0255 0.9748 267.93 254.61 0.4065
4 -1.661 -0.4024 0.6686 390.60 174.65 0.8271 14 0.021 0.0053 1.0053 259.80 262.58 0.3598
5 -1.394 -0.3377 0.7133 366.12 186.33 0.7804 15 0.149 0.0362 1.0368 251.90 270.82 0.3131
Based on these assumptions, the steps to determine the corresponding vibration Weibull stress and
Weibull testing time families are as follows.
Step 6. Take the total cumulated energy as the maximum eigenvalue, given by
λmax = Σ(i=1..k) Ai        (7.2)
where Ai represents the area of the i-th row of the testing's profile, given as
Ai = [10 log(2) APSDi / (10 log(2) + m)] * [ fi − fi−1 (fi−1 / fi)^(m/(10 log(2))) ]        (7.3)
where APSDi is the applied energy and fi is the frequency of the i-th row of the used testing's profile, fi−1 is the frequency of the (i−1)-th row of the testing's profile, and m is the slope given as
m = dB/octaves (7.4)
where
dB = 10 log( APSDi / APSDi−1 ) (7.5)
octaves = log( fi / fi−1 ) / log(2) (7.6)
Step 7. By using the μy value from step 4, and the addressed λmin and λmax values from steps 5 and 6,
determine the corresponding Weibull vibration βv parameter as
βv = −4 μY / (0.9947 * ln(λmax / λmin))        (7.7)
Step 8. By using the testing’s time of step 2, the R(t) index from step 3, and the βv value from step 7,
determine the corresponding Weibull-q scale parameter as
ηtq = ti / exp[ ln(−ln(R(t))) / βv ]        (7.8)
Note 8. The Weibull testing time family is W(βv, ηtq).
Step 9. By using the square root of the Ai value of step 6, the reliability index from step 3, and the βv
value from step 7, determine the corresponding Weibull stress vibration scale parameter as
ηs = √Ai / exp[ ln(−ln(R(t))) / βv ]        (7.9)
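Equations (7.3) to (7.9) can be checked numerically. The sketch below uses a hypothetical flat PSD segment (the frequencies and APSD levels are illustrative, not from the chapter's Table 12); for a flat segment (m = 0), Equation (7.3) reduces exactly to APSD·(fi − fi−1), which the code exploits as a sanity check:

```python
import math

def segment_area(apsd_prev, apsd_i, f_prev, f_i):
    """Area under one PSD segment, Equations (7.3)-(7.6)."""
    dB = 10.0 * math.log10(apsd_i / apsd_prev)          # Equation (7.5)
    octaves = math.log(f_i / f_prev) / math.log(2.0)    # Equation (7.6)
    m = dB / octaves                                     # Equation (7.4)
    c = 10.0 * math.log10(2.0)                           # the 10 log(2) constant
    return (c * apsd_i / (c + m)) * (f_i - f_prev * (f_prev / f_i) ** (m / c))

# Hypothetical flat segment: 1 g^2/Hz from 10 Hz to 20 Hz -> area = 10
A_flat = segment_area(1.0, 1.0, 10.0, 20.0)

# Step 9 (Equation (7.9)): Weibull stress scale from a segment area,
# a desired reliability R(t) and the vibration shape parameter beta_v
def eta_stress(A_i, R, beta_v):
    return math.sqrt(A_i) / math.exp(math.log(-math.log(R)) / beta_v)
```

For R(t) = exp(−1) the denominator of Equation (7.9) is 1, so ηs equals √Ai, which gives a quick consistency check on the implementation.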
ηvt^βv = Tani^βv * ti        (7.15)
Then the reliability function defined in Equation (4.8) in terms of Equation (7.14) is also given as
R(t) = exp[ −( t^βv / Ta^βv ) ]        (7.16)
On the other hand, because the relation between the applied vibrations Si and the testing times
ti for any two rows of Table 12 always holds, from the relation
Saccel / Snorm = tnorm / taccel        (7.17)
we can use the Si column of Table 12 as the accelerated vibration level for any desired sample size
value. For example, we can demonstrate R(t) = 0.97 by testing 6 parts for 24 hrs each at a constant vibration level of 5.447 Grms (we must test each part in the X, Y and Z axes for 8 hrs each).
The accelerated test parameters and its corresponding testing’s profile are given in Table 13 and in
Figure 6.
For deeper Weibull/vibration analysis see [24]. Now let us present the relations between the
Weibull parameters and the cycling formulas which can be used to perform the corresponding
fatigue analysis.
8.1 Steps to determine the Weibull stress and time fatigue analysis
Step 1. From the applied normal σx and σy and shear τxy stress values, determine the arithmetic mean stress as
μ = (σx + σy)/2        (8.1)
Step 2. By using the applied normal σx and σy and shear τxy stresses from step 1, determine the fatigue Weibull stress parameter as
ηf = √(σx σy − τxy²)        (8.2)
Step 3. By using the mean µ and the ηf values from steps 1 and 2, determine the maximum (σ1) and the minimum (σ2) principal stress values as
σ1, σ2 = µ ± √(µ² − ηf²)        (8.3)
Step 4. By using the principal stress values from step 3, determine the alternating stress as
Sa = (σ1 – σ2)/2 (8.4)
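Steps 1 to 4 are the Mohr's-circle relations. A sketch with hypothetical applied stresses (σx = 200, σy = 80, τxy = 72.8 MPa, chosen so that μ = 140 and ηf ≈ 103.44 MPa, matching the mechanical data of Section 4.2):

```python
import math

# Hypothetical applied stresses in MPa (not from the chapter's data)
sigma_x, sigma_y, tau_xy = 200.0, 80.0, 72.8

mu = (sigma_x + sigma_y) / 2.0                     # Equation (8.1): 140
eta_f = math.sqrt(sigma_x * sigma_y - tau_xy**2)   # Equation (8.2): ≈ 103.44
root = math.sqrt(mu**2 - eta_f**2)
sigma_1 = mu + root                                # Equation (8.3): ≈ 234.34
sigma_2 = mu - root                                # Equation (8.3): ≈ 45.66
S_a = (sigma_1 - sigma_2) / 2.0                    # Equation (8.4): alternating stress
```

Note that σ1 = 234.34 and σ2 = 45.66 MPa are exactly the extreme expected stresses appearing in Table 14 below.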
Step 5. Following steps 1 to 3 of Section 2.1, determine the Yi elements and its mean μy and standard
deviation σy.
Step 6. By using the μy value from step 5 and the principal stress values from step 3 in Equation (3.7)
determine the corresponding Weibull shape parameter β.
Note 10: The ηf parameter from step 2 and the β parameter from this step are the Weibull stress family W(β, ηf).
Step 7. By using μy from step 5 and the β value from step 6 in Equation (2.10), determine the Weibull
time scale parameter ηtf. Thus, the Weibull time family is W(β, ηtf).
Step 8. By using the β parameter from step 6 and the Yi elements from step 5, determine the logarithm
of the basic Weibull elements as
ln[tan(θi)] = Yi / β        (8.5)
Step 9. From the logarithm of the basic Weibull values of step 8, determine the corresponding basic
Weibull values as
tan(θi ) = exp {Yi / β } (8.6)
Step 10. By using the basic Weibull values of step 9 and ηf the value from step 2, determine the
expected maximum and minimum stresses values as
S1i = ηf / tan(θi),   S2i = ηf * tan(θi)        (8.7)
Step 11. Determine the basic Weibull value which corresponds to the principal stress values of step 3 as
tan(θλ1,λ2) = √(λ2 / λ1)        (8.8)
Step 12. By using the expected stress values of step 10, determine the corresponding Weibull angle
as
θi = tan⁻¹( √(S2i / S1i) )        (8.9)
Step 13. Determine the reliability index which corresponds to the principal stress values as
R(t) = exp{ −(√(λ2 / λ1))^βs }        (8.10)
Now let us present the application.
Table 14: Weibull fatigue analysis for mechanical data of Section 4.2.
Equations        (2.7)    (2.9)    (8.5)    (8.6)    (8.7)    (8.9)
N    Yi    ln(tan(θi))    tan(θi)    σ1i    σ2i    Ang(θi)    R(ti)
1 -3.4035 -2.5558 0.0776 1332.5045 8.0300 4.439 0.9673
2 -2.4917 -1.8711 0.1540 671.8874 15.9252 8.752 0.9206
3 -2.0035 -1.5045 0.2221 465.6722 22.9775 12.524 0.8738
4 -1.6616 -1.2478 0.2871 360.2496 29.7015 16.021 0.8271
5 -1.3944 -1.0471 0.3509 294.7447 36.3025 19.339 0.7804
6 -1.1721 -0.8801 0.4147 249.4209 42.8992 22.525 0.7336
-1.0890 -0.8178 0.4414 234.3400 45.6600 23.817 0.7142
7 -0.9794 -0.7355 0.4793 215.8224 49.5776 25.608 0.6869
8 -0.8074 -0.6063 0.5453 189.6810 56.4103 28.605 0.6402
9 -0.6505 -0.4885 0.6136 168.5917 63.4668 31.531 0.5935
10 -0.5045 -0.3789 0.6846 151.0868 70.8200 34.397 0.5467
11 -0.3665 -0.2752 0.7594 136.2141 78.5526 37.213 0.5000
12 -0.2341 -0.1758 0.8388 123.3234 86.7635 39.989 0.4533
13 -0.1053 -0.0791 0.9240 111.9509 95.5773 42.737 0.4065
0.0000 0.0000 1.0000 103.4406 103.4406 45.000 0.3679
14 0.0219 0.0165 1.0166 101.7512 105.1581 45.472 0.3598
15 0.1495 0.1123 1.1188 92.4541 115.7327 48.210 0.3131
16 0.2798 0.2101 1.2339 83.8350 127.6312 50.976 0.2664
17 0.4160 0.3124 1.3667 75.6891 141.3673 53.806 0.2196
18 0.5625 0.4224 1.5256 67.8020 157.8119 56.756 0.1729
19 0.7276 0.5464 1.7270 59.8955 178.6440 59.928 0.1262
20 0.9293 0.6979 2.0094 51.4772 207.8582 63.543 0.0794
1.0890 0.8178 2.2655 45.66 234.34 66.183 0.0512
21 1.2297 0.9234 2.5178 41.0830 260.4473 68.339 0.0327
µy = -0.545624 σy = 1.175117
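The columns of Table 14 follow from the construction in Section 2.1. A sketch assuming Mischke's median-rank rule Ri = 1 − (i − 0.3)/(n + 0.4) [10] for the reliability column, with n = 21, β = 1.336161 and ηf = 103.4403 MPa (the small differences from the printed table come from rounding of β):

```python
import math

n, beta, eta_f = 21, 1.336161, 103.4403

rows = []
for i in range(1, n + 1):
    R_i = 1.0 - (i - 0.3) / (n + 0.4)      # assumed median-rank reliability
    Y_i = math.log(-math.log(R_i))          # linearized Weibull element
    tan_i = math.exp(Y_i / beta)            # basic Weibull value, Equation (8.6)
    s1 = eta_f / tan_i                      # expected maximum stress, Equation (8.7)
    s2 = eta_f * tan_i                      # expected minimum stress, Equation (8.7)
    ang = math.degrees(math.atan(tan_i))    # Weibull angle column of Table 14
    rows.append((i, Y_i, tan_i, s1, s2, ang, R_i))
```

For example, the first row reproduces Yi ≈ −3.4035 and R(ti) ≈ 0.9673, and row 9 reproduces Yi ≈ −0.6505 with σ1i within a fraction of a percent of the printed 168.59 MPa.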
Symin = σ2 ηS / ηf,    Symax = ηf ηS / σ2        (8.12)
For example, suppose we want to design an element with a minimum reliability of R(t) = 0.9673; thus, from Table 14, the corresponding strength value is ηS = 1332.5045 MPa, and since Table 14 was constructed with ηf = 103.44 MPa, then, by using these values with the minimal stress of σ2 = 45.66 MPa in Equation (8.12), the minimum material's strength to be selected from an engineering handbook is Symin = 588.1843 MPa.
9. Conclusions
(1) When a quadratic form represents an optimum (maximum or minimum), its random behavior
can be modeled by using the two parameter Weibull distribution.
(2) In the proposed Weibull analysis, the main problem consists of determining the maximum
and the minimum stress (λ1 and λ2) values that generate the failure. However, once they are
determined, both the Weibull stress and the Weibull time families are determined. Therefore,
the stress values used in Equation (3.7) must be those stresses values that generate the failure.
Here, notice that the constant 0.9947 value used in Equation (3.7) was determined only for the
given application. The general method to determine it for any λ1 and λ2 values is given in [9]
Section 4.1.
(3) The columns σ1i and σ2i are the maximum and minimum expected stress values which generate
the failure, and thus, σ1i represents the minimum Weibull strength value that the product must
present to withstand the applied stress. Therefore, column σ1i can be used as a guide to select
the minimal strength material as it is given in Equations (4.9) and (8.12).
(4) From Table 12, the columns R(ti), ni, ti, and Si can be used to design any desired accelerated testing scenario. For example, suppose we want to demonstrate R(ti) = 0.9490; then from Table 12 we have that, by fixing the testing time t = 24 hrs, we can test n = 19.143 parts (testing 18 parts for 24 hrs each and one part for 1.143*24 hrs) at a constant stress of Si = 3.425 Grms.
(5) In any Weibull analysis, the n value addressed in Equation (2.7) is the key variable in the analysis. This is because, for the used β value, it always lets us determine the basic Weibull elements as tan(θi) = 1/ni^(1/β). This fact can be seen by combining Equation (2.7) and Equation (4.8), or directly from Equations (43) and (53) in [9].
(6) From the applications, we have that although they appear to be very different, because all
of them use a quadratic form in their analysis, they can all be analyzed by using the Weibull
distribution. Generalizing, we believe that the Weibull distribution can always be used to model
the random behavior of any quadratic form when the cumulated damage process can be modeled
by using an additive damage model, such as that given in [27]. When the damage is not additive,
then the log-normal distribution, which is based on the Brownian motion [28], could be used.
(7) Finally, it is important to mention that by using the maximum and minimum applied stresses,
the given theory could be used in the contact ball bearing analysis [29] to determine the
corresponding Weibull shape parameter and to determine the corresponding Weibull stress
scale parameter from the equivalent stress and/or from the corresponding dynamic load as it is
given in [30]. Similarly, the Weibull time scale parameter can be determined from the desired
L10 life proposed by [31].
References
[1] Weibull, W. 1939. A statistical theory of the strength of materials. Proceedings of the Royal Swedish Institute of Engineering Research 151: 45.
[2] Rinne, H. 2009. The Weibull Distribution: A Handbook. CRC Press. ISBN-13: 978-1-4200-8743-7; http://dx.doi.org/10.1201/9781420087444.
[3] Montgomery, D.C. 2004. Design and Analysis of Experiments. Limusa Wiley, New York, USA. ISBN. 968-18-6156-6.
[4] Box, G.E.P. and Draper, N.R. 1987. Empirical Model-Building and Response Surfaces. Wiley, New York, USA. ISBN-13: 978-0471810339.
[5] Schmid, S.R., Hamrock, B.J. and Jacobson, B.O. 2014. Fundamentals of Machine Elements, SI Version, Third Edition. Taylor and Francis Group, Boca Raton, FL. ISBN-13: 978-1-4822-4750-3 (eBook - PDF).
[6] Yang, K. and El-Haik, B. 2003. Design for Six Sigma: A Roadmap for Product Development. McGraw-Hill. ISBN: 0-07-141208-5.
[7] Piña-Monarrez, M.R. 2013. Practical Decomposition Method for T^2 Hotelling Chart. International Journal of
Industrial Engineering Theory Applications and Practice. 20(5-6): 401–411.
[8] Anton, H., Bivens, I. and Davis, S. 2005. Calculus: Early Transcendentals Combined, 8th Edition. Wiley, Somerset, New Jersey. ISBN-13: 978-0471472445.
[9] Piña-Monarrez, M.R. 2017. Weibull Stress Distribution for Static Mechanical Stress and its Stress/strength Analysis.
Qual Reliab Engng Int. 2018; 34: 229–244. DOI:10.1002/qre.2251.
[10] Mischke, C.R. 1979. A distribution-independent plotting rule for ordered failures. Journal of Mechanical Design; 104:
593–597. DOI: 10.1115/1.3256391.
[11] Piña-Monarrez, M.R., Ramos-López, M.L., Alvarado-Iniesta, A, Molina-Arredondo, R.D. 2016. Robust sample size for
Weibull demonstration test plan. DYNA.; 83: 52–57.
[12] Piña-Monarrez, M.R. 2016. Conditional Weibull control charts using multiple linear regression. Qual Reliab Eng Int.;
33: 785–791. https://doi.org/10.1002/qre.2056.
[13] Taguchi, G., Chowdhury, S. and Wu, Y. 2005. Taguchi's Quality Engineering Handbook. John Wiley and Sons, ASI Consulting Group, LLC, Livonia, Michigan. ISBN: 0-471-41334-8.
[14] Kececioglu, D.B. 2003. Robust Engineering Design‐By‐Reliability with Emphasis on Mechanical Components and
Structural Reliability. Pennsylvania: DEStech Publications Inc. ISBN:1-932078-07-X.
[15] Budynas, N. 2006. Shigley’s Mechanical Engineering Design. 8th ed. New York: McGraw-Hill.
[16] Piña-Monarrez, M.R., Ortiz-Yañez, J.F., Rodríguez-Borbón, M.I. 2015. Non-normal capability indices for the Weibull
and lognormal distributions. Qual Reliab Eng Int.; 32: 1321–1329. https://doi.org/10.1002/qre.1832.
[17] Piña-Monarrez, M.R. and Ortiz-Yañez, J.F. 2015. Weibull and Lognormal Taguchi Analysis Using Multiple Linear
Regression. Reliab Eng. Syst. Saf; 144: 244–53. doi:10.1016/j.ress.2015.08.004.
[18] Peña, D. 2002. Análisis de Datos Multivariantes, Mc Graw Hill. ISBN: 84-481-3610-1.
[19] Piña-Monarrez, M.R. 2018. Generalization of the Hotelling´s T^2 Decomposition Method to the R-Chart. International
Journal of Industrial Engineering, 25(2): 200–214.
[20] Liu, R.Y. 1995. Control Charts for Multivariate Processes. Journal of the American Statistical Association. 90(432):
1380–1387.
[21] Piña-Monarrez, M.R. 2019. Probabilistic Response Surface Analysis by using the Weibull Distribution. Qual Reliab
Eng Int. 2019; in Press.
[22] Piña-Monarrez, M.R. 2019. Weibull Analysis for Random Vibration Testing. Qual Reliab Eng Int. 2019; in Press.
[23] SS-ISO 16750-3:2013. Road vehicles – Environmental conditions and testing for electrical and electronic equipment – Part 3: Mechanical loads (ISO 16750-3:2012, IDT). https://www.sis.se/api/document/preview/88929/.
[24] Edson, L. 2008. The GMW3172 Users Guide. The Electrical Validation Engineers Handbook Series: Electrical Component Testing. https://ab-div-bdi-bl-blm.web.cern.ch/ab-div-bdi-bl-blm/RAMS/Handbook_testing.pdf.
[25] Castillo, E., Fernández-Canteli, A., Koller, R., Ruiz-Ripoll, M.L. and García, A. 2009. A statistical fatigue model covering the tension and compression Wöhler fields. Probabilistic Engineering Mechanics 24: 199–209. doi:10.1016/j.probengmech.2008.06.003.
[26] Lee, Y.L., Pan, J., Hathaway, R. and Barkey, M. 2005. Fatigue Testing and Analysis: Theory and Practice. Elsevier Butterworth-Heinemann, New York. ISBN: 0-7506-7719-8.
[27] Nakagawa, T. 2007. Shock and Damage Models in Reliability Theory, vol. 54. Springer-Verlag: London.
DOI:10.1007/978-1-84628-442-7.
[28] Marathe, R.R. and Ryan, S.M. 2005. On the validity of the geometric Brownian motion assumption. The Engineering
Economist, 50: 159–192. doi:10.1080/00137910590949904.
[29] Zaretsky, E.V. 2013. Rolling Bearing Life Prediction, Theory and Application (NASA/TP—2013-215305). National Aeronautics and Space Administration, Glenn Research Center, Cleveland, Ohio 44135. Available electronically at http://www.sti.nasa.gov.
[30] Palmgren, A. 1924. The Service Life of Ball Bearings. Z. Ver. Deut. Ingr. (NASA TT F–13460), 68(14): 339–341.
[31] Lundberg, G. and Palmgren, A. 1947. Dynamic Capacity of Rolling Bearings. Acta Polytech. Mech. Eng. Ser., 1(3).
Index
A
Algorithms for Warehouses 167–169
Ambient Intelligence 47, 48
applied artificial intelligence 161
Aquaculture industry 186
B
Big data applied to the Automotive Industry 232
breast cancer 1–3
C
Caloric Burning 10–12, 15, 16, 18
Capability indices 250
Children color blindness 81
classification 216, 218, 221, 222, 226, 227
Clinical Dashboard 47, 48
Cognitive Architecture 89, 90, 95, 96, 98–104
Cognitive Innovation Model for Smart Cities 89
Color blindness 75–81, 87
conservation 28–33
crowd simulation 147, 154
D
data analytics 48, 108, 114
data indexing 112
deep learning 1–4
Diabetes 47–52, 66, 69, 73, 74
Diabetes Complications 47
distribution 23, 26, 27, 29–31, 33
E
E-governance 162
Electronic colorblindness 75
Evaluation of business leaders 207–209
I
ICT for industrial 203
Industrial IoT 229
Industry 4.0 203, 235, 236
K
KMoS-RE 90, 96–101, 105
Knowledge Management 90, 96, 98, 105
Koi Fish 187, 198, 199, 201
M
Mechanical design 242
menge simulation 138, 139, 144, 146, 147, 149, 151, 154
Mental workload 34, 39
Metabolic Equivalent 11, 12, 15
micro-enterprises 25, 26, 31
Mobile APP 47, 49, 51, 52, 57, 59, 66, 69, 73, 74
mobile cooler 28–30
Monitoring 47–49, 69, 74
multi agent tool 117
Multicriteria Analysis 47, 49
myoelectric signals 216, 227
O
Order Picking Heuristics 176–180
P
pattern recognition and deep learning
perishable foods 22–24, 31, 32
Principal components 251–253, 255
processing 216, 218, 219, 221–223, 227
Q
Quadratic form 235–237, 239, 240, 242, 247, 264
Quality improvement 249
W
Weibull fatigue analysis 262