Cloud Computing For Cities
Cloud Infrastructures,
Services, and IoT Systems
for Smart Cities
Second EAI International Conference, IISSC 2017 and CN4IoT 2017
Brindisi, Italy, April 20–21, 2017
Proceedings
Lecture Notes of the Institute
for Computer Sciences, Social Informatics
and Telecommunications Engineering 189
Editorial Board
Ozgur Akan
Middle East Technical University, Ankara, Turkey
Paolo Bellavista
University of Bologna, Bologna, Italy
Jiannong Cao
Hong Kong Polytechnic University, Hong Kong, Hong Kong
Geoffrey Coulson
Lancaster University, Lancaster, UK
Falko Dressler
University of Erlangen, Erlangen, Germany
Domenico Ferrari
Università Cattolica Piacenza, Piacenza, Italy
Mario Gerla
UCLA, Los Angeles, USA
Hisashi Kobayashi
Princeton University, Princeton, USA
Sergio Palazzo
University of Catania, Catania, Italy
Sartaj Sahni
University of Florida, Florida, USA
Xuemin Sherman Shen
University of Waterloo, Waterloo, Canada
Mircea Stan
University of Virginia, Charlottesville, USA
Jia Xiaohua
City University of Hong Kong, Kowloon, Hong Kong
Albert Y. Zomaya
University of Sydney, Sydney, Australia
More information about this series at http://www.springer.com/series/8197
Antonella Longo • Marco Zappatore
Editors

Antonella Longo
Department of Engineering for Innovation, University of Salento, Lecce, Italy

Marco Zappatore
Department of Engineering for Innovation, University of Salento, Lecce, Italy

Massimo Villari
Faculty of Engineering, University of Messina, Messina, Italy

Omer Rana
Cardiff University, Cardiff, UK

Dario Bruneo
Dipartimento di Ingegneria, Università di Messina, Messina, Italy

Rajiv Ranjan
Newcastle University, Newcastle upon Tyne, UK

Maria Fazio
DICIEAMA Department, University of Messina, Messina, Italy

Philippe Massonet
CETIC, Charleroi, Belgium
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are
believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors
give a warranty, express or implied, with respect to the material contained herein or for any errors or
omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
On behalf of the Organizing Committee, we are honored and pleased to welcome you
to the second edition of the EAI International Conference on ICT Infrastructures and
Services for Smart Cities (IISSC) held in the wonderful location of Santa Chiara
Convent in Brindisi, Italy.
The main objective of this event is twofold. First, the conference aims at disseminating recent research advancements, offering researchers the opportunity to present their novel results on the development, deployment, and use of ICT in smart cities. Second, it promotes the sharing of ideas, partnerships, and cooperation among everyone involved in shaping the smart city evolution, thus contributing to addressing technical challenges and their impact on the socio-technical smart city system.
The core mission of the conference is to address key topics on ICT infrastructure (technologies, models, frameworks) and services in cities and smart communities, in order to enhance performance and well-being, reduce costs and resource consumption, and engage more effectively and actively with citizens.
The technical program of the conference covers a broad range of hot topics, spanning five main tracks: e-health and smart living, privacy and security, smart transportation, smart industry, and infrastructures for smart cities. The program this year also included:
• A special session on challenges and opportunities in smart cities, cutting across and beyond single fields of interest, such as the socio-technical challenges related to the impact of technology and the evolution of smart cities.
• A showcase, which represents the other beating heart of the conference: a place where industrial partners, public stakeholders, and scientific communities from the pan-European area can share their experiences, projects, and developed resources. We hope to provide a good context for exchanging ideas, challenges, and needs, learning from the experiences and achievements of the participants, and creating the proper background for future collaborations.
• Two exciting keynote lectures, held jointly with CN4IoT, by Prof. Antonio Corradi and Prof. Rebecca Montanari from the University of Bologna, Italy.
During the conference, the city of Brindisi opened the Brindisi Smart Lab, a vibrant incubator of creativity and ideas for prototyping and sustaining new start-ups, which will have a positive impact on the local smart community.
The second edition of EAI IISSC attracted 23 manuscripts from all around the
world. At least two Technical Program Committee (TPC) members were assigned to
review each paper. Each submission went through a rigorous peer-review process. The
authors were then requested to consider the reviewers’ remarks in preparing the final
version of their papers. At the end of the process, 12 papers satisfying the requirements
of quality, novelty, and relevance to the conference scope were selected for inclusion in
the conference proceedings (acceptance rate: 52%). Three more papers were invited by
the TPC owing to the appropriateness of the presented topics.
We are confident that researchers can find in the proceedings possible solutions to
existing or emerging problems and, hopefully, ideas and insights for further activities in
the relevant and wide research area of smart cities.
Moreover, the best conference contribution award was assigned at the end of the
conference by a committee appointed by the TPC chairs based on paper review scores.
We would like to thank the many people who contributed to making this conference successful. First and foremost, we would like to express our gratitude to the authors of the technical papers: IISSC 2017 would not have been possible without their valuable contributions.
Special thanks go to the members of the Organizing Committee and to the members
of the Technical Program Committee for their diligent and hard work, especially to
Eng. Marco Zappatore, who deserves a special mention for his constant dedication to
the conference.
We would also like to thank the keynote and invited speakers and the showcase participants for their invaluable contribution and for sharing their vision with us. We also truly appreciated the perseverance and hard work of the local organizing secretariat (SPAM Communication): organizing a conference of this level is a task that can only be accomplished by the collaborative effort of a dedicated and highly capable team.
We are grateful for the support received from all the sponsors of the conference.
Major support for the conference was provided by Capgemini Italia and University of
Salento.
In addition, we are grateful to the Municipality and the Province of Brindisi, the
institutions, and the citizens and entrepreneurs of Apulia Region for being close to us in
promoting and being part of this initiative.
Last but not least, we would like to thank all of the participants for coming.
The Second International Conference on Cloud, Networking for IoT systems (CN4IoT)
was held in Brindisi, Italy on April 20–21, 2017, as a co-located event of the
Second EAI International Conference on ICT Infrastructures and Services for Smart
Cities.
The mission of CN4IoT 2017 was to serve and promote ongoing research activities on the uniform management and operation of software-defined infrastructures, in particular by analyzing the limits and/or advantages of exploiting existing solutions developed for cloud, networking, and IoT. IoT can significantly benefit from integration with cloud computing and network infrastructures, along with services provided both by big players (e.g., Microsoft, Google, Apple, and Amazon) and by small
and medium enterprises. Indeed, networking technologies implement both virtual and physical interconnections among cooperating entities and data centers, organizing them into a unique computing ecosystem. In such a connected ecosystem, IoT applications can establish an elastic relationship driven by performance requirements (e.g., information availability, execution time, monetary budget) and constraints (e.g., input data size, input data streaming rate, number of end users connecting to the application, output data size).
The integration of IoT, networking, and cloud computing can then foster the rise of new mash-up applications and services interacting with a multi-cloud ecosystem, where several cloud providers are interconnected through the network to deliver a universal, decentralized computing environment supporting IoT scenarios.
It was our honor to host prominent international ICT experts as keynote speakers. The conference program comprised technical papers selected through peer review by the TPC members, as well as invited talks. CN4IoT 2017 would not have been a reality without the help and dedication of our conference manager, Erika Pokorna from the European Alliance for Innovation (EAI). We would like to thank the conference committees and the reviewers for their dedicated and passionate work. None of this would have happened without the support and curiosity of the authors who submitted their papers to this second edition of CN4IoT.
IISSC 2017 Organization
Steering Committee
Imrich Chlamtac CREATE-NET and University of Trento, Italy
Dagmar Cagáňová Slovak University of Technology (STU), Slovakia
Massimo Craglia European Commission, Joint Research Centre,
Digital Earth and Reference Data Unit, Italy
Mauro Draoli University of Rome Tor Vergata, Agenzia per l’Italia
Digitale (AGID), Italy
Antonella Longo University of Salento, Italy
Massimo Villari University of Messina, Italy
Organizing Committee
General Chair
Antonella Longo University of Salento, Italy
General Co-chair
Massimo Villari University of Messina, Italy
Workshops Chair
Beniamino Di Martino University of Naples, Italy
Workshops Co-chairs
Giuseppina Cretella University of Naples, Italy
Antonio Esposito University of Naples, Italy
Publications Chair
Mario Alessandro Bochicchio University of Salento, Italy
Local Chair
Antonella Longo University of Salento, Italy
Web Chair
Marco Zappatore University of Salento, Italy
Panels Chair
Dagmar Cagáňová Slovak University of Technology (STU), Slovakia
Panels Co-chairs
Natália Horňáková Institute of Industrial Engineering and Management,
MTF, Slovakia
Viera Gáťová Slovak University of Technology (STU), Slovakia
Conference Manager
Lenka Koczová EAI, European Alliance for Innovation, Slovakia
CN4IoT 2017 Organization

Steering Committee
Steering Committee Chair
Imrich Chlamtac CREATE-NET, Italy
Organizing Committee
General Chair
Massimo Villari University of Messina, Italy
Website Chair
Antonio Celesti University of Messina, Italy
Workshops Chair
Giuseppe Di Modica University of Catania, Italy
Publications Chairs
Maria Fazio University of Messina, Italy
Philippe Massonet CETIC, Belgium
Local Chair
Antonella Longo University of Salento, Italy
Cold Chain and Shelf Life Prediction of Refrigerated Fish – From Farm
to Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Mira Trebar
IoT and Big Data: An Architecture with Data Flow and Security Issues. . . . . 243
Deepak Puthal, Rajiv Ranjan, Surya Nepal, and Jinjun Chen
IoT Data Storage in the Cloud: A Case Study in Human Biometeorology . . . 253
Brunno Vanelli, A.R. Pinto, Madalena P. da Silva, M.A.R. Dantas,
M. Fazio, A. Celesti, and M. Villari
1 Introduction
Performance monitoring is becoming an increasingly important tool for planning and assessing the efficiency and effectiveness of services and infrastructures in urban contexts. This growing attention is also witnessed by projects (e.g., CITYKeys1), standards (e.g., ISO 37120:2014, ISO/TS 37151:2015), and initiatives at the international level (e.g., the Green Digital Charter2, the European Smart City Index), which push forward the definition of shared frameworks for performance measurement at the city level. Statistical data can guide municipal administrations more effectively in the decision-making process and foster civic participation. They can also affect the capability to attract private investments, which may be stimulated by opportunities made explicit by quantitative evidence and comparisons between different municipalities. Also thanks to the rise of the Open Data culture in public administrations, statistical datasets are today more frequently available and accessible in machine-readable formats. This makes it possible to adapt to cities the methods and solutions that enterprise contexts have exploited for decades to assess the achievement of business objectives.

1 http://citykeys-project.eu/.
2 http://www.greendigitalcharter.eu/.

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 3–12, 2018.
https://doi.org/10.1007/978-3-319-67636-4_1
A recent trend in this respect is to publish statistical data according to the RDF Data Cube vocabulary3, a W3C standard for the representation of statistical datasets on the Web. This format follows the Linked Data approach and conceptually resorts to the multidimensional model [1] adopted in enterprise contexts for data warehouses: observed values (e.g., the level of CO2) are organized along a group of dimensions (e.g., time and place, as the measure is taken daily and each value refers to a specific monitoring station in the city), together with associated metadata. Publishing performance datasets according to the Linked Data approach reduces heterogeneity, as measures from different datasets may be aligned with the same definition of an indicator. However, although this is a concrete step towards easier access and interoperability among different datasets, appropriate mechanisms to evaluate and compare performances are still missing. One of the main reasons is the lack of a shared, explicit, and unambiguous way to define indicators. Indeed, no meaningful comparison of performance can be made without knowing how indicators are calculated. For example, if we were interested in comparing the ratios of delayed trips in two public transportation systems, we would need to understand how such ratios are actually computed: if the first summed up trips made by trams and buses, while the second considered only the latter, the risk would be to derive wrong conclusions and take ineffective decisions.
To address the above-mentioned issues, in this paper we propose a logic-based approach to enable the comparison of datasets published by different municipalities as Linked Open Data. The approach is based on the formal, ontological representation of indicators together with their calculation formulas. Measures are then declaratively mapped to these definitions in order to express their semantics. In this way, the ontology serves as a reference library of indicators that can be incrementally extended. Finally, a set of services, built on top of the model and exploiting reasoning functions, offers functionalities to determine whether two datasets are comparable, and to what extent. The rest of this work is organised as follows: the next section briefly presents a case study that is used throughout the paper. In Sect. 3 we discuss an ontology to formally represent statistical indicators with their calculation formulas, and we introduce the representation of statistical data according to the RDF Data Cube vocabulary. These models and languages are exploited in Sect. 4 to provide a set of services supporting the analysis and comparison of Linked datasets. Finally, in Sect. 5 we draw conclusions and outline future work.
3 https://www.w3.org/TR/vocab-data-cube/.
Comparison of City Performances 5
international level. Several cities have already started to share data about transport services with a larger audience as open data. In the following, we introduce a case study focusing on bike-sharing services provided by two municipalities, CityA and CityB. The example is a simplified version of actual datasets published by a set of US municipalities including New York4, Chattanooga5, and many others. In detail, let us suppose that each municipality provides a library of datasets, as follows:
– CityA measures the total distance (in miles) of bike rides, aggregated by user type (residents/tourists) and time; it also measures the population along the time dimension.
– CityB measures the total distance of bike rides by residents and the total distance of rides by tourists, each aggregated by time; it also measures the population along the time dimension.
Returning to the case study of Sect. 2, the data structure of the first dataset for CityA includes the following components:
Please note that the prefix “qb:” stands for the specification of the Data Cube vocabulary8, “sdmx-dimension:” points to the SDMX vocabulary for standard dimensions9, while “cityA:” is a custom namespace for describing measures, dimensions, and members of the dataset for CityA. In order to make datasets comparable, the approach we take in this work is to rely on KPIOnto as the reference vocabulary to define indicators. As such, instances of MeasureProperty as defined in Data Cube datasets have to be semantically aligned with instances of kpi:Indicator through an RDF property as follows: cityA:Distance rdfs:isDefinedBy kpi:TotalDistance. In this way, the semantics of the measure Distance, as used by CityA, is provided by the corresponding concept TotalDistance in KPIOnto.
Concerning observations, i.e., data values, we report an example of the measure Distance for CityA, for the time December 5th, 2016 (time dimension) and the user type resident:

cityA:obs001 a qb:Observation ;
    sdmx-dimension:timePeriod "2016-12-05"^^xsd:date ;
    cityA:userType cityA:resident ;
    cityA:Distance 80214 ;
    qb:dataSet cityA:dataset1 .
In this section we discuss a set of services aimed at supporting the analysis and comparison of statistical datasets. As depicted in Fig. 2, services are built on top of the data/knowledge layer, while access to datasets is performed through SPARQL queries over the corresponding endpoints. A single endpoint may serve a library of datasets belonging to the same municipality. In the first subsection, we introduce the reasoning framework, which comprises the basic logical functions for formula manipulation on which the others rely, while in Subsect. 4.2 we focus on services for dataset analysis and comparison. Further services, available in the framework and devised to support indicator management, enable the definition of new indicators and the exploration of indicator structures. For lack of space, we refer the interested reader to a previous work of ours discussing these services in detail [7].
8 https://www.w3.org/TR/vocab-data-cube/.
9 http://purl.org/linked-data/sdmx/2009/dimension.
8 C. Diamantini et al.
and tools. Indicator formulas are thus translated into Prolog facts, and a set of custom reasoning functions is defined to support the common formula manipulations exploited by the services discussed in the next subsections, among which:
Such functionalities are built upon PRESS (PRolog Equation Solving System) [37], a library of predicates formalizing algebra in Logic Programming, which is capable of manipulating formulas according to mathematical axioms. We refer interested readers to previous work specifically focused on this reasoning framework [7,10], which also includes computational analyses of the efficiency of these logic functions.
In detail, the service get all indicators first retrieves all the MeasureProperties from each library of datasets by executing the following SPARQL query against the corresponding endpoint (line 7):

SELECT ?m ?dataset
WHERE { ?dataset qb:structure ?s .
        ?s qb:component ?c .
        ?c qb:measure ?m . }

Then, for each measure m, the service gets the corresponding KPIOnto indicator (see line 9) through the query:

SELECT ?ind
WHERE { <m> rdfs:isDefinedBy ?ind . }
Finally, the service calls the logic function derive all indicators (line 5), which is capable of deriving all the indicators that can be calculated from the available measures through mathematical manipulation. Once compatible measures are found, a similar check is made with respect to dimensions: first the dimensions related to each compatible measure are retrieved, and then such sets are compared in order to find the common subset.
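As a rough illustration of this derivation step, the closure of a measure set under the indicator formulas can be sketched in a few lines of Python. This is not the authors' implementation (which relies on Prolog facts and the PRESS library, and can also invert formulas algebraically); the formula table and names below are assumptions for the sketch.

```python
# Hypothetical sketch: derive every indicator computable from a library's
# measures, given formulas of the form derived = f(operand_1, ..., operand_n).
# Forward derivation only; algebraic inversion (as PRESS supports) is omitted.

FORMULAS = {
    # derived indicator -> the operand indicators it is computed from
    "kpi:Distance": {"kpi:Distance_Citizens", "kpi:Distance_Tourists"},
    "kpi:AvgDistancePerCitizen": {"kpi:Distance_Citizens", "kpi:TotalPopulation"},
}

def derive_all_indicators(measures):
    """Closure of `measures` under FORMULAS, by fixpoint iteration."""
    derived = set(measures)
    changed = True
    while changed:
        changed = False
        for indicator, operands in FORMULAS.items():
            if indicator not in derived and operands <= derived:
                derived.add(indicator)
                changed = True
    return derived

city_a = {"kpi:Distance", "kpi:TotalPopulation"}
city_b = {"kpi:Distance_Citizens", "kpi:Distance_Tourists", "kpi:TotalPopulation"}

common = derive_all_indicators(city_a) & derive_all_indicators(city_b)
# common == {"kpi:Distance", "kpi:TotalPopulation"}
```

With this formula table, CityB's derivable set also contains kpi:Distance and kpi:AvgDistancePerCitizen, mirroring the worked example in the text.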
Let us consider the comparison of the libraries of CityA and CityB. Indicators from the former are IA = {kpi:Distance, kpi:TotalPopulation}. On the other hand, CityB includes the indicators {kpi:Distance_Citizens, kpi:Distance_Tourists, kpi:TotalPopulation}. By using the logical predicate derive all indicators on this last set, the reasoner infers that IB = {kpi:Distance_Citizens, kpi:Distance_Tourists, kpi:TotalPopulation, kpi:Distance, kpi:AvgDistancePerCitizen}. Indeed, the last two indicators can be calculated as kpi:Distance = kpi:Distance_Citizens + kpi:Distance_Tourists and kpi:AvgDistancePerCitizen = kpi:Distance_Citizens / kpi:TotalPopulation. As a conclusion, the two libraries share the indicator set IA ∩ IB = {kpi:Distance, kpi:TotalPopulation}. Please also note that, without the explicit representation of formulas and logic reasoning on their structure, only TotalPopulation would have been obtained. Both shared indicators are comparable only through the dimension sdmx-dimension:timePeriod. In particular, kpi:Distance is measured by CityA also along the user-type dimension. This means that some manipulation (i.e., aggregation) must be performed on the CityA values before the indicator can actually be used for comparisons.
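The aggregation step can be sketched as follows; the observation tuples, field names, and figures are illustrative, not taken from the actual datasets:

```python
# Hypothetical sketch of the roll-up: CityA's Distance observations carry a
# userType dimension that CityB's do not, so they are summed over userType
# before the two cities can be compared per time period.
from collections import defaultdict

# (timePeriod, userType, distance), mimicking CityA's qb:Observations
city_a_observations = [
    ("2016-12-05", "resident", 80214),
    ("2016-12-05", "tourist", 12030),
    ("2016-12-06", "resident", 79500),
]

def roll_up(observations):
    """Aggregate distances over userType, keeping only timePeriod."""
    totals = defaultdict(int)
    for time_period, _user_type, distance in observations:
        totals[time_period] += distance
    return dict(totals)

comparable = roll_up(city_a_observations)
# comparable["2016-12-05"] == 80214 + 12030 == 92244
```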
For each solution, the service reports as output the formula used and the available measures, specifying the corresponding mappings between the KPIOnto indicators and the specific MeasureProperty names, according to the rdfs:isDefinedBy properties. The output also includes partial solutions (like the first two), in order to make users aware of which specific measures are missing.
References
1. Kimball, R., Ross, M.: The Data Warehouse Toolkit: The Complete Guide to
Dimensional Modeling, 2nd edn. Wiley, New York (2002)
2. Supply Chain Council: Supply chain operations reference model. SCC (2008)
3. Bosch, P., Jongeneel, S., Rovers, V., Neumann, H.M., Airaksinen, M., Huovila, A.: Deliverable 1.4: Smart city KPIs and related methodology. Technical report, CITYKeys (2016)
4. Horkoff, J., Barone, D., Jiang, L., Yu, E., Amyot, D., Borgida, A., Mylopoulos,
J.: Strategic business modeling: representation and reasoning. Softw. Syst. Model.
13(3), 1015–1041 (2014)
5. del Río-Ortega, A., Resinas, M., Cabanillas, C., Ruiz-Cortés, A.: On the definition and design-time analysis of process performance indicators. Inf. Syst. 38(4), 470–490 (2013)
6. Buswell, S., Caprotti, O., Carlisle, D.P., Dewar, M.C., Gaetano, M., Kohlhase, M.: The OpenMath Standard, version 2.0. Technical report, The OpenMath Society (2004). http://www.openmath.org/standard/om20
7. Diamantini, C., Potena, D., Storti, E.: SemPI: a semantic framework for the col-
laborative construction and maintenance of a shared dictionary of performance
indicators. Future Gener. Comput. Syst. 54, 352–365 (2015)
8. SDMX: SDMX technical specification. Technical report (2013)
9. Cyganiak, R., Reynolds, D., Tennison, J.: The RDF data cube vocabulary. Tech-
nical report, World Wide Web Consortium (2014)
10. Diamantini, C., Potena, D., Storti, E.: Extended drill-down operator: digging into
the structure of performance indicators. Concurr. Comput. Pract. Exper. 28(15),
3948–3968 (2016)
11. Etcheverry, L., Vaisman, A., Zimányi, E.: Modeling and querying data warehouses on the semantic web using QB4OLAP. In: Bellatreche, L., Mohania, M.K. (eds.) DaWaK 2014. LNCS, vol. 8646, pp. 45–56. Springer, Cham (2014). doi:10.1007/978-3-319-10160-6_5
Analyzing Last Mile Delivery Operations
in Barcelona’s Urban Freight Transport Network
1 Introduction
Barcelona is considered to be among the smartest cities on the planet. The IESE ranking [1] puts the city in position 33, with a significant number of projects carried out. It is not necessarily the technology that makes Barcelona smart; the economy, environment, government, mobility, life, and people are other indicators that help define the city as smart.
Barcelona released an urban mobility plan for 2013–2018, which pointed out the need for a smart platform to improve the efficiency, effectiveness, and compatibility of freight delivery areas and the distribution of goods to reduce
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 13–22, 2018.
https://doi.org/10.1007/978-3-319-67636-4_2
14 B. Kolbay et al.
2 Related Work
The demand for goods distribution increases proportionally to the population, the number of households, and the development of tourism. There is a great deal of research related to the management of urban freight in cities, including solutions for pollution, carbon emissions, noise, safety, fuel consumption, etc. The main purposes are generally shaped around reducing travel distances (vehicle-routing algorithms) and minimizing the number of delivery vehicles in the city [5,6]. Other works focus on which restrictions should be applied to vehicle movements in order to control congestion and pollution levels [7]. One of the most common restrictions is time-access restrictions for loading/unloading
Barcelona’s Last Mile Delivery 15
areas [8]. By finding optimal solutions for urban freight management, it is possible to reduce pollution and traffic congestion, and to minimize fuel use and carbon emissions. To this end, we believe it is important to understand vehicle drivers and to manage their mobility to their satisfaction. We base our analysis on the observation of user behavior when loading/unloading trucks, rather than on establishing punishment policies for the drivers. Providing solutions comes after problem detection and analysis. This is what we do in this paper: we observe user behaviors, reason about their possible causes, and propose solutions in order to maintain win-win strategies for the city.
3 Data
The dataset used in this study was obtained through a web service that exports the data of the AreaDUM application (or SMS) developed by B:SM1. The time span of the available AreaDUM datasets ranges from January 1st, 2016 to July 15th, 2016. The sample dataset consists of roughly 3.7 million observations described by 14 attributes. Some attributes are not relevant, since they contain information about the AreaDUM application itself. The most relevant attributes for each check-in, apart from the specific Delivery Area ID, are:
– Configuration ID, which tells us about the days when each Area can be used,
the number of parking slots and their size, the amount of time a vehicle can
be parked and the use times for similar Delivery Areas.
– Time, which tells us about the time, day of the week and date of the check-in.
– Plate number, which contains a unique encrypted ID for each vehicle.
– User ID, which links the vehicle with a company.
– Vehicle type, which describes the size and type of vehicle: truck, van, etc.
– Activity type, which describes whether the objective is to carry goods, or to
perform street work, etc.
– District and Neighborhood ID, which tell us about the larger and smaller
administrative geographical area of the AreaDUM parking slot.
After some data cleansing, we ended up with 14 attributes which include:
Delivery Area ID, Plate Number, User ID, Vehicle Type, Activity Type, District
ID, Neighborhood ID, Coordinate, Weekday, Date, Time.
4 Methods
One of the objectives of the paper is to understand the raw data provided, in order to cleanse it where necessary. By exploring it, we noticed a significant number of check-ins by the same vehicle ID in the same or nearby Delivery Areas during one day. This is abnormal behavior, because AreaDUM does not allow making consecutive check-ins in the same Delivery Area. However,
1 The authors want to thank B:SM and, in particular, the Innovation team, led by Carlos Morillo and Oscar Puigdollers, for their support in this paper.
We think that it does not make sense for a user to iterate among the corners of a crossing of the “Pla Cerdà” grid. It will very seldom happen that a user goes to the opposite corner of a crossing to make a new delivery, since the distance is very short. In case they do iterate, we need to understand the underlying reason.
In Fig. 1, we can see that there are 4 loading/unloading areas, each with its own Delivery Area ID, whereas they all share the same Circle ID.
To achieve this, we calculated the pairwise Haversine distance among all loading/unloading areas in Barcelona. Haversine is the chosen method because it approximates the Earth as a sphere and works well both for very small and for large distances [11].
A distance matrix is created using the Haversine formula, where each row and column represents a Delivery Area ID. From this matrix, we extracted the pairs of Delivery Area IDs whose distance is less than or equal to 50 m. If two extracted pairs have a common element, we combined them and removed the duplicate, so as to keep only unique elements. After the combination process, we check the distance between the first and the last element in the list: if their value in the distance matrix is less than or equal to 100 m, we keep the last element; otherwise we remove it. As a last step, we assigned the same ID to the delivery areas located in the same group.
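The distance computation and grouping can be sketched as below. This is a simplified illustration, not the authors' code: a union-find merge stands in for the pairwise combination procedure, the 100 m end-to-end check is omitted, and all names are assumptions.

```python
# Hypothetical sketch: pairwise Haversine distances between delivery areas,
# merging areas closer than 50 m into shared "circle" groups.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres, approximating the Earth as a sphere."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def circle_ids(areas, threshold_m=50.0):
    """Assign the same circle ID to areas within `threshold_m` of each other.

    `areas` maps a Delivery Area ID to a (lat, lon) tuple; transitive
    merging is done with a small union-find.
    """
    parent = {a: a for a in areas}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    ids = list(areas)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if haversine_m(*areas[a], *areas[b]) <= threshold_m:
                parent[find(a)] = find(b)
    return {a: find(a) for a in areas}
```

For example, two corners of the same crossing (roughly 10–20 m apart) receive one circle ID, while areas a few blocks away keep distinct IDs.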
4.2 Clustering
The next step is to cluster the behaviour of the vehicles by neighbourhood. The Hopkins statistic is applied first to assess whether the data are clusterable. The value of 0.1829171 obtained for the Hopkins statistic showed that we can reject the null hypothesis and conclude that the data set is significantly clusterable [9]. A clustering algorithm was then needed to group similar neighbourhoods by hourly check-in frequencies in Barcelona. Several clustering techniques (e.g. k-means, Partitioning Around Medoids, CLARA, hierarchical, AGNES, DIANA, fuzzy, model-based, density-based and hybrid clustering) were compared on the accuracy of their results in order to choose the best one for our data.
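A sketch of the Hopkins statistic computation follows (Python assumed). Note that two sign conventions exist in the literature; the variant below returns small values for clusterable data, which appears to match the reported 0.1829171, but this is our assumption about the implementation the authors used:

```python
import random

def hopkins(points, m=None, seed=0):
    """Hopkins statistic sketch for 2-D points.

    Returns sum(w) / (sum(u) + sum(w)): w are nearest-neighbour distances of
    sampled real points, u are nearest-real-point distances of uniform random
    points. Values near 0 suggest clusterable data in this convention.
    """
    rng = random.Random(seed)
    m = m or max(1, len(points) // 10)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]

    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # w: distances from sampled real points to their nearest real neighbour.
    sample = rng.sample(points, m)
    w = [min(d2(p, q) for q in points if q is not p) ** 0.5 for p in sample]
    # u: distances from uniform random points (in the bounding box)
    # to the nearest real point.
    uniform = [(rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
               for _ in range(m)]
    u = [min(d2(p, q) for q in points) ** 0.5 for p in uniform]
    return sum(w) / (sum(u) + sum(w))
```

On two tight, well-separated clusters this returns a value well below 0.5, consistent with rejecting the null hypothesis of spatial randomness.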
– Some vehicles are used for household or street work. The time required for this work exceeds the maximum allowance, so the workers keep making abnormal check-ins.
– Some local store owners may use their own vehicles to transport their goods. They may face the problem of finding a parking slot, so their repeated check-ins are a necessity rather than an occupational purpose.
– Some users simply use the spaces as free parking for other purposes, such as having breakfast after a delivery.
The results show that Public Work and Installation have higher percentages of disallowed check-ins than the other categories. This indicates that professionals who spend time at specific locations need some type of parking space that allows them to manage their schedules better. Transport also shows quite a high number of disallowed check-ins, which may well reflect the case of local store owners who repeat their check-ins to preserve their parking space.
The new Circle ID that we computed created a total of 1484 circle areas, whereas we still have 2038 different delivery areas. The combination of both IDs enabled the analysis in the following paragraphs.
In this section, we present the effect of removing disallowed repeated check-ins using the Circle ID. Table 2 shows the percentage of disallowed repeated check-ins for each district in Barcelona. In other words, these are the percentages of data we lose if we remove the disallowed repeated check-ins that occurred in the same circle.
Fig. 2. PAM Clustering Results for the dataset before the disallowed check-ins were
removed.
are well clustered. At the bottom of this plot, there is one horizontal line located on the left side, which represents a misclustered object.
Two clusters are determined by PAM:
– Cluster 1 consists of 29 neighborhoods from 9 districts,
– Cluster 2 consists of 14 neighborhoods from 7 districts.
All neighborhoods from two districts (i.e. Gracia and Sant Andreu) fall into Cluster 1, whereas the neighborhoods of the other districts are divided between the two clusters.
Figure 3 shows the clusters after removing the disallowed repeated check-ins. In this case, the number of clusters increased to 9, which shows that the variation is significant. On the left side of Fig. 3, some silhouette width values are 0: these clusters contain only one observation, so each of these objects is its own representative. The neighborhoods located alone in their clusters are quite different from the others in hourly check-in frequency once the disallowed repeated check-ins are removed.
Fig. 3. PAM clustering results for data without disallowed repeated check-ins
Nine clusters are determined by PAM:
6 Conclusion
References
1. New York Edges Out London as the World’s “Smartest” City. http://ieseinsight.
com/doc.aspx?id=1819&ar=6&idioma=2
2. Ajuntament de Barcelona. http://ajuntament.barcelona.cat/en/
3. Barcelona de Serveis Municipals (B:SM). https://www.bsmsa.cat/es/
4. AreaDUM Project. https://www.areaverda.cat/en/operation-with-mobile-phone/
areadum/
5. Hwang, T., Ouyang, Y.: Urban freight truck routing under stochastic congestion
and emission considerations. Sustainability 7(6), 6610–6625 (2015)
6. Reisman, A., Chase, M.: Strategies for Reducing the Impacts of Last-Mile Freight
in Urban Business Districts. UT Planning (2011)
7. Yannis, G., Golias, J., Antoniou, C.: Effects of urban delivery restrictions on traffic
movements. Transp. Plan. Technol. 29(4), 295–311 (2006)
8. Quak, H., de Koster, R.: The impacts of time access restrictions and vehicle weight
restrictions on food retailers and the environment. Eur. J. Transp. Infrastruct. Res.
(Print) 131–150 (2006)
9. Banerjee, A., Dave, R.N.: Validating clusters using the Hopkins statistic. In: Pro-
ceedings of the IEEE International Conference on Fuzzy Systems, vol. 1, pp. 149–
153 (2004)
10. Kaufman, L., Rousseeuw, P.J.: Clustering by means of medoids. In: Dodge, Y. (ed.)
Statistical Data Analysis Based on L1 Norm, pp. 405–416 (1987)
11. Shumaker, B.P., Sinnott, R.W.: Astronomical computing: 1. Computing under the
open sky. 2. Virtues of the haversine. Sky Telesc. 68, 158–159 (1984)
A System for Privacy-Preserving Analysis
of Vehicle Movements
1 Introduction
In the smart city’s evolution, embedded systems have played a smaller but no less
important role [1,2]. Daily life is full of these systems; we often do not see or notice them, but they exist and are growing in number: ATMs, washing machines, navigators, credit cards, temperature sensors and so on. Data automatically collected by embedded devices (e.g., sensors) have great value: typically, such data are processed and transformed into information (knowledge), on the basis of which we can make decisions that may or may not require human participation.
In this paper, we present a system able to collect data of vehicle movements
that can be used for analysis purposes. The system is designed to overcome the privacy concerns arising from the collection and processing of data linked to an individual (i.e., the vehicle driver) through the license plate of the vehicle, a problem highly relevant in the literature [3–8]. In particular, we created a license plate recognition system to track vehicles entering or leaving a particular place. Plates are not stored in plaintext: an approach based on salting and hashing is adopted to transform the plain plate into an apparently random string. However, the approach is such that the same plate is transformed into the same string each time
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 23–28, 2018.
https://doi.org/10.1007/978-3-319-67636-4_3
24 G. Lax et al.
the vehicle is tracked. This enables statistical analysis on the stored data while maintaining the anonymity of drivers and vehicles.
The rest of the paper is organized as follows: in the next section, we describe
the system architecture, the hardware components and the executed protocols;
in Sect. 3, we discuss advantages and limitations of our proposal and draw our
conclusions.
our proposal is equipped with a high-resolution Logitech C920 webcam (Fig. 2), which contains an H.264 hardware encoder that takes the encoding workload away from the BeagleBone's processor. Video4Linux2 (typically called V4L2), a framework tightly integrated with the Linux kernel, provides the drivers necessary for the webcam. Our BeagleBone runs software implementing the data processing logic that allows us to obtain privacy-preserving logs of vehicle entries and exits. In particular, each time a vehicle enters or leaves the environment, Algorithm 1 is executed.
In the initialization phase, the system randomly generates a 256-bit string, named salt, and allocates a persistent memory area, named log, in which statistics on vehicle movements are stored (typically, this is a file). When a vehicle enters or leaves the environment, a picture is captured by the webcam and then a plate number recognition procedure is run (Line 1). This procedure uses JavaANPR [10], an automatic number plate recognition software package that implements algorithmic and mathematical principles from the fields of artificial intelligence, machine vision and neural networks. When the vehicle is entering the environment, the system computes the hash of the string obtained by concatenating the salt and the binary representation of the plate number (Line 3).
Concerning this operation, we observe that several hash functions can be used for this task: for our purpose, we require that the salt and p cannot be recovered from the knowledge of multiple hashes. In our implementation, we opted for the SHA-1 algorithm, a widely used hash function producing a 160-bit hash value [11]. Although SHA-1 has been found to suffer from vulnerabilities that discourage its use as a cryptographic hash function, in our application such vulnerabilities are not critical. Moreover, its efficiency and its effectiveness in verifying data integrity make it a good solution for our needs.
Then, the algorithm proceeds by storing into the log a tuple containing the type of access of the vehicle (enter or exit), the timestamp of this access, and the result of the hash computation, named p* (Line 4). This tuple is one of the records that can be processed to extract statistics about vehicle accesses. Observe that no reference to the actual plate number is stored, only the hash of this number. Moreover, the use of a salt [12] in the hash computation protects against dictionary attacks on the list of stored hashes and against pre-computed rainbow table attacks aimed at guessing the plate number.
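The initialization and entry steps (salt generation, SHA-1 over the concatenation of salt and plate, log tuple) can be sketched in Python. This mirrors the description of Algorithm 1, but the function names and the log representation are our own:

```python
import hashlib
import os

def make_salt():
    """256-bit random salt, generated once in the initialization phase."""
    return os.urandom(32)

def anonymize(plate, salt):
    """Deterministic pseudonym p*: SHA-1 over salt || plate, hex-encoded.

    The same plate always maps to the same string, so entries and exits
    can still be matched without ever storing the plate itself.
    """
    return hashlib.sha1(salt + plate.encode("utf-8")).hexdigest()

def log_event(log, kind, timestamp, p_star):
    """Append an (enter/exit, timestamp, pseudonym) tuple to the log."""
    log.append((kind, timestamp, p_star))
```

The pseudonym is a 160-bit (40 hex character) string; knowledge of it does not reveal the plate as long as the salt stays secret.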
Consider now the case in which a vehicle is leaving the environment. In the
optimistic case in which the plate number recognition task can be performed
with no error (i.e., the recognized number p coincides with the actual number on the plate), it would be sufficient to repeat the operation above, storing into the log the information exit instead of entry. However, it is possible that some errors occur in plate number recognition. Consequently, we included in the algorithm a procedure to mitigate the consequences of such errors. Specifically, the first operation is to compute the set P (Line 6), composed of all plate numbers differing from p in at most one digit.
Now, for each element p of the set P, the hash p* is computed as above (Line 8) and this value is searched for among the log entries related to previous vehicle entries (Line 9). If a match is found, the tuple containing the information about the exit of the vehicle, the current timestamp, and the result of the hash computation p* is stored, and the algorithm ends (Lines 10 and 11). In other words, this operation allows the system to identify the right match between vehicle entry and exit (recall that the actual plate number is never recorded, so this task is not trivial), even when at most one digit of the plate number is wrongly recognized.
Finally, if no match is found, the information that an unidentified vehicle (coded by using -1 as the plate number) is leaving the environment is stored (Line 14).
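The exit-matching step (Lines 6–14) can be sketched likewise, using the salted SHA-1 described in the text. The candidate generation over a fixed plate alphabet and the in-memory log representation are our assumptions:

```python
import hashlib

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def candidates(p):
    """The set P: plates differing from the recognized plate p in at most
    one position (one interpretation of 'at most 1 digit')."""
    out = {p}
    for i in range(len(p)):
        for c in ALPHABET:
            out.add(p[:i] + c + p[i + 1:])
    return out

def match_exit(log, p, salt, now):
    """Search previous entries for any 1-error variant of p (Lines 6-11);
    fall back to the unidentified marker -1 otherwise (Line 14)."""
    entered = {h for kind, _, h in log if kind == "enter"}
    for cand in candidates(p):
        h = hashlib.sha1(salt + cand.encode("utf-8")).hexdigest()
        if h in entered:
            log.append(("exit", now, h))
            return h
    log.append(("exit", now, "-1"))
    return None
```

For a 7-character plate the candidate set has a few hundred elements, so the extra hashing cost at each exit is negligible.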
At the end of the monitoring period, the log will contain a list of accesses
of vehicles to the environment, together with the access timestamp and an
anonymous reference to the number plate.
This list can be used to infer several statistics about the vehicle accesses, such
as the minimum or maximum permanence period of a vehicle in the environment
(how to calculate such statistics from this list is a trivial exercise and is not
discussed here).
relatively naive binary strategy of assigning the label unknown to an exiting car that does not match any entered car. Specifically, assigning probabilities to which car exits could be a significant improvement, especially as these uncertainties could be reduced over multiple returns of the same vehicle. Finally, another improvement concerns the use of different and possibly more effective (than JavaANPR) techniques for plate number recognition, in order to further increase the overall performance of the system.
References
1. Filipponi, L., Vitaletti, A., Landi, G., Memeo, V., Laura, G., Pucci, P.: Smart
city: An event driven architecture for monitoring public spaces with heteroge-
neous sensors. In: 2010 Fourth International Conference on Sensor Technologies
and Applications (SENSORCOMM), pp. 281–286. IEEE (2010)
2. Merlino, G., Bruneo, D., Distefano, S., Longo, F., Puliafito, A., Al-Anbuky, A.:
A smart city lighting case study on an openstack-powered infrastructure. Sensors
15(7), 16314–16335 (2015)
3. Hoh, B., Iwuchukwu, T., Jacobson, Q., Work, D., Bayen, A.M., Herring, R., Her-
rera, J.C., Gruteser, M., Annavaram, M., Ban, J.: Enhancing privacy and accuracy
in probe vehicle-based traffic monitoring via virtual trip lines. IEEE Trans. Mob.
Comput. 11(5), 849–864 (2012)
4. Li, H., Dán, G., Nahrstedt, K.: Portunes: privacy-preserving fast authentication
for dynamic electric vehicle charging. In: 2014 IEEE International Conference on
Smart Grid Communications (SmartGridComm), pp. 920–925. IEEE (2014)
5. Wu, Q., Domingo-Ferrer, J., González-Nicolá, Ú.: Balanced trustworthiness, safety,
and privacy in vehicle-to-vehicle communications. IEEE Trans. Veh. Technol.
59(2), 559–573 (2010)
6. Zhang, T., Delgrossi, L.: Vehicle Safety Communications: Protocols, Security, and
Privacy, vol. 103. Wiley, Hoboken (2012)
7. Buccafurri, F., Lax, G., Nicolazzo, S., Nocera, A.: Comparing twitter and facebook
user behavior: privacy and other aspects. Comput. Hum. Behav. 52, 87–95 (2015)
8. Buccafurri, F., Lax, G., Nocera, A., Ursino, D.: Discovering missing me edges across
social networks. Inf. Sci. 319, 18–37 (2015)
9. BeagleBoard: Beagle Board Black Website (2016). http://beagleboard.org/
BLACK
10. JavaANPR: Automatic Number Plate Recognition System (2016). http://
javaanpr.sourceforge.net
11. Wikipedia: SHA-1 – Wikipedia, The Free Encyclopedia (2016). https://en.
wikipedia.org/wiki/SHA-1
12. Wikipedia: Salt (cryptography) – Wikipedia, The Free Encyclopedia (2016).
https://en.wikipedia.org/wiki/Salt_(cryptography)
13. Gosselin-Lavigne, M.A., Gonzalez, H., Stakhanova, N., Ghorbani, A.A.: A perfor-
mance evaluation of hash functions for IP reputation lookup using bloom filters.
In: 2015 10th International Conference on Availability, Reliability and Security
(ARES), pp. 516–521. IEEE (2015)
Deploying Mobile Middleware for the Monitoring
of Elderly People with the Internet of Things: A Case Study
Abstract. The ageing population and related diseases represent some of the most relevant challenges in the healthcare domain. They will lead to an increasing demand for innovative solutions that guarantee a healthy and safe lifestyle to the elderly. Indeed, many researchers are studying the use of Internet of Things (IoT) technologies in the e-health field. In this paper we report a case study in which a local middleware for portable devices has been used to facilitate the development of IoT mobile applications in this respect, allowing the communication among different on-board sensing technologies. The mobile middleware is built on top of the WoX (Web of Topics) platform and permits the rapid deployment of innovative services thanks to its abstraction and user-centric model. A validation test bed involving 31 elderly people living in Lecce (Italy) has been carried out to monitor their activities, mainly those connected to positioning and motility in both indoor and outdoor scenarios. Our approach has demonstrated a practical way to replace obtrusive monitoring techniques (typical of caregivers) with unobtrusive ones, in order to obtain proactive intervention strategies for a smart city.
1 Introduction
In recent years, the increase in the number of aged people with chronic diseases has led to a growing demand for supporting digital services. It is estimated that 50% of the population in Europe will be over 60 years old by 2040, while in the USA one in every six citizens will be over 65 years old by 2020 [1]. In addition, people over 75 usually require continuous monitoring. For this reason, it is necessary to propose new healthcare solutions that guarantee prevention at different levels of intervention, and not only the treatment of diseases.
Thus, several e-health experiments and projects have been started, and the use of the Internet of Things (IoT) paradigm is playing a key role [2]. Indeed, IoT integrates all kinds of sensing, identification, communication, networking and information management devices and systems, and seamlessly links all people and things according to their
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 29–36, 2018.
https://doi.org/10.1007/978-3-319-67636-4_4
30 A. Fiore et al.
interests, so that anybody, at any time and anywhere, through any device and medium, can access the information provided by objects and people and obtain services more efficiently. Currently, the IoT concept is associated with the introduction of an architectural layer that integrates the data provided by many heterogeneous sources [3] (heterogeneous in hardware, software architecture and communication protocols). This architectural layer is called “IoT middleware” and, besides the integration of data, it is also involved in many other IoT aspects (from networking and communication to security and context management). The union of e-health applications and IoT technologies promises to address the challenges faced by the healthcare sector. For instance, the patients of a healthcare service can be tracked and monitored by using ubiquitous identification, sensing and communication capabilities. Exploiting the global connectivity of the IoT, this information can be collected, managed and analyzed more efficiently. Furthermore, information for a healthcare service can be provided directly by the patient’s mobile devices (smartphones, tablets, wearable devices) through Internet or IoT access (WiFi, 3G, LTE, Bluetooth, ANT, ZigBee, LoRa, etc.), guaranteeing security and authentication policies. In other words, IoT technologies will enable the transformation of the healthcare service from caregiver-centric to patient-centric, making it more efficient, proactive and ubiquitous.
In this paper, we report on how a local model-driven middleware is used to forward incoming information from the patient’s mobile devices toward an IoT platform in order to monitor elderly people’s activities. Furthermore, the benefits of this approach are discussed and compared to other existing systems.
The paper is organized as follows: Sect. 2 briefly reports on the key related work in
the area of e-health and IoT. Section 3 provides readers with a brief introduction to the
WoX and L-WoX architecture, and the model on which the platform is based.
Section 4 demonstrates the middleware working on a real case study in the context of a
research project. Finally, Sect. 5 summarizes our key messages and sketches future
research directions.
2 Related Work
In the last years, several IoT solutions have been proposed. For lack of space, in this
section we limit our attention to IoT middleware applied to e-health services.
Linksmart [4] is a general-purpose middleware that has already been tested in the e-health field as a tool for the easy integration of heterogeneous devices into one solution. The authors illustrate how their solution addresses the complexity of a pervasive environment in order to support the medical care routine of patients at home. The SAI middleware [5], which enables the development of context-aware applications, is also used in an e-healthcare solution: the middleware is employed in a reference application scenario for patient condition monitoring, alarm detection and policy-based handling. In [6], a solution for tracking daily life activities using mobile devices and cloud computing services is discussed. The system makes it possible to collect heterogeneous information from sensors located in the house and to share it in the cloud. The system monitors the elderly and generates reminders for scheduled activities along with
alerts to caregivers and family members for critical situations, thus reducing health expenditures. In [7], an IoT-based architecture for providing healthcare services to elderly and incapacitated individuals is proposed. As the underlying technologies for implementing this architecture, 6LoWPAN is used for active communications, while radio frequency identification (RFID) and near-field communications (NFC) are used for passive communications. Another IoT-based platform is proposed in [8]. This platform addresses several limitations (for example, interoperability, security and streaming quality of service); its feasibility has been verified by installing an IoT-based health gateway on a desktop computer as a reference implementation. A solution for monitoring patients with specific diseases such as diabetes using mobile devices is discussed in [7]. This system provides continuous monitoring and real-time services, collecting information from healthcare and monitoring devices located in the home environment and connected to mobile devices. Also in this area, [9] discusses the potential benefits of using m-IoT for non-invasive glucose level sensing and a potential m-IoT-based architecture for diabetes management.
The related work quoted above focuses mostly on technological aspects and does not seem to pay primary attention to the user’s needs. In contrast, the advantage of the solution we propose derives directly from its model-driven approach and user-centered design: the patients’ needs have guided the design process from the beginning, making it possible to develop unobtrusive scenarios starting from geriatric parameters.
Web of Topics (WoX) [10, 11] is a model-based middleware for the IoT, specifically
aimed at minimizing the language distance between people (end users, developers) and
technology, while at the same time abstracting the multifaceted complexity of the
considered IoT hardware and communication protocols.
Based on a hierarchical publish/subscribe approach, where every entity within the
WoX conceptual framework can also be considered as a broker for other WoX entities,
the Web of Topics makes the development of scalable applications easier by hiding the
heterogeneity of the underlying IoT communication protocols, thus acting as an intermediate abstraction layer between the Web of Things (WoT) and consumer applications
(Fig. 1). In particular, a WoX entity can generically refer to a wide range of both IoT
hardware nodes and consumer applications (varying from mere single-user applications
to enterprise systems for machine learning and Big Data Analytics), and it can be
modeled as a set of {role; topic} couples. WoX delegates to entities the responsibility
of interacting with topics, both in terms of capabilities towards a given topic (which is
equivalent to providing a service) and needs (which is equivalent to requesting a service).
In this context, the role concept is used to express the entity’s technological (source/
executor/function) and collaborative (capability/need) dimensions within the considered
scenario.
The topic concept, which is at the heart of the WoX approach, is instead used as a carrier for meaningful information concerning IoT capabilities exchanged between entities acting as providers and consumers. Furthermore, WoX topics can be grouped into three macro-domains:
• Cloud WoX, which includes all the topics residing in Cloud-based environments;
• Local WoX (L-WoX), which includes all the topics residing on mobile devices;
• Embedded WoX (M-WoX), which includes all the topics residing on embedded
systems as, for example, wearable devices, micro-controllers, actuators, weather
stations, etc.
Moreover, each topic consists of:
• A feature of interest (e.g., temperature, which refers to the temperature measured in
a given place).
• A specific URI-identified location associated with the previously mentioned feature
of interest (e.g., italy:apulia:lecce:school:lab:desk1, which refers to a desk situated
in the science laboratory of an ordinary high school in Lecce, Apulia, Italy).
• The current value of such feature (e.g., 64 °C, referring to the temperature example),
which can be updated by any entity capable of providing additional information.
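The topic structure above can be rendered as a small data model. This is a hypothetical Python sketch; the class and field names are ours, not part of the actual WoX API:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Role:
    technological: str   # "source" | "executor" | "function"
    collaborative: str   # "capability" | "need"

@dataclass
class Topic:
    feature: str         # feature of interest, e.g. "temperature"
    location: str        # URI, e.g. "italy:apulia:lecce:school:lab:desk1"
    value: Any = None    # current value, e.g. 64 (degrees Celsius)

# A WoX entity is modeled as a set of {role; topic} couples
# (a list of tuples here for simplicity).
desk_sensor = [(Role("source", "capability"),
                Topic("temperature", "italy:apulia:lecce:school:lab:desk1", 64))]
```

An entity that instead *needs* the temperature at that desk would carry the same topic with a `Role("function", "need")` couple.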
Any concrete or virtual property of the domain that is perceivable, definable, measurable or controllable, from raw sensor data to more abstract concepts such as mathematical functions or human behaviors, can be used as a feature. Since WoX entities are allowed to decide whether or not to forward topic updates to parent entities, entities at the edge of the architecture deal with relatively few, very specific topics, while entities near the core retain a higher level of knowledge through their access to more abstract topics. In particular, the L-WoX middleware, being an extension of the cloud WoX platform, replicates the topic-based, model-driven approach of the Web of Topics at a local level, and manages the lifecycle of the topic instances available to any WoX-enabled application running on mobile devices. It also mediates the communication between different on-board sensing technologies (with their heterogeneous, native APIs) and client applications through a set of plugin adaptors used to update specific topics
with the incoming data. As a consequence of this approach, topic management responsibilities are effectively distributed among the involved nodes, thus turning mobile devices into aggregators for L-WoX entities. Furthermore, in several circumstances it may also be more efficient to keep low-level topics locally on the device, for example when:
• Multiple mobile apps query or update the same shared topic;
• Multiple mobile apps refer to different topics that are updated by the same on-board sensor.
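The local topic sharing just described (several apps served by one on-board sensor through the local middleware) can be sketched as a minimal publish/subscribe broker. The class, method and topic names below are our assumptions, not the actual L-WoX API:

```python
from collections import defaultdict

class LocalWoX:
    """Minimal local pub/sub sketch: a plugin adaptor wrapping a native
    sensing API updates a topic, and every subscribed app is notified."""

    def __init__(self):
        self._subscribers = defaultdict(list)
        self._values = {}

    def subscribe(self, topic, callback):
        """Register a mobile app's interest in a local topic."""
        self._subscribers[topic].append(callback)

    def update(self, topic, value):
        """Called by a sensor adaptor; stores the value and fans it out."""
        self._values[topic] = value
        for cb in self._subscribers[topic]:
            cb(topic, value)

# One sensor update reaches every subscribed app without each app
# talking to the native sensing API directly.
lwox = LocalWoX()
received = []
lwox.subscribe("accelerometer@phone", lambda t, v: received.append(v))
lwox.update("accelerometer@phone", (0.1, 0.0, 9.8))
```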
The City4Age project [12, 13] is a research project co-funded by the European Commission under the H2020 program that utilizes data from smart cities and ad-hoc sensors for the prevention of Mild Cognitive Impairment (MCI) and frailty in aged people. In particular, behavior change detection and 1-to-1 communication for IoT-enhanced interventions are some of the project’s key features, following two main areas of strategic importance for the project:
• Social dimension: through the involvement of urban communities in conjunction with health services, smart cities can provide invaluable support to the growing number of families facing Mild Cognitive Impairment (MCI) and frailty of the elderly, especially in these times of demographic imbalance and ageing populations affecting most European countries. Prevention of MCI- and frailty-related risks through the early detection of dangerous situations and timely interventions will play a pivotal role in guaranteeing, in the least obtrusive way possible, the well-being of the elderly, as well as enabling more empathic communication between social ICT services and the people involved.
• Technological dimension: City4Age pursues the creation of a highly innovative framework of already existing technical components, such as wearable and mobile devices, sensor networks, and systems for data analytics and machine learning, in order to collect large amounts of potentially heterogeneous data pertaining to individuals. After several processing phases, these data are used to identify large segments of the population at risk as well as to closely monitor a few individuals, thus promoting more effective observation procedures and proactive interventions. This requires the ability to assign a geriatric meaning to the raw data gathered by the sensors, and to infer knowledge about the monitored subjects and their behaviors over time.
The Lecce pilot for the City4Age project, which involves the cooperation of 31 volunteer individuals of suitable age and situation, focuses on monitoring their activities, with particular attention paid to the features connected to positioning and motility in both indoor and outdoor scenarios, and demonstrates a practical way to replace obtrusive monitoring techniques (typical of caregivers) with unobtrusive ones (typical of the IoT context), while at the same time achieving a high level of proactive intervention strategies for a smart city.
For the Lecce pilot, an IoT Android application enabling the communication among different on-board sensing technologies (accelerometers, GPS, etc.) and paired sensors (SensorTag, smart plugs, BLE (Bluetooth Low Energy) beacons, etc.) has been developed on top of a local middleware (L-WoX), which is part of the Web of Topics platform and allows the fast deployment of innovative services thanks to its abstraction and user-centric model. The main objective of the developed architecture is to turn the smartphone into a gateway between the environment and the rest of the WoX platform, collecting motility and positioning events related to a person moving inside a BLE-monitored location.
In the considered case study (exemplified in Fig. 2, which describes the overall architecture of the WoX ecosystem), the user updates a specific topic (e.g., the MOVING_START or MOVING_STOP topics) by moving close to a beacon, which sends a BLE message (referring to the correct feature of the topic) to the user’s smartphone, telling any WoX-compliant listener its current position (e.g., the room-id). The L-WoX service installed on the smartphone is then able to detect the BLE beacon and forward the local topic information to any local mobile app subscribed to the considered topic. Furthermore, the mobile app can update (through a 4G connection) the corresponding global topic located in the Cloud, where the WoX module redirects the data to several repositories for temporary event persistence, according to their macro-category. Then, a specifically designed Data Aggregation Module, which continuously listens for WoX topic updates, extracts and aggregates the sensors’ data on a daily basis in order to generate a set of high-level measures (e.g., TM, the total amount of time spent moving by the monitored subject) indicating how the user is performing according to certain criteria.
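Under the assumption that MOVING_START/MOVING_STOP updates arrive as timestamped events, the daily TM aggregation could look like the following sketch (the event representation and function name are ours, not the Data Aggregation Module’s actual format):

```python
def total_moving_time(events):
    """Daily aggregation sketch: pair MOVING_START/MOVING_STOP topic updates
    (timestamps in seconds) into the TM measure described above."""
    tm, start = 0, None
    for topic, ts in sorted(events, key=lambda e: e[1]):
        if topic == "MOVING_START":
            start = ts
        elif topic == "MOVING_STOP" and start is not None:
            tm += ts - start
            start = None
    return tm

events = [("MOVING_START", 100), ("MOVING_STOP", 160),
          ("MOVING_START", 300), ("MOVING_STOP", 420)]
# two movement intervals: 60 + 120 = 180 seconds in total
```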
It is worth noticing that the model-driven approach characterizing the Web of Topics paradigm is inherently well suited to the intervention strategies laid down within the framework of the City4Age initiative. This set of strategies and procedures is in turn based on a hierarchy defined according to the following concepts:
5 Conclusion
In this paper, we proposed a mobile middleware able to monitor elderly people in their daily activities. Our solution offers important benefits: (i) compared to the related work, it makes it possible to progressively deliver unobtrusive techniques for supervising the elderly in the e-health field; and (ii), more generally, it can easily be extended to develop innovative services for active and healthy ageing in the smart city context, starting from defined geriatric factors. Furthermore, the WoX middleware is able to transparently collect sensor data coming from heterogeneous devices and forward them to the remote reasoning server, in order to trigger appropriate alarms, generate notifications, and activate interventions.
As part of future activities, the project will start its testing phases, which will make it possible to apply risk detection algorithms to real data related to elderly behaviors. Furthermore, the WoX platform could be enhanced with more complex reasoning in order to handle communications arising from various sources and conflicting data that need to be normalized.
Acknowledgments. This work partially fulfills the research objectives of the City4Age project
(Elderly-friendly City services for active and healthy ageing) that has received funding from the
European Union’s Horizon 2020 research and innovation program under the grant agreement No
68973, topic PHC-21-2015.
References
1. Corchado, J.M., Bajo, J., Abraham, A.: GerAmi: improving healthcare delivery in geriatric
residences. IEEE Intell. Syst. 23(2), 19–25 (2008)
2. Miorandi, D., Sicari, S., De Pellegrini, F., Chlamtac, I.: Internet of Things: vision, applications
and research challenges. Ad Hoc Networks 10(7), 1497–1516 (2012)
3. Celesti, A., Fazio, M., Giacobbe, M., Puliafito, A., Villari, M.: Characterizing cloud
federation in IoT. In: AINA Workshops, pp. 93–98 (2016)
4. Jahn, M., Pramudianto, F., Al-Akkad, A.-A.: Hydra middleware for developing pervasive
systems: a case study in the eHealth domain. In: International Workshop on Distributed
Computing in Ambient Environments, Paderborn, Germany, 15–18 September 2009
5. Paganelli, F., Parlanti, D., Giuli, D.: A service-oriented framework for distributed
heterogeneous data and system integration for continuous care networks. In: CCNC 2010 7th
IEEE (2010)
6. Fahim, M., Fatima, I., Lee, S., Lee, Y.-K.: Daily life activity tracking application for smart
homes using android smartphone. In: IEEE 14th International Conference on Advanced
Communication Technology (ICACT), pp. 241–245 (2012)
7. Shahamabadi, M.S., Ali, B.B.M., Varahram, V., Jara, A.J.: A network mobility solution based
on 6LoWPAN hospital wireless sensor network (NEMO-HWSN). In: Proceedings of the 7th
International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS),
July 2013, pp. 433–438 (2013)
8. Zhang, X.M., Zhang, N.: An open, secure and flexible platform based on Internet of Things
and cloud computing for ambient aiding living and telemedicine. In: Proceedings of the
International Conference on Computer and Management (CAMAN), May 2011, pp. 1–4 (2011)
9. Villarreal, V., Fontecha, J., Hervás, R., Bravo, J.: Mobile and ubiquitous architecture for the
medical control of chronic diseases through the use of intelligent devices: using the
architecture for patients with diabetes. Future Gener. Comput. Syst. 34, 161–175 (2014)
10. Mainetti, L., Manco, L., Patrono, L., Sergi, I., Vergallo, R.: Web of topics: an IoT-aware
model-driven designing approach. In: WF-IoT 2015, IEEE World Forum on Internet of
Things, Milan, Italy, 14–16 December 2015, pp. 46–51. IEEE, Piscataway, NJ, USA (2015).
doi:10.1109/WF-IoT.2015.7389025, ISBN 978-150900365-5
11. Mainetti, L., Manco, L., Patrono, L., Secco, A., Sergi, I., Vergallo, R.: An ambient assisted
living system for elderly assistance applications. In: PIMRC 2016, 27th Annual IEEE
International Symposium on Personal, Indoor and Mobile Radio Communications, Valencia,
Spain, 4–7 September 2016, pp. 2480–2485. IEEE, Piscataway, NJ, USA (2016). ISBN
978-1-5090-3253-2
12. Paolini, P., Di Blas, N., Copelli, S., Mercalli, F.: City4Age: Smart cities for health prevention.
In: IEEE International Smart Cities Conference (ISC2), 12–15 September 2016, Trento
(2016)
13. Mainetti, L., Patrono, L., Rametta, P.: Capturing behavioral changes of elderly people through
unobtrusive sensing technologies. In: 24th International Conference on Software,
Telecommunications and Computer Networks (SoftCOM), 22–24 September 2016, Split
(2016)
Detection Systems for Improving the Citizen
Security and Comfort from Urban and Vehicular
Surveillance Technologies: An Overview
This paper presents an overview of detection systems for improving citizen
security and comfort in vehicular and urban contexts. More precisely, many sur-
veillance sensors already equip urban infrastructures for security applications.
However, their exploitation for the development of unsupervised security sys-
tems, as well as for comfort-related services, remains relatively limited. In this
paper, we present various emerging surveillance technologies as well as detection
systems that are exploited in the literature for enhancing the security and comfort
of citizens. Additionally, we present experiments and results directly related to
our detection and communication architectures and approaches, which aim to
support advanced police aid services as well as advanced driver assistance
services.
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 37–45, 2018.
https://doi.org/10.1007/978-3-319-67636-4_5
38 K. Hammoudi et al.
This section presents some existing system architectures supporting urban and
vehicular security and comfort developments (e.g., surveillance services) through
vision-based detection, computation and communication technologies.
Car black boxes are increasingly popular products for car and motorcycle drivers. A car
black box is a camera with a storage system installed on the windshield. A similar black
box (e.g., a dash cam) can be mounted on a motorcyclist's helmet. The acquisition device
collects videos of the road environment as the vehicle moves, but can also be used to
record video inside the car. A black box often includes a looped-recording functionality
that saves its video if an incident occurs. Additionally, the system can be equipped
with a GPS sensor allowing it to store different types of driving-related information
(e.g., speed, direction, steering angles). For motorcyclists, it can be difficult to
find low-cost systems that are easy to install on helmets and integrate black box
functionalities.
The collected video data are then exploited to clarify situations to insurance
companies in the event of an accident. This can be done within the frame of insurance
telematics, which stands for the use of telecommunication technologies directly offered
by insurance companies in order to remotely manage their customers. For more information
about insurance telematics, we refer the reader to [6], which describes its advantages
and drawbacks.
In this context, several works have recently been proposed that attempt to exploit
black box systems in different innovative manners.
For instance, Prasad et al. [17] proposed incorporating an automatic processing
module into the black box system, allowing an objective accident analysis. The module
also immediately sends a short alert message to a predefined phone number when an
accident is detected. To this end, they built their own black box prototype, composed
of 12 sensors regulated by a Raspberry Pi and an Arduino device.
Lee and Yoo [12] proposed exploiting the internal camera of the black box in order to
monitor driver tiredness through analysis of observed eye states (e.g., open or closed).
Park [16] proposed a forensic analysis technique for the car black box. His work is
motivated by the increasing number of insurance frauds caused by car drivers deleting
data from the black box after an accident in an attempt to conceal their error. To fix
this issue, the author proposed an investigation tool that restores deleted information
while checking its originality and integrity. Similarly, Yi et al. [25] proposed a
module to check the integrity of recorded data.
Fig. 1. Global Data-flow diagram to develop varied vehicle-based services with vision-
based systems.
plates are then matched with those of stolen vehicles obtained from a reference
database of the police. As in the previous application, the GPS locations associated
with the license plate characters extracted on the fly then make it possible to
immediately determine the approximate location of searched vehicles whenever the
license plate characters match.
Figure 1 illustrates a data-flow diagram shared by our two presented systems.
This data-flow diagram exposes a generalized bottom-up representation of the major
stages employed, which can support the development of other high-level services from
vision-based feature detection systems. Moreover, an architecture (based on the
technologies presented in Subsect. 2.4) that we designed to support the experiments of
the two presented systems is described in more detail in [9].
Fig. 2. Data-flow diagram of a parking occupancy detection service developed for
parking camera poles.
the reference image in order to initialize the system. Once done, the reference
image as well as the acquired live images are transferred to a remote machine that
computes a dissimilarity score between the reference slots and the live images of the
slots. Next, a global thresholding of the computed scores is applied to determine the
occupancy status of the selected parking slots. In this way, we can determine the
number of available parking slots and transmit this information to drivers located
within a surrounding perimeter. This can be done by displaying the information on
dispatched parking screens, on a dedicated website or via a dedicated mobile phone
application. Figure 2 illustrates a data-flow diagram of these major stages.
Besides, two strategies were experimented with for automatically measuring in real
time the consistency of parking slot images and taking a slot occupancy decision. The
first was based on a Canny edge detector (e.g., see [1]) and considered the rate of
contour points detected in each slot. The second was based on robust image-based
metrics such as the Sum of Absolute Differences (SAD), Sum of Squared Differences
(SSD), and Zero-mean Sum of Squared Differences (ZSSD) (e.g., [2]) and exploited a
dynamic thresholding mechanism. Both experimented strategies were based on a global
threshold. Figure 3 gives an overview of the developed detection system as well as
occupancy results automatically computed in real time. Details on the computational
mechanisms of this system are described in [7,8,10].
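As a concrete illustration of the second strategy, the following minimal sketch (our own simplification, not the authors' implementation; the patch values and the threshold are invented) computes a SAD dissimilarity between a reference slot image and a live one, then applies a global threshold to decide occupancy:

```java
// Hypothetical sketch of a SAD-based slot occupancy test with a global
// threshold; the actual mechanisms of the system are detailed in [7,8,10].
public class SlotOccupancy {

    // Sum of Absolute Differences between a reference slot patch and a live one
    // (both given as grayscale pixel grids of identical size).
    static int sad(int[][] reference, int[][] live) {
        int score = 0;
        for (int y = 0; y < reference.length; y++) {
            for (int x = 0; x < reference[y].length; x++) {
                score += Math.abs(reference[y][x] - live[y][x]);
            }
        }
        return score;
    }

    // Global thresholding of the dissimilarity score: a slot whose live image
    // differs strongly from the empty-slot reference is declared occupied.
    static boolean isOccupied(int[][] reference, int[][] live, int threshold) {
        return sad(reference, live) > threshold;
    }

    public static void main(String[] args) {
        int[][] emptyRef = {{10, 12}, {11, 10}};   // reference image of an empty slot
        int[][] stillEmpty = {{11, 12}, {10, 10}}; // live image, almost unchanged
        int[][] carParked = {{90, 95}, {88, 92}};  // live image with a vehicle
        System.out.println(isOccupied(emptyRef, stillEmpty, 50)); // false
        System.out.println(isOccupied(emptyRef, carParked, 50));  // true
    }
}
```

The dynamic-thresholding variant described above would replace the fixed threshold with one adapted over time, e.g. from the score statistics of recent frames.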
Fig. 3. System developed for the automatic detection of available parking slots in real
time [7, 8, 10]. Slots colorized in green and red represent detected vacant and occupied
slots, respectively (color figure online).
This paper proposed an overview of detection systems for improving citizen security
and comfort from urban and vehicular surveillance technologies. Indeed, a large
variety of surveillance, detection and communication technologies were presented
through the description of diverse use cases involving black box systems for vehicles,
vision-based on-board systems integrated into vehicles, camera systems of existing
urban infrastructures, as well as Vehicular Ad-Hoc Networks and cloud computing
systems.
Moreover, two vision-based detection prototypes that we developed were described in
more detail and cover both security and comfort aspects. The developed security system
exploits efficient vision-based detectors already intensively exploited in the
literature. The complexity of these services lies in the combined action of
heterogeneous systems (mobile and static systems, vision-based systems, communication
and computation systems) cooperatively supporting them. The service descriptions
stress a generalized data-flow diagram of the different stages assigned to each
system. The developed car parking assistance system deals with analysis from static
cameras, and its description focused on the elaboration of the proposed parking slot
occupancy detection system. In this way, we hope that the highlighted detection
systems, as well as the technological possibilities, will foster the development of
new security and comfort services for citizens.
References
1. Bradski, G., Kaehler, A.: Learning OpenCV: Computer Vision with the OpenCV
Library. O’Reilly Media Inc., USA (2008)
2. Chen, J.-H., Chen, C.-S., Chen, Y.-S.: Fast algorithm for robust template matching
with M-estimators. IEEE Trans. Signal Process. 51(1), 230–243 (2003)
3. Chen, N., Chen, Y., You, Y., Ling, H., Liang, P., Zimmermann, R.: Dynamic urban
surveillance video stream processing using fog computing. In: IEEE International
Conference on Multimedia Big Data (BigMM), pp. 105–112 (2016)
4. Chen, Z., Ellis, T., Velastin, S.A.: Vision-based traffic surveys in urban environ-
ments. J. Electron. Imaging 25(5), 051206–051221 (2016)
5. Fleyeh, H.: Traffic sign recognition without color information. In: IEEE Sponsored
Colour and Visual Computing Symposium (CVCS), pp. 1–6 (2015)
6. Goyal, M.: Insurance telematics. Int. J. Innovative Res. Dev. 57(10), 72–76 (2014)
7. Hammoudi, K., Benhabiles, H., Jandial, A., Dornaika, F., Mouzna, J.: Developing
a vision-based adaptive parking space management system. Int. J. Sens. Wirel.
Commun. Control 6, 192–200 (2016)
8. Hammoudi, K., Benhabiles, H., Jandial, A., Dornaika, F., Mouzna, J.: Self-driven
and direct spatio-temporal mechanisms for the vision-based parking slot surveil-
lance. In: IEEE Science and Information Conference (SAI Computing), pp. 1327–
1329 (2016)
9. Hammoudi, K., Benhabiles, H., Kasraoui, M., Ajam, N., Dornaika, F., Radhakrish-
nan, K., Bandi, K., Cai, Q., Liu, S.: Developing vision-based and cooperative vehic-
ular embedded systems for enhancing road monitoring services. Procedia Comput.
Sci. 52, 389–395 (2015)
10. Hammoudi, K., Benhabiles, H., Melkemi, M., Dornaika, F.: Analyse et gestion de
l’occupation de places de stationnement par vision artificielle. In: French-speaking
Workshop “Gestion et Analyse des donnes Spatiales et Temporelles” (GAST) of the
Conference “Extraction et Gestion des Connaissances” (EGC), pp. 91–98 (2017)
11. Joshi, J., Jain, K., Agarwal, Y.: CVMS: cloud based vehicle monitoring system in
VANETs. In: IEEE sponsored International Conference on Connected Vehicles and
Expo (ICCVE), pp. 106–111 (2015)
12. Lee, J., Yoo, H.: Real-time monitoring system using RGB-infrared in a vehicle
black box. Microwave Opt. Technol. Lett. 57(10), 2452–2455 (2015)
13. Li, N., Busso, C.: Detecting drivers’ mirror-checking actions and its application to
maneuver and secondary task recognition. IEEE Trans. Intell. Transp. Syst. 17(4),
980–992 (2016)
14. Magrini, M., Moroni, D., Palazzese, G., Pieri, G., Leone, G., Salvetti, O.: Computer
vision on embedded sensors for traffic flow monitoring. In: IEEE International
Conference on Intelligent Transportation Systems (ITSC), pp. 161–166 (2015)
15. Nguwi, Y., Lim, W.: Number plate recognition in noisy image. In: IEEE Cospon-
sored International Congress on Image and Signal Processing (CISP), pp. 476–480
(2015)
16. Park, D.-W.: Forensic analysis technique of car black box. Int. J. Softw. Eng. Appl.
8(11), 1–10 (2014)
17. Prasad, M., Arundathi, S., Anil, N., Harshikha, H., Kariyappa, B.: Automobile
black box system for accident analysis. In: IEEE Co-sponsored International Con-
ference on Advances in Electronics, Computers and Communications (ICAECC),
pp. 1–5 (2014)
18. Saini, M., Alelaiwi, A., Saddik, A.E.: How close are we to realizing a pragmatic
VANET solution? A meta-survey. ACM Comput. Surv. 48(2), 29:1–29:40 (2015)
19. Selvakumar, K., Jerome, J., Rajamani, K., Shankar, N.: Real-time vision based
driver drowsiness detection using partial least squares analysis. J. Sig. Process.
Syst. 85(2), 263–274 (2015)
20. Song, Y., Liao, C.: Analysis and review of state-of-the-art automatic parking assist
system. In: IEEE International Conference on Vehicular Electronics and Safety
(ICVES), pp. 1–6 (2016)
21. Sun, M., Zhang, D., Qian, L., Shen, Y.: Crowd abnormal behavior detection
based on label distribution learning. In: IEEE Sponsored International Conference
on Intelligent Computation Technology and Automation (ICICTA), pp. 345–348
(2015)
22. Viola, P., Jones, M.: Robust real-time face detection. Int. J. Comput. Vis. 57(2),
137–154 (2004)
23. Wang, K., Kärkkäinen, L.: Free hand gesture control of automotive user interface.
US Patent 9239624 B2 (2016)
24. Wang, X., Tang, J., Niu, J., Zhao, X.: Vision-based two-step brake detection
method for vehicle collision avoidance. Neurocomputing 173, 450–461 (2016)
25. Yi, K., Kim, K.-M., Cho, Y.J.: A car black box video data integrity assurance
scheme using cyclic data block chaining. J. KIISE 41(11), 982–991 (2014)
26. Yu, S., Li, B., Zhang, Q., Liu, C., Meng, M.Q.-H.: A novel license plate location
method based on wavelet transform and EMD analysis. Pattern Recogn. 48(1),
114–125 (2015)
27. Zhu, S., Hu, J., Shi, Z.: Local abnormal behavior detection based on optical flow
and spatio-temporal gradient. Multimedia Tools Appl. 75(15), 9445–9459 (2015)
IISSC: Smart City Infrastructures
A Public-Private Partnerships Model Based on OneM2M
and OSGi Enabling Smart City Solutions and Innovative
Ageing Services
1 Introduction
A very challenging issue in smart cities is the integration of heterogeneous data and
services in order to guarantee efficient and effective support in the daily activities
of citizens, especially the elderly.
Currently, whoever provides a service for a smart city often makes available more or
less ‘raw’ and ‘open’ data, stored on more or less ‘raw’ supports (e.g., CSV files or
relational databases). Furthermore, access to such data by the consumer requires
knowledge of their logical and semantic aspects, the implementation of articulated
polling mechanisms in order to detect data updates, and the implementation of
filtering, aggregation and reasoning processes over the accessed data according to
application needs. In other cases, the service is provided in a fully enclosed manner,
through the development of the complete cycle of collection, formatting, data storage
and distribution through a proprietary front-end. On the contrary, a ‘harmonious’
development of services to citizens would certainly be facilitated by the availability
of open, semantic data, accessible by third parties on-demand through the adoption of
a modularized system supporting the full decoupling of the components related to the
production and the consumption of data.
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 49–57, 2018.
https://doi.org/10.1007/978-3-319-67636-4_6
50 P. Lillo et al.
In such a context, some public and private operators would provide a service of
gathering and delivery of raw/semantic data, assigning to the system the task of
managing issues of marshalling, reasoning and delivery of data to other public and
private operators, in the role of consumers, on their preferred channels; adaptation
and supplying operate symmetrically with respect to the core system, providing data
abstraction and data concretization, respectively.
The interaction between the system and external agents must be designed by minimizing
the coupling between components (producer, system, consumer) and respecting the
principles of the “reactive manifesto” [1].
The efforts of the oneM2M Global Initiative [2, 3], an international project involving
eight leading ICT standards bodies (ARIB, ATIS, CCSA, ETSI, TIA, TSDSI, TTA, TTC),
are oriented towards the development of specifications able to simplify and
harmonize the integration of systems on issues related to IoT and M2M communication.
This happens in a highly fragmented context, full of methodologies based on a plethora
of protocols and proprietary solutions that make the discovery of, and access to, data
and common services by applications very difficult. Many works in the scientific
literature are converging on the oneM2M specifications, strengthening infrastructural
[4, 5] or semantic [6] aspects but, to the best of our knowledge, issues related to the
decoupled and reactive interaction between applications and the system have not been
resolved yet.
The goal of this work is to stimulate the development of applications and services by
public and private ‘third parties’, easing access, in ‘asynchronous’ and ‘reactive’
mode over various channels (e.g., REST, streams, brokers), not only to raw data but
also to abstractions based on semantics and reasoning.
This paper proposes a modularized architectural model composed of the following
levels: (i) platform (Docker), (ii) middleware orchestration (oneM2M), and (iii)
micro-services (OSGi) [7, 8]. The proposed model is then applied in a project, named
City4Age [9], aimed at the behavioral monitoring of the elderly by means of data
gathered using IoT technologies.
The rest of the paper is organized as follows. Section 2 describes the proposed system
architecture. Details about middleware orchestration adopting the oneM2M standard are
reported in Sect. 3, while a micro-service implementation of the proposed system,
based on OSGi, is described in Sect. 4. Section 5 reports a short discussion of a use
case that uses the proposed system. Concluding remarks are drawn in Sect. 6.
The proposed architecture aims at decoupling producers and consumers of data through
the introduction of the concept of Data Producer/Consumer Channels (DP Channels and
DC Channels, respectively), among which the system acts as a “smart coupler”.
Figure 1 shows the Level 1 DFD (Data Flow Diagram) at a high level of abstraction.
The specific DP Channel supplied by the producer may be of a different type from that
required by the consumer.
Producers and consumers are connected to the system through multi-channel adapters
able to support the various levels of intelligence, technology and performance of the
related agents. For example, a DP (DP* in Fig. 1) can provide a stream of semantic
data, ready to feed the knowledge base (embedded in process 2.0 in Fig. 1, named
“Data Processing”) with a minimal adaptation effort. Another provider can give access
to its own data in “pull” mode by equipping the system with credentials to access its
DBMS (Data Base Management System) via a public URL. Another can provide raw data by
uploading a CSV file in “push” mode using a REST API supplied by the system or,
alternatively, in “pull” mode, providing a URI of the CSV file whose updates can be
periodically monitored by the system.
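The "pull"-mode channel just described can be sketched as a periodic poller. The class name, the change-detection rule and the scheduling policy below are our illustrative assumptions, not part of the oneM2M specification or of the proposed system:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Hypothetical sketch of a "pull"-mode DP Channel: the system periodically
// re-reads a producer-supplied CSV source and forwards content downstream
// only when it has changed since the previous poll.
public class CsvPullChannel {
    private final Supplier<String> source;     // e.g. a fetch of the producer's CSV URI
    private final Consumer<String> downstream; // e.g. the Data Processing stage
    private String lastSeen = null;

    public CsvPullChannel(Supplier<String> source, Consumer<String> downstream) {
        this.source = source;
        this.downstream = downstream;
    }

    // Check the source once; forward only if the content changed.
    public void pollOnce() {
        String current = source.get();
        if (current != null && !current.equals(lastSeen)) {
            lastSeen = current;
            downstream.accept(current);
        }
    }

    // Poll at a fixed period (the use case in Sect. 5 mentions a 30 s default).
    public ScheduledExecutorService start(long periodSeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::pollOnce, 0, periodSeconds, TimeUnit.SECONDS);
        return scheduler;
    }
}
```

A real adapter would also record the update in the system's database, as described for the City4Age use case.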
On the other hand, the Data Consumer (DC), aiming at offering services to citizens
(but also, as in the case of the consumer DC* in Fig. 1, at implementing further
reasoning processes or enriching the information), is free to access the data by
choosing the preferred mode via DC Channels related to specific RESTful APIs, push
notifications, streams, or brokering protocols (e.g., MQTT, AMQP, XMPP).
The proposed architecture aims to ‘normalize’ the access mechanism to the system,
providing external agents with standardized, modern and reactive ways to gather and
provide data, thus making the system responsible for most of the activities related
to coupling, processing and distribution.
The proposed system implements a mechanism of data adaptation from the producer to
the consumer, foreseeing, at the same time, a model able to guarantee data processing
through the use of semantic and linguistic tools.
At a higher level of detail, Fig. 2 shows, in a Level 2 DFD, a possible development
of the Data Processing block, steering it towards a highly declarative approach that
supports high levels of system automation.
In practice, there are well-established solutions for the aspects of representation
and semantic reasoning: the implementation of the Reasoning block (Process 2.2) can
therefore reasonably be reduced to an activity of integrating, into the proposed
system, well-known OWL (Web Ontology Language) technologies and open-source Java
tools such as the OWL API [10, 11]; the conclusions inferred from reasoning will be
available to the business logic in the form of events.
Conversely, the implementation of the DSL Processing block (Process 2.3) poses an
interesting challenge concerning the ability to use ‘ad hoc’ grammars (and related
compilers/interpreters) in order to support, declaratively and as close as possible to
the various application domains, the interaction between agents and system on issues
of (i) negotiation of the requested/offered services and (ii) submission of ‘jobs’ to
be executed in the system. A very interesting approach to this question is provided by
the Xtext framework [12].
Note that Process 2.3 does not add paradigmatic elements (rules and jobs) but
incorporates them by introducing a further abstraction.
3 Middleware Orchestration
The interconnection and exchange of data and services among heterogeneous and
distributed systems are inevitable, and it is advisable to use standards in as shared
a manner as possible, exposing the functionalities to the agents in the most
accessible way.
oneM2M achieves its targets of decoupling and integration by means of a model based
on the concept of “resource” and the related CRUD + N functionalities (Create,
Retrieve, Update, Delete and Notify), accessible both through blocking requests and
through synchronous/asynchronous non-blocking requests.
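The CRUD + N resource model can be illustrated with a minimal in-memory sketch. This is our own simplification: resource addressing, access control, and the blocking/non-blocking request modes of oneM2M are omitted, and the names are not taken from the specification:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Minimal illustration of a resource tree with CRUD + N primitives.
public class ResourceTree {
    private final Map<String, String> resources = new HashMap<>();
    private final Map<String, List<BiConsumer<String, String>>> subscribers = new HashMap<>();

    // C, R, U, D primitives over addressable resources.
    public void create(String id, String content) { resources.put(id, content); fireNotify(id); }
    public String retrieve(String id) { return resources.get(id); }
    public void update(String id, String content) { resources.put(id, content); fireNotify(id); }
    public void delete(String id) { resources.remove(id); fireNotify(id); }

    // "N": listeners subscribed to a resource receive its changes as events.
    public void subscribe(String id, BiConsumer<String, String> listener) {
        subscribers.computeIfAbsent(id, k -> new ArrayList<>()).add(listener);
    }

    private void fireNotify(String id) {
        for (BiConsumer<String, String> l : subscribers.getOrDefault(id, List.of())) {
            l.accept(id, resources.get(id)); // content is null after a delete
        }
    }
}
```

The Notify primitive is what enables the reactive, decoupled consumer channels discussed in Sect. 2: consumers observe resource changes instead of polling for them.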
The oneM2M specification defines a network topology based on the following node
types: Infrastructure Node (IN), Middle Node (MN), Application Service Node (ASN),
Application Dedicated Node (ADN), and Non-oneM2M Node (NoDN).
According to its type, each node may contain specific entity types among those
represented in Fig. 3. The proposed architecture fully satisfies the requirements of
modularity and scalability, enabling the integration of different structures
(Providers) through IN nodes that provide the interaction among the infrastructural
services via “Reference Points” of type Mcc’.
The interaction among the new CSFs, and their interaction with the outside world, is
mediated by the exchange of a new set of oneM2M Resources [3], introduced in order to
support, in a oneM2M context, the functional paradigm introduced by the system
proposed in this work. Only some of the most significant Resources are reported:
• DPChannel: description of the data-channel proposed to the system.
• DPChannelRequest: request to create a referenced DPChannel.
The goal of decoupling is to be achieved not only in terms of implementation aspects
but also in terms of the independence of design/development teams, leaving them free
to design, implement, test and install system components and services, eliminating or
minimizing interference with other teams and operators while always respecting a
shared application model.
With these expectations, a PaaS (Platform as a Service) model based on Docker [11], a
well-known technology based on the concept of ‘container’ as a way to isolate
application components, and to isolate the components from the underlying
infrastructure, has been adopted.
The ‘containerization’ system based on Docker matches perfectly with a modularization
technology known as OSGi (Open Service Gateway initiative) [7].
The OSGi specifications make possible (i) the modularization of packaging, release
and deployment of software components (aka bundles), and (ii) a ‘dynamic’
modularization at run time based on local and remote sharing of micro-services.
By using an integrated development environment (IDE) such as Eclipse, enhanced with
the BndTools [14] plugin for support of the OSGi specifications, the design/development
team can build its continuous-delivery environment by splitting itself into subgroups,
individually dedicated to the implementation of the various modular components (OSGi
bundles).
The development process concludes each iteration with an automatic process that
starts with the generation of a single JAR file (containing the OSGi implementation,
microservices, configurations and resources), continues by producing the Docker
‘image’ (from which the container is instantiated) of the entire application component
(including all dependencies required for its execution), and finally ends with the
deployment of the container in a Docker environment.
Once installed, the container is fully integrated with the system: it registers its
offered services and ‘hooks’ its requested services through the local service registry
and the discovery mechanisms provided by implementations of the Remote Services
specifications defined in the enterprise section of the OSGi specifications [8].
As reported in [15], a rather natural approach to the interworking between oneM2M
and OSGi is the implementation of the oneM2M Resources by means of OSGi services,
exposing a hierarchy of Java interfaces whose root interface maps its methods onto
the attributes that are universal to all resource types ([15], Table 9.6.1.3.1-1):
resourceType, resourceID, resourceName, parentID, creationTime, lastModifiedTime,
labels.
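A minimal sketch of such a root interface, with one getter per universal attribute, could look as follows; the method names and types are our illustrative assumptions, while the authoritative attribute-to-method mapping is given in [15]:

```java
import java.util.List;

// Sketch of a root interface for oneM2M resources exposed as OSGi services,
// one getter per universal attribute of Table 9.6.1.3.1-1. Names are
// illustrative, not taken verbatim from [15].
public interface OneM2MResource {
    int getResourceType();        // resourceType
    String getResourceID();       // resourceID
    String getResourceName();     // resourceName
    String getParentID();         // parentID
    String getCreationTime();     // creationTime
    String getLastModifiedTime(); // lastModifiedTime
    List<String> getLabels();     // labels
}
```

Concrete resource types (e.g. the DPChannel and DPChannelRequest resources introduced above) would then extend this root interface with their own type-specific attributes.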
With the aim of validating the system proposed in this work, we applied the model to
the intervention mechanism of the City4Age project [16] in the context of the pilot in
the Lecce Municipality.
“City4Age - Elderly-friendly city services for active and healthy ageing”, a project
co-funded by the Horizon 2020 Programme of the European Commission, supports smart
cities in empowering social/health services facing MCI (Mild Cognitive Impairment) and
frailty in the elderly population. In particular, the City4Age project aims to utilize
digital technologies at home and in the city to monitor the behaviors of elderly
people, with the goal of properly intervening in order to delay the onset and
progression of MCI and frailty.
As a result of the analysis of their behavioral patterns, participants will receive
interventions aimed at improving behaviors known to affect the risk of onset or
worsening of MCI and frailty. There are two types of interventions: (i) informative
interventions, characterized by general information related to the city itself, and
(ii) specific interventions related to personal risks.
Furthermore, the Municipality of Lecce offers its citizens, as open data [17],
information accessible mainly through CSV text files. Emulating the work of a
municipal employee responsible for publishing information on the cultural events
offered by the city, we provided a structured CSV format for that purpose.
The proposed system, instantiated for the City4Age case study, was asked to prepare a
DPChannel resource (a ‘resource’ in terms of the oneM2M paradigm) by sending a
description of the provided data source through: (i) a URI pointing to the CSV file,
(ii) a semantic description of the fields (the comma-separated values) and (iii) a
semantic description of the channel.
The system responded by creating a DPChannel resource (with a referencing ID in the
DPChannel response) and automatically set up a polling mechanism that accesses the
CSV file every 30 s (by default) and consequently updates a database.
Acting in the role of City4Age operators, we then asked the system to create a second
DPChannel, provided in the form of a REST API to be invoked for pushing (from the
outside) a set of data related to the recognition of an elderly person's behavior
(e.g., excessive physical inactivity); in this case too, we accompanied the request
with (i) a semantic description of the input parameters and (ii) a semantic
description of the channel.
Then the system was ordered to create two DCChannel resources: the first to receive
the continuous stream of behavioral data and the second to perform, via a REST API,
filtered requests relating to scheduled cultural events.
Finally, we implemented and ran an application (outside of, but connected to, the
system) that collects data from the stream (containing, among other things, the
identity of the elderly person and the type of the detected behavior), queries the
system via REST for appropriate events, and sends a PUSH notification containing the
suggested advice to the elder's phone via Amazon SNS.
The adopted validation approach has shown the effectiveness of the proposed
architecture, allowing external agents to seamlessly integrate with the system while
maintaining a high degree of decoupling.
6 Conclusions
With the aim of facilitating the interaction between producers and consumers of data
for the development of smart-city services, this article has proposed a modular
architecture based on technologies (OSGi and Docker) and a standard (oneM2M) aimed at
supporting the development of reactive applications based on micro-services. The
proposed model has been applied to the intervention mechanism of a case study (i.e.,
the City4Age project) focused on services for elderly people, showing the potential
benefits, in terms of effectiveness, of the approach based on decoupling the system
from the specific access requirements of producers and consumers of data.
Acknowledgment. The City4Age project has received funding from European Union’s Horizon
2020 research and innovation programme under grant agreement No. 689731.
References
1. http://www.reactivemanifesto.org/
2. http://www.onem2m.org/
3. oneM2M-TS-0001 “Functional Architecture”, August 2016. http://www.onem2m.org/technical/published-documents
4. Swetina, J., Lu, G., Jacobs, P., Ennesser, F., Song, J.: Toward a standardized common M2M
service layer platform: introduction to oneM2M. IEEE Wireless Commun. 21(3), 20–26
(2014)
5. Glaab, M., Fuhrmann, W., Wietzke, J., Ghita, B.: Toward enhanced data exchange capabilities
for the oneM2M service platform. IEEE Commun. Mag. 53(12), 42–50 (2015)
6. Datta, S.K., Gyrard, A., Bonnet, C., Boudaoud, K.: oneM2M architecture based user
centric IoT application development. In: 3rd International Conference on Future Internet of
Things and Cloud, 24–26 August 2015
7. The OSGi Alliance. https://www.osgi.org/
8. OSGi Release 6. https://www.osgi.org/developer/downloads/release-6/
9. Paolini, P., Di Blas, N., Copelli, S., Mercalli, F.: City4Age: Smart cities for health prevention.
In: IEEE International Smart Cities Conference (ISC2), 12–15 September 2016, Trento (Italy)
(2016)
10. http://owlapi.sourceforge.net/
11. Horridge, M., Bechhofer, S.: The OWL API: a Java API for OWL Ontologies. Semant. Web
J. 2(1), 11–21 (2011). Special Issue on Semantic Web Tools and Systems
12. http://www.eclipse.org/Xtext/
13. https://www.docker.com/
14. bndtools.org/
15. HUAWEI: Interworking between oneM2M and OSGi. ftp://ftp.onem2m.org/Meetings/ARC/20160516_ARC23_Seoul/ARC-2016-026-Possible_Solution_of_Interworking_between_oneM2M_and_OSGi.ppt
16. Mainetti, L., Patrono, L., Rametta, P.: Capturing behavioral changes of elderly people through
unobtrusive sensing technologies. In: 24th International Conference on Software,
Telecommunications and Computer Networks (SoftCOM), 22–24 September 2016, Split
(Croatia) (2016)
17. OpenData Lecce Municipality. http://dati.comune.lecce.it/
eIDAS Public Digital Identity Systems:
Beyond Online Authentication to Support
Urban Security
1 Introduction
The problem of identity management [1] is related to many applications, among
which physical access control of people and identity auditing and monitoring are
particularly important in the context of urban security [2–4]. Consider a physical
place where the access of individuals is controlled, for example a museum or an
airport. In the case of a museum, only people with a valid ticket should be admitted;
however, it is useful to log their identities in order to enable accountability
activities. In contrast, access to an airport gate should be granted to users with a
valid ticket only after verifying that their identity does not belong to some black list.
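The two policies differ only in whether identity verification against a black list is required before granting access; a minimal sketch (all names hypothetical):

```python
def museum_access(ticket_valid, identity, access_log):
    """Museum: a valid ticket suffices; identities are logged for accountability."""
    if ticket_valid:
        access_log.append(identity)
        return True
    return False

def airport_access(ticket_valid, identity, black_list):
    """Airport gate: valid ticket AND verified identity not on a black list."""
    return ticket_valid and identity not in black_list

log = []
assert museum_access(True, "alice", log) and log == ["alice"]
assert airport_access(True, "alice", {"mallory"})
assert not airport_access(True, "mallory", {"mallory"})
```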
Recently, Regulation (EU) No 910/2014, eIDAS (electronic IDentification,
Authentication and Signature) [5], has been issued with the objective of removing
the existing barriers to the cross-border use of the electronic identification means
used in the Member States for authentication. As this Regulation does not aim
to intervene with regard to electronic identity management systems and related
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 58–65, 2018.
https://doi.org/10.1007/978-3-319-67636-4_7
In this section, we present the Public Digital Identity System (SPID) framework
[6] and the technical details necessary to understand our proposal. SPID is a
SAML-based [16] open system allowing public and private accredited entities to
offer electronic identification services to citizens and businesses. SPID enables
users to rely on digital identity managers for the immediate verification of their
identity towards service suppliers.
60 F. Buccafurri et al.
Besides the users to be identified, the stakeholders of SPID are the Identity
Providers, which create and manage SPID identities, and the Service Providers,
public or private organizations providing a service to authorized users. Moreover,
there is a Trusted Third Party (TTP), which guarantees the standard levels of security
required by SPID and certifies the involved entities (in Italy, this is the Agency
for Digital Italy).
To obtain a SPID identity, a user must register with an Identity Provider,
which is responsible for verifying the user's identity before issuing the SPID
ID and the security credentials.
A SPID user who needs to access a service sends a request to the Service
Provider offering that service (this is typically done through a Web browser). The
Service Provider then replies with an Authentication Request to be forwarded
to the Identity Provider managing the SPID identity of the user.
When the Identity Provider receives such an Authentication Request, it verifies
that the request is valid and performs a challenge-response authentication with the
user. Upon successful user authentication, the Identity Provider prepares
the Assertion, a message containing the statement of user authentication for
the Service Provider.
The Identity Provider then returns to the user a Response message containing
the Assertion, which is forwarded to the Service Provider (typically via HTTP
POST binding).
All the steps carried out in a SPID-based authentication are represented in
Fig. 1.
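The exchange of messages described above can be sketched as a toy simulation; the real protocol uses signed SAML 2.0 messages and standard HTTP bindings, both of which are omitted here:

```python
# Toy simulation of the SPID steps described above. Real SPID exchanges
# signed SAML 2.0 messages over standard bindings; signatures, encodings
# and message formats are all omitted in this sketch.

class IdentityProvider:
    def __init__(self, credentials):
        self.credentials = credentials  # user -> secret

    def authenticate(self, auth_request, user, secret):
        # Challenge-response reduced to a simple secret comparison
        if self.credentials.get(user) == secret:
            # Assertion: statement of user authentication for the SP
            return {"assertion": {"user": user, "sp": auth_request["sp"]}}
        return None  # authentication failed

class ServiceProvider:
    name = "example-service"

    def make_auth_request(self):
        # Authentication Request forwarded to the user's Identity Provider
        return {"sp": self.name}

    def consume(self, response):
        # Accept the Assertion forwarded by the user (HTTP POST binding)
        a = response["assertion"]
        return a["user"] if a["sp"] == self.name else None

idp = IdentityProvider({"alice": "s3cret"})
sp = ServiceProvider()
req = sp.make_auth_request()                      # SP -> user -> IdP
resp = idp.authenticate(req, "alice", "s3cret")   # IdP authenticates the user
user = sp.consume(resp)                           # user forwards Response to SP
```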
1 The currently standardized biometrics used for this type of identification system are
facial recognition, fingerprint recognition, and iris recognition. Biometric identification
is adopted in many countries: for example, in the USA, beginning on April 1, 2016,
the electronic passport contains relevant biometric information [17].
2. Users are equipped with a smart portable device (for example, a smartphone
or a tablet);
3. The users' devices can connect to the Identity Provider site.
Our solution integrates the SPID service to authorize access to secured
areas. To summarize the approach, the authorization procedure starts when a user
approaches the entrance of a secured zone, such as an airport turnstile. An access
point of the system is placed there, and the user is prompted to perform the SPID
authentication via his personal device. The result of this procedure is used by the
system to decide whether the user is authorized to access the secured area,
depending on his SPID identity.
The system implementing our proposal, whose deployment diagram is illustrated
in Fig. 3, is composed of two main subsystems:
– User Interface Subsystem (UI, for short), which implements all the features to
allow user interaction with the system. It is a single module implemented
through an application running on the mobile device of the user.
– Physical-Service Provider Subsystem (PSP, for short), which performs all the
verifications needed to monitor the entrance to the secured zone. It consists of three
main modules, namely: the PreAuth Module, the IDScanner Module, and the
Processing Module.
The first module involved in the protocol is the PreAuth Module, which
queries the devices of users approaching the secured-zone entrances to initiate the
SPID authentication. It is implemented by using Bluetooth beacons placed
on airport turnstiles to initialize the communication with the UI subsystem. The
design choice of using Bluetooth beacons is due to their extremely simple
integration into existing infrastructures. Indeed, this kind of device belongs to a class
of low-energy Bluetooth hardware transmitters [18,19], often battery powered,
able to broadcast messages and perform basic interactions with smart devices
in close proximity. Typically, Bluetooth beacons are used to trigger location-based
actions on devices and have already been adopted in environments where
check-in is needed.
In proximity of the monitored zone, the user device receives via Bluetooth the
authentication request from the PreAuth Module and is prompted to execute
4 Conclusion
In this paper, we designed and implemented a system based on SPID to monitor
the physical access of individuals to controlled areas. The exploitation of our
proposal gives two main advantages with respect to the standard management of
accesses (i.e., when a human operator is in charge of verifying users' identities):
the first is to speed up this process, the second is to make user identification
more robust to human errors.
Moreover, filtering mechanisms can be enabled (e.g., only adults may access),
or attribute providers can be involved to remove the need to show
an electronic ticket (the ticket being associated with the identity).
Concerning pervasiveness, we observe that, even though our implementation
makes use of SPID, our approach supports any identity management system
compliant with eIDAS. This is an important aspect because there are many
cases in which individuals may have heterogeneous nationalities.
As for accountability, it is easily obtained by suitably storing all the information
needed (for example, besides the identity, also the timestamp of the access).
The last observation concerns the technologies that can be exploited. In
our example, we showed the use of Bluetooth beacons and QR codes. However, they
can be replaced by similar technologies, such as Wi-Fi or NFC, or by long-range
UHF RFID systems.
References
1. Buccafurri, F., Lax, G., Nocera, A., Ursino, D.: Discovering missing me edges across
social networks. Inform. Sci. 319, 18–37 (2015)
2. Jara, A.J., Genoud, D., Bocchi, Y.: Big data in smart cities: from Poisson to
human dynamics. In: 2014 28th International Conference on Advanced Information
Networking and Applications Workshops, AINA 2014 Workshops, Victoria,
BC, Canada, 13–16 May 2014, pp. 785–790 (2014)
3. Anttiroiko, A., Valkama, P., Bailey, S.J.: Smart cities in the new service economy:
building platforms for smart services. AI Soc. 29(3), 323–334 (2014)
4. Buccafurri, F., Lax, G., Nicolazzo, S., Nocera, A.: Comparing Twitter and Facebook
user behavior: privacy and other aspects. Comput. Hum. Behav. 52, 87–95 (2015)
5. European Union: Regulation EU No 910/2014 of the European Parliament and
of the Council, 23 July 2014. http://eur-lex.europa.eu/legal-content/EN/TXT/
HTML/?uri=CELEX%3A32014R0910&from=EN
6. European Union: Regulation EU No 910/2014 of the European Parliament and of
the Council, 23 July 2014. http://ec.europa.eu/growth/tools-databases/tris/en/
index.cfm/search/?trisaction=search.detail&year=2014&num=295&dLang=EN
7. Leitold, H.: Challenges of eID interoperability: the STORK project. In: Fischer-
Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., Zhang, G. (eds.) Privacy and
Identity 2010. IFIP AICT, vol. 352, pp. 144–150. Springer, Heidelberg (2011).
doi:10.1007/978-3-642-20769-3_12
8. Cuijpers, C., Schroers, J.: eIDAS as guideline for the development of a pan-European
eID framework in FutureID. In: Open Identity Summit 2014, vol. 237, pp. 23–38.
Gesellschaft für Informatik (2014)
9. Edwards, A., Hughes, G., Lord, N.: Urban security in Europe: translating a concept
in public criminology. Europ. J. Criminol. 10(3), 260–283 (2013)
10. Zhang, R., Shi, J., Zhang, Y., Zhang, C.: Verifiable privacy-preserving aggregation
in people-centric urban sensing systems. IEEE J. Sel. Areas Commun. 31(9), 268–
278 (2013)
11. Krontiris, I., Freiling, F.C., Dimitriou, T.: Location privacy in urban sensing net-
works: research challenges and directions (security and privacy in emerging wireless
networks). IEEE Wirel. Commun. 17(5) (2010)
12. Niu, B., Zhu, X., Chi, H., Li, H.: Privacy and authentication protocol for mobile
RFID systems. Wireless Pers. Commun. 77(3), 1713–1731 (2014)
13. Forget, A., Chiasson, S., Biddle, R.: Towards supporting a diverse ecosystem of
authentication schemes. In: Symposium on Usable Privacy and Security (Soups)
(2014)
14. Doss, R., Sundaresan, S., Zhou, W.: A practical quadratic residues based scheme
for authentication and privacy in mobile RFID systems. Ad Hoc Netw. 11(1),
383–396 (2013)
15. Habibi, M.H., Aref, M.R.: Security and privacy analysis of Song-Mitchell RFID
authentication protocol. Wireless Pers. Commun. 69(4), 1583–1596 (2013)
16. Wikipedia: Security Assertion Markup Language – Wikipedia, The Free Encyclopedia (2016). https://en.wikipedia.org/w/index.php?title=Security_Assertion_Markup_Language&oldid=747644307
17. Security, H.J.H.: Visa Waiver Program Improvement and Terrorist Travel Preven-
tion (2016). https://www.congress.gov/bill/114th-congress/house-bill/158/text
18. Miller, B.A., Bisdikian, C.: Bluetooth Revealed: The Insider’s Guide to an Open
Specification for Global Wireless Communication. Prentice Hall PTR, New Jersey
(2001)
19. Bluetooth, S.: Bluetooth Specification (2017). https://www.bluetooth.com/
specifications/bluetooth-core-specification
20. Soon, T.J.: Qr code. Synth. J. 2008, 59–78 (2008)
Knowledge Management Perception in Industrial
Enterprises Within the CEE Region
Abstract. Smart Cities work with large amounts of data collected from various sources.
These data are useless unless people know how to process them effectively. The
aim of the submitted paper is to find out what the attitudes of employees
working in industrial enterprises in the CEE region are towards Knowledge
Management; it also focuses on finding means of possibly improving the
implementation of Knowledge Management. In the first part, the definition and
importance of knowledge are explained for a better understanding of the issue at
hand. The second part describes our questionnaire survey with a sample of 650
respondents. Selected survey results are presented and interpreted in the following
section. The main research findings and recommendations can be found in the fourth
part. The last part summarizes the article. From the survey results it can be
concluded that there is a significant relationship between knowledge management
performance and the ease of use of knowledge management tools.
1 Introduction
People use technology in their everyday life. Constant progress in science brings new
possibilities for improving the way we live. A city is a permanent human settlement with
boundaries. Smart Cities are the result of a combination of people, technology and processes
that try to solve effectively the problems that arise daily. Data are needed to
search for solutions. The large amounts of data need to be processed for further use,
turning into information. At the final level, the information turns into knowledge, which
makes it possible for humans to resolve the given issues.
In a rapidly changing work environment, organizations face the challenge of how to
manage their knowledge assets efficiently in order to generate market value and gain
competitive advantage. The focus on knowledge influences almost all parts of the
organization, such as its strategy, products, processes and workflow organisation.
Thus, knowing what to manage as knowledge is a critical issue. With this
in mind, it is necessary to distinguish between information and knowledge, between
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 66–75, 2018.
https://doi.org/10.1007/978-3-319-67636-4_8
Explicit knowledge, on the one hand, refers to the formal, systematic language, the
rules and procedures that an organization follows. This kind of knowledge can be
transferred and can therefore be a subject of education and socialization. Knowledge-based
systems also work with explicit knowledge.
The properties of explicit knowledge are the following:
• Ability to disseminate, to reproduce, to access and to re-apply throughout the organization.
• Ability to teach, to train.
• Ability to organize, to systematize, to translate a vision into a mission statement and into
operational guidelines.
• Ability to transfer knowledge via products, services, and documented processes [5].
Tacit knowledge, on the other hand, is mainly based on lived experience and is
therefore difficult to identify and to transfer. Deeply rooted in action, commitment and
involvement in a specific context, it refers to personal qualities such as the cognitive and
technical elements inherent to the individual [6].
The properties of tacit knowledge are the following:
• Ability to adapt, to deal with new and exceptional situations.
• Expertise, know-how, know-why, and care-why.
• Ability to collaborate, to share a vision, to transmit a culture.
• Coaching and mentoring to transfer experiential knowledge on a one-to-one, face-to-face basis [5].
The term “knowledge management” is relatively new. Its emergence as a management
concept is the result of the recognition of “knowledge” as an intangible yet very
valuable corporate asset that needs systematic attention and careful management in order
to extract the maximum value from it [7].
According to Awad and Ghaziri [8], knowledge management (KM) is a newly emerging
interdisciplinary business model that has knowledge within the framework of an
organization as its focus. It is rooted in many disciplines, including business, economics,
psychology, and information management. It is the ultimate competitive advantage for
From the previous facts it can be concluded that knowledge management has a positive
influence on various areas of the enterprise, including strategy creation, problem
solving, and the improvement of ideas, knowledge and innovation opportunities. Knowledge
management helps the organization to withstand competition and to build an organizational
memory for further operation.
The main purpose of the research is to give us a perspective on whether the employees
working in a company have a background in knowledge management and, if so, how
they perceive KM and KM tools: whether they think it is important to use knowledge
management tools, and what their attitude is towards using such tools.
We will also test whether they think it is important, for them and for the company, to
have a methodology that keeps and spreads the knowledge within the company.
The main aim, however, is to find the “weakest link” in the industrial enterprise when
it comes to the usage of knowledge management tools, i.e., whether employees are unable
or unwilling to use the opportunities offered by a knowledge management implementation.
These factors need to be identified closely in order to undertake corrective actions that
could make employees understand and use the possibilities given by knowledge management
systems.
Because of the variety of attitudes and possible answers to many of the questions in the
survey, we did not find it satisfactory to use a Likert scale and evaluate the questions
quantitatively. We therefore decided to build the questionnaire so that it could subsequently
be evaluated qualitatively. This kind of approach is often more time-consuming, but the
data obtained are much more meaningful.
The final survey was carried out in Slovakia and in the Czech Republic. We mainly targeted
medium (50–249 employees) and large (250+ employees) enterprises, since small
enterprises (0–49 employees) are not likely to have knowledge management systems,
nor are they likely to have an interest in investing in such systems.
The companies participating in the questionnaire were chosen at random, while they
had to fulfill two basic requirements:
• To be an industrial enterprise (with an in-house assembly)
• To have at least 50 employees
The companies came from all the regions of Slovakia and two regions of the Czech Republic.
The survey consisted of 16 questions, of which 12 allowed only one possible
answer and 4 allowed multiple answers. In addition, 8 of the questions had an open
option available, where the participant could propose his own answer.
Correctly completed surveys were obtained from 144 companies, with 650 employees
participating. The employees were chosen at random across all departments and
positions, to get the most objective picture of the perception of knowledge management
within industrial companies and to identify the hindrances to KM.
As one may see from Fig. 3, even in companies with a working KM system the biggest
challenge that employees face is poor knowledge sharing within the organization.
This is closely followed by “reinvention of the wheel”, which, however, is usually caused
by the poor knowledge sharing itself. Question 9, not discussed in this article, showed
that 82% of the employees need from several hours up to several days to obtain relevant
documents or information within the company. According to the survey, this often makes
them listless, and that could be the main cause of the reinvention of the wheel.
72 I. Szilva et al.
Figure 4 suggests that almost half of the employees participating in the survey believe
that knowledge creation is solely the job of top management. These opinions need to be
changed through continuous information across the entire company structure, to make
employees aware that knowledge creation should happen from the bottom up and not
vice versa. In this question, the employees often chose the opportunity to write their
own suggestion. An idea about having “knowledge owners” occurred frequently: these
would be employees specifically responsible for the tacit knowledge regarding a certain
part of manufacturing and production.
Fig. 4. Distribution of employee opinions regarding the knowledge creation within the company.
Figure 5 shows the two biggest issues when it comes to the implementation of KM tools.
The first is convincing employees to add their tacit or explicit knowledge to the KM
system, and the other is making other people use this information. This is why the main
targets for KM tools should be ease of use and simplicity of contribution. Tools meeting
these requirements would ensure that employees' access to relevant data is reduced
from several hours or days to a few dozen minutes.
Figure 6 shows the most common problems of employees using KM tools in their daily
work. Most often they claim that the tools are complicated and do not reflect the common
practices at the workplace. This might be solved by having the above-mentioned
“knowledge owner”, whom employees may consult with their questions or concerns
when the knowledge tools are not clear, or when they need an explanation or want to
suggest an improvement.
Fig. 6. Distribution of most common problems faced by the employees while using KM tools.
The following are the major findings of the study, carried out in Slovakia and the Czech
Republic to determine the perception of Knowledge Management among employees
working in various industrial enterprises:
• Most of the employees in industrial companies have a high school diploma, followed
by employees with a university education. Only a tiny fraction has a lower or higher
education level.
• The majority of the employees are males aged 31–53.
• The awareness of Knowledge Management systems and tools within the companies
could be improved, since almost one third of the employees did not know about the
processes to be followed.
• Getting to the relevant KM tools takes unreasonably long, which affects the
efficiency and quality of manufacturing and other processes.
• Employees face two seemingly opposing problems: they experience poor sharing
of knowledge within the organization, but at the same time an overload of
information that is often not very relevant.
• Employees often believe that top management is responsible for knowledge creation.
This might mean that the management does not send a proper signal to the employees
to contribute to and use the various tools of knowledge management.
• The two biggest issues that came out of the survey regarding the usage and creation
of knowledge are that it is hard to convince people to record their knowledge and,
on the other hand, not easy to convince employees to use the guidelines created.
• Employees believe that a well-functioning KM could lead to lower error rates in
production and decreased downtimes.
• A fact often mentioned in the open-ended questions was that the KM processes and
tools often do not reflect the common practices in the workplace; they are often
impossible to follow with the tools and capacities at the employees' disposal.
• Most employees feel that one of the most essential types of information to be
captured is the knowledge and experience of skilled workers.
5 Conclusion
This research was carried out mainly for the purpose of finding out what the attitudes
of employees working in industrial enterprises in the CEE region are towards
Knowledge Management. It also focused on the means of possibly improving the KM
implementation when it comes to the daily use of the tools that KM provides to
employees. From the results of the study it can be concluded that there is a significant
relationship between knowledge management performance and both the ease of use of
KM tools and the clarity of knowledge management processes. These variables can
jointly predict knowledge management efficiency and performance. It can also be
concluded from the results that knowledge management can become an effective
strategic instrument for achieving organizational objectives. Organizational learning,
however, must depend on every employee contributing, and that message needs to be
clearly delivered from the top management to the rest of the employees.
Acknowledgement. This paper is part of the H2020 project RISE-SK (Research and Innovation
Sustainability for Europe), which was approved as an institutional project with foreign
participation.
References
1. Watson, I.: Applying Knowledge Management. Morgan Kaufmann Publishers, San Francisco
(2003)
2. Haslinda, A., Sarinah, A.: A review of knowledge management models. The Journal of
International Social Research 2(9), 187–198 (2009)
3. Kim, D.: The link between individual and organizational learning. Sloan Management Review
(1993)
4. Nonaka, I., Takeuchi, H.: The Knowledge-Creating Company: How Japanese Companies
Create the Dynamics of Innovation. Oxford University Press, New York (1995)
5. Dalkir, K.: Knowledge Management in Theory and Practice. The MIT Press, Massachusetts
(2011)
6. Baets, W.: Knowledge Management and Management Learning: Extending the Horizons of
Knowledge-Based Management. Springer, Marseille (2005). doi:10.1007/b136233
7. Davenport, T.H., Prusak, L.: Working Knowledge: How organizations manage what they
know. Harvard Business School Press, Boston (2000)
8. Awad, E.M., Ghaziri, H.M.: Knowledge Management. Dorling Kindersley (India) Pvt. Ltd.,
New Delhi (2008)
9. Mothe, J., Foray, D.: Knowledge Management in the Innovation Process. Kluwer Academic
Publisher, Massachusetts (2001)
10. Ruggles, R., Holtshouse, D.: The knowledge advantage. Capstone Publishers, New
Hampshire (1999)
11. Goodall, B.: The Penguin Dictionary of Human Geography. Penguin, London (1987)
Cold Chain and Shelf Life Prediction
of Refrigerated Fish – From Farm to Table
Mira Trebar
Abstract. Fresh perishables are normally stored and distributed with proper
cold chain control in the supply chain from farm to retail. Usually, consumers
break the cold chain after the point of sale. The question is whether
consumers are aware of the requirements during transport to, and storage at,
home. The handling conditions and temperature changes can significantly
decrease the shelf life and cause faster spoilage of food. The study presents two
examples of shelf life prediction. The first is based on temperature
measurements of fish covered with ice in a Styrofoam box, supported by
information on environment temperatures in the cold store, an uncooled car and
the refrigerator. In the second, measurements from the first phase of storage at
temperatures of 0 °C–4 °C were used, with the assumption that the fish was later
stored at higher temperatures without ice. The results show an important
shortening of shelf life after the point of sale.
1 Introduction
Mainly, shelf life studies of seafood have been performed using kinetic models for
the growth of spoilage bacteria under various conditions. Higher temperatures increase
growth rates, which is the reason for using cold chain monitoring in the supply chain.
Consumers prefer buying fresh fish, a high-quality food product labeled
with capture-date information [1]. The survey concluded that it is very
important to give consumers information on sensory and microbiological shelf life.
Many shelf life prediction tools run simulations to provide that information for possible
and expected scenarios based on real cold chain data [2]. Chemical methods were used
to specify the shelf life at different storage temperatures. The study showed that the
exposure time of fish at the point of sale (PoS) and in consumers'
households is critical.
New technologies, namely time temperature integrators (TTI), were used to provide
predictive modeling of seabream shelf life under dynamic temperature conditions [3].
The tests were validated by experimental measurements of microbiological spoilage.
Radio frequency identification (RFID) can be successfully used in cold chain
monitoring to provide the temperatures of environment conditions and also of the products
in the package [4]. The information could be used at the point of sale to calculate dynamic
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 76–83, 2018.
https://doi.org/10.1007/978-3-319-67636-4_9
shelf life and present it to the consumer on mobile devices. The development of sensors
combining biochemical and microbial spoilage detection is gaining significant value in the
food supply chain [5]. A smart quality sensor was tested to measure quality and predict its
progress in fresh cod under commercial ice-storage conditions.
Unfortunately, the important phase of consumers' handling of fish is still not adequately
evaluated, and the information on the remaining shelf life before the preparation of fish
is not exactly specified and presented. Lately, consumers have become well informed about
the slogan “From Farm to Fork” in connection with the place of origin and food traceability.
All of this is regulated by international and regional legislation and standards to provide
food quality and safety, supported by shelf life information for consumers. The
industrial relevance of selecting and using the optimal TTI smart label for monitoring
food products allows reliable estimation of the remaining shelf life, leading to improved
management of the cold chain from production to the point of consumption [6].
Usually, consumers have general information about food handling conditions
during storage. They also know the domestic refrigerator temperatures and home
storage times of chilled food. Awareness of all this, and their behavior at home, have
a great impact on food safety. A study considered the above-mentioned facts in order
to couple them with a general rule that could be incorporated into shelf life studies [7]
and safety risk assessment.
The focus of the presented cold chain study is the integration of
temperature monitoring during supply chain processes with the modelling of consumers'
transport and home conditions. The predicted shelf life results can be presented on mobile
devices together with recommendations for the further storage and use of perishable food products.
Various fixed and mobile sensor systems are already deployed in cold stores
and logistic processes, and temperature measurements are collected and stored by
supply chain partners. What is missing is the last and most important step: sharing the data
and making it public so that detailed analyses can be performed. The results could give all partners
confirmation of their work and provide consumers with confidence about food quality
and provenance.
This paper is organized as follows. Section 2 gives a description of the cold chain,
divided into two phases: the first is part of the controlled supply chain up to the
point of sale, and the second continues during the consumer's handling of the product.
In Sect. 3, the corresponding case study of temperature measurements with shelf life
prediction results is presented. Finally, conclusions are drawn in Sect. 4.
2 Cold Chain
Cold chain (CC) systems are required in the food supply chain (SC) to monitor the
handling conditions of products during the processing, packaging, transport and storage
phases up to the point of sale. Various wireless sensors and other sensing and identification
devices are connected to the internet and used to collect environment temperatures on a
regular basis [8]. Lately, such devices have become known as the Internet of Things (IoT)
and are widely used in the supply chain.
For perishable food, real-time data is mostly provided in the form of temperature and
sometimes humidity measurements, from warehouses during the storage period and
78 M. Trebar
from trucks during short or long transport phases. Companies can use the data for their
own management (automated reporting, improvements and alarms). Only rarely do partners in
the supply chain exchange these data and use them to resolve rejection problems
at the delivery of food.
At the point of sale, consumers are not aware of the information collected or
exchanged in the supply chain. They do not know whether something went wrong
with the product available for sale and can only believe what they are told. Usually,
they are also unaware of the importance of the required conditions after purchase, including
transport and storage at home. For highly perishable food products whose
cold chain was broken by different storage conditions, it is very important to evaluate
and consider the changed shelf life information.
Fig. 1. Cold chain monitoring – temperatures are measured separately in each phase of the
supply chain from the farm to the retail or other locations of sale (TT – transport temperatures;
TW – warehouse temperatures; TS – retail temperatures).
Fig. 2. Cold chain monitoring – temperatures are measured successively in all phases of the
supply chain from the farm to the retail or other locations of sale.
Fig. 3. Cold chain assumptions – temperatures are not measured after the sale; they can be
estimated for specific storage conditions.
3 Shelf Life
Figure 4 shows the fish temperatures measured during three phases of the supply chain, starting in the cold store
(CS) with temperatures around 0 °C, continuing during transport (TR) in a car with high
temperatures of up to 30 °C, and ending in the home refrigerator (R) with temperatures
between 5 °C and 8 °C.
Fig. 4. Cold chain monitoring – environment and fish temperatures were measured in three
phases including cold store (CS = 25 h), transport (TR = 2 h), refrigerator (R = 18 h).
Table 1. High quality shelf life for seabass related to storage time and temperature.
T (°C) X (d) T (°C) X (d) T (°C) X (d) T (°C) X (d)
0 8 8 2.7 16 1.1 24 0.75
1 6.5 9 2.35 17 1 25 0.73
2 5 10 2 18 0.92 26 0.71
3 4 11 1.75 19 0.89 27 0.69
4 3.5 12 1.5 20 0.85 28 0.67
5 3.35 13 1.4 21 0.83 29 0.62
6 3.15 14 1.3 22 0.8 30 0.6
7 3 15 1.2 23 0.78 31 0.57
Table 2. Quality shelf life (QSL-remain) calculations of seabass stored in Styrofoam box in
days/hours (d/h).
Test QSLC-CS (d/h) QSLC-TR&R (d/h) QSL-remain (d/h)
Fish (Fig. 4) 2/0 0/20 5/4
Environment (Fig. 4) 2/6 2/10 3/8
Fish: CS(0 °C) + TR(28 °C) + R(7 °C) 2/0 2/16 3/8
Fish: CS(0 °C) + TR(10 °C) + R(7 °C) 2/0 2/2 3/22
The first two tests show QSL results based on the real fish and environmental temperature
measurements shown in Fig. 4. The QSL-remain differs by approximately two days,
which indicates that environment temperatures are not adequate information about
the cold chain conditions of fish in the Styrofoam box.
The next two tests used estimated temperatures of fish stored in the Styrofoam box
without ice. In the first one, the prediction results for temperatures in the CS (0 °C), TR
(28 °C) and R (7 °C) phases give the same QSL-remain as the environmental measurements,
but QSLC-CS and QSLC-TR&R differ according to the deviations between the real
measurements and the estimates. For a lower estimated temperature during the
transport phase (TR = 10 °C), the QSL-remain improves by 14 h. This
shows how only two hours at an 18 °C higher temperature reduce the remaining shelf
life by 14 h.
The impact of temperature and time during the consumer's transport and
storage of fish before consumption is very important. Under the assumption
that the supply chain handling conditions comply with standardized regulations (the fish
package includes melting ice to sustain the required humidity and a temperature of T = 0 °C)
and that the remaining shelf life is available at the point of sale, consumers can be
convinced of the fish quality. Usually, however, they are not aware of how much they contribute
to the faster spoilage of fish afterwards. For each labeled fish displayed for sale, the
mandatory information includes the storage conditions (temperature) and one of
the following: the expiry date, the best-before date or the use-by date [11]. In
addition, food operators may provide voluntary information such as the date of catch/harvest,
nutrition declarations and other details.
Table 3 provides calculations of the remaining shelf life to conform to the quality the
consumer expects from the food he consumes. It provides additional information
to be included, together with the verification of the place of origin and other traceability
information, in mobile applications. The shelf life calculations are based on the information
received at the time of purchase concerning the storage conditions of the fish
(T = 0 °C): the total shelf life equals 8 days, and the QSLSC-remain was 5 days.
Table 3. The impact of time and temperatures on shelf life calculations during transport and
storage phases at the consumer's home.
TR (Time, T) (h, °C) QSLC-TR (d/h) R (T = 7 °C) (d) QSLC-R (d/h) QSL-remain (d/h)
1: 0, 10, 20, 30 0/1, 0/4, 0/10, 0/14 1 2/16 2/7, 2/4, 1/22, 1/18
(test 1, cont.) 2 5/8 –0/9, –0/12, –0/18, –0/22
2: 0, 10, 20, 30 0/2, 0/10, 0/23, 1/8 1 2/16 2/6, 1/22, 1/9, 1/0
3: 0, 10, 20, 30 0/3, 1/5, 2/20, 4/0 1 2/16 2/5, 1/3, –0/12, –1/16
Table 3 describes three tests depending on the transport time (TR Time = 1,
2, 3 h), each with four examples of possible environmental temperatures (T = 0, 10, 20,
30 °C) to which the fish is exposed during that time. Storage of the fish in a refrigerator
(T = 7 °C, R Time = 1 or 2 days) is included in each test.
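The arithmetic behind Tables 1 and 3 appears to be that the shelf life consumed in a phase equals the elapsed time multiplied by the ratio X(0 °C)/X(T), where X(T) is the Table 1 shelf life at the phase temperature. This is our reconstruction, not an equation stated in the paper; it reproduces the 1 h transport rows of Table 3 exactly (for longer transports the paper's values grow faster, presumably because the fish itself warms up over time):

```python
# Sketch reproducing the Table 3 shelf-life arithmetic under the assumption
# (inferred from Tables 1 and 3) that QSL consumed in a phase equals
# elapsed time * X(0 degC) / X(T).

X = {0: 8.0, 7: 3.0, 10: 2.0, 20: 0.85, 30: 0.6}  # subset of Table 1, days

QSL_AT_SALE_H = 5 * 24   # QSL-remain at the point of sale: 5 days

def consumed_h(hours, temp_c):
    """Shelf-life hours consumed by `hours` of storage at `temp_c`."""
    return hours * X[0] / X[temp_c]

def qsl_remain_h(transport_h, transport_t, fridge_days):
    """Remaining QSL (hours) after transport and home storage at 7 degC."""
    used = consumed_h(transport_h, transport_t) + consumed_h(fridge_days * 24, 7)
    return QSL_AT_SALE_H - used

# Test 1 of Table 3: 1 h transport at 10 degC, then the home refrigerator.
print(round(consumed_h(1, 10)))       # 4   -> QSLC-TR column 0/4
print(round(consumed_h(24, 7)))       # 64  -> QSLC-R column 2/16
print(round(qsl_remain_h(1, 10, 1)))  # 52  -> QSL-remain 2/4
print(round(qsl_remain_h(1, 10, 2)))  # -12 -> QSL-remain -0/12, i.e. expired
```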
The analysis of the presented results shows the high importance of the shortest possible
transport time and the lowest possible temperatures. Three hours at a temperature of
T = 30 °C consume four days of QSL. The QSL-remain calculations for R = 2 days
show negative values, which indicate that the product has passed its expiry date. The
QSL-remain is highly dependent on both parameters (T, time) and should be considered
by consumers.
4 Conclusion
Cold chain monitoring of the fish supply chain was used to provide shelf life
predictions for the first phase, from the fish farm to the point of sale. These results should be
presented to consumers: although they have no influence on this phase, they should be
informed about the QSL-remain. Additionally, the next very important phase, the
consumers' involvement in the cold chain from the point of sale to the final preparation of the food,
was analyzed with various time and temperature scenarios to be used for
calculating the remaining shelf life. The results can be used to give recommendations on
how fish should be stored and for how long it keeps its quality and freshness
after purchase.
Acknowledgments. This work has been supported by Slovenian research agency under ARRS
Program P2-0359 Pervasive computing and in collaboration with Fonda.si which provided the
data.
References
1. Ostli, J., Esaiassen, M., Garitta, L., Nostvold, B., Hough, G.: How fresh is fish? Perceptions
and experience when buying and consuming fresh cod fillets. Food Qual. Prefer. 27, 26–34
(2013)
2. Gogou, E., Katsaros, G., Derens, E., Alvarez, G., Taoukis, P.S.: Cold chain database
development and application as a tool for the cold chain management and food quality
evaluation. Int. J. Refrig. 52, 109–121 (2015)
3. Tsironi, T., Stamatiou, A., Giannoglou, M., Velliou, E., Taoukis, P.S.: Predictive modelling
and selection of time temperature integrators for monitoring the shelf life of modified
atmosphere packed gilthead seabream fillets. LWT Food Sci. Technol. 44, 1156–1163
(2011)
4. Trebar, M., Lotrič, M., Fonda, I., Pleteršek, A., Kovačič, K.: RFID data loggers in fish
supply chain traceability. Int. J. Antennas Propag. 2013, 1–9 (2013)
5. Garcia, R.M., Cabo, L.M., Herrera, R.J., Ramilo-Fernandez, G., Alonso, A.A., Balsa-Canto,
E.: Smart sensor to predict retail fresh fish quality under ice storage. J. Food Eng. 197, 87–97
(2017)
6. Giannoglou, M., Touli, A., Platakou, E., Tsironi, T., Taoukis, P.S.: Predictive modeling and
selection of TTI smart labels for monitoring the quality and shelf-life of frozen food. Innov.
Food Sci. Emerg. Technol. 26, 294–301 (2014)
7. Roccato, A., Uyttendaele, M., Membré, J.M.: Analysis of domestic refrigerator temperatures
and home storage time distributions for shelf-life studies and food safety risk assessment.
Food Res. Int. (2017). doi:10.1016/j.foodres.2017.02.017
8. Heising, J.K., Boekel, M.A.J.S., Dekker, M.: Simulations on the prediction of cod (Gadus
morhua) freshness from an intelligent packaging sensor concept. Food Pack. Shelf Life 3,
47–55 (2015)
9. Limbo, S., Sinelli, N., Riva, M.: Freshness decay and shelf life predictive modelling of
European sea bass (Dicentrarchus labrax) applying chemical methods and electronic nose.
LWT Food Sci. Technol. 42, 977–984 (2009)
10. Roberts, W., Cox, J.L.: Proposal for standardized core functionality in digital
time-temperature monitoring SAL devices. A White Paper by the Temperature Tracking
Work Group of the SAL Consortium (2003)
11. A Pocket Guide to the EU's new fish and aquaculture consumer labels, Publications Office of
the European Union (2014). https://ec.europa.eu/fisheries/sites/fisheries/files/docs/body/eu-
new-fish-and-aquaculture-consumer-labels-pocket-guide_en.pdf. ISBN 978-92-79-43893-6.
doi:10.2771/86800
A HCE-Based Authentication Approach
for Multi-platform Mobile Devices
Abstract. Mobile devices gather more and more functionalities useful
for controlling the facilities of people's daily life. They offer computational power and
different kinds of sensors and communication interfaces, enabling users to
monitor and interact with the environment through a single integrated tool. Near Field
Communication (NFC) is a suitable technology for the interaction
between the digital world and the real world. Most NFC-enabled mobile devices exploit
the full set of smart card features: e.g., they can be used as contactless payment
and authentication systems. Nevertheless, the present heterogeneity in mobile and
IoT technologies does not permit the potential of mobile devices as authentication
systems to be fully expressed, since most of the proposed solutions are strictly tied
to specific technological platforms. Building on the smart payment card approach, the
Europay, MasterCard and VISA (EMV) protocols and Host Card Emulation (HCE)
technology, the current work proposes a distributed architecture for using NFC-
enabled mobile devices as the possession factor in Multifactor Authentication (MFA)
systems. The innovative idea of the proposal lies in its independence with
respect to specific software and hardware technologies. The architecture
distributes tokens to registered mobile devices to univocally identify
users while tracing their actions. As a proof of concept, a real case
has been implemented: an Android/iOS mobile application that controls a car central
locking system via NFC.
1 Introduction
In smart cities, advanced systems such as sensing technologies and smart IoT devices are
intended to improve and automate processes within a city [1], trying to enhance and
ease citizens' daily life: several real cases show that IoT technologies support added-
value services for the administration of the city and for its citizens [2]. On the other
hand, smart cities call for new technical solutions and best-practice guidelines. In this
regard, the present paper analyses an innovative solution by which smartphones can
be used as an authentication system instead of a common physical key: smartphones can
replace smart cards, badges, tokens, and other long-standing, but often uncomfortable,
methods of identification and security, providing users with a single means of
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 84–92, 2018.
https://doi.org/10.1007/978-3-319-67636-4_10
authentication: their own smartphones. Such a solution has several applications in smart
cities: one need only think of buildings with badge-based access control systems.
Another example is related to car sharing: the proposed solution offers a simple way to
implement a car locking system that recognises the user by means of his
smartphone alone, also tracking his movements during the use of the car.
Nowadays, the smartphone is clearly the collector of people's virtual social networks.
Moreover, research and industry offer exciting possibilities for the interaction
between smartphones and the real world, mainly in home automation and
Internet of Things (IoT) scenarios, thanks to embedded sensors and communication
interfaces. In this sense, the smartphone is taking on the features of several daily
life objects, acting as a proxy for the interactions between people and the environment.
Modern security systems adopt the so-called Multifactor Authentication (MFA)
paradigm: authentication and security are guaranteed by combining more than one method
of authentication, from independent categories of credentials, to verify the user's identity
in mission-critical transactions. An authentication factor is a category of credential
used for identity verification. The three most common categories are often described as
something the user knows (the knowledge factor), something the user has (the possession
factor), and something the user is (the inherence factor). Typical instances of these
factors are, respectively, the password, the security token, and biometric
verification.
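As a toy illustration of the paradigm (the function name and the two-factor threshold are ours, not from the paper), requiring verified factors from at least two distinct categories can be sketched as:

```python
# Minimal sketch of an MFA policy: access requires at least two verified
# factors from different credential categories.
def authenticated(factors: dict) -> bool:
    """`factors` maps a category name to whether that factor was verified,
    e.g. {"knowledge": True, "possession": True}."""
    categories = ("knowledge", "possession", "inherence")
    verified = sum(1 for c in categories if factors.get(c))
    return verified >= 2

print(authenticated({"knowledge": True, "possession": True}))  # True
print(authenticated({"knowledge": True}))                      # False
```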
In recent years, several works in the scientific and technical literature have focused
on using mobile devices as tools providing authentication factors in MFA
systems, e.g. [3–6].
In a mobile MFA system, NFC is the way forward: the NFC card emulation mode is
well suited to mobile identification scenarios as a possession factor [7–9]. Furthermore,
the NFC card emulation mode is compatible with pre-existing smartcard-based
authentication systems, which are at present widely deployed.
However, the widespread use of proprietary technologies in the mobile sector makes
it difficult to use smartphones as a universal tool for interacting with the physical world.
Specifically, the NFC interface is not freely exploitable: iOS applications cannot leverage
software tools, such as SDK APIs, to control the NFC interface, and there is no way to use
NFC on Apple devices except by means of the Apple Pay wallet. Moreover, NFC suffers
from well-known security issues, but both industrial and research studies have identified
solutions for facing them [10–13]. There is thus a dire need for a solution that enables users
to take advantage of this IoT technology independently of the specific smartphone
platform, in order to enable the implementation of typical virtual-world scenarios in
real life. As an example, Table 1 compares four recognised industrial solutions for
second-factor authentication by means of mobile devices. It is worth noting that
only one of them uses NFC technology, and it is compatible only with Android devices.
Moreover, none of them is compatible with pre-existing authentication systems.
The presented study tries to overcome the problem of using smartphones as an
authentication means and credential category in a mobile MFA system independently of
their specific operating system and software/hardware restrictions, while maintaining all the
necessary security requirements. The core idea is to use the recent Host Card Emulation
(HCE) technology, through which the cloud generates and distributes virtual smart cards
86 L. Manco et al.
to mobile devices, enabling the smartphone to use the NFC card emulation mode to
communicate with smart devices and identify itself by emulating the virtual card received
from the HCE cloud. Such an approach also permits the user to be authenticated throughout
the NFC transactions by means of a software solution, while leveraging the cryptographic
processes traditionally used by hardware-based secure elements, without the need for a
physical secure element.
The goal of the study is to create a system that permits a mobile application to
communicate with a smart device via the NFC interface, transmitting to it user data useful
for authentication by the system. As specified in the introduction, the main
obstacle relates to iOS platforms, where control of the NFC interface is
exclusively delegated to the Apple Pay wallet. On the other hand, Android OS makes
available specific APIs able to fully control the smartphone's NFC transceiver. So, the
challenge is to implement in the smart device a software component able to read and
accurately interpret the instructions received from the smartphone, regardless of which of
the two smartphone OSs is involved. In short, the original contribution of the
paper lies in the fact that the created system is platform-agnostic: the designed
architecture enables both Android and Apple devices to be used as an authentication factor,
bypassing the limit imposed by Apple devices.
Finally, the proposed architecture has been validated by means of a proof of concept:
a prototype able to control a car locking system using the user's smartphone as the
second authentication factor.
The rest of the paper is organized as follows. Section 2 describes the design of the
system architecture. Here the authors analyse the software and hardware architecture
model and the information flowing through it. Section 3 presents the proof-of-concept
validating the architecture.
2 System Architecture
Fig. 1. Architecture deployment diagram: the mobile device authenticates itself to the Smart
Device using the token provided by the HCE server, passing it through a communication
channel compliant with the NFC standard. The Smart Device hosts a custom implementation
of the EMV standard.
The main idea is based on the use of HCE technology, through which the cloud
generates and distributes virtual smart cards, also known as tokens, to mobile devices. The
NFC card emulation mode allows the smartphone to emulate a contactless card by means
of such tokens. Another system commonly used for securing the card-emulation
mode in NFC contactless transactions is the Secure Element, a chip embedded directly
in the device's hardware, or in a SIM/UICC card provided by network operators, which
hosts the secure tokens and execution environment. Unlike the latter, HCE moves the
secure components to the cloud and avoids any hardware restriction.
As shown in Fig. 1, in the presented architecture a mobile application receives and
manages the virtual card from an HCE server. More specifically, on iOS platforms
such an application is exclusively represented by the Apple Pay wallet, while on Android
platforms it can also be a dedicated application. Through the NFC card emulation mode,
such a mobile application can communicate with the Smart Device using the
HCE-generated virtual smart card.
In order to communicate with the smartphone, the Smart Device
relies on three main software modules. The first one is the interface for communication
with the smartphone. It is physically implemented on the device through a
dedicated driver library that interfaces the smart device with its own NFC antenna
and reorganises the data received from the smartphone. On the other side, there is the
interface for controlling the actuator, the second module implemented within the smart
device.
The third module is completely dedicated to overcoming the iOS restrictions relating
to the use of NFC. It is a custom implementation of the EMV standard, an open-standard
set of specifications for smart card payments, also known as chip cards, and for the payment
terminals that read them. The EMV specifications are based on various standards, such as
ISO/IEC 7816 for contact card payments and ISO/IEC 14443 for contactless ones.
In the present study, the Smart Device in the architecture embeds an implementation
of the ISO/IEC 14443 standard, since it is the protocol involved in the Apple
Pay wallet contactless payment processes. Therefore, in this architecture the Smart
Device acts as a sort of Electronic Funds Transfer at Point of Sale (EFTPOS) terminal,
which receives a payment request from the wallet and performs some resulting actions. By
means of this module, the Smart Device uses the payment information received from the
smartphone to authenticate Apple Pay wallet users. Differently from a
standard EFTPOS, in this case the module does not initiate a payment process for an
authenticated user, but invokes the Actuator Controller module.
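As an illustrative sketch of this third module (the accepted AID below is a well-known payment AID used purely as a placeholder; the paper does not disclose its actual implementation), the smart device's first step in the dialogue is typically recognising the ISO/IEC 7816-4 SELECT-by-AID command with which both the Apple Pay wallet and an Android HCE application open a card-emulation session:

```python
# Sketch of the smart device's APDU handling: recognising the ISO/IEC 7816-4
# SELECT (by AID) command that opens a card-emulation session. The accepted
# AID is an illustrative placeholder, not the one used by the authors.

SUCCESS = bytes([0x90, 0x00])          # SW1 SW2: command completed normally
FILE_NOT_FOUND = bytes([0x6A, 0x82])   # SW1 SW2: application not supported

ACCEPTED_AID = bytes.fromhex("A0000000041010")  # example payment AID

def process_apdu(apdu: bytes) -> bytes:
    """Answer a command APDU received over the NFC (ISO/IEC 14443) link."""
    # SELECT by AID: CLA=00, INS=A4, P1=04, P2=00, Lc, AID bytes...
    if len(apdu) >= 5 and apdu[:4] == bytes([0x00, 0xA4, 0x04, 0x00]):
        lc = apdu[4]
        aid = apdu[5:5 + lc]
        if aid == ACCEPTED_AID:
            return SUCCESS      # proceed with the EMV dialogue / token check
        return FILE_NOT_FOUND
    return bytes([0x6D, 0x00])  # instruction code not supported

select = bytes.fromhex("00A4040007A0000000041010")
print(process_apdu(select).hex())  # 9000
```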
Fig. 2. Information flow describing the steps in using mobile devices as authentication factor.
The user loads his own personal information into the smartphone by means of the
ad-hoc Android mobile application or the Apple Pay wallet, depending on the
operating system. In the latter case, such information has to be strictly
related to a physical smart card, since the application deals with a
mobile wallet. What matters is that, once loaded into the smartphone, this
information is transmitted to the HCE platform and no longer resides on the
smartphone, coherently with the required security standards.
The HCE cloud platform stores the personal information of the registered users and
generates linked virtual smart cards, thus mapping each user/smartphone pair to the
virtual smart card exploitable for NFC communication. Next, the HCE platform
provides the smartphone with the generated virtual smart card.
Once it has obtained the virtual smart card, the smartphone can establish a communication
channel with the smart device through the NFC card emulation mode. During a first phase,
the smartphone sets up the smart device so that it trusts the virtual smart card
information. Subsequent smartphone/smart device communication streams are aimed at
activating the bound actuator.
Furthermore, the virtual smart card can be shared among users, thus sharing
access to the physical resources, and the cloud architecture can keep track of all the
actions performed by users, by means of a synchronizing service.
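A minimal sketch of this cloud-side bookkeeping follows; the class and method names, the random 16-byte token format, and the audit structure are our assumptions, since the paper does not specify them:

```python
# Illustrative HCE-platform bookkeeping: issuing virtual cards (tokens),
# sharing a card among users, and tracing actions per token.
import secrets

class HcePlatform:
    def __init__(self):
        self.cards = {}    # (user, device) -> virtual card token
        self.audit = []    # (token hex, action) pairs

    def issue_card(self, user: str, device: str) -> bytes:
        """Generate a virtual smart card for a registered user/device pair."""
        token = secrets.token_bytes(16)
        self.cards[(user, device)] = token
        return token

    def share_card(self, owner: str, owner_dev: str, user: str, device: str):
        """Grant another user/device the same virtual card (shared access)."""
        self.cards[(user, device)] = self.cards[(owner, owner_dev)]

    def trace(self, token: bytes, action: str):
        """Record an action performed with a given card."""
        self.audit.append((token.hex(), action))

platform = HcePlatform()
t = platform.issue_card("alice", "phone-1")
platform.share_card("alice", "phone-1", "bob", "phone-2")
print(platform.cards[("bob", "phone-2")] == t)  # True
```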
3 Proof of Concept
Fig. 3. Electric scheme of the connection block between the Arduino board and the locking motor
Coherently with the workflow depicted in Fig. 2, the user authenticates himself to the
smart device, communicating with it via the NFC card emulation mode. The smart device
verifies the user's identity based on the data set during a preparation phase and, in case
of success, it activates the two relays. They are disposed and connected to the circuit so
Fig. 4. Implementation of the prototype in a real case: the car locking system is not blocked (i).
When the smartphone is moved closer to the NFC antenna embedded in the car door (ii), the
blocking system gets locked (iii).
The presented work introduced a software and hardware solution able to implement a
multifactor authentication system in which the possession factor is represented by the user's
NFC-compliant smartphone. The security token used for the second factor in the
authentication system is handed out to the user's mobile device by an HCE-based cloud
software component. The mobile device communicates this token to authenticate the user
via the NFC card emulation mode, thus emulating a real smart card.
The described strategy is compatible with both Android and iOS mobile platforms,
bypassing the iOS restrictions on using NFC features. It can also be seamlessly integrated
into pre-existing smartcard-based authentication systems, thanks to the adoption of the
card emulation mode.
The model validation included the implementation of a prototype able to control a
car locking system using the user's smartphone as the second authentication factor.
The experimentation showed the proper functioning of the developed solution.
Future work concerns improving the features of the developed HCE
system by adopting the newest cloud techniques.
References
1. Hancke, G.P., de Carvalho e Silva, B.: The role of advanced sensing in smart cities. Sensors
13(1), 393–425 (2013). Multidisciplinary Digital Publishing Institute, Switzerland
2. Zanella, A., Bui, N., Castellani, A., Vangelista, L., Zorzi, M.: Internet of Things for smart
cities. IEEE Internet Things J. 1(1), 22–32 (2014)
3. Aloul, F., Zahidi, S., El-Hajj, W.: Two factor authentication using mobile phones. In: 2009
IEEE/ACS International Conference on Computer Systems and Applications, pp. 641–644
(2009)
4. Smith, M., Tassone, J., Holmes, D.: Method and system for providing identity, authentication,
and access services. US 9076273 B2, 07 July 2015
5. Mandalapu, A., Raj, L.D.: An NFC featured three level authentication system for tenable
transaction and abridgment of ATM card blocking intricacies. In: 2015 International
Conference and Workshop on Computing and Communication (IEMCON), pp. 1–6 (2015)
6. Chen, W., Hancke, G.P., Mayes, K.E., Lien, Y., Chiu, J.-H.: NFC mobile transactions and
authentication based on GSM network. In: 2010 Second International Workshop on Near
Field Communication, pp. 83–89 (2010)
7. Adukkathayar, A., Krishnan, G.S., Chinchole, R.: Secure multifactor authentication payment
system using NFC. In: 2015 10th International Conference on Computer Science & Education
(ICCSE), pp. 349–354 (2015)
8. Ivey, R.G.F., Braun, K.A., Blashill, J.: System and method for two factor user authentication
using a smartphone and NFC token and for the automatic generation as well as storing and
inputting of logins for websites and web applications. 14/600391, 20 January 2015
9. Subpratatsavee, P., Sriboon, W., Issavasopon, W.: Automated car parking authentication
system using NFC and public key cryptography based on android phone. Appl. Mech. Mater.
752–753, 1006–1009 (2015)
10. Armando, A., Merlo, A., Verderame, L.: Trusted host-based card emulation. In: 2015
International Conference on High Performance Computing & Simulation (HPCS), pp. 221–
228 (2015)
11. Cavdar, D., Tomur, E.: A practical NFC relay attack on mobile devices using card emulation
mode. In: 38th International Convention on Information and Communication Technology,
Electronics and Microelectronics (MIPRO), pp. 1308–1312 (2015)
12. Oh, S., Doo, T., Ko, T., Kwak, J., Hong, M.: Countermeasure of NFC relay attack with
jamming. In: 12th International Conference & Expo on Emerging Technologies for a Smarter
World (CEWIT), pp. 1–4 (2015)
13. Urien, P.: New direction for open NFC trusted mobile applications: the MOBISIM project.
In: IEEE Conference on Communications and Network Security (CNS), pp. 711–712 (2015)
IISSC: Smart Challenges and Needs
Smart Anamnesis for Gyn-Obs: Issues and Opportunities
Medical history taking is the most common task performed by physicians. That is
why Engel and Morgan defined it as "the most powerful, sensitive and versatile instrument
available to the physician" [1].
Scientific discoveries and technological innovations have deeply changed the way
diagnoses are performed and diseases are treated. But neither scientific nor technological
advances in medicine have changed the fact that a "good" history taking contributes
significantly to problem detection, diagnostic accuracy and patient health outcomes. Through
the medical history, physicians acquire 60–80% of the information relevant for a
diagnosis [2], and the history alone can lead to the final diagnosis in 76% of cases [3].
History taking and communication skills programs have become cornerstones of
medical education over the past 30 years and are implemented in most US, Canadian,
German and UK medical schools [4]. However, history taking cannot be captured by
specific and universal rules, since it is highly contextual, depending on the situation,
the patient and the physician, cultural characteristics and other similar factors.
The medical history does not involve only the current situation of the patient. Similarly
important are the past medical history, the drug history, the family history (in
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 95–104, 2018.
https://doi.org/10.1007/978-3-319-67636-4_11
96 L. Vaira and M.A. Bochicchio
order to find out whether there are any genetic conditions within the family) and the social
history, covering the patient's whole background: smoking, alcohol, habits, etc.
Having access to all this information is not a trivial task.
Physicians need to remember many questions relating to the management of each
condition and omitting an important question can actually compromise the diagnosis.
For example, studies show that 50% of psychosocial and psychiatric problems are missed
during medical consultations [5] and that 54% of patient problems and 45% of patient
concerns are neither elicited by the clinician nor disclosed by the patient [6].
Computer-assisted history taking systems (CAHTS in the following) are tools that
aim to aid physicians in gathering data from patients to inform a diagnosis, a treatment
plan or both [7].
Although CAHTS were first described in the 1960s [8], there is still uncertainty about
the impact of these methods on medical history data collection, clinical care and patient
outcomes, hence they often remain underused in routine clinical practice [9].
Bowling [10] describes how the various CAHTS typologies depend on three interrelated factors:
• the information technology used to collect the information (e.g. personal computer,
personal digital assistant, Internet, telephone, etc.);
• the administration mode (e.g. administered by an interviewer or self-administered);
• the presentation channel (e.g. auditory, oral or visual).
The author showed that the administration mode, in particular, can have serious effects on data quality. Indeed, a very important aspect to take into account in medical history taking is the social desirability bias [11]: a disturbance factor that comes into play when the patient, responding to an interview, has the chance to give answers deemed "more socially acceptable" than others, in order to appear as "normal" as possible.
If patients use an electronic device for a CAHTS, they are less likely to falsify data than those using pen and paper, as demonstrated by four randomized controlled trials [12–15].
In particular, self-administered computer-assisted interviewing is perceived favorably by patients, because computer systems cannot be judgmental towards sensitive behavioral data. CAHTS are therefore particularly useful in eliciting potentially sensitive information (e.g. alcohol consumption, psychiatric care, sexual health and gynecological health).
In this paper we discuss the issues and opportunities in adopting a smart approach to anamnesis in the context of maternal health and wellbeing, as well as fetal growth monitoring. The approach is in line with the current evolution of smart hospitals for smart cities and is mainly based on gaining as much data as possible directly at the source, in order to obtain high-quality information and to increase the overall effectiveness of the physician's time and of the visit itself.
Smart Anamnesis for Gyn-Obs: Issues and Opportunities 97
2 Motivation
The perinatal period is a very delicate stage of life for both mothers and families. Constant maternal and fetal monitoring during the whole pregnancy is a critical aspect, since it may detect early alarm signals for a wide range of pathologies, making it possible to promptly treat potential complications and to avoid unnecessary obstetric interventions at the time of delivery.
Physicians refer to standard reference charts to evaluate fetal growth development, but such reference values are characterized by a series of limitations that make them difficult to use as a standard for diagnosis: data obsolescence, methodological heterogeneity, lack of data, hospital-based samples, exclusion criteria, and the omission of several important factors (lifestyle, familial aspects, physiological and pathological variables, etc.). These limitations may lead to inaccurate diagnoses of fetuses as small (SGA) or large (LGA) for gestational age.
The lack of data is the major limitation. Considering that every year about 160 million babies are born worldwide, the huge amount of data potentially involved in the analysis of fetal growth and maternal wellness could support a comprehensive analysis able to avoid false-positive and false-negative diagnoses and to prevent undue anxiety in families, which typically leads to unnecessary and expensive further investigation.
Unfortunately, no global strategy addressing the lack-of-data issue exists. Data are still missing, for several reasons that are not only technical in nature [16]:
• the local nature of traditional data collections, managed by bureaucratic units;
• the legitimate conflict of interest among physicians (practicing defensive medicine),
patients (interested in health protection) and Health Administrations (focused on cost
reduction);
• the lack of adoption of proper data harvesting strategies and techniques.
New approaches to data collection are therefore needed in order to analyze, visualize and share information useful in decision-making processes, and hence to take advantage of the large amount of accumulating clinical data, extracting useful knowledge and understanding patterns and trends within it.
Mothers and families are very sensitive to all aspects of fetal wellbeing and are hence highly motivated to change their lifestyle and their approach to healthcare. This means that, although physicians may be reluctant to use electronic devices for data gathering that could require extra time, patients have a clear and valid reason to spend additional time addressing their own problems, so there is no need to educate them on how the adoption of a new approach to data collection would improve the entire system.
3 Our Approach
In the era of Internet of Things (IoT), smart devices can help to transform clinical practice
and hence to improve the delivery of care. As specified in [17], in order to collect data
from all possible sources directly from the field, new data harvesting techniques can be
adopted:
1. form-based input on web pages and mobile apps;
2. new-generation mobile devices, which are typically packed with sensors (e.g. GPS, gyroscopes, accelerometers, touch-sensitive surfaces, microphones and cameras) or have physical interfaces that allow the connection of external modules (e.g. blood pressure cuffs, blood glucose monitors) or wearable sensors (e.g. heartbeat and contraction monitoring) linked to a smartphone to harvest and monitor data;
3. direct connection to medical equipment (e.g. medical imaging machines, medical ventilators, medical monitors, etc.) with automatic DICOM metadata decoding and/or signal processing techniques;
4. automatic data extraction from medical documents, both printed and digital, provided to patients after medical tests or extracted from large digital archives;
5. data-scraping techniques able to emulate a human agent interacting with the user interface of non-interoperable software, in order to insert/extract relevant data from electronic documents (web pages, electronic forms, etc.), for specific purposes or for massive ingestion;
6. smart digital devices (e.g. for weight and height measurement, speech-to-text software, etc.) provided to physicians to simplify data collection at the clinic visit.
For the sake of simplicity, in the rest of the paper we will refer to these techniques as channels.
In the gynecological and obstetrical sector, channel 1 (form-based input) is most appropriate for patients, who in general have a strong motivation to spend their time describing in detail and precisely addressing their own problems. This is a time-intensive channel and for this reason it is rarely adopted by physicians. Furthermore, physicians do not fully trust the ways their patients may collect data via electronic devices, since patients' participation can cause an overflow of irrelevant or trivial information, patients being typically unable to assign data the appropriate significance (i.e. its medical meaning). Physicians typically handle medical information on paper and hence rely on clinicians to transcribe handwritten worksheets.
Channel 1 has been extensively exploited as a sort of self-reported pre-visit interview: the patient fills out a questionnaire a few days before the visit, using a personal computer, a smartphone or a tablet.
Such a questionnaire is composed of two main parts. The first is a profile section, which includes:
• personal data (e.g. name, date and place of birth, age, maternal and paternal ethnicity, place of residence, educational background, job, contact information such as email and phone, etc.);
• medical data (e.g. potential pathologies and infections which can develop during pregnancy or before pregnancy, or which can be due to genetic conditions within the family);
• biometric data (e.g. weight, height, etc.);
• lifestyle data (smoking, nutrition, physical activity, alcohol consumption, hobbies, allergies, intolerances, etc.). A screenshot of the profile section is shown in Fig. 1.
The second part covers the medical history proper, including conditions such as renal disease, cancer, etc. A surgical history, with emphasis on abdominal procedures or orthopedic procedures involving the pelvis, is also taken into consideration.
Once filled out, the questionnaire is available for consultation by the secretariat of the hospital or medical office, in order to create the patient report. The interface suggests in real time to physicians which parts of the anamnesis still have to be completed and which values are "borderline". The quality of the inserted data is guaranteed by data validation checks. In this way, gathering such data does not add to physicians' time and, in principle, leaves more time for the patient to discuss their actual health problem, rather than routine aspects of the medical history, with physicians.
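The real-time completeness and "borderline" checks described above can be sketched as follows; the field names and reference ranges are illustrative assumptions, not those of the actual system:

```python
# Sketch of pre-visit questionnaire validation with "borderline" flagging.
# Field names and reference ranges below are illustrative assumptions only.

REFERENCE_RANGES = {
    # field: (hard_min, soft_min, soft_max, hard_max)
    "weight_kg": (30.0, 45.0, 110.0, 250.0),
    "height_cm": (120.0, 145.0, 185.0, 220.0),
}

def validate_field(field, value):
    """Return 'invalid', 'borderline' or 'ok' for a numeric answer."""
    hard_min, soft_min, soft_max, hard_max = REFERENCE_RANGES[field]
    if not (hard_min <= value <= hard_max):
        return "invalid"          # rejected by the data validation check
    if value < soft_min or value > soft_max:
        return "borderline"       # highlighted in real time to the physician
    return "ok"

def incomplete_sections(answers, required=("personal", "medical", "biometric", "lifestyle")):
    """List the questionnaire sections that still have to be completed."""
    return [s for s in required if not answers.get(s)]
```

An invalid value is rejected at input time, while a borderline one is only flagged, leaving the final judgment to the physician.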
Channel 2 (new-generation mobile devices) exploits the benefits of wearable sensing devices for health monitoring, which sync their data with the smartphone. New-generation mobile devices are typically packed with sensors that allow data harvesting, and many fitness and wellness devices already on the market use consumer-facing applications that can easily be incorporated into clinical practice, helping both patients and physicians monitor vital signs and symptoms.
During the third trimester of pregnancy, mothers can wear on the belly a small, flexible device that continuously monitors the baby's kicks, automatically uploading the information to a smartphone. Other kinds of sensors can detect and record the baby's heartbeat and measure the frequency and duration of contractions, providing an early indication of the baby's and the mother's health.
The direct involvement of mothers allows them to constantly monitor their baby's fetal activity and to alert their caregivers when something seems out of the norm.
In the case of high-risk pregnancies, this channel provides physicians with an observation of heart rate variations over time, building a sort of heartbeat history of the pregnancy.
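As an illustration of this kind of monitoring, the sketch below flags smoothed heart-rate readings falling outside the commonly cited normal fetal range of 110–160 bpm; the smoothing window and alert logic are our own simplifications, not a clinical algorithm:

```python
# Sketch of an out-of-norm alert on wearable fetal heart-rate readings.
# 110-160 bpm is the commonly cited normal fetal heart-rate band; the
# smoothing window and alert logic are illustrative assumptions.

FHR_NORMAL_BPM = (110, 160)

def smooth(readings, window=3):
    """Simple moving average to damp sensor noise."""
    return [sum(readings[i:i + window]) / window
            for i in range(len(readings) - window + 1)]

def alerts(readings, normal=FHR_NORMAL_BPM, window=3):
    """Return smoothed values falling outside the normal band."""
    lo, hi = normal
    return [v for v in smooth(readings, window) if v < lo or v > hi]
```

A non-empty result would trigger a notification to the caregiver, while the raw readings are uploaded to build the heartbeat history.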
Channels 3, 4, 5 and 6 allow direct and automatic data transfer, thus eliminating the need for human entry. Direct data capture improves data ingestion and reduces potential sources of error, leading to greater accuracy.
Channel 3 (direct connection to medical equipment) allows data collection directly at the source, i.e. at the output of the medical equipment used to assess fetal biometric parameters, such as the traditional ultrasound machine. In this case, the standard adopted for distributing and viewing medical images is DICOM (Digital Imaging and Communications in Medicine), whose headers allow discrete values to be obtained directly.
The direct connection with the ultrasound machine allows data (acquired pictures and biometric measures) to be gathered during the visit and to be managed and integrated into the patient history at the time of the visit, with no need for human entry.
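Before metadata decoding, a file exported by the ultrasound machine can be recognized by its standard prefix: per DICOM PS3.10, every DICOM file starts with a 128-byte preamble followed by the magic bytes "DICM". A minimal stdlib sketch of this check (full header decoding would be delegated to a dedicated DICOM library):

```python
# Sketch: recognizing a DICOM export from the ultrasound machine before
# handing it to a full DICOM parser. Per the DICOM standard (PS3.10), a
# file starts with a 128-byte preamble followed by the magic bytes b"DICM".

DICOM_PREAMBLE_LEN = 128
DICOM_MAGIC = b"DICM"

def looks_like_dicom(data: bytes) -> bool:
    """True if the byte stream has the DICOM file meta-information prefix."""
    prefix_end = DICOM_PREAMBLE_LEN + len(DICOM_MAGIC)
    return len(data) >= prefix_end and data[DICOM_PREAMBLE_LEN:prefix_end] == DICOM_MAGIC
```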
In our case, channel 4 (automatic data extraction from medical documents) is based on Optical Character Recognition (OCR), which allows the analysis and extraction of the textual data typically included in ultrasound pictures: measures (biometric parameters with the corresponding values), derived data (gestational age in weeks and days) and other information (ultrasound machine model, exam date, etc.).
This channel is used in strict connection with the previous one, since it automatically assembles several parameters starting from the pictures acquired by the ultrasound machine and stored in the central system.
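Once OCR has recovered the text from an ultrasound picture, the parameters can be pulled out with simple patterns. The sketch below assumes an illustrative "LABEL value unit" layout and a "GA 32w4d" gestational-age notation; the actual on-screen text varies by machine:

```python
import re

# Sketch: pulling biometric parameters out of the text that OCR recovers
# from an ultrasound picture. The label set (BPD, HC, AC, FL) and the
# "NAME value unit" layout are illustrative assumptions about the
# machine's on-screen text.

PARAM_RE = re.compile(r"\b(BPD|HC|AC|FL)\s+(\d+(?:\.\d+)?)\s*(mm|cm)\b")
GA_RE = re.compile(r"\bGA\s+(\d+)w(\d+)d\b")   # gestational age, e.g. "GA 32w4d"

def parse_ultrasound_text(text):
    """Return ({param: (value, unit)}, gestational_age_in_days or None)."""
    params = {name: (float(val), unit) for name, val, unit in PARAM_RE.findall(text)}
    ga = GA_RE.search(text)
    ga_days = int(ga.group(1)) * 7 + int(ga.group(2)) if ga else None
    return params, ga_days
```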
Channel 5 (data-scraping techniques) is mainly based on software techniques able to emulate a human agent interacting with the user interface of non-interoperable software, in order to insert/extract relevant data for specific purposes or for massive ingestion.
This channel is very helpful in the starting stage, when a first data ingestion and integration process is needed. Physicians keep information about several patients on the personal computers in their offices; this channel retrieves all such data directly from web pages or electronic forms, with no need for human entry.
Channel 6 (smart digital devices) is in line with the current evolution of the Internet of Things, in which smart devices can help to transform clinical practice and to improve the delivery of care, as in "connected health". Medical offices can be transformed into "smart rooms" that simplify and enrich the collection of patients' data.
Monitoring maternal weight gain during pregnancy is critical for the health of the pregnancy and for long-term maternal and fetal health. The recommended gain depends on several factors, such as the woman's weight before pregnancy, her height, the type of pregnancy (one baby or twins), etc.
Precise and continuous monitoring can serve to evaluate whether patients are gaining less than the recommended amount of weight (which is associated with delivering a baby who is too small, with consequences such as difficulty starting breastfeeding, increased risk of illness, etc.) or more than the recommended amount (which is associated with a baby born too large, possibly leading to delivery complications, cesarean delivery, obesity during childhood, etc.).
Medical offices can be equipped with smart, low-cost instruments that obtain and automatically store precise measures during visits. A WiFi scale using the 802.11g wireless standard can transmit and store the weight, fat mass, lean mass and body mass index (BMI) of the patient standing on it. Such measures are taken automatically before the visit starts, so that they can be collected and their variation over time analyzed. The same holds for height: a digital stadiometer can quickly, easily and accurately measure the patient's height.
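The scale and stadiometer measures feed directly into derived indicators. The sketch below computes BMI (weight in kg divided by height in metres squared) and classifies total gestational weight gain against the commonly cited IOM (2009) ranges for singleton pregnancies, included here purely as an illustration:

```python
# Sketch: automatic processing of smart-scale/stadiometer measures.
# BMI = weight_kg / height_m**2. The gain ranges follow the commonly
# cited IOM (2009) guidance for singleton pregnancies and are included
# purely as an illustration, not as clinical advice.

GAIN_RANGES_KG = [               # (bmi_upper_bound, (min_gain, max_gain))
    (18.5, (12.5, 18.0)),        # underweight
    (25.0, (11.5, 16.0)),        # normal weight
    (30.0, (7.0, 11.5)),         # overweight
    (float("inf"), (5.0, 9.0)),  # obese
]

def bmi(weight_kg, height_cm):
    """Body mass index from the scale and stadiometer readings."""
    h = height_cm / 100.0
    return round(weight_kg / (h * h), 1)

def gain_status(pre_pregnancy_bmi, total_gain_kg):
    """Classify total weight gain as 'low', 'recommended' or 'high'."""
    for upper, (lo, hi) in GAIN_RANGES_KG:
        if pre_pregnancy_bmi < upper:
            return "low" if total_gain_kg < lo else "high" if total_gain_kg > hi else "recommended"
```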
Figure 2 presents the overall architecture of the proposed approach. It can be divided into two main parts:
• the patient area, which includes all the devices (PC, notebook, tablet and smartphone) that a mother can use to fill out the web-based questionnaire and to follow her own visits and monitoring reports;
• the medical office area, which includes all the devices (ultrasound machine, PC, scale and stadiometer) that physicians can exploit to capture data directly at the source and to store them on a centralized server.
4 Conclusions
The benefits of the combined adoption of the six above-mentioned channels, gathering as much data as possible before, during and after each gynecological/obstetrical visit, are manifold: it saves physicians' time; it improves delivery of care to patients with special needs; it facilitates and expands data collection, especially for potentially sensitive information (e.g. sexual history, alcohol consumption, etc.); and it makes medical information more easily available and accessible to both patients and physicians, also helping patients check their own data.
On the other hand, the extensive adoption of these methods in real medical settings is hindered by the lack of technical experience, which frustrates both patients and physicians and leads them to prefer pen-and-paper methods; by current regulations on privacy and confidentiality; and by strong defensive-medicine concerns.
The different channels have been tested separately in the context of two projects: one carried out in collaboration with the Operative Unit of Clinical Pathology of the main hospital of Lecce, Italy, exploiting data-scraping techniques to insert/extract massive amounts of data from electronic documents in order to reduce the occurrence of inappropriate exam requests [18]; and another in collaboration with the Department of Gynecology and Obstetrics of the same hospital, creating dynamic and personalized fetal growth curves more appropriate for diagnostic purposes by exploiting OCR techniques for massive data ingestion [19]. The channels have subsequently been adapted to the specific features of gynecological and obstetrical studies. As future work, we plan to integrate the different technologies in a real case.
References
1. Engel, G.E., Morgan, W.L.: Interviewing and Patient Care. Saunders, Philadelphia (1973)
2. Roshan, M., Rao, A.P.: A study on relative contributions of the history, physical examination
and investigations in making medical diagnosis. J. Assoc. Phys. India 48(8), 771–775 (2000)
3. Peterson, M.C., Holbrook, J.H., Von Hales, D., Smith, N.L., Staker, L.V.: Contributions of
the history, physical examination, and laboratory investigation in making medical diagnoses.
West. J. Med. 156(2), 163–165 (1992)
4. Keifenheim, K.E., Teufel, M., Ip, J., Speiser, N., Leehr, E.J., Zipfel, S., Herrmann-Werner,
A.: Teaching history taking to medical students: a systematic review. BMC Med. Educ. 15,
159 (2015)
5. Davenport, S., Goldberg, D., Millar, T.: How psychiatric disorders are missed during medical
consultations. Lancet 2, 439–441 (1987)
6. Palermo, T.M., Valenzuela, D., Stork, P.P.: A randomized trial of electronic versus paper
pain diaries in children: impact on compliance, accuracy, and acceptability. Pain 107, 213–
219 (2004)
7. Pringle, M.: Preventing ischaemic heart disease in one general practice: from one patient,
through clinical audit, needs assessment, and commissioning into quality improvement. Br.
Med. J. 317(7166), 1120–1123 (1998). discussion 1124
8. Mayne, J.G., Weksel, W., Sholtz, P.N.: Toward automating the medical history. Mayo Clinic
Proc. 43(1), 1–25 (1968)
9. Pappas, Y., Anandan, C., Liu, J., Car, J., Sheikh, A., Majeed, A.: Computer-assisted history-
taking systems (CAHTS) in health care: benefits, risks and potential for further development.
Inform. Prim. Care. 19(3), 155–160 (2011)
10. Bowling, A.: Mode of questionnaire administration can have serious effects on data quality.
J. Publ. Health 27(3), 281–291 (2005)
11. Cash-Gibson, L., Pappas, Y., Car, J.: Computer-assisted versus oral-and-written history
taking for the management of cardiovascular disease (Protocol). Cochrane Database Syst.
Rev. 3, Art. no. CD009751 (2012)
12. Tiplady, B.A., Crompton, G.K., Dewar, M.H., Böllert, F.G.E., Matusiewicz, S.P., Campbell,
L.M., Brackenridge, D.: The use of electronic diaries in respiratory studies. Ther. Innov.
Regulatory Sci. 31(3), 759–764 (1997)
13. Gaertner, J., Elsner, F., Pollmann-Dahmen, K., Radbruch, L., Sabatowski, R.: Electronic pain
diary: a randomized crossover study. J. Pain Symptom Manage. 28(3), 259–267 (2004)
14. Lauritsen, K., Degl'Innocenti, A., Hendel, L., Praest, J., Lytje, M.F., Clemmensen-Rotne, K., Wiklund, I.: Symptom recording in a randomised clinical trial: paper diaries vs. electronic or telephone data capture. Control. Clin. Trials 25(6), 585–597 (2004)
15. Bulpitt, C.J., Beilin, L.J., Coles, E.C., Dollery, C.T., Johnson, B.F., Munro-Faure, A.D.,
Turner, S.C.: Randomised controlled trial of computer-held medical records in hypertensive
patients. Br. Med. J. 1(6011), 677–679 (1976)
16. Vaira, L., Bochicchio, M.A., Navathe, S.B.: Perspectives in healthcare data management with
application to maternal and fetal wellbeing. In: 24th Italian Symposium on Advanced
Database Systems (SEBD 2016), Ugento, Lecce, 19–22 June 2016 (2016)
17. Bochicchio, M.A., Vaira, L.: Fetal growth: where are data? It’s time for a new approach. Int.
J. Biomed. Healthc. 4(1), 18–22 (2016)
18. Vaira, L., Bochicchio, M.A.: Can ICT help to solve the clinical appropriateness problem? An
experience in the Italian public health. J. Commun. Comput. 12(6), 303–310 (2015)
19. Bochicchio, M.A., Vaira, L.: Are static fetal growth charts still suitable for diagnostic
purposes? In: 2014 IEEE International Conference on Bioinformatics and Biomedicine
(BIBM), Belfast, UK, 2–5 November 2014 (2014). doi:10.1109/BIBM.2014.6999260
Mobile Agent Service Model
for Smart Ambulance
1 Introduction
– Smart environment to follow the users as they move through different smart
spaces [2].
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 105–111, 2018.
https://doi.org/10.1007/978-3-319-67636-4_12
106 S. Alami-Kamouri et al.
In this paper, we first review the use of mobile agents in telemedicine and then their use in smart environments, focusing on the use of mobile agents in the connected ambulance.
The paper is organized as follows: Sect. 2 describes the concept of the smart ambulance, what this new ambulance will bring, and the use of mobile agents in telemedicine. Section 3 presents our proposed model of a smart ambulance using mobile agents. Finally, the paper is concluded in Sect. 4.
Related Works: Here we discuss existing projects and ideas on the ambulance of the future.
Smart Pods: the objectives of Smart Pods are to understand current models of emergency care and to provide the equipment and space needed to carry out more effective assessment and treatment on scene, thus minimising the number of patients admitted to hospital [5].
Recent studies have shown that the mobile agent model facilitates medical and telemedicine applications. Its efficiency is due to its autonomy, capacity for adaptation and ability to communicate with other agents.
Mobile Agent: There are several types of mobile agent. In our case, we will use the following:
– lightweight agents: small agents capable of very fast migration, thanks to very short transmission times and a low bandwidth cost. These lightweight agents can migrate to any accessible item before it disappears.
– heavy agents: these agents are called heavy because of the size of their executable code and of the data they transport. They perform tasks requiring lengthy periods of local processing.
Related Works: In this part, we provide examples of work linking mobile agents to telemedicine:
– knowing the hospital to which the patient is attached, thanks to the patient's name;
– knowing the urgent care (current state) to be administered to the patient in the ambulance before arriving at the hospital, according to his previous conditions and the information in his medical file;
– preparing in advance the resuscitation room and the staff at the appropriate department of the hospital to which the patient is attached;
– choosing adequate devices according to the declared state of the patient.
We present a service model (Figs. 1 and 2) for the case where the patient is attached to one hospital, which holds his medical history.
(a) Once the patient is taken into the ambulance, the nurse identifies the patient and takes his health parameters, such as heart rate, body temperature, blood pressure and blood level.
(b) These data are sent to the Hospital Central Server to check whether the patient is attached to a hospital.
(c’) Hospital Central Server sends to the ambulance the name of the hospital of
the patient.
(c) Hospital Central Server sends to the appropriate hospital the name of the
patient.
(d) The appropriate service looks for the patient's medical record to prepare the staff and sends the recommendations for the patient to the ambulance.
(d’) The ambulance takes the patient to the appropriate hospital.
(e) The appropriate service sends to the nurse in the ambulance the first care to give while awaiting the arrival of the patient.
1- When the patient is in the ambulance, the nurse takes the patient's name and current state (health parameters such as heart rate, body temperature, blood pressure, blood level).
2- These data are recorded in the Local Agent in the ambulance.
3- The LightWeight Agent retrieves these data (patient name + current state).
4- Thanks to its migration capacity, the LightWeight Agent migrates to the Hospitals Central Server, where a database is stored containing, for each patient, the name and the hospital to which he is attached.
5- The LightWeight Agent checks in the database whether the patient's name is already registered with its appropriate hospital.
Once the patient is found, and thanks to the capacity of mobile agents to clone themselves, the agent clones itself so that it can perform two tasks at the same time:
6'- send to the ambulance the name of the hospital to which the patient is attached,
7'- and record the name of the appropriate hospital in the local agent, so that the patient can be taken directly to it.
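The steps above can be sketched with plain Python objects standing in for the mobile-agent platform; the class and field names are hypothetical, not those of an actual agent framework:

```python
# Sketch of the lookup-and-clone flow (steps 1-7'), with plain Python
# objects standing in for the mobile-agent platform. Class and field
# names are hypothetical, not those of an actual agent framework.
import copy

CENTRAL_DB = {"Alice": "Hospital A", "Bob": "Hospital B"}  # patient -> hospital

class LightWeightAgent:
    def __init__(self, patient, state):
        self.payload = {"patient": patient, "state": state}   # steps 1-3

    def migrate_and_lookup(self, db):                          # steps 4-5
        return db.get(self.payload["patient"])

    def clone(self):                                           # enables two tasks at once
        return copy.deepcopy(self)

def run_flow(local_agent_store, patient, state):
    agent = LightWeightAgent(patient, state)
    hospital = agent.migrate_and_lookup(CENTRAL_DB)
    if hospital is None:
        return None
    twin = agent.clone()
    # 6': the original reports the hospital name back to the ambulance;
    # 7': the clone records it in the local agent for routing.
    local_agent_store["destination"] = hospital
    return hospital, twin.payload["patient"]
```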
Nowadays, everyone talks about the concepts of Smart Cities and Smart Environments. For a city to become smart, it must begin by improving the various areas that are part of it, such as the health sector, which is experiencing several problems. In this article we studied the case of the Smart Ambulance, since current ambulances suffer from many problems of design, safety, etc.
The Smart Ambulance is an ambulance that must be connected, able to respond quickly to emergencies, and loaded with the latest healthcare technologies. Its major challenges are to diagnose the patient's condition in the ambulance, to communicate his condition to the hospital, and to have his medical record available in real time.
Our proposal is to send the data in real time and to set up an architecture that allows this using Mobile Agents, which, thanks to their autonomy and their ability to migrate and to clone, make it possible to face these challenges.
We opted for the Mobile Agent model because of its ability to move: it reduces remote communications, as far as possible, to mobile agent transfers only, and when collecting information from distributed databases and managing networks it reduces bandwidth consumption. There are several types of mobile agents; in this article we have chosen three: the local agent, the light agent and the heavy agent.
– The lightweight agents ensure the exploratory part: the agent migrates from one server to another to fetch information, retrieves it, and repeats the same behavior each time.
– The local agent ensures the reconstruction part, the part to which the light agent reports when depositing its information.
– The heavy agent follows the same principle as the light agent except that, as its name indicates, it carries out several tasks.
In our future work, we will focus on the security aspects of our proposed
service models.
References
1. Schoder, D., Eymann, T.: Technical opinion: the real challenges of mobile agents.
Commun. ACM 43(6), 111–112 (2000)
2. Marsa-Maestre, I., Lopez-Carmona, M.A., Velasco, J.R., Navarro, A.: Mobile agents
for service personalization in smart environments. J. Netw. 3(5), May 2008
3. Bagga, P., Hans, R.: Applications of mobile agents in healthcare domain. Int. J.
Grid Distrib. Comput. 8(5), 55–72 (2015)
4. Smart Ambulance European Procurers Platform. http://www.smartambulance
project.eu
5. Hignett, S., Jones, A., Benger, J.: Portable and mobile clinical pods to support the
delivery of community based urgent care. In: Include09 Conference London, UK,
April 2009
6. Hsu, W.-S., Pan, J.-I.: Secure mobile agent for telemedicine based on P2P networks.
J. Med. Syst. 37(3), 9947 (2013)
7. Chuan-Jun, S., Chang-Yu, C.: Mobile Agent Based Ubiquitous Health Care (UHC) Monitoring Platform. Advances in Human Factors and Ergonomics Series, Chap. 64. CRC Press, Boca Raton (2010)
Extension to Middleware for IoT Devices,
with Applications in Smart Cities
1 Introduction
The ever-increasing interest in the Internet of Things (IoT) and its immense growth over recent years [1–3] have led to the implementation of various computing devices of very small size (intended to be incorporated into various 'smart' objects), as well as numerous modules intended to enhance the functionality of these devices.
An important category of such modules is that of wireless network connectivity modules. With wireless communication technology already in its 4th generation (4G/LTE-A) of cellular networks and moving towards the 5th generation (5G), corresponding wireless networking modules are being implemented for objects of the IoT (in addition to those for Wi-Fi, etc.) [4]. An important feature of these wireless modules is their diversity, both in terms of the wireless technology used and of the way they are implemented (design, chipsets, etc.).
Programming these modules is usually done at a very low level and is generally "tied" to the chipset used, so the programs, in general, are not portable across modules.
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 112–118, 2018.
https://doi.org/10.1007/978-3-319-67636-4_13
2.1 Wubby
Wubby (pronounced Wha-bee) [9] is a software platform that simplifies the development of IoT devices by providing a programming environment that supports Python code execution directly on the device's microcontroller. This introduces several advantages: (a) it allows a broader set of developers to contribute, giving them the opportunity to design and develop new everyday objects based on a popular programming language like Python; (b) it speeds up the development process; (c) it reduces development costs; and (d) it results in smarter, interoperable everyday objects. Wubby separates hardware from software, abstracting the hardware complexity while allowing developers to contribute by writing simple Python scripts, rather than having to deploy the whole device image.
2.2 Architecture
Wubby Cloud: provides all the services for application deployment and backend device management.
Wubby Client: allows a user to control and configure each device; this can be either a smartphone app or a web service.
Wubby IDE: a platform-independent development environment that allows easy application development (debugging, code uploading, simulation, etc.) with Wubby.
Applications are distributed in much the same way as through the iOS App Store and Google Play, and access is possible from the Wubby Clients (Web or Android/iOS apps). For this purpose, owners of Wubby devices (end users) can register them in the Wubby Cloud, where they can select one of the compatible applications to install on their devices.
3.2 Benefits
Wubby already offers several benefits, as it:
– reduces the overall development time, offering a much simpler programming
environment, language syntax and restrictions,
– provides a separation between software and hardware, thus making applica-
tions (scripts running on top of the middleware) re-usable across different
hardware platforms,
– reduces the after-sales support needs,
– dramatically broadens the developer audience able to contribute to the
development of such applications,
– adds intelligence at the device level and contributes to the efficiency of device-
cloud communications, reducing the amount of data that needs to be trans-
ferred, since a pre-processing phase is executed at the lowest level, and
Extension to Middleware for IoT Devices, with Applications in Smart Cities 117
– supports networking using various WiFi and BTLE modules. With the pro-
posed extension, support will be added for networking over mobile (cellular)
wireless modules, which is of great importance for Smart Cities
applications.
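The device-level pre-processing mentioned in the list above can be sketched as follows: instead of forwarding every raw reading to the cloud, the device transmits one aggregate per window, cutting the number of messages sent. The window size and summary shape are illustrative choices, not taken from the paper.

```python
# Sketch of device-level pre-processing: reduce a raw sample stream to
# per-window (min, mean, max) summaries before transmission.

def preprocess(readings, window=10):
    """Summarize each window of raw samples as one (min, mean, max) tuple."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append((min(chunk), sum(chunk) / len(chunk), max(chunk)))
    return summaries

raw = [20 + (i % 5) for i in range(30)]   # 30 raw samples
msgs = preprocess(raw)                    # 3 summary messages
print(len(raw), "->", len(msgs))          # 30 -> 3
```

With a window of 10, thirty raw readings become three messages, which is the data-transfer reduction the bullet point refers to.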
References
1. Castillo, A., Thierer, A.D.: Projecting the growth and economic impact of the
Internet of Things. Economic perspectives, pp. 1–10 (2015)
2. Verizon: State of the market: Internet of Things 2016, pp. 1–24 (2016)
3. Popescu, G.H.: The economic value of the industrial Internet of Things. J. Self-
Governance Manag. Econ. 3, 86–91 (2015)
118 C. Bouras et al.
4. Wang, S., Hou, Y., Gao, F., Ji, X.: A novel IoT access architecture for vehicle
monitoring system. In: 3rd IEEE World Forum on Internet of Things, pp. 639–
642. IEEE, Reston (2016)
5. Python. http://www.python.org
6. Gubbi, J., Buyya, R., Marusic, S., Palaniswami, M.: Internet of Things (IoT): a
vision, architectural elements, and future directions. Elsevier Future Gener. Com-
put. Syst. 29, 1645–1660 (2013)
7. Tao, F., Cheng, Y., Xu, L.D., Zhang, L., Li, B.H.: CCIoT-CMfg: cloud computing
and Internet of Things-based cloud manufacturing service system. IEEE Trans.
Industr. Inf. 10, 1435–1442 (2014)
8. Rao, B.B.P., Saluia, P., Sharma, N., Mittal, A., Sharma, S.V.: Cloud computing for
Internet of Things & sensing based applications. In: 6th International Conference
on Sensing Technology, pp. 374–380. IEEE, Kolkata (2012)
9. Wubby documentation. http://www.wubby.io/docs
10. Cho, H., Kyung, C.-M., Baek, Y.: Energy-efficient and fast collection method
for smart sensor monitoring systems. In: International Conference on Advances
in Computing, Communications and Informatics, pp. 1440–1445. IEEE, Mysore
(2013)
11. Postolache, O.A., Dias Pereira, J.M., Silva Girao, P.M.B.: Smart sensors network
for air quality monitoring applications. IEEE Trans. Instrum. Meas. 58, 3253–3262
(2009)
An Analysis of Social Data Credibility for Services Systems
in Smart Cities – Credibility Assessment and Classification
of Tweets
Iman Abu Hashish(✉), Gianmario Motta, Tianyi Ma, and Kaixu Liu
Keywords: Smart cities · Smart citizens · Social data · Twitter · Twitter bot ·
Credibility · Veracity · Classification · Social media mining · Machine learning
1 Introduction
Smart cities rely on an ever wider range of Internet information, which includes
sensor data, public data, and human-generated data such as social network and
crowdsourced data [1]. For human-generated data, relevance and credibility need to
be addressed, since feeds in social networks can propagate without being controlled or curated.
Our research addresses credibility evaluation techniques for smart mobility support
systems. The targeted online social medium is Twitter, a widely popular social network as
well as a news medium. It enables its users to send and read short messages called
Tweets. It is a platform for live conversations, live connections and live commentary.
It is accessed daily by 313 million active users, with 1 billion unique monthly visits
to websites with embedded Tweets [2]. Twitter users express their opinions, share their
thoughts, celebrate religious events, discuss political issues, create news about ongoing
events, and provide real time updates about ongoing natural disasters, etc. In addition,
Twitter is a rich source for social data because of its inherent openness to public
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 119–130, 2018.
https://doi.org/10.1007/978-3-319-67636-4_14
consumption, clean and well-documented API, rich developer tooling and broad appeal
to users [3].
We approached the issue of credibility when, in the larger project called IRMA
(Integrated Real-Time Mobility Assistant), we started to consider feeds coming from social
networks as information sources for mobility information systems. That issue implied
credibility assessment on one side and, on the other, Big Data technologies, given the
huge number of feeds in social networks [4].
In Sect. 2, we compare previous implementations. In Sect. 3, we illustrate
the methodology, and we continue in Sect. 4 with a comprehensive explanation
of our implementation. In Sect. 5, we discuss our results, and Sect. 6
sketches conclusions and future work.
Most of the related works can be divided into two categories: classification-based
analysis, as in [11–14] adopting supervised classification, [15] unsupervised classification,
or a hybrid of both [16]; and pattern-based analysis, as in [17].
3 Methodology
Here below we illustrate the steps of our methodology, which includes (Sect. 3.1)
Problem Definition and (Sect. 3.2) Proposed Algorithm.
3.2.4 Classification
The classification process starts by classifying Tweets in terms of credibility, type as
spam and rumor, and finally origin like human accounts and Twitter harmful bots. This
phase is divided into two sub-phases, Credibility Classification classifies Tweets into
credible or incredible depending on the features obtained. Type and Origin Classification
further analyzes the incredible Tweets by classifying them as rumor or spam, and
as human or bot accounts.
4 Implementation
The first two phases, Features Extraction and Features Analysis, were implemented
using social media mining techniques, while the final two phases, Features Selection and
Classification, were implemented using machine learning algorithms, as explained
below (see Fig. 3).
Social data, or human-generated data, are big, unstructured and noisy, with abundant
social relations such as friendships, followers, and followings. Consequently,
Social Media Mining combines social theories with statistical and data mining
methods to extract useful and meaningful data. In our implementation, we used
Python and the Twitter API.
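The feature-extraction step can be sketched as below. Field names (`retweet_count`, `user.verified`, `followers_count`, `friends_count`, `entities.urls`) follow the Twitter v1.1 Tweet/User JSON; the concrete feature set the paper uses is given in its tables, so this is only an illustrative subset.

```python
# Minimal sketch of feature extraction from a Tweet object (v1.1 JSON).

def extract_features(tweet: dict) -> dict:
    user = tweet["user"]
    followers = user.get("followers_count", 0)
    friends = user.get("friends_count", 0)
    return {
        "retweet_count": tweet.get("retweet_count", 0),
        "verified": user.get("verified", False),
        "followers_count": followers,
        # Small friends/followers ratios are later used as a bot indicator.
        "friends_followers_ratio": friends / followers if followers else 0.0,
        "has_url": bool(tweet.get("entities", {}).get("urls")),
    }

sample = {
    "retweet_count": 12,
    "entities": {"urls": []},
    "user": {"verified": True, "followers_count": 400, "friends_count": 100},
}
print(extract_features(sample)["friends_followers_ratio"])  # 0.25
```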
As a final step, we created a robot account to enrich the dataset with diverse contents
and more robotic behaviors. The developed bot searches the Twitter API by
keyword; once results are found, the bot retweets them, favorites them, follows their
creators, and adds the user accounts to a list, thus reflecting a typical robotic behavior.
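The robotic behavior described above can be sketched against a stubbed client rather than the live Twitter API (the `StubTwitterClient` class and its method names are illustrative, not Twitter's API): search by keyword, then retweet, favorite, follow the author, and add the author to a list.

```python
# Sketch of the bot's behaviour sequence, recorded against a stub client.

class StubTwitterClient:
    """Records API calls instead of hitting the network."""
    def __init__(self, search_results):
        self._results = search_results
        self.actions = []

    def search(self, keyword):
        return [t for t in self._results if keyword in t["text"]]

    def retweet(self, tweet_id):  self.actions.append(("retweet", tweet_id))
    def favorite(self, tweet_id): self.actions.append(("favorite", tweet_id))
    def follow(self, user):       self.actions.append(("follow", user))
    def add_to_list(self, user):  self.actions.append(("list", user))

def run_bot(client, keyword):
    # The fixed react-to-everything sequence is the "robotic" signature.
    for tweet in client.search(keyword):
        client.retweet(tweet["id"])
        client.favorite(tweet["id"])
        client.follow(tweet["author"])
        client.add_to_list(tweet["author"])

client = StubTwitterClient([{"id": 1, "text": "traffic jam", "author": "alice"}])
run_bot(client, "traffic")
print([a[0] for a in client.actions])  # ['retweet', 'favorite', 'follow', 'list']
```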
5 Evaluation
To test the performance and efficiency of the proposed algorithm, we performed three
tasks: (1) classification based on credibility, (2) classification based on type, and
(3) classification based on origin, including the following steps:
• Loading the dataset into Weka and assigning the target label as the class.
• Applying the Features Selection phase, using the correlation attribute evaluator as
a subset evaluator and Ranker as the search method.
• Performing the Classification phase, using the desired classifier.
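The paper performs this step with Weka's correlation attribute evaluator and Ranker search method; the same idea can be sketched in plain Python, scoring each feature by the absolute Pearson correlation with the class label and ranking the features. This is an illustrative re-implementation, not the paper's actual tooling.

```python
# Correlation-based feature ranking (Weka CorrelationAttributeEval + Ranker
# analogue): rank features by |Pearson correlation with the class label|.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_features(rows, labels, feature_names):
    """Return feature names sorted by |correlation with the class|, best first."""
    scores = []
    for j, name in enumerate(feature_names):
        col = [row[j] for row in rows]
        scores.append((abs(pearson(col, labels)), name))
    return [name for _, name in sorted(scores, reverse=True)]

rows = [[0, 5], [1, 3], [0, 9], [1, 1]]   # two features per instance
labels = [0, 1, 0, 1]                     # binary class
print(rank_features(rows, labels, ["verified", "noise"]))  # ['verified', 'noise']
```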
indicates whether the user account is verified or not. A verified account can be assumed
to be an official account that propagates credible feeds regarding a specific topic.
The other selected features show that, when a user account has a profound network that
interacts with what the user propagates, in terms of retweeting or following the user,
the levels of credibility and trust are higher, which explains the other features that were
selected (see Table 4).
a robotic behavior by following many accounts, leading to a very small ratio between
the friends and followers counts. The reason behind this ranking (see Table 5) is that
rumored and spammed Tweets are most likely to originate from bot accounts. Finally,
what distinguishes a rumored Tweet from a spammed one is how fast it gets propagated,
hence the retweet_count attribute.
Once the features are selected for each task, the Classification phase directly follows.
The results obtained for each task are detailed in Table 7. As can be seen from the detailed
accuracy results (see Figs. 4 and 5), the Credibility and Type tasks provided higher
accuracy measures than our baselines, [20] and [12] respectively. To the best of our
knowledge, this is the first classification of Tweets with respect to their origin. Thus,
considering only features that can be extracted and analyzed to investigate robotic
behavior, our algorithm looks accurate.
Our work intends to provide a profound and comprehensive analysis of social data
credibility and, specifically, of Tweet credibility assessment. Feeds vary widely:
they may represent a thought, a mood or an opinion, as well as ongoing political
news, sports, and natural disasters. Unlike other social networks, Twitter has an open
nature that enables everyone to publish thoughts that reach a wide range of people.
Because of these elements, credibility is relevant.
Therefore, we propose a comprehensive analysis from three points of view: Tweets'
credibility, type and origin. Our analysis, which is implemented using Social Media
Mining techniques and Machine Learning algorithms with the Weka software, includes
four phases:
References
1. Motta, G.: Towards the Smart Citizen, New and smart Information Communication Science
and Technology to support Sustainable Development (NICST) (2013)
2. Twitter Statistics. https://about.twitter.com/company
3. Russel, M.A.: Mining the Social Web, 2nd edn. O’Reilly, Beijing (2014)
4. Motta, G., Sacco, D., Ma, T., You, L., Liu, K.: Personal mobility service system in urban
areas: the IRMA project. In: 2015 IEEE Symposium on Service-Oriented System Engineering
(2015)
5. Okoli, C., Schabram, K.: A guide to conducting a systematic literature review of information
systems research. SSRN Electron. J.
6. Gupta, M., Zhao, P., Han, J.: Evaluating event credibility on Twitter. In: Proceedings of the
2012 SIAM International Conference on Data Mining, pp. 153–164 (2012)
7. Gupta, A., Kumaraguru, P., Castillo, C., Meier, P.: TweetCred: real-time credibility
assessment of content on Twitter. In: Aiello, L.M., McFarland, D. (eds.) SocInfo 2014. LNCS,
vol. 8851, pp. 228–243. Springer, Cham (2014). doi:10.1007/978-3-319-13734-6_16
8. Sikdar, S., Kang, B., O'Donovan, J., Höllerer, T., Adalı, S.: Understanding information
credibility on Twitter. In: 2013 International Conference on Social Computing (2013)
9. Namihira, Y., Segawa, N., Ikegami, Y., Kawai, K., Kawabe, T., Tsuruta, S.: High precision
credibility analysis of information on Twitter. In: 2013 International Conference on Signal-
Image Technology & Internet-Based Systems (2013)
10. Batool, R., Khattak, A.M., Maqbool, J., Lee, S.: Precise tweet classification and sentiment
analysis. In: 2013 IEEE/ACIS 12th International Conference on Computer and Information
Science (ICIS) (2013)
11. Castillo, C., Mendoza, M., Poblete, B.: Information credibility on Twitter. In: Proceedings
of the 20th International Conference on World Wide Web (WWW 2011) (2011)
12. Sahana, V.P., Pias, A.R., Shastri, R., Mandloi, S.: Automatic detection of rumored tweets and
finding its origin. In: 2015 International Conference on Computing and Network
Communications (CoCoNet) (2015)
13. Zhang, Q., Zhang, S., Dong, J., Xiong, J., Cheng, X.: Automatic detection of rumor on social
network. In: Li, J., Ji, H., Zhao, D., Feng, Y. (eds.) NLPCC 2015. LNCS, vol. 9362, pp. 113–
122. Springer, Cham (2015). doi:10.1007/978-3-319-25207-0_10
14. Al-Dayil, R.A., Dahshan, M.H.: Detecting social media mobile botnets using user activity
correlation and artificial immune system. In: 7th International Conference on Information and
Communication Systems (ICICS) (2016)
15. Sivanesh, S., Kavin, K., Hassan, A.A.: Frustrate Twitter from automation: how far a user can
be trusted? In: International Conference on Human Computer Interactions (ICHCI) (2013)
16. Gupta, A., Kaushal, R.: Improving spam detection in online social networks. In: International
Conference on Cognitive Computing and Information Processing (CCIP) (2015)
17. Wang, S., Terano, T.: Detecting rumor patterns in streaming social media. In: IEEE
International Conference on Big Data (Big Data) (2015)
18. Morris, M.R., Counts, S., Roseway, A., Hoff, A., Schwarz, J.: Tweeting is believing? In:
Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work
(CSCW 2012) (2012)
19. Twitter API Documentation. https://dev.twitter.com/overview/documentation
20. Kang, B., O'Donovan, J., Höllerer, T.: Modeling topic specific credibility on Twitter. In:
Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces (IUI
2012) (2012)
21. Twitter Support: Using Hashtags in Twitter. https://support.twitter.com/articles/49309
Data Management Challenges for Smart Living
1 Introduction
Improving the quality of life of citizens and supporting Public Administrations
(PA) and energy providers in delivering innovative services through the adoption
of modern technologies are primary goals of modern Smart Cities [1]. Citizens
become an active part of their city life, providing suggestions, opinions and
comments about administration actions (e.g., through e-Participation tools). They
receive timely information about their city, the effects of PA actions, public as
well as private energy consumption, and information about the environmental
conditions where they live (pollution and public security). On the other hand,
the PA has new tools and techniques to deeply understand the dynamics of
phenomena that characterise the administrated city, being able to take actions that
might improve citizens' daily life. Furthermore, energy providers might have the
opportunity to implement smart grids to improve their delivery services
and reduce costs. In this framework, the national research project Brescia Smart
Living (BSL) - MIUR "Smart Cities and Communities and Social Innovation"
is currently being carried out. The main goal of the BSL project is to move from
a model based on a single monitored entity (a street, the electrical supply grid,
c ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 131–137, 2018.
https://doi.org/10.1007/978-3-319-67636-4_15
132 D. Bianchini et al.
the hydric system) to an integrated view of the Smart City. Every aspect is
seen as part of a more complex system, and different types of data have to be
collected, properly integrated and organized in order to provide new services to
both citizens and the PA. The effectiveness and quality of the services are enabled
by applying advanced solutions for managing large amounts of data. Data about
energy consumption on the electrical and methane supply grids, the hydric system,
street lighting and heating are collected from sensors and the technological
equipment of the modern city, according to the Internet of Things (IoT) paradigm.
These data are stored within proprietary platforms managed by single energy
providers. Further information coming from external sources (e.g., weather and
pollution data) is integrated and organized in integrated platforms in order to:
(a) aggregate information at city level, mainly devoted to energy providers and
Public Administration to give a global view of data concerning the smart city;
(b) enable personalized access to data of interest for private citizens at the level
of single district, building or apartment. Data from social media are integrated as
well, where citizens become themselves data producers through their comments,
suggestions and preferences. To meet research goals, the project is being devel-
oped over the following phases: (i) collection and identification of requirements
from citizens, energy providers and Public Administration, through the submis-
sion of proper questionnaires; (ii) design and specification of functionalities, in
terms of use cases focused on services provided to the actors of integrated plat-
forms; (iii) design of data models underlying the platforms; (iv) implementation
and experiments. Experiments will be performed on two districts identified in
Brescia, Italy. The aim is to provide an integrated observatory over the Smart
City, for different kinds of information, at different levels of aggregation, for het-
erogeneous although interleaved categories of users (citizens, energy providers
and Public Administration). In this paper we focus on the challenges that arose in
the project in managing data in the context of a Smart City, and we provide
some hints about possible solutions to address these issues.
This paper is organized as follows: in Sect. 2 we discuss the functional archi-
tecture of Brescia Smart Living project; Sect. 3 presents data management issues;
Sect. 4 lists related projects; conclusions and future directions are sketched in
Sect. 5.
2 General Architecture
Figure 1 shows an overview of the general architecture adopted in the BSL
project. Data are collected from both energy consumption domain (electricity,
heating, hydric and natural gas supply grids) and urban services domain (garbage
collection and security). This information is stored within domain-specific
platforms, owned by the energy and service providers participating in the
project. Both historical data and (near) real-time data collected from sensors
installed on-field (home automation hardware equipment, new generation meters
for supply grids, wearable devices for security and safety monitoring) have been
considered. The architecture also includes data coming from external sources,
for weather forecasting and air pollution estimation.
[Fig. 1: general architecture of the BSL project, showing the domain-specific platforms (Electrical Supply Grid Data Platform, Methane Supply Grid Data Platform, Hydric System Data Platform, Safety and Security Operation Center) alongside external sources (weather data, ...)]
Data coming from the domain-specific platforms and external datasources are
organized within integrated platforms through a Platform Service Bus, which is in
charge of managing message exchange between all the architectural components.
Communication between domain-specific datasources and integrated platforms
may be either synchronous (for instance, for historical data of supply
grids) or asynchronous (e.g., events raised by home automation hardware or
wearable devices, to be promptly shown on the platforms).
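The Platform Service Bus behaviour described above can be sketched as follows: synchronous request/reply (e.g., historical grid data) next to asynchronous publish/subscribe for events raised on-field. Class and method names are illustrative, not the project's actual implementation.

```python
# Minimal sketch of the Platform Service Bus: synchronous request/reply
# alongside asynchronous publish/subscribe.

class PlatformServiceBus:
    def __init__(self):
        self._services = {}     # service name -> handler (synchronous)
        self._subscribers = {}  # topic -> list of callbacks (asynchronous)

    def register_service(self, name, handler):
        self._services[name] = handler

    def request(self, name, payload):
        """Synchronous: the caller blocks until the handler returns a reply."""
        return self._services[name](payload)

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, event):
        """Asynchronous: fire-and-forget delivery to all subscribers."""
        for cb in self._subscribers.get(topic, []):
            cb(event)

bus = PlatformServiceBus()
bus.register_service("grid.history", lambda q: {"kwh": [1.2, 1.4], "meter": q})
received = []
bus.subscribe("home.events", received.append)

reply = bus.request("grid.history", "meter-7")   # synchronous path
bus.publish("home.events", {"type": "smoke_alarm"})  # asynchronous path
print(reply["meter"], len(received))  # meter-7 1
```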
The visualisation of average energy consumption (properly aggregated to
preserve citizens' privacy), of information about pollution, of statistical data on
crime rates, and of information about the status of garbage collection points is in
charge of the Global Integrated platform. Its role is to provide a global view, at
city level, of consumption and services for citizens and the Public Administration.
The platform also supports the PA in taking decisions given the current
environmental conditions (e.g., weather and pollution conditions, so as to take
effective and timely actions to preserve the health of fragile citizens).
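The privacy-preserving aggregation mentioned above can be sketched as follows: the platform exposes average consumption per district only when enough households contribute, so that no single citizen's consumption is identifiable. The threshold value is an illustrative assumption, not taken from the project.

```python
# Sketch of privacy-preserving aggregation: suppress districts with too few
# contributing households before publishing averages.

def district_averages(readings, min_households=5):
    """readings: list of (district, kwh) pairs. Returns district -> average kwh,
    suppressing districts with fewer than min_households contributors."""
    by_district = {}
    for district, kwh in readings:
        by_district.setdefault(district, []).append(kwh)
    return {
        d: sum(vals) / len(vals)
        for d, vals in by_district.items()
        if len(vals) >= min_households
    }

data = [("centro", 3.0)] * 6 + [("sanpolo", 2.0)] * 2
print(district_averages(data))  # sanpolo is suppressed (only 2 households)
```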
Citizens can register themselves and access services of the Local Integrated
platform. This platform enables the visualisation of the personal energy con-
sumptions, as well as comparison of these data against benchmarks and average
values locally at district/building/apartment level. It also enables citizens of
smart homes to monitor, control and analyse data collected by smart plugs and
the new generation meters on supply grids.
Finally, mobile applications and a dashboard of proper Key Performance
Indicators will be designed and implemented on top of the integrated platforms.
sensors deployed all around the city might become uncalibrated for various reasons.
The system must be able to detect and manage such wrong data to avoid a
decision-making process on a wrong basis. A full digital description of the plants
(including information about the measurement accuracy of each device, and the
certificate of calibration when available) and a history of measurements are needed to
achieve this target. Using this set of data, it is possible to design algorithms
able to analyze the system behavior and to provide suitable metrics that can
highlight malfunctions as well as any data integrity issues, in a similar way as
Intrusion Detection Systems (IDS) in cybersecurity identify network attacks.
System scalability and interoperability. The system must be able
to integrate new data-collecting sub-systems or to create new sub-systems
exploiting the existing hardware. A city grows over long periods while data-
collecting technologies advance faster, therefore managing the heterogeneity
of technologies over time is fundamental to building a robust system. Interop-
erability between the different systems in the smart city must be achieved [6]. In
the BSL project, to address this issue, a new generation of Discovery Services
should be developed. As the Domain Name System (DNS) records information
matching IPs and domains, a Service Discovery System (SDS) for
IoT will record the topology and the characteristics of the network nodes,
including their communication, measurement and actuation capabilities. In
this way, a new node is able to discover the resources it needs, ensuring flexi-
bility, interoperability and reuse of existing hardware [5].
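The DNS-like SDS described above can be sketched as a registry where nodes declare their communication, measurement and actuation capabilities, and a new node looks up existing resources it can reuse. The class, method names and capability labels are illustrative assumptions.

```python
# Sketch of a Service Discovery System (SDS) for IoT: register node
# capabilities, then look nodes up by the capability a new node needs.

class ServiceDiscoverySystem:
    def __init__(self):
        self._records = {}  # node_id -> set of capability labels

    def register(self, node_id, capabilities):
        self._records[node_id] = set(capabilities)

    def lookup(self, capability):
        """Return the ids of all nodes offering the requested capability."""
        return sorted(n for n, caps in self._records.items()
                      if capability in caps)

sds = ServiceDiscoverySystem()
sds.register("meter-01", {"measure:energy", "comm:plc"})
sds.register("lamp-07", {"actuate:light", "measure:luminosity", "comm:wifi"})
print(sds.lookup("measure:energy"))  # ['meter-01']
```

As with DNS, the value of the registry is indirection: a new node asks for a capability, not for a hard-coded device, which is what enables reuse of existing hardware.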
4 Related Work
Compared to recent and on-going Smart Cities projects, BSL provides a wider
spectrum of services (as summarised in the previous sections), considering not
only the energy consumption data, but also crime rate data, as well as data
about environmental conditions. Moreover, BSL brings together the integration
platforms at multiple levels, thus giving relevance both to the Smart Cities ser-
vices meant for PA and the ones designed for citizens (also considering support
for decisions devoted to fragile users).
Smart Cities projects. The Optimus project [7] aims to support the Public
Administration in optimising energy consumption. The focus here is on a semantic-
based data acquisition module, which integrates heterogeneous data sources into a
relational database, and a Decision Support System that assists an energy manager
in making decisions. The project has been tested on three pilot cases
in Italy (Savona), Spain (Sant Cugat del Vallès) and The Netherlands
(Zaanstad). Similarly, the BESOS project [8] is focused on the implementation
of distributed Energy Management Systems (EMS) for energy saving. BESOS is
mainly devoted to PA and energy service companies, while the citizens’ involve-
ment is more marginal compared to the BSL project. The SFpark project [9] is
focused on public transportation for the city of San Francisco. The project aims
to provide advanced data mining and planning facilities for the PA to improve
urban public mobility services. Primary goal of the Res Novae project [10] was
to provide an integrated platform for visualising and monitoring the energy con-
sumption at a city scale. The Res Novae project has been tested in the city of
Bari, Italy. The ROMA project [11] aims to integrate data from heterogeneous
sources (security, mobility and weather) to increase city resilience and support
the PA in management of emergency situations.
Enabling technological infrastructure. The research topics addressed within
Brescia Smart Living and related projects also rely on the availability of integration
platforms and infrastructures able to address the data management
issues we underlined in this paper. Available platforms must be based on standards,
be scalable and modular, and be flexible enough to allow easy extension of the
functional and non-functional requirements imposed by the dynamic environment
of Smart Cities. Oracle and IBM have proposed their own solutions for implementing
Smart Cities projects. In particular, the Oracle Smart City Platform provides
a set of front-office functions over multiple communication channels (e.g.,
telephone, web, chat), big data management solutions and analytical tools. It
has been configured for several projects (such as the SFpark project mentioned
above). IBM proposed a platform that integrates many tools for managing data
in the context of smart cities, like the Intelligent Operations Center IOC [12],
providing functions to visualise data on a tabular, graphical and map-based inter-
face, SPSS [14] for stochastic and predictive analysis of data and CPLEX [13] for
solving optimisation problems. IOC has been applied in the Res Novae project
and has been chosen also for implementing the global integration platform within
the BSL project. Compared to these Smart Cities platforms, middleware solu-
tions provide communication drivers, data management facilities and APIs, but
require additional development efforts to create new applications and services
on top of them. Indra proposed a cross-platform and multi-device middleware
called SOFIA2 [15]. This middleware is devoted to the development of smart
applications that use real-time information according to a big data approach.
It has been applied in two pilot projects for the cities of La Coruña and Turin.
Tridium's Niagara Framework [16] offers a development platform that connects
and translates data from nearly any device or system, managing and optimizing
performance when dealing with heterogeneous data formats and protocols. It
also enables the development of software objects to manage data in cloud and
edge computing.
References
1. Khatoun, R., Zeadally, S.: Smart cities: concepts, architectures, research opportu-
nities. Commun. ACM 59(8), 46–57 (2016)
2. Chauhan, S., Agarwal, N., Kar, A.: Addressing big data challenges in smart cities:
a systematic literature review. Info 18(4), 73–90 (2016)
3. Bagozi, A., Bianchini, D., De Antonellis, V., Marini, A., Ragazzi, D.: Summari-
sation and relevance evaluation techniques for big data exploration: the smart
factory case study. In: Dubois, E., Pohl, K. (eds.) CAiSE 2017. LNCS, vol. 10253,
pp. 264–279. Springer, Cham (2017). doi:10.1007/978-3-319-59536-8_17
4. Li, Y., Dai, W., Ming, Z., Qui, M.: Privacy protection for preventing data over-
collection in smart city. IEEE Trans. Comput. 65(5), 1339–1350 (2016)
5. Bellagente, P., Ferrari, P., Flammini, A., Rinaldi, S.: Adopting IoT framework for
Energy Management of Smart Building: a real test-case. In: 2015 IEEE 1st Interna-
tional Forum on Research and Technologies for Society and Industry (RTSI), Turin,
Italy, pp. 138–143 (2015). doi:10.1109/RTSI.2015.7325084, ISBN 978-1-4673-8166-6
6. Ahlgren, B., Hidell, M., Ngai, E.: Internet of Things for smart cities: interoperabil-
ity and open data. IEEE Internet Comput. 20(6), 52–56 (2016)
7. OPTIMising the energy USe in cities with smart decision support systems. http://
optimus-smartcity.eu
8. Building Energy decision Support systems fOr Smart cities. http://besos-project.
eu
9. San Francisco Park project. http://sfpark.org
10. Res Novae Project. http://resnovae-unical.eu
11. Resilience enhancement Of a Metropolitan Area. http://www.progetto-roma.org
12. IOC - Intelligent Operations Center. http://www-03.ibm.com/software/products/
it/intelligent-operations-center
13. CPLEX Optimizer. http://www-01.ibm.com/software/commerce/optimization/
cplex-optimizer/
14. SPSS. http://www-01.ibm.com/software/it/analytics/spss/
15. Indra, SOFIA2 Web site. http://sofia2.com/
16. Tridium Niagara Framework. http://www.tridium.com/en/products-services/
niagaraax
Conference on Cloud Networking for
IoT (CN4IoT)
Investigating Operational Costs
of IoT Cloud Applications
1 Introduction
The Cluster of European Research Projects on the Internet of Things [1] defined
the Internet of Things (IoT) as a dynamic global network infrastructure with
self-configuring capabilities based on standard and interoperable communication
protocols. Things in this network interact and communicate among themselves
and with the environment by exchanging sensed data and information, and react
autonomously to events, influencing them by triggering actions with or without
direct human intervention. Recent trends and estimations call for an ecosystem
that provides means to interconnect and control these devices. With the help
of cloud solutions, user data can be stored in a remote location, and can be
accessed from anywhere. There are more and more PaaS cloud providers offering
IoT specific services (e.g. Amazon AWS IoT Platform, Azure IoT Suite). Some
of these IoT features are unique, but every PaaS provider addressing IoT has
the basic capability to connect to and store data from devices.
In this paper we first analyze the pricing schemes of four providers:
the Microsoft Azure IoT Hub, the IBM Bluemix platform, Amazon
AWS IoT and Oracle's IoT platform. We compare their pricing methods
and perform cost calculations for a concrete IoT application of a smart
city use case, to help users better understand their operation. We also compare
c ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 141–150, 2018.
https://doi.org/10.1007/978-3-319-67636-4_16
142 E.E. Kalmar and A. Kertesz
these selected IoT Cloud providers by estimating service costs for operating an
application of a smart city use case. We also validate our cost estimation by
simulating the smart city scenario in the IBM Bluemix Platform.
The remainder of this paper is organized as follows: Sect. 2 introduces related
approaches in the field of IoT Clouds. Section 3 presents the pricing schemes
of four providers, and Sect. 4 details our method for estimating resource
usage costs and its results. Section 5 presents real cost usage validations for a
concrete provider, and the contributions are summarized in Sect. 6.
2 Related Works
The integration of IoT and clouds has been envisioned by Botta et al. [2] by
summarizing their main properties, features, underlying technologies, and open
issues. A solution for merging IoT and clouds is proposed by Nastic et al. [3].
They argue that system designers and operations managers face numerous chal-
lenges to realize IoT Cloud systems in practice, due to the complexity and diver-
sity of their requirements in terms of IoT resources consumption, customization
and runtime governance. They propose a novel approach to IoT Cloud that
encapsulates fine-grained IoT resources and capabilities in well-defined APIs in
order to provide a unified view on accessing, configuring and operating IoT Cloud
systems, and demonstrate the framework for managing electric fleet vehicles.
Atzori et al. [4] examined IoT systems in a survey. They identified many
application scenarios and classified them into five application domains: trans-
portation and logistics, healthcare, smart environments (home, office, plant),
personal and social, and finally futuristic domains. They described these domains in
detail and defined open issues and challenges for all of them. Concerning privacy,
they stated that a lot of information about a person can be collected without
the person being aware, and that control over all such information is impossible
with current techniques.
Based on these works, we selected the smart city environment for further
investigation, providing operational cost estimations at different providers. The
following section defines our model of cost calculations, based on publicly available
pricing information.
In this section, we introduce and compare the pricing models of IoT Cloud providers.
We considered the following most popular providers: (i) Microsoft and its IoT
platform called Azure IoT Hub [5], (ii) IBM's Bluemix IoT platform [7], (iii) the
services of Amazon (AWS IoT) [9], and (iv) Oracle's IoT platform [8]. We
took into account the prices publicly available on the providers' websites
and, when we found it necessary, asked the providers for further information or
clarifications via email. The calculation of the prices depends on different
Investigating Operational Costs of IoT Cloud Applications 143
methods. Some providers bill only according to the number of messages sent,
while others also charge for the number of devices used. The situation is very
similar if we consider the virtual machine renting or application service prices.
One can be charged after GigaByte-hour (GB-hour) (uptime) or according to
a fix monthly service price. This price also depends on the configuration of the
virtual machine or the selected application service, especially the mount of RAM
used or the number of CPU cores or their clock signal.
In our model we consider a real-world smart city use case for cost estimations
with the following parameters: the total number of messages sent in a certain
period of time, the number of devices used, and the capacity of the virtual
machine used to provide gateway services. We estimated how our application
would be charged for a whole month of uptime in the clouds of the providers
mentioned above. In our model, the total cost of executing an application consists
of two price categories: (1) IoT and device prices and (2) cloud-side prices. In
case (1), we may be charged for the tier (a package) used or only for the resources
actually consumed. The latter is also called the "pay as you go" billing method,
meaning that we pay only for what we really use. Some providers combine
both methods. Moreover, there are message prices as well. If we pay for a tier
(where the provider offers one), the price of a single message matters less, because
the tier already covers a fixed number of messages. However, the price of the
tier depends on the number of messages we want to send; more messages are
covered by bigger tiers. If we use a provider with a "pay as you go" scheme,
the price of a message becomes more important. In some cases, we are charged
for the data exchanged rather than the number of messages sent, but the data
used can also be covered by a tier. Finally, we may need to pay for the number
of devices used. To run an IoT application we also need to pay for a virtual
machine, application/compute service, or runtime to operate a gateway service,
which is covered by case (2). There can be a fixed monthly price for a service,
but a GB-hour price can be charged as well.
144 E.E. Kalmar and A. Kertesz
In our investigation, we considered the most popular cloud providers; the pricing
categories and their availability at different providers are depicted in Fig. 1. Our
investigation estimates prices for executing the smart city traffic light control
application to compare the pricing methodologies of the providers.
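The two-part cost model described above can be sketched in code. This is a minimal illustration of the billing structure only: every rate, tier price, and parameter name below is a hypothetical placeholder, not an actual 2017 provider price.

```python
# Illustrative sketch of the two-part cost model described in the text.
# All prices are hypothetical placeholders, not real provider rates.

def iot_side_cost(messages, devices, tier_price=None, msg_price=0.0,
                  device_price=0.0):
    """Cost category (1): IoT and device prices.

    Either a tier covers a fixed number of messages (tier_price),
    or we pay per message ("pay as you go"); some providers add a
    per-device charge on top of either method.
    """
    if tier_price is not None:
        base = tier_price              # the tier already covers the messages
    else:
        base = messages * msg_price    # pure "pay as you go"
    return base + devices * device_price

def cloud_side_cost(hours, gb_hour_price=0.0, monthly_price=0.0):
    """Cost category (2): gateway VM / application service prices."""
    return hours * gb_hour_price + monthly_price

def total_cost(messages, devices, hours, **prices):
    """Total monthly cost = IoT-side cost + cloud-side cost."""
    iot = iot_side_cost(messages, devices,
                        prices.get("tier_price"),
                        prices.get("msg_price", 0.0),
                        prices.get("device_price", 0.0))
    cloud = cloud_side_cost(hours,
                            prices.get("gb_hour_price", 0.0),
                            prices.get("monthly_price", 0.0))
    return iot + cloud
```

For instance, a pay-as-you-go provider with a per-device fee and a fixed monthly gateway service maps to `msg_price`, `device_price`, and `monthly_price`, while a tier-based provider with GB-hour VM billing maps to `tier_price` and `gb_hour_price`.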
Azure IoT Hub [5] charges according to the chosen edition (tier). Figure 2 details
the available tiers and also shows the size restriction for messages; each tier
covers an interval for the number of messages used in a month. Azure, like some
other providers, offers introductory extras when one starts to use its services,
but we do not take extras into consideration because we investigate general
situations. The message size restriction depends on the chosen tier. One can
choose from four tiers: Free, S1, S2, and S3. They vary in price and in the total
number of messages allowed per day. The message size of the Free tier also differs
from the other tiers: in the Free edition, devices can only send much smaller
messages than in the other editions. Regarding cloud-side prices, we need to
account for an application service price; there is no GB-hour price because the
service runs at full uptime. We can choose from a wide variety of configurations,
selecting the number of processor cores, the amount of RAM, and the storage
capacity, all of which affect the price of the application service.
As depicted in Fig. 3, the IBM Bluemix IoT platform's pricing completely follows
the "pay as you go" method, as described in Bluemix's pricing sheet under the
Internet of Things section and at the Internet of Things platform page [7].
Bluemix charges only for the MegaBytes (MB) exchanged. There are three
categories for the amount of data used, and each category has its own price
per MB exchanged: the more MBs we use, and thus the bigger the category we
select, the lower the per-MB price. Working with Bluemix we also need to pay
for the runtime to run our applications. It is configurable, depends on
the number of instances and the RAM used, and has a fixed monthly price. On
top of that, a GB-hour price is charged as well.
Amazon's IoT platform can also be classified as a "pay as you go" service. Its
billing method [9] is straightforward. Prices are based on publishing cost
(the number of messages published to AWS IoT) and delivery cost (the number
of messages delivered by AWS IoT to devices or applications). A message is a
512-byte block of data, and the pricing in the EU and US regions denotes $5 per
Concerning cloud-based cost requirements of our smart city use case, we esti-
mated that about 2–4 GBs of RAM and 2 CPU cores could run our application
smoothly. We also collected pricing information for these cloud gateway services
from the providers’ official sites. The pricing of Azure’s application service can
be found at [10], Bluemix’s runtime is in its pricing sheet under the Runtimes
section [7], Amazon EC2 On-Demand prices are described at [11] and we can
find the pricing of Oracle’s compute service at [12]. We used the prices of the
Metered Services. By clicking on the Buy Now button next to the Metered
Services sign, we can navigate to a detailed pricing calculator [13].
Fig. 5. Virtual machine related configuration and prices for our use case
To perform our cost estimation we chose the use case of a traffic light control sys-
tem in a smart city. This scenario represents a relatively large system; its detailed
parameters for a monthly operation period are depicted in Table 1. We use 128
devices, following a study and implementation of a smart city in Messina [14].
We perform the estimation for running the application for a whole month
(744 hours, i.e., 31 days). We worked with message sizes up to a maximum
of 0.05 KiloBytes (KB). Devices send a message every 20 s, which means 180
messages per hour.
Table 1. Parameters of the smart city use case
Devices 128
Device type Telematic
Message size (KB) 0.05
Messages/month/device 133920
Total messages/day 552960
Total messages/month 17141760
MB exchanged/month 837
Messages transferred/device/hour 180
Test duration (days) 31
Full uptime (hours) 744
From these values, the total number of messages per device for the whole month
is obtained by determining the messages sent by a device during a day (180 * 24)
and multiplying it by the number of days (31) the scenario runs, resulting in
133920 messages per month per device. The total number of messages per day is
the number of messages sent by a device during a day (180 * 24) multiplied by
the number of devices (128), which gives 552960. Furthermore, the total number
of messages per month, including all devices, is the number of devices multiplied
by the number of messages per month per device: 128 * 133920 = 17141760. We
can estimate the MegaBytes exchanged by multiplying the total number of
messages per month by the message size in KBs and dividing by 1024, which
gives 837 MB.
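The arithmetic above can be checked directly; the constants mirror the use-case parameters of Table 1:

```python
# Reproducing the message-count arithmetic of the smart city use case.
MSG_PER_HOUR = 180      # one message every 20 s
DAYS = 31               # one month of operation
DEVICES = 128
MSG_SIZE_KB = 0.05

msgs_per_device_month = MSG_PER_HOUR * 24 * DAYS      # 133920
total_msgs_day = MSG_PER_HOUR * 24 * DEVICES          # 552960
total_msgs_month = DEVICES * msgs_per_device_month    # 17141760
mb_exchanged = total_msgs_month * MSG_SIZE_KB / 1024  # 837 MB

print(msgs_per_device_month, total_msgs_day, total_msgs_month,
      round(mb_exchanged))
```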
Our estimated calculations are shown in Table 2. In our investigation, Azure
proved to be considerably more expensive than the other providers: Bluemix
and Amazon cost less than half the price of Azure, and Oracle is only slightly
cheaper than Azure.
Bluemix provides a live event log for devices where we can trace the actual
incoming messages. Figure 6 shows an example event log for a received message.
In our simulated case, each device sends 180 messages per hour to the IoT
gateway service. The data usage of these 180 messages was 61.44 KB according
to the Bluemix metering. This means that 0.3412 KB (i.e., 61.44/180) was
logged by Bluemix per message, in contrast to the originally created text file
with a size of 0.053 KB. As before, using 128 devices we have 17141760 messages
for the whole month. We can calculate the total data exchanged by multiplying
the total number of messages by the size of one message, which gives 5711.688
MB. This is significantly larger than the estimated amount because of the
additional information Bluemix adds to messages (which we only found out
later). Bluemix charges 0.00097 € for each MB exchanged, so these nearly
6 thousand MBs cost 5.54 €. This price is also larger than the estimated one
(∼0.81 €), just as the amount of data exchanged is. The cloud-side prices are
the same as we estimated. The conclusion is that we need to pay a few more
euros than estimated due to the larger message size the Bluemix system
introduces; otherwise our prior calculations were correct.
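The validation figures can be recomputed from the metered values. Note that the reported 5711.688 MB follows from the rounded 0.3412 KB per-message figure; the unrounded ratio gives roughly 5714 MB, and either way the monthly cost rounds to the same 5.54 €:

```python
# Recomputing the Bluemix validation figures from the metered values.
metered_kb = 61.44                        # metered data for 180 messages
kb_per_msg = metered_kb / 180             # ~0.3413 KB, vs. 0.053 KB raw
total_msgs_month = 128 * 180 * 24 * 31    # 17141760 messages

actual_mb = total_msgs_month * kb_per_msg / 1024   # ~5714 MB
price_per_mb_eur = 0.00097
actual_cost = actual_mb * price_per_mb_eur         # ~5.54 EUR

estimated_mb = 837                                 # from the raw 0.05 KB size
estimated_cost = estimated_mb * price_per_mb_eur   # ~0.81 EUR
```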
6 Conclusions
The data users produce with IoT devices is continuously posted to online
services, which rely on cloud providers to efficiently handle and meaningfully
visualize it. Users also need to be aware of the corresponding costs introduced
by service providers, which can be very diverse.
In this work, we investigated pricing schemes of four popular IoT Cloud
providers to help users to better understand their operation. We also performed
usage cost calculations for a concrete IoT use case of a smart city, and com-
pared them by estimating service costs for operating this application. Finally,
we validated our cost estimation by simulating the smart city scenario in the
IBM Bluemix Platform.
In general, we can conclude that Bluemix and Amazon are the cheapest, due
to the cheap message prices of Bluemix and the cheap virtual machine-related
prices of Amazon. If we want to use a large number of devices, Oracle should
be avoided because of its expensive device prices. Nevertheless, for small
systems Azure can be a good choice.
References
1. Sundmaeker, H., Guillemin, P., Friess, P., Woelffle, S.: Vision and challenges for
realising the Internet of Things. CERP-IoT - Cluster of European Research Projects
on the Internet of Things, CN: KK-31-10-323-EN-C, March 2010
2. Botta, A., de Donato, W., Persico, V., Pescape, A.: On the integration of cloud
computing and Internet of Things. In: The 2nd International Conference on Future
Internet of Things and Cloud (FiCloud-2014), August 2014
3. Nastic, S., Sehic, S., Le, D., Truong, H., Dustdar, S.: Provisioning software-defined
IoT cloud systems. In: The 2nd International Conference on Future Internet of
Things and Cloud (FiCloud-2014), August 2014
4. Atzori, L., Iera, A., Morabito, G.: The Internet of Things: a survey. Comput. Netw.
54(15), 2787–2805 (2010)
5. Pricing of Microsoft Azure IoT Hub, December 2016. https://azure.microsoft.com/
en-gb/pricing/details/iot-hub/
6. IBM Bluemix IoT Platform, December 2016. https://www.ibm.com/
cloud-computing/bluemix/internet-of-things
7. Pricing of IBM Bluemix IoT Platform, December 2016. https://console.ng.bluemix.
net/?direct=classic/#/pricing/cloudOEPaneId=pricing&paneId=pricingSheet
Nomadic Applications Traveling in the Fog
1 Introduction
With the growing maturity of Internet of Things (IoT) concepts and technolo-
gies over the last years, we see an increase in IoT deployments. This increase
fosters the integration of IoT devices and Cloud-based applications to realize
information aggregation and value-added services. While the first IoT devices
were only capable of emitting sensor data, today more and more IoT devices
as well as network infrastructure, e.g., routers, provide computational resources
to process and store data. To cope with the large volumes of data originating
from IoT sensors in geographically distributed locations, IoT devices can pool
their resources to create Fogs based on the Fog computing paradigm [1]. Fogs
are inspired by the principles of Clouds [8], such as virtualization and pooling of
resources, but due to their location at the edge of the network and the limited
amount of computational resources, they cannot support the concepts of rapid
elasticity and broad network access. However, they excel at other aspects such
as low latency, location awareness, or data privacy [1].
Today’s software applications are typically running in public Clouds, like
Amazon EC2, or within private Clouds. Clouds provide on-demand resource
scalability as well as easy software maintenance and have become the de-facto
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 151–161, 2018.
https://doi.org/10.1007/978-3-319-67636-4_17
152 C. Hochreiner et al.
standard for today's software deployments. However, the Cloud computing
paradigm is also confronted with several challenges, especially in the area of
the IoT.
Because applications run in remote data centers, all data must be transferred
to these data centers over the Internet. This approach is feasible for applications
that process low volumes of data, but it becomes infeasible for IoT scenarios,
in which sensors produce large volumes of data that cannot be processed by
today's network infrastructure in real time. Furthermore, Cloud-based
applications are often in conflict with tight security restrictions, since data
owners want to ensure that no privacy-sensitive data leaks outside their
companies' premises [9]. Therefore, it is often not feasible to transfer data to
Cloud-based applications, because the data owner cannot control data access
after the data leaves the premises of the company.
To address these challenges, Cloud-based applications should no longer be
deployed only in centralized remote data centers; instead, a federated cloud
architecture [2] should be considered, evolving Cloud-based applications into
nomadic applications. In contrast to Cloud-based applications, which often
require dedicated runtime environments, nomadic applications are self-contained
software applications that can transfer themselves autonomously among Fogs
and Clouds and run directly on hypervisors thanks to unikernel architectures [7].
The idea is inspired by the principles of temporary workers or consultants,
who move from one workplace to another to perform activities; this inspiration
led to the architectural design of mobile agents [5]. Mobile agents carry out
operations in close proximity to the data source to reduce latency and network
traffic [6]. These properties make them a perfect fit for today's IoT challenges.
Nevertheless, although mobile agents were proposed around two decades ago,
the concept never took off, mainly due to security considerations and the need
for dedicated execution environments [4].
Since the initial proposal of mobile agents, the technological landscape has
advanced and the Cloud computing paradigm has fostered the technical foun-
dation for the execution of arbitrary applications on virtualized and pooled
resources. Given the technical requirements for nomadic applications, Fogs pro-
vide a controlled and secure execution environment, which is managed by the
owner of the data. Therefore, nomadic applications allow for a more efficient
data transfer among IoT devices and processing applications due to the elimi-
nation of the transfer over the Internet. Furthermore, because Fogs are often
fueled by IoT devices that are already within the control of the data owner, it
is also feasible to enforce strict privacy policies.
Although nomadic applications are the most promising approach to process
privacy-sensitive data on already existing computational resources on the edge
of the network, there are still a number of challenges, like data transfer or data
recovery, which need to be resolved as discussed in Sect. 5.
The remainder of this paper is structured as follows: First, we provide a
short discussion on the related work in Sect. 2. Then, we provide a motiva-
tional scenario in Sect. 3. Based on the motivational scenario we identify several
Nomadic Applications Traveling in the Fog 153
requirements and present the foundation of our system design in Sect. 4. Fur-
thermore, we discuss open research challenges in Sect. 5 before we conclude the
paper in Sect. 6.
2 Related Work
The relatively recent emergence of Fog computing, which is often also consid-
ered as edge computing, has led to a plethora of definitions (e.g., [1,3,11,12]).
Literature as well as the OpenFog Consortium1 consider Fog computing as an
evolution of Cloud computing, which extends established data centers with com-
putational resources that are located at the edge of the network. This enables
Fog computing to bridge the currently existing geographic gap between IoT
devices and the Cloud by providing different deployment locations to cater for
the different requirements of data processing applications, e.g., low latency or
cost efficiency [1].
Bonomi et al. [1] propose a layered model, which categorizes the different
computing platforms, ranging from embedded sensors in IoT devices, over field
area networks, to data centers, where each layer provides a distinct level of
quality of service. Furthermore, Dastjerdi et al. [3] provide a reference architecture
for Fog computing that can be used to leverage the computational resources at
the edge of the network and outline several research challenges, such as security
and reliability. Another challenge for the realization of Fogs is the heterogene-
ity of devices that are used to build Fogs [12]. Nevertheless, there is already
some preliminary work, such as the LEONORE framework proposed as part
of our previous work, which accommodates the diversity for the deployment of
applications on IoT devices [13].
data as well as any other factory-specific configuration data. This data is then
processed using the Fog, i.e., pooled and virtualized computational resources
provided by different IoT devices. After completing its task, the application is
ready to relocate to the next working assignment based on the information from
a remote assignment list.
Due to the fact that the recalibration operation at Factory 1 generated new
insights, which can be used to refine the recalibration operation, it is required to
transfer these insights to the knowledge base. This transfer may contain a large
amount of non-privacy-sensitive data, which does not exhibit any time-related
transportation constraints and can be carried out independently from the travel
route of the nomadic application as shown by migration A in Fig. 1.
These two operations are repeated twice, to also recalibrate the machines
for Factory 2 respectively Factory 3 and collect the obtained insights in the
knowledge base for future refinement. At the end of the journey, the nomadic
application returns to the knowledge base to update its configuration and to be
ready for future recalibration activities.
4 System Design
[Figure: system design overview, showing nomadic applications moving between
Fogs and the Cloud, with the application housing, application evolution, shared
data, and deposit data components]
Application Evolution. To comply with the need for continuous evolution of
nomadic applications, the housing component furthermore needs to provide the
possibility to update existing applications at any time and to inform applica-
tions that are running on Fogs about the update. These updates can only be
applied once the applications have finished their tasks; they must not be altered
while processing data. To address this challenge, we require the application hous-
ing component to support versioning of applications to facilitate their update
in the housing component. The applications are required to regularly check
whether a new version is available. If this is the case, stateful applications need
to return to the application housing to transfer their application data to the
evolved application, after which they can continue their work.
For stateless applications, i.e., applications without application data
attached, the housing component updates the shadow copy. The stateless appli-
cation checks each time, before it moves to another location, whether the appli-
cation has evolved. If this is the case, the application returns to the housing
component to apply the updates or is discarded and new instances of the appli-
cation are spawned from the updated shadow copy.
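The version-check step that an application performs before relocating can be sketched as follows. This is an illustrative sketch only: the class and method names are assumptions for the example and not part of any published framework.

```python
# Hypothetical sketch of the version-check step a nomadic application
# performs before relocating; all names are illustrative only.

class ApplicationHousing:
    """Stores the latest published version of each application."""
    def __init__(self):
        self.versions = {}                  # app_id -> latest version

    def publish_update(self, app_id, version):
        self.versions[app_id] = version

    def latest_version(self, app_id):
        return self.versions.get(app_id, 0)

class NomadicApplication:
    def __init__(self, app_id, version, stateful, housing):
        self.app_id, self.version = app_id, version
        self.stateful, self.housing = stateful, housing

    def before_relocation(self):
        """Decide the next step before moving to another Fog."""
        if self.housing.latest_version(self.app_id) > self.version:
            # Stateful apps return to housing to hand their application
            # data to the evolved version; stateless apps are discarded
            # and respawned from the updated shadow copy.
            return ("return_to_housing" if self.stateful
                    else "discard_and_respawn")
        return "proceed_to_next_fog"
```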
Application Census and Recovery. The application census and recovery compo-
nent keeps track of all nomadic applications that are currently running on Fogs.
This census is required to ensure that stateful applications are never replicated.
Besides the census functionality, this component is also required to provide a
recovery mechanism for stateful applications. This recovery mechanism needs
to decide whether a stateful application will or will not return from the Fog,
either due to an application crash or due to the fact that the Fog is permanently
disconnected from the network. In this situation, the recovery mechanism
restores the application along with its application data from a previous point
in time and appoints the reconstructed application as the new sole instance of
the nomadic application.
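A minimal sketch of such a census and recovery component is given below, assuming a simple timeout heuristic to declare an application lost; the paper does not prescribe a detection mechanism, so the timeout policy and all names here are illustrative.

```python
# Hypothetical census/recovery sketch: tracks stateful applications on
# Fogs and restores a checkpoint when one is presumed lost (crash or
# permanently offline Fog). The timeout heuristic is an assumption.
import time

class CensusRecovery:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.active = {}       # app_id -> (fog, dispatch_time)
        self.checkpoints = {}  # app_id -> last saved application data

    def dispatch(self, app_id, fog, checkpoint):
        # The census guarantees a stateful app is never replicated.
        assert app_id not in self.active, "stateful app must not be replicated"
        self.active[app_id] = (fog, time.monotonic())
        self.checkpoints[app_id] = checkpoint

    def returned(self, app_id):
        self.active.pop(app_id, None)

    def recover_lost(self):
        """Restore apps presumed lost; each becomes the new sole instance."""
        now = time.monotonic()
        lost = [a for a, (_, t) in self.active.items()
                if now - t > self.timeout_s]
        for app_id in lost:
            del self.active[app_id]   # the old instance is written off
        return {a: self.checkpoints[a] for a in lost}
```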
Besides the application data and the owner's data, applications also require
remote data repositories. Here, the data storage infrastructure needs to support
two different scenarios. First, the applications must be able to access a shared
data storage, which can be used to communicate with Fog providers, to retrieve
new job assignments, or to share non-privacy-sensitive information with all
replicas of an application.
This shared repository is required to be ACID-compliant and to support
different access policies, e.g., a permissive one for the job assignments and a
restrictive one limiting access to a specific application. In addition to the
shared repository, the application also requires a storage location, where it can
deposit any arbitrary information that is generated while operating in one of the
Fogs. This storage also needs to be partitioned into different segments, which can
only be accessed by the assigned application. In contrast to the shared repository,
this data storage only requires eventual consistency and any data transfer to this
storage does not need to comply with any time-based restrictions. Nevertheless,
the data needs to be persisted and provided at any point in time. The Fog
operator needs to be able to check, whether the data transferred to the shared
repository and the data deposit complies with the owner’s policies and does not
contain any privacy-sensitive information.
On-site Data. Although the on-site data can be either static or dynamic, both
data types must not leave the Fog, i.e., the premises of the data owner. Therefore,
the data needs to be protected by methods originating from information rights
management. These methods ensure that the data can only be read within the
Fog, and any data leakage outside the Fog renders the data useless due to the
information rights management restrictions.
Shared Data. For the realization of the shared data, we propose a storage
infrastructure similar to Amazon S3, where each application and its potential
replicas are able to read and write from a dedicated storage bucket. This storage
infrastructure furthermore provides high and instant availability for read and
write operations, which distinguishes the shared data from the deposit data.
Deposit Data. The deposit data follows the principles of Amazon Glacier. This
makes it possible to store arbitrary information at any point in time, but it may
take some time until the information is persisted within the storage and
accessible for the application to read. Furthermore, it is required to implement
a data transport layer, which ensures both the integrity of the data and the
compliance with the policy of the data owner at any time.
5 Discussion
Even though nomadic applications build on established principles and concepts,
we have identified four research challenges, which are essential to realize them:
Scalable Privacy Protection for On-Site Data. One of the most important
challenges is the development of a scalable privacy protection mechanism for
dynamic data. While there are already solutions to enforce fine-grained access
policies for static data, like office documents, there is still a lack of solutions
for dynamic data. This is mainly due to the large volume of data that needs to
be processed in real-time. Furthermore, it needs to be ensured that malicious
applications cannot store any of this privacy-sensitive information within their
own application data and transfer the data out of the owner’s premises. Here,
the data transport layer needs to ensure that only non-privacy-sensitive data,
which is permitted to be sent to the deposit data, is transferred out of the Fog.
Support Different Speeds in the Transport Layer. Because the network transfer
capabilities among the Fogs, application infrastructures, and storage facilities
are limited, it is required to design a transport scheduling algorithm that
ensures that each entity, i.e., a nomadic application or data, is transferred as
efficiently as possible. Therefore, it is required to design a transport protocol
which allows the different entities in the system to negotiate the transportation
capabilities. This protocol should allow delaying those entities which are not
required to be transferred in a time-critical manner, such as nomadic
applications without any further job assignments returning to the housing
component, or deposit data. Nevertheless, it must be guaranteed that no entity
suffers starvation in terms of transportation capabilities and that all entities
eventually reach their destination.
6 Conclusion
The growing use of IoT devices enables the emergence of the Fog computing
paradigm which represents an evolution of the Cloud-based deployment model.
In this paper, we identify the opportunities offered by Fogs and introduce the
concept of nomadic applications. These nomadic applications promise both to
address the challenges of the constantly growing volume of data and to enable
tight control over data privacy. In our future work, we will further develop the
infrastructure for nomadic applications and apply them to real-world scenarios.
References
1. Bonomi, F., Milito, R., Zhu, J., Addepalli, S.: Fog computing and its role in the
Internet of Things. In: Proceedings of the First Edition of the MCC Workshop on
Mobile Cloud Computing, pp. 13–16. ACM (2012)
2. Celesti, A., Fazio, M., Giacobbe, M., Puliafito, A., Villari, M.: Characterizing cloud
federation in IoT. In: 30th International Conference on Advanced Information Net-
working and Applications Workshops (WAINA), pp. 93–98. IEEE (2016)
3. Dastjerdi, A.V., Gupta, H., Calheiros, R.N., Ghosh, S.K., Buyya, R.: Fog Com-
puting: principles, architectures, and applications. In: Buyya, R., Dastjerdi, A.V.
(eds.) Internet of Things, pp. 61–75. Morgan Kaufmann (2016)
4. Kotz, D., Gray, R.S.: Mobile agents and the future of the internet. Oper. Syst.
Rev. 33(3), 7–13 (1999)
5. Lange, D.B., Mitsuru, O.: Programming and Deploying Java Mobile Agents Aglets.
Addison-Wesley Longman Publishing Co., Inc., Boston (1998)
6. Lange, D.B., Mitsuru, O.: Seven good reasons for mobile agents. Commun. ACM
42(3), 88–89 (1999)
7. Madhavapeddy, A., Mortier, R., Rotsos, C., Scott, D., Singh, B., Gazagnaire, T.,
Smith, S., Hand, S., Crowcroft, J.: Unikernels: library operating systems for the
cloud. ACM SIGPLAN Not. 48(4), 461–472 (2013)
8. Mell, P., Grance, T.: The NIST definition of cloud computing (2011)
9. Schleicher, J.M., Vögler, M., Inzinger, C., Hummer, W., Dustdar, S.: Nomads-
enabling distributed analytical service environments for the smart city domain. In:
International Conference on Web Services (ICWS), pp. 679–685. IEEE (2015)
10. Schulte, S., Hoenisch, P., Hochreiner, C., Dustdar, S., Klusch, M., Schuller, D.:
Towards process support for cloud manufacturing. In: 18th International Enterprise
Distributed Object Computing Conference (EDOC), pp. 142–149. IEEE (2014)
11. Shi, W., Dustdar, S.: The promise of edge computing. Computer 49(5), 78–81
(2016)
12. Vaquero, L.M., Rodero-Merino, L.: Finding your way in the fog: towards a com-
prehensive definition of fog computing. ACM SIGCOMM Comput. Commun. Rev.
44(5), 27–32 (2014)
13. Vögler, M., Schleicher, J.M., Inzinger, C., Dustdar, S.: A scalable framework for
provisioning large-scale IoT deployments. ACM Trans. Internet Technol. (TOIT)
16(2), 11 (2016)
Fog Paradigm for Local Energy
Management Systems
1 Introduction
computing being used in the context of smart grids for load balancing [39] and
real time processing of energy data [5].
However, it is necessary to better understand how to cope with the inter-
mittent characteristics of the different elements of such a built environment,
and how to handle the large amounts of data that are generated by the various
monitoring infrastructures associated with such an environment. Predictive models should
also be incorporated in the coordination mechanism in order to deal with the
associated uncertainties and increase the control efficiency. From a building man-
ager’s perspective, these algorithms must be able to facilitate different control
strategies according to the overall coordination objective and enhance the (self-)
awareness of the system. In this paper we propose a cloud-based local energy
management system that is used (i) to flatten the demand profile of the building
facility and reduce its peak, based on analysis that can be carried out at the
building or in its vicinity (rather than at a data center); and (ii) to enable the
participation of the building manager in the grid balancing services market
through demand-side management and response. Furthermore, the Local Energy
Management System (LEMS) is extended using the Fog computing paradigm
for the holistic management of buildings, energy storage, and EVs.
2 Related Work
Over the years, efficient energy management for buildings or sets of buildings
using smart meters has been a focus of research. Researchers have suggested
improvements to energy management systems by incorporating components such as
building energy management systems, home energy management systems, energy load
shifting and dynamic pricing [10]. Conventional energy management systems face
challenges such as a central point of failure and limited scalability, owing to
constrained memory and bandwidth for handling large numbers of requests [3]. To
overcome these challenges, cloud-based energy management systems have been
proposed. One such cloud-based demand response system introduced data-centric
and topic-based communication models [13]. This model was based on a
master/slave architecture in which the smart meters and home energy management
systems acted as slaves, whereas the utility acted as master; the authors
argued that a reliable and scalable energy management system can be built on
this basis. In another approach, the energy management system was built under
the assumption of dynamic energy pricing [16]. The authors considered the peak
demand of the building and incorporated dynamic pricing while handling customer
requests. [29] proposed an architecture for the control, storage, power
management and resource allocation of micro-grids, and for integrating
cloud-based micro-grid applications with external ones. The bigger and more
distributed the smart grid infrastructure becomes, the more difficult it is to
analyse real-time data from smart meters. [44] suggested that a cloud-based
system is most appropriate for handling the analysis of real-time energy data
from smart meters. In another approach, power monitoring and early-warning
facilities were provided using a cloud platform [12]. [34] proposed a
Fog Paradigm for Local Energy Management Systems 165
mobile agent architecture for a cloud-based energy management system that
handles customer requests more efficiently. [32] proposed a dynamic cloud-based
demand response model that periodically forecasts demand and dynamically
manages available resources to reduce peak demand.
The concept of Fog computing has emerged to tackle issues relating to cloud
latency and location awareness, and to improve the quality of service for
real-time streamed data. Fog computing has been used in the context of smart
grids in a number of ways. However, when using fog computing it is important to
understand the energy sustainability of the devices that are part of the fog
network. To address proper energy-efficient management strategies, [9] proposed
an energy-aware management strategy model that is able to improve the energy
sustainability of an entire federated IoT-cloud ecosystem. Furthermore, fog
computing has been applied in smart grids to enable energy load balancing
applications to run at the network edge, especially in close proximity to smart
meters within micro-grids [39]. A fog-based infrastructure is used in this case
to estimate energy demand and to identify potential energy generation and the
lowest energy tariff at that instant – this information is used to switch
dynamically to alternative sources of energy. Fog infrastructure has also been
used as a collector, gathering energy-related data, processing it in real time
and generating command signals to consumer devices [5]. At the network edge the
actual computational processing carried out is limited, while most of the data
is pushed to a cloud data center to generate real-time reports and
visualisations. Fog-based systems have also been used to control building
energy consumption. In this scenario, sensors installed at various places read
data related to humidity, temperature and various gases in the building
atmosphere. Based on these readings, the sensors work together to reduce the
temperature of the building by activating or deactivating various electrical
devices [33].
Mobile Fog was proposed to tackle latency issues for geo-spatially distributed
applications [11], comprising a set of computational functions that an
application executing on a device can invoke to carry out a task. However, the
functions supported by a mobile Fog infrastructure are not general purpose but
application specific. Similarly, to tackle latency issues another proposal was
presented in [26], focusing on placement and migration methods for Fog and
Cloud resources. This work describes how complex event processing systems can
be used to reduce the network bandwidth consumed by migrating virtual machines.
In a related concept, a local cloud is formed by combining capacity across
neighboring nodes of a network [24], with one resource within these nodes
acting as a local resource coordinator. In [24], the authors propose a
framework to share resources based on the availability of particular
service-oriented functions. [20] combine smart grid, Cloud, actuators and
sensors into a reliable fog computing platform. To address demand side response
within smart grids, [21] worked on maximizing the benefits to both consumers
and power companies using a game-theoretic approach. Their work was based on a
Stackelberg game [14] between the consumers and the energy companies, allowing
both parties to satisfy their utility functions. In a similar approach, a cloud-based
166 A. Javed et al.
framework was presented for a group of energy hubs to manage two-way
communication between energy companies and consumers, reducing the
peak-to-average ratio of total electricity demand while at the same time
allowing consumers to pay less, using game theory [31]. Similarly, [8] describe
an approach for reducing energy consumption by investigating the interaction of
consumers and energy companies. Based on this interaction, they proposed a
game-theoretic energy schedule, the aim of which was to reduce the
peak-to-average power ratio. In another approach, a user-aware demand
management system was proposed [43] that manages residential load while taking
user preferences into account. The proposed model uses both an energy
optimization model and a game-theoretic model to maximize user savings without
causing discomfort.
Cloud-based energy systems have been used to overcome the challenges faced by
conventional micro-grid-based energy management systems. However, while moving
to the cloud is advantageous, it brings challenges of its own, one of which is
communication failure between the cloud-based energy management system and its
endpoints (electric vehicles and energy storage units). We address this issue
by showing how the fog paradigm can be used alongside a cloud-based energy
management system to overcome communication failures in a dynamic environment.
Fig. 1. (a) Architectural layout of the cloud-based Local Energy Management
System and its components; (b) architecture of the Local Energy Management
System using Fog computing
A demand forecast tool was developed to work with the LEMS, estimating the
electricity demand of a building over a particular time period. This software
tool uses historical data (collected from actual building use) and weather
measurements taken in the proximity of the building.
Figure 1b gives an overview of the main components of LEMS. LEMS can be
broadly divided into two modes of operation: peak shaving and demand response.
The LEMS operates in timesteps during which the system is considered static,
i.e. changes are only discovered at the end of the timestep. A timestep of 15 min
is used in this work, providing a trade-off between a dynamic (semi-real time)
and a reliable operation that allows the frequent capture of the building condi-
tions and minimizes risk of communication lags. Data about EVs located at the
building, such as their battery capacity, state of charge (SoC), expected discon-
nection times, charging/discharging power rate, charging/discharging schedule
and available discharge capacity, is requested from charging stations upon the
connection of every EV. Information regarding the available capacity, state of
charge (SoC), charging/discharging power rate and charging/discharging schedule
is requested from every ESU. This information is stored in a database and
accessed by the LEMS at every timestep in order to define future power set
points for the charging stations.
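As a rough illustration of this per-timestep bookkeeping, the asset records and their storage might be sketched as follows (all class and field names are hypothetical, not taken from the LEMS implementation):

```python
from dataclasses import dataclass

# Hypothetical records mirroring the attributes the LEMS requests
# from charging stations and energy storage units (ESUs).
@dataclass
class EVRecord:
    ev_id: str
    battery_capacity_kwh: float
    soc: float                  # state of charge, 0..1
    expected_disconnect: int    # timestep index
    power_rate_kw: float        # charging/discharging power rate

@dataclass
class ESURecord:
    esu_id: str
    capacity_kwh: float
    soc: float
    power_rate_kw: float

class TimestepStore:
    """Stores asset data per 15-minute timestep; the LEMS reads the
    latest snapshot at the end of each timestep to compute set points."""
    def __init__(self):
        self.snapshots = {}

    def record(self, timestep, evs, esus):
        self.snapshots[timestep] = {"evs": list(evs), "esus": list(esus)}

    def latest(self):
        t = max(self.snapshots)
        return t, self.snapshots[t]
```

In this sketch the database role is played by a plain dictionary keyed by timestep; the real system uses a cloud-hosted database server, as described below.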
A pre-forecast analysis stage was included in order to increase the perfor-
mance of the demand forecast tool, by including weather information in the
forecasting process [22]. The objective of this stage is to identify the optimal
number of weather attributes to be considered by the model to improve accu-
racy. Using historical local weather data and building energy consumption data,
an analysis was performed in order to calculate the correlation of the available
weather data with the electricity demand of each building. The forecast model
used an artificial neural network, implemented using the WEKA toolkit [27].
Electricity consumption on a random day was forecasted for each building for
every timestep. The forecast accuracy was calculated using the mean absolute
percentage error (MAPE) metric. For each building the set of attributes that
resulted in the least MAPE was selected as the optimal one. The model per-
forms a day-ahead power demand forecast using the optimal ANN configuration
suggested by the pre-forecast analysis model [22].
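The attribute-selection step can be illustrated with a small sketch: a forecast is produced from each candidate subset of weather attributes, its MAPE = (100/n) Σ|aᵢ − fᵢ|/|aᵢ| is computed against the measured demand, and the subset with the lowest error is kept. Here `train_and_forecast` is a hypothetical stand-in for training the ANN (done with the WEKA toolkit in the paper) and forecasting a day of demand:

```python
from itertools import combinations

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs(a - f) / abs(a) for a, f in zip(actual, forecast))

def select_attributes(attributes, train_and_forecast, actual_demand):
    """Try every non-empty subset of weather attributes and keep the one
    whose forecast yields the lowest MAPE against the measured demand.
    `train_and_forecast(subset)` stands in for training the forecaster on
    the chosen attributes and returning a day of forecast values."""
    best_subset, best_mape = None, float("inf")
    for r in range(1, len(attributes) + 1):
        for subset in combinations(attributes, r):
            err = mape(actual_demand, train_and_forecast(subset))
            if err < best_mape:
                best_subset, best_mape = subset, err
    return best_subset, best_mape
```

Exhaustive subset search is feasible here only because the number of weather attributes is small; the paper does not state the exact search strategy, so this is an assumption.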
LEMS has been deployed on the CometCloud [6] system. CometCloud enables
federation between heterogeneous computing platforms capable of supporting the
complete LEMS workload, such as a local computational cluster combined with a
public cloud (e.g. Amazon AWS). There are two main components in CometCloud: a
master and (potentially multiple) worker node(s). Its software architecture
comprises three layers: a programming layer, a management layer and a
federation (infrastructure) layer. The programming layer identifies the tasks
to be executed and the dependencies between them, and enables a user to define
the number of available resources along with any constraints on their usage.
Each task in this instance is associated with one of the supported LEMS
operations, or with a demand forecast to be carried out. In the management
layer, policies and objectives
are specified by the user to guide the allocation of resources for carrying out
tasks. This layer also keeps track of all tasks generated for workers, and of
how many of these have been successfully executed [7]. The federation layer
provides a lookup system, so that content locality is maintained and searches
can be carried out using wildcards [23]. Furthermore, a “CometSpace” is defined
that can be accessed by all resources in the federation [17]. CometCloud uses
this physically distributed, but logically shared, tuple space to share tasks
between the resources included in a resource federation.
The main task of the master node is to prepare a task to be executed and to
provide information about the data required to process it. The second component
is the worker, which receives tasks from the master, executes the job and sends
the results to the place specified by the master. In our framework there are
two workers – one that runs the LEMS algorithm to generate the charging and
discharging schedule for the electric vehicles, and a second that forecasts
energy demand for the next day. There are two cloud-hosted servers: the first
receives requests from a graphical user interface and, based on these requests,
executes the appropriate functions via the master. The second server manages a
database containing information about the building, the EVs and the weather
attributes around the building. The database stores historic data about power
consumption, energy pricing etc. for each building, and the weather information
is used to forecast (energy) demand for the next day. An intermediary gateway
intercepts all signals from the cloud server, forwarding schedules to the EVs
and ESUs, and the current energy readings of the building, EVs and ESUs to the
database server.
The last component is the Fog server, which uses sensors to read data at every
time step from the buildings, EVs and energy storage units. A pre-generated
model placed inside the Fog component receives the data read from the sensors
and outputs the schedule for each EV for the time step. The rationale for the
Fog component is that if a network failure or latency prevents a signal from
reaching the charging station, the current state of each asset can still be
read via the sensors and a schedule prepared locally. The predictive model is
updated every 24 h, replacing any previous estimate that was generated.
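A minimal sketch of this fallback behaviour, assuming hypothetical callables for the cloud request, the local sensor readings and the pre-generated edge model:

```python
def set_points_for_timestep(fetch_from_cloud, local_model, read_sensors,
                            timeout_s=5.0):
    """Fallback logic sketched from the description above: try to obtain
    the schedule from the cloud-hosted LEMS; on a communication failure
    (or timeout), read the local sensors and let the pre-generated model
    at the Fog node produce a schedule for this timestep instead. The
    three callables are illustrative interfaces, not the paper's API."""
    try:
        # Normal operation: the cloud-side LEMS supplies the set points.
        return fetch_from_cloud(timeout=timeout_s), "cloud"
    except (ConnectionError, TimeoutError):
        # Degraded operation: the edge model works from local sensor data.
        return local_model(read_sensors()), "fog"
```

The returned tag ("cloud" or "fog") records which path produced the schedule, which may be useful for auditing how often the fallback is exercised.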
(i.e. at periods of low demand) and shaving the peaks (i.e. at periods of high
demand) of the demand profile using the controllable loads (EVs, ESUs) of the
building facility – this approach aims to shift energy load from periods where
demand/tariff is likely to be high to periods of low demand (e.g. night).
Referring to Fig. 1b, the LEMS reads the building demand for the next timestep
and the current charge in the EVs and ESUs, then calculates the
charging/discharging schedules of the EVs and ESUs and sends them the
corresponding power set points, as a charging or discharging schedule, at the
beginning of every timestep. These power set points are messages specifying the
exact power rate at which each EV/ESU must charge or discharge during each
timestep.
– Demand Response schedule: This approach is intended to enable building
managers to participate in the ancillary services market and provide demand
response actions to the grid. It is assumed that the system operator sends
requests to the building manager to either reduce or increase its aggregate
demand in the next time step (of 15 min). Similarly to the peak shaving mode
(see Fig. 1b), in demand response mode the LEMS reads the current charge in the
EVs and ESUs to calculate the available energy that can be used to increase or
decrease demand. Based on the request received, the LEMS then overrides the
charging/discharging schedules of the available controllable assets by creating
a new charging/discharging schedule for all connected assets.
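The peak shaving mode in the first item above can be illustrated with a simplified sketch that steers the aggregate profile toward the mean of the forecast demand; the real LEMS additionally tracks state of charge, capacities and connection times, so this is only an assumption-laden outline:

```python
def peak_shaving_set_points(demand_forecast, assets):
    """For each timestep, aim the aggregate building profile at its mean:
    discharge controllable assets when forecast demand is above the mean
    (shave the peak), charge them when it is below (fill the valley).
    `assets` maps an asset id to its maximum power rate in kW; both the
    data model and the mean-targeting rule are our own simplification."""
    mean = sum(demand_forecast) / len(demand_forecast)
    schedule = {}  # timestep -> {asset_id: signed set point in kW}
    for t, demand in enumerate(demand_forecast):
        gap = demand - mean  # >0: shave (discharge), <0: fill (charge)
        points = {}
        remaining = abs(gap)
        for asset_id, rate in assets.items():
            p = min(rate, remaining)
            points[asset_id] = -p if gap > 0 else p
            remaining -= p
        schedule[t] = points
    return schedule
```

Negative set points denote discharging into the building, matching the sign convention used for the set-point messages described above.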
When LEMS operates in the demand response mode, a user can request an
increase or decrease in total demand using a graphical interface. The compu-
tation of this request operates similarly to the previous scenario – i.e. a com-
putational task is generated, executed on the cloud platform, and the result
(a schedule) sent to the gateway. Based on the request, a schedule is prepared
over the available EVs/ESUs (Fig. 1b). However, the demand response
schedule (generated by LEMS) is only valid for one time step, with the schedule
reverting to peak shaving mode afterwards. For example, if a user has made a
demand-down request, the LEMS will create a schedule that discharges all
available connected EVs/ESUs for one time step; the schedule for the remaining
intervals over the following 24 h reverts to peak shaving operation. This means
that during a communication failure (at a particular time step), the demand
response request will not be executed and will have to be made again.
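The one-timestep override and its automatic reversion can be sketched as follows (function and parameter names are illustrative, not the LEMS API):

```python
def apply_demand_response(schedule, request, timestep, assets):
    """Override the peak-shaving schedule for a single timestep, as in
    the demand response mode described above: a 'down' request discharges
    every available connected asset for that timestep, an 'up' request
    charges them. All other timesteps keep the peak-shaving schedule, so
    the override reverts automatically after one timestep."""
    sign = -1 if request == "down" else 1
    override = {asset_id: sign * rate for asset_id, rate in assets.items()}
    new_schedule = dict(schedule)  # copy, so the base schedule survives
    new_schedule[timestep] = override
    return new_schedule
```

Because the base schedule is copied rather than mutated, a lost demand response request simply leaves the original peak-shaving schedule in force, mirroring the behaviour described in the text.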
Currently the LEMS handles communication failure by creating a 24 h moving
window of charging/discharging schedules for the connected EVs/ESUs. This
schedule is updated at every time step, so that a change in the number of EVs
over the last 15 min is taken into account. However, the LEMS does not address
the situation in which there is a communication failure between the LEMS and
the charging point and a new EV arrives during the failure. As no communication
link is available, no new schedule can be created for the new EV. To address
this situation we create a forecasting model that can be hosted at the edge of
the network (i.e. closer to the charging points for EVs) and that, by reading
the environment data (information about the EV, building energy and time)
available locally, predicts the charging schedule over one time step, or makes
adjustments (using pre-defined rules) to an existing schedule generated by the
LEMS, if there is a communication failure.
originally generated schedule. There are two main components that interact to
update the EV charging schedule:
– The forecasting model, developed using the LEMS system, uses attributes such
as the time of day, the EV identity, the state of charge of the EV at that time
and the current energy consumption of the building in order to estimate the
charging schedule for that EV at that time. A number of learning algorithms
were used to determine the schedule, and the model with the lowest error rate
was placed at the edge of the network. Once the model (used to derive a
schedule) is created, a set of rules governing the charging and discharging of
the vehicle is used to update its output, providing an alternative means to
alter the LEMS-generated schedule using local information (in case of network
failure).
– The rule-based component is used to improve the accuracy of the forecasting
model. It takes the LEMS-generated schedule that was saved at the charger and
produces a new charging schedule as output. Once all the
attributes/parameters (as indicated above) are passed as input to the
algorithm, it checks whether there is already a previously generated schedule
for the connected EV and updates it according to the available rules. If there
is no schedule for the connected EV (identified by its EV identifier), this
indicates that the EV arrived after the communication failure and the
LEMS-generated schedule did not take it into account. The first thing the
algorithm checks is the state of charge of the vehicle (SOC_EV_it) at time
instant t: if this is less than the minimum state of charge (Min_SOC), a
charge event is generated (regardless of the LEMS-generated schedule) for that
time step. The algorithm also checks whether the time to departure of the
vehicle is less than 30 min; if so, and the forecasting model is attempting to
discharge the vehicle, the schedule is updated to generate a charging event.

Furthermore, the algorithm checks whether the state of charge of an EV has
reached its maximum limit; if it has, and the forecasting model is still
signalling to charge the vehicle, the new schedule is set to idle for that
vehicle. If none of these conditions is met, the new schedule is set to the one
forecast by the machine learning model.
We simulated a scenario with ten EVs used to reduce the peak demand of one
commercial building. In the simulation we recorded the building energy
consumption at a timestep of 15 min, along with details of each EV such as the
vehicle ID, state of charge, minimum and maximum state of charge, transfer
rate, time of arrival and time of departure. Based on these data the LEMS
creates a charging/discharging schedule for each connected EV, along with a log
file that contains values for each of the input parameters. We developed a
forecasting model using the Weka toolkit. To identify the most appropriate
Algorithm 1. Rule-based schedule update
Input: Schedule generated by the forecasting model for each EV, denoted
Sch_EV_it (where i indexes the EVs and t denotes time instant t);
schedule stored at the charger, generated by LEMS, Old_Sch_EV_it;
state of charge of each EV, SOC_EV_it;
T: time of day;
minimum state of charge for each EV, Min_SOC;
maximum state of charge for each EV, Max_SOC;
identifier of each connected EV, EV_ID_i;
time of (each) EV departure, Time_EV_i
Output: new schedule for each EV, denoted New_Sch_EV_it
1: for i in total_EV do
2:   charge = 3
3:   discharge = −3
4:   idle = 0
5:   if EV_ID_i in Old_Sch_EV_it then
6:     set New_Sch_EV_it = Old_Sch_EV_it
7:   else if Sch_EV_it == discharge and SOC_EV_it < Min_SOC then
8:     set New_Sch_EV_it = charge
9:   else if (Sch_EV_it == discharge or Sch_EV_it == idle) and (Time_EV_i − t) < 30 then
10:    set New_Sch_EV_it = charge
11:  else if Sch_EV_it == charge and SOC_EV_it == Max_SOC then
12:    set New_Sch_EV_it = idle
13:  else
14:    set New_Sch_EV_it = Sch_EV_it
15:  end if
16: end for
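Algorithm 1 can be transcribed into executable form roughly as follows; the dictionary-based representation of EVs and schedules is our own assumption, not the paper's data model:

```python
# Set-point values used in Algorithm 1 (kW, sign convention as above).
CHARGE, DISCHARGE, IDLE = 3, -3, 0

def update_schedule(evs, old_schedule, now):
    """Python transcription of Algorithm 1. `evs` is a list of dicts with
    keys: id, forecast (the model's Sch_EV_it), soc, min_soc, max_soc and
    departure (in minutes). `old_schedule` maps an EV id to the
    LEMS-generated set point saved at the charger; `now` is the current
    time in minutes."""
    new_schedule = {}
    for ev in evs:
        fc = ev["forecast"]
        if ev["id"] in old_schedule:
            # EV was known before the failure: keep the LEMS schedule.
            new_schedule[ev["id"]] = old_schedule[ev["id"]]
        elif fc == DISCHARGE and ev["soc"] < ev["min_soc"]:
            # Below minimum state of charge: force a charge event.
            new_schedule[ev["id"]] = CHARGE
        elif fc in (DISCHARGE, IDLE) and ev["departure"] - now < 30:
            # Departing within 30 min: charge rather than discharge/idle.
            new_schedule[ev["id"]] = CHARGE
        elif fc == CHARGE and ev["soc"] >= ev["max_soc"]:
            # Battery full: override the model's charge signal with idle.
            new_schedule[ev["id"]] = IDLE
        else:
            new_schedule[ev["id"]] = fc
    return new_schedule
```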
classification algorithm for developing the model, we considered both
generative and discriminative techniques. For generative models we looked at a
model that captures conditional dependencies (BayesNet) [35] and one that
assumes conditional independence (NaiveBayes) [38] in the dataset. For
discriminative models we considered decision trees (J48) [36] and a neural
network (multilayer perceptron) [37].
While training each model on the dataset to identify the best classifier, we
used the standard/default configuration for each classifier. Once the models
were trained, we created a log file to simulate a scenario in which the
communication link between the LEMS and the charging station was broken and an
EV arrived after the failure. This log file was used as the testing dataset for
the models. It considered arrivals of the EV at different times with different
states of charge. Based on the state of charge, time of day and energy
consumption of the building, a charging schedule was forecast. The accuracy of
each model is shown in Table 1. The table shows that BayesNet performed
best (85%) among the four models, indicating that the data exhibited
conditional dependencies, while NaiveBayes performed worst (75%). Once the
schedule was created, the rule-based component compared the scheduling
decisions made for the connected EV and updated the schedule.
As we can see the accuracy of the forecasting model increased once the rules
were applied. From Table 1 and Fig. 3, we can observe that the combination of
the Bayesian model and the rule-based component gave the highest accuracy, 87%.
5 Conclusion
the schedule). The general approach described here can also be generalised to
other similar types of applications.
References
1. International Energy Agency: Energy efficiency (2017). http://www.iea.org/about/faqs/
energyefficiency/. Accessed 11 Jan 2017
2. Amarasinghe, K., Wijayasekara, D., Manic, M.: Neural network based downscaling
of building energy management system data. In: IEEE 23rd International Sympo-
sium on Industrial Electronics (ISIE), pp. 2670–2675. IEEE (2014)
3. Bera, S., Misra, S., Rodrigues, J.J.: Cloud computing applications for smart grid:
A survey. IEEE Trans. Parallel Distrib. Syst. 26(5), 1477–1494 (2015)
4. Bonomi, F.: Connected vehicles, the internet of things, and fog computing. In: The
Eighth ACM International Workshop on Vehicular Inter-Networking (VANET),
Las Vegas, pp. 13–15 (2011)
5. Bonomi, F., Milito, R., Zhu, J., Addepalli, S.: Fog computing and its role in the
internet of things. In: Proceedings of the First Edition of the MCC Workshop on
Mobile Cloud Computing, pp. 13–16. ACM (2012)
6. Diaz-Montes, J., AbdelBaky, M., Zou, M., Parashar, M.: CometCloud: enabling
software-defined federations for end-to-end application workflows. IEEE Internet
Comput. 19(1), 69–73 (2015)
7. Diaz-Montes, J., Xie, Y., Rodero, I., Zola, J., Ganapathysubramanian, B.,
Parashar, M.: Exploring the use of elastic resource federations for enabling large-
scale scientific workflows. In: Proceedings of Workshop on Many-Task Computing
on Clouds, Grids, and Supercomputers (MTAGS), pp. 1–10 (2013)
8. Fadlullah, Z.M., Quan, D.M., Kato, N., Stojmenovic, I.: GTES: an optimized game-
theoretic demand-side management scheme for smart grid. IEEE Syst. J. 8(2),
588–597 (2014)
9. Giacobbe, M., Celesti, A., Fazio, M., Villari, M., Puliafito, A.: A sustainable energy-
aware resource management strategy for IoT cloud federation. In: IEEE Interna-
tional Symposium on Systems Engineering (ISSE), pp. 170–175. IEEE (2015)
10. Green, R.C., Wang, L., Alam, M.: Applications and trends of high performance
computing for electric power systems: focusing on smart grid. IEEE Trans. Smart
Grid 4(2), 922–931 (2013)
11. Hong, K., Lillethun, D., Ramachandran, U., Ottenwälder, B., Koldehofe, B.: Mobile
fog: a programming model for large-scale applications on the internet of things. In:
Proceedings of the Second ACM SIGCOMM Workshop on Mobile Cloud Comput-
ing, pp. 15–20. ACM (2013)
12. Ji, L., Lifang, W., Li, Y.: Cloud service based intelligent power monitoring and
early-warning system. In: Innovative Smart Grid Technologies-Asia (ISGT Asia),
pp. 1–4. IEEE (2012)
13. Kim, H., Kim, Y.-J., Yang, K., Thottan, M.: Cloud-based demand response for
smart grid: architecture and distributed algorithms. In: IEEE International Con-
ference on Smart Grid Communications (SmartGridComm), pp. 398–403. IEEE
(2011)
14. Korzhyk, D., Conitzer, V., Parr, R.: Solving stackelberg games with uncer-
tain observability. In: The 10th International Conference on Autonomous Agents
and Multiagent Systems, vol. 3, pp. 1013–1020. International Foundation for
Autonomous Agents and Multiagent Systems (2011)
15. Laustsen, J.: Energy efficiency requirements in building codes, energy efficiency
policies for new buildings. Int. Energy Agency (IEA) 2, 477–488 (2008)
16. Li, X., Lo, J.-C.: Pricing and peak aware scheduling algorithm for cloud computing.
In: Innovative Smart Grid Technologies (ISGT), 2012 IEEE PES, pp. 1–7. IEEE
(2012)
17. Li, Z., Parashar, M.: A computational infrastructure for grid-based asynchronous
parallel applications. In: Proceedings of the 16th International Symposium on High
Performance Distributed Computing, pp. 229–230. ACM (2007)
18. Lohrmann, B., Kao, O.: Processing smart meter data streams in the cloud. In:
2nd IEEE PES International Conference and Exhibition on Innovative Smart Grid
Technologies (ISGT Europe), pp. 1–8. IEEE (2011)
19. Lv, H., Wang, F., Yan, A., Cheng, Y.: Design of cloud data warehouse and its
application in smart grid. In: International Conference on Automatic Control and
Artificial Intelligence (ACAI 2012), pp. 849–852. IET (2012)
20. Madsen, H., Burtschy, B., Albeanu, G., Popentiu-Vladicescu, F.: Reliability in
the utility computing era: towards reliable fog computing. In: 20th International
Conference on Systems, Signals and Image Processing (IWSSIP), pp. 43–46. IEEE
(2013)
21. Maharjan, S., Zhu, Q., Zhang, Y., Gjessing, S., Basar, T.: Dependable demand
response management in the smart grid: a stackelberg game approach. IEEE Trans.
Smart Grid 4(1), 120–132 (2013)
22. Marmaras, C., Javed, A., Cipcigan, L., Rana, O.: Predicting the energy demand
of buildings during triad peaks in GB. Energy Build. 141, 262–273 (2017)
23. Montes, J.D., Zou, M., Singh, R., Tao, S., Parashar, M.: Data-driven workflows in
multi-cloud marketplaces. In: IEEE 7th International Conference on Cloud Com-
puting, pp. 168–175. IEEE (2014)
24. Nishio, T., Shinkuma, R., Takahashi, T., Mandayam, N.B.: Service-oriented het-
erogeneous resource sharing for optimizing service latency in mobile cloud. In:
Proceedings of the First International Workshop on Mobile Cloud Computing &
Networking, pp. 19–26. ACM (2013)
25. National Institute of Standards and Technology. Cyber-physical systems — NIST
(2016). https://www.nist.gov/el/cyber-physical-systems. Accessed 11 Jan 2017
26. Ottenwälder, B., Koldehofe, B., Rothermel, K., Ramachandran, U.: MigCEP: oper-
ator migration for mobility driven distributed complex event processing. In: Pro-
ceedings of the 7th ACM International Conference on Distributed Event-Based
Systems, pp. 183–194. ACM (2013)
27. Pérez-Lombard, L., Ortiz, J., Pout, C.: A review on buildings energy consumption
information. Energy Build. 40(3), 394–398 (2008)
28. United Nations Environment Programme: Why buildings (2016). http://www.
unep.org/sbci/AboutSBCI/Background.asp. Accessed 11 Jan 2017
29. Rajeev, T., Ashok, S.: A cloud computing approach for power management of
microgrids. In: Innovative Smart Grid Technologies-India (ISGT India), 2011 IEEE
PES, pp. 49–52. IEEE (2011)
30. Rusitschka, S., Eger, K., Gerdes, C.: Smart grid data cloud: a model for utilizing
cloud computing in the smart grid domain. In: First IEEE International Conference
on Smart Grid Communications (SmartGridComm), pp. 483–488. IEEE (2010)
31. Sheikhi, A., Rayati, M., Bahrami, S., Ranjbar, A.M., Sattari, S.: A cloud computing
framework on demand side management game in smart energy hubs. Int. J. Electr.
Power Energy Syst. 64, 1007–1016 (2015)
32. Simmhan, Y., Aman, S., Kumbhare, A., Liu, R., Stevens, S., Zhou, Q., Prasanna,
V.: Cloud-based software platform for big data analytics in smart grids. Comput.
Sci. Eng. 15(4), 38–47 (2013)
33. Stojmenovic, I., Wen, S.: The fog computing paradigm: scenarios and security
issues. In: 2014 Federated Conference on Computer Science and Information Sys-
tems (FedCSIS), pp. 1–8. IEEE (2014)
34. Tang, L., Li, J., Wu, R.: Synergistic model of power system cloud computing based
on mobile-agent. In: 3rd IEEE International Conference on Network Infrastructure
and Digital Content (IC-NIDC), pp. 222–226. IEEE (2012)
35. University of Waikato: Bayesnet (2017). http://weka.sourceforge.net/doc.dev/
weka/classifiers/bayes/BayesNet.html
36. University of Waikato: J48 (2017). http://weka.sourceforge.net/doc.dev/weka/
classifiers/trees/J48.html
37. University of Waikato: Mlpclassifier (2017). http://weka.sourceforge.net/doc.
packages/multiLayerPerceptrons/weka/classifiers/functions/MLPClassifier.html
38. University of Waikato: Naivebayes (2017). http://weka.sourceforge.net/doc.dev/
weka/classifiers/bayes/NaiveBayes.html
39. Wei, C., Fadlullah, Z.M., Kato, N., Stojmenovic, I.: On optimally reducing power
loss in micro-grids with power storage devices. IEEE J. Sel. Areas Commun. 32(7),
1361–1370 (2014)
40. Weng, T., Agarwal, Y.: From buildings to smart buildings—sensing and actuation
to improve energy efficiency. IEEE Des. Test 29(4), 36–44 (2012)
41. Wijayasekara, D., Linda, O., Manic, M., Rieger, C.: Mining building energy man-
agement system data using fuzzy anomaly detection and linguistic descriptions.
IEEE Trans. Ind. Inform. 10(3), 1829–1840 (2014)
42. Wijayasekara, D., Manic, M.: Data-fusion for increasing temporal resolution of
building energy management system data. In: 41st Annual Conference of the IEEE
Industrial Electronics Society, IECON 2015, pp. 004550–004555. IEEE (2015)
43. Yaagoubi, N., Mouftah, H.T.: User-aware game theoretic approach for demand
management. IEEE Trans. Smart Grid 6(2), 716–725 (2015)
44. Yang, C.-T., Chen, W.-S., Huang, K.-L., Liu, J.-C., Hsu, W.-H., Hsu, C.-H.: Imple-
mentation of smart power management and service system on cloud computing.
In: 9th International Conference on Ubiquitous Intelligence & Computing and 9th
International Conference on Autonomic & Trusted Computing (UIC/ATC), pp.
924–929. IEEE (2012)
Orchestration for the Deployment of Distributed
Applications with Geographical Constraints
in Cloud Federation
1 Introduction
Nowadays, federated Cloud networking [1] represents an interesting scenario for
the deployment of distributed applications. In this paper, we describe the
results obtained by the Horizon 2020 BEACON Project in terms of Cloud brokering
for the deployment of distributed applications in federated OpenStack-based
Cloud networking environments [2]. In this scenario, we assume that a
distributed application consists of several microservices that can be
instantiated in different federated Cloud providers, and that users can specify
advanced geolocation deployment constraints. In particular, we present an
Orchestration Broker that is able to create ad-hoc service manifest documents
including application deployment instructions targeted at selected federated
Cloud providers, together with the users' requirements. The Orchestration
Broker interacts via RESTful communications with federated OpenStack Clouds
through their own Heat orchestration systems.
The purpose of this paper is not to define a new standard for application
deployment, but to extend the Heat Orchestration Template (HOT) [3] resource
set in order to manage the federated deployment of distributed applications.
The Orchestration Broker analyses the HOT service manifest of an application
and automatically extracts the elements describing how the microservices have
to be deployed in federated OpenStack Clouds via their Heat systems.
An important feature of this approach is that the Orchestrator Broker is able
to select target federated Clouds according to their geographic location. In fact,
c ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 177–187, 2018.
https://doi.org/10.1007/978-3-319-67636-4_19
178 M. Villari et al.
a “borrower”, i.e., a Cloud federation client, can specify exactly the
application requirements along with the geographical locations where the
microservices of a distributed application have to be deployed.
The rest of the paper is organized as follows: Sect. 2 describes related work.
Section 3 presents the Orchestrator Broker design. Section 4 describes implementation
highlights. Section 5 concludes the paper and outlines future work.
3 Architectural Design
The federation management system must manage several OpenStack
Clouds that cooperate with each other according to specific federation agreements.
In order to achieve such a goal, we designed a federation management component
named OpenStack Federation Flow Manager (OSFFM) that acts as Orchestrator
Broker. It is responsible for interacting with OpenStack Clouds under specific
assumptions and for leading the deployment and management of distributed
applications. The OSFFM was designed according to the following assumptions:
– The system is composed of twin Clouds; this means that Virtual Machine
(VM) images, networks, users, key pairs, security groups and other configurations
are the same in each Cloud.
– Each OpenStack Cloud interacts with a component called Federation Agent
that is able to set up OVN tunnels with the other Clouds.
The manager component includes three sub-modules, namely the OSFF Orchestrator
(OSFFM ORC), Monitoring (OSFFM MON) and Elasticity Location Aware
(OSFFM ELA) modules. All actions are performed by these sub-modules through the
jClouds adapter, which interacts with the OpenStack Clouds.
OSFFM ORC. This module is responsible for orchestrating all the tasks required
for deploying a distributed application. In particular, it receives as input
the HOT service manifest including the guidelines for the deployment of a distributed
application, and builds several HOT microservice manifests that are sent to
the Heat modules of specific OpenStack Clouds for the instantiation of the virtual
resources in which the microservices are deployed. In order to do this it:
The OSFFM ORC sub-module uses hash tables to store the results of
its “manifest parsing” sub-elaboration activities; these activities are focused on
retrieving the pieces of federated deployment information and the pieces of information
required to apply elasticity policies to the VMs instantiated for the distributed
application deployment. After such parsing of the HOT service manifest,
OSFFM ORC builds several HOT microservice manifests that are provided
to specific federated Clouds. Figure 3 shows an example of a HOT microservice
manifest. The HOT standard has been extended in order to add new parameters
that specify deployment requirements defined as resources. In this way, it
is possible to compose a complex HOT service manifest by defining resources of
other resources that can be queried by means of a simple “get resource” function
call. In particular, the following deployment requirements have been added: (i)
service placement policies according to location constraints; (ii) location-aware
elasticity rules; (iii) network reachability rules. The main methods involved in
the HOT service manifest instantiation process are:
OSFFM ELA. This module is designed to control and maintain the performance
of the applications deployed in the federation. This goal is achieved by providing
functions that allow resources in the federation to be scaled horizontally.
This module interacts with the target Cloud when a particular condition occurs
in order to trigger a specific action. By interfacing this component with the
monitoring one, it is possible to obtain the parameters needed to verify both
the Cloud infrastructure and the VMs' internal states. When this information
is correlated with information from the other Clouds, the module is able to
make the required scalability decisions. The OSFFM ELA interacts directly with
OSFFM ORC to carry out the decisions made, and with OSFFM MON
to receive status notifications about the monitored conditions.
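As a rough illustration, a threshold-based horizontal scaling decision of the kind OSFFM ELA could apply to monitored CPU loads is sketched below; the class name, thresholds and decision rule are our own assumptions for illustration, not the BEACON implementation.

```java
// Illustrative sketch only: a minimal threshold-based scaling decision.
// All names and thresholds are assumptions, not the BEACON codebase.
public class ElasticityPolicy {
    private final double cpuHighWatermark; // scale out above this average load
    private final double cpuLowWatermark;  // scale in below this average load

    public ElasticityPolicy(double high, double low) {
        this.cpuHighWatermark = high;
        this.cpuLowWatermark = low;
    }

    /** Returns +1 (scale out), -1 (scale in) or 0 (no action) for a set of VM loads. */
    public int decide(double[] vmCpuLoads) {
        double sum = 0;
        for (double l : vmCpuLoads) sum += l;
        double avg = vmCpuLoads.length == 0 ? 0 : sum / vmCpuLoads.length;
        if (avg > cpuHighWatermark) return +1;
        // Never scale in below one VM instance.
        if (avg < cpuLowWatermark && vmCpuLoads.length > 1) return -1;
        return 0;
    }
}
```

In a real deployment the two watermarks would be derived from the location-aware elasticity rules carried in the HOT service manifest.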
state instantiated via the stacks. For the OSFFM architecture, the monitoring solution
is achieved by adapting the monitoring module used in OpenStack, that
is the Ceilometer component, to the federation level, interconnecting the
federated Ceilometers in a hierarchical structure.
The OSFFM was developed in Java and exposes REST APIs so that no platform
constraints are imposed on its usage. In line with the HEAT API, the
service manifests used in our solution are also based on YAML; this makes the OSFFM
able to decompose and compose manifests without problems.
public HashMap<String, ArrayList<ArrayList<String>>> managementgeoPolygon(
        String manName, MDBInt.DBMongo db, String tenant) {
    HashMap<String, ArrayList<ArrayList<String>>> tmp =
            new HashMap<String, ArrayList<ArrayList<String>>>();
    // Retrieve the parsed manifest registered under the given name.
    ManifestManager mm =
            (ManifestManager) OrchestrationManager.mapManifestThr.get(manName);
    // Iterate over all service groups declared in the manifest.
    Set s = mm.table_resourceset.get("OS::Beacon::ServiceGroupManagement").keySet();
    Iterator it = s.iterator();
    boolean foundone = false;
    while (it.hasNext()) {
        String serName = (String) it.next();
        SerGrManager sgm = (SerGrManager) mm.serGr_table.get(serName);
        ArrayList<MultiPolygon> ar = null;
        try {
            // Resolve the geographical polygons referenced by the service group.
            ar = (ArrayList<MultiPolygon>) mm.geo_man.retrievegeoref(
                    sgm.getGeoreference());
        } catch (NotFoundGeoRefException ngrf) { ... }
        ArrayList dcInfoes = new ArrayList();
        for (int index = 0; index < ar.size(); index++) {
            try {
                // Ask MongoDB for the datacenters falling inside each polygon.
                ArrayList<String> dcInfo =
                        db.getDatacenters(tenant, ar.get(index).toJSONString());
                if (dcInfo.size() != 0) {
                    dcInfoes.add(dcInfo);
                    foundone = true;
                }
            } catch (org.json.JSONException je) { ... }
        }
        tmp.put(serName, dcInfoes);
        // No datacenter satisfies the geographical constraints: fail the lookup.
        if (!foundone) return null;
    }
    return tmp;
}
Stack4DeployAction methods that are strictly linked: the first one prepares all the
information needed by the second one, the actual executor of the deployment task.
These methods are used to deploy a stack in the federation according to the
service replication purposes. The performed actions trigger the instantiation of
resources on all the Clouds selected in the service manifest for a particular service
group.
public HashMap<String, ArrayList<Port>> sendShutSignalStack4DeployAction(
        String stackName, OpenstackInfoContainer credential, boolean first, DBMongo m) {
    try {
        // Look up, via RMI, the resources belonging to the given Heat stack.
        Registry myRegistry = LocateRegistry.getRegistry(ip, port);
        RMIServerInterface impl = (RMIServerInterface) myRegistry.lookup("myMessage");
        ArrayList resources = impl.getListResource(credential.getEndpoint(),
                credential.getUser(), credential.getTenant(),
                credential.getPassword(), stackName);
        // Clients for the Nova (compute) and Neutron (networking) services.
        NovaTest nova = new NovaTest(credential.getEndpoint(), credential.getTenant(),
                credential.getUser(), credential.getPassword(), credential.getRegion());
        NeutronTest neutron = new NeutronTest(credential.getEndpoint(),
                credential.getTenant(), credential.getUser(), credential.getPassword(),
                credential.getRegion());
        HashMap<String, ArrayList<Port>> mapResNet = new HashMap<String, ArrayList<Port>>();
        Iterator it_res = resources.iterator();
        while (it_res.hasNext()) {
            String id_res = (String) it_res.next();
            if (!first) {
                // On replica deployments, stop the VM and record its run-time state.
                nova.stopVm(id_res);
                m.updateStateRunTimeInfo(credential.getTenant(), id_res, first);
            }
            // Collect and persist the network ports attached to the resource.
            ArrayList<Port> arPort = neutron.getPortFromDeviceId(id_res);
            mapResNet.put(id_res, arPort);
            Iterator it_po = arPort.iterator();
            while (it_po.hasNext()) {
                m.insertPortInfo(credential.getTenant(),
                        neutron.portToString((Port) it_po.next()));
            }
        }
        return mapResNet;
    } catch (Exception e) {
        ...
        return null;
    }
}
federation:
  type: OS::Beacon::ServiceGroupManagement
  properties:
    name: GroupName
    geo_deploy: { get_resource: geoshape_1 }
    resource:
      groups: { get_resource: A }
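The federation resource above references a geoshape resource through get_resource. As a purely illustrative sketch of what such a resource might declare (the OS::Beacon::GeoShape type name and the polygon property key are our assumptions; only the geo_deploy reference is from the manifest):

```yaml
# Illustrative sketch: a geoshape resource that geo_deploy could reference.
# The OS::Beacon::GeoShape type name and the polygon property are assumptions.
geoshape_1:
  type: OS::Beacon::GeoShape
  properties:
    # WGS84 longitude/latitude pairs delimiting the allowed deployment area
    polygon: [[12.3, 41.8], [12.6, 41.8], [12.6, 42.0], [12.3, 42.0]]
```

Such a polygon is what managementgeoPolygon matches against the datacenters stored in MongoDB.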
Acknowledgment. This work was supported by the European Union Horizon 2020
BEACON project under grant agreement number 644048.
References
1. Moreno-Vozmediano, R., et al.: BEACON: a cloud network federation framework.
In: Celesti, A., Leitner, P. (eds.) ESOCC Workshops 2015. CCIS, vol. 567, pp.
325–337. Springer, Cham (2016). doi:10.1007/978-3-319-33313-7_25
2. Celesti, A., Levin, A., Massonet, P., Schour, L., Villari, M.: Federated networking
services in multiple OpenStack clouds. In: Celesti, A., Leitner, P. (eds.) ESOCC
Workshops 2015. CCIS, vol. 567, pp. 338–352. Springer, Cham (2016). doi:10.1007/
978-3-319-33313-7_26
3. Heat Orchestration Template (HOT) specification. http://docs.openstack.org/
developer/heat/template_guide/hot_spec.html
4. Vernik, G., Shulman-Peleg, A., Dippl, S., Formisano, C., Jaeger, M., Kolodner,
E., Villari, M.: Data on-boarding in federated storage clouds. In: 2013 IEEE Sixth
International Conference on Cloud Computing (CLOUD), pp. 244–251 (2013)
5. Azodolmolky, S., Wieder, P., Yahyapour, R.: Cloud computing networking: chal-
lenges and opportunities for innovations. IEEE Commun. Mag. 51, 54–62 (2013)
6. Celesti, A., Fazio, M., Villari, M.: Enabling secure XMPP communications in fed-
erated IoT clouds through XEP 0027 and SAML/SASL SSO. Sensors 17, 1–21
(2017)
7. Celesti, A., Celesti, F., Fazio, M., Bramanti, P., Villari, M.: Are next-generation
sequencing tools ready for the cloud? Trends Biotechnol. 35, 486–489 (2017)
8. Villari, M., Fazio, M., Dustdar, S., Rana, O., Ranjan, R.: Osmotic computing: a
new paradigm for edge/cloud integration. IEEE Cloud Comput. 3, 76–83 (2016)
9. Mashayekhy, L., Nejad, M.M., Grosu, D.: Cloud federations in the sky: formation
game and mechanism. IEEE Trans. Cloud Comput. 3, 14–27 (2015)
10. Giacobbe, M., Celesti, A., Fazio, M., Villari, M., Puliafito, A.: An approach to
reduce carbon dioxide emissions through virtual machine migrations in a sustain-
able cloud federation. In: 2015 Sustainable Internet and ICT for Sustainability
(SustainIT), Institute of Electrical & Electronics Engineers (IEEE) (2015)
11. Panarello, A., Breitenbücher, U., Leymann, F., Puliafito, A., Zimmermann, M.:
Automating the deployment of multi-cloud applications in federated cloud environments.
In: Proceedings of the 10th EAI International Conference on Performance
Evaluation Methodologies and Tools (2017)
12. Celesti, A., Peditto, N., Verboso, F., Villari, M., Puliafito, A.: DRACO PaaS: a
distributed resilient adaptable cloud oriented platform. In: IEEE 27th International
Parallel and Distributed Processing Symposium (2013)
Web Services for Radio Resource Control
1 Introduction
Mobile Edge Computing (MEC) is a hot topic in 5G. MEC supports network
function virtualization and brings network service intelligence close to the network
edge [1]. MEC enables low-latency communications, big data analysis close
to the point of capture, and flexible network management in response to user
requirements [2,3]. MEC is required for critical communications, which demand
that traffic be processed and applications delivered close to the user [4,5]. MEC provides
real-time network data, such as radio conditions and network statistics,
to authorized applications so that they can offer context-related services that differentiate
the end-user experience. Some of the promising real-time MEC application scenarios
are discussed in [6].
MEC use cases and deployment options are presented in [7]. The European
Telecommunications Standards Institute (ETSI) defined the MEC reference architecture,
where the MEC deployment can be inside the base station or at an aggregation
point within the Radio Access Network (RAN) [8]. Minimal latency for many applications
can be achieved by integrating the MEC server into the base station [9,10].
The communications between applications and services in the MEC server
are designed according to the principles of Service-oriented Architecture (SOA).
The Radio Network Information Services (RNIS) provide information about the
mobility and activity of User Equipment (UE) in the RAN. The information
includes parameters on the UE context and established E-UTRAN Radio Access
Bearer (E-RAB), such as Quality of Service (QoS), Cell ID, UE identities, etc.
This information is available based on the network protocols like Radio Resource
Control (RRC), S1 Application Protocol (S1-AP), and X2 Application Protocol
(X2-AP) [11].
The ETSI standards have identified the required MEC service functionality, but
they do not define Web Service application programming interfaces (APIs). To the best of
our knowledge, there is a lack of research on MEC service APIs, and the related
works consider MEC applications that may use MEC services. In this paper we
propose an approach to design APIs of SOA-based Web Services for access to
radio network information.
The paper is structured as follows. Section 2 provides a detailed Web Service
description, including definitions of data structures, interfaces, interface operations
and use cases. Section 3 describes the functionality required for mapping the Web
Service interfaces onto network protocols. Device state models are described
and formally verified. The conclusion summarizes the authors' contributions and
highlights the benefits of the proposed approach.
The Device Context service provides access to the UE context, including the EPS Mobility
Management (EMM) state, EPS Connectivity Management (ECM) state, RRC
state, UE identities, and Cell-ID. This information is provided through:
The response to a request for a group of devices may contain a full or partial
set of results. The results are provided based on a number of criteria, including the
number of devices for which the request is made and the amount of time required to
retrieve the information. Additional requests may be initiated for those devices
for which information was not provided.
The EMM states describe mobility management states that result from the
Attach and Tracking Area Update (TAU) procedures. The EMM status is of
enumeration type with values of EMM-Deregistered (the device is deregistered and
not accessible) and EMM-Registered (the device is registered to the network).
The ECM states describe the signaling connectivity between the device and the
core network. The ECM status is of enumeration type with values of ECM-Idle
(there is no non-access stratum signaling connection between the device
and the network) and ECM-Connected (there is a non-access stratum signaling
connection). The RRC states describe the connection between the device and the
RAN. The RRC status is also of enumeration type with values of RRC-Idle (there
is no RRC connection between the device and the network) and RRC-Connected
(an RRC connection between the device and the RAN is established). The UE
identity information is represented by the C-RNTI (Cell Radio Network Temporary
Identity), which identifies the RRC connection. The Cell-ID uniquely identifies
the eNodeB that currently serves the device.
The StatusData structure contains the device context information. As this can be
related to a query for a group of devices, the ResultStatus element is used. It is of
enumeration type, with values indicating whether the information for the device
was retrieved or not, or whether an error occurred. Table 1 illustrates the StatusData
elements.
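The enumerated status values described above can be sketched as Java types; the class and field names below are illustrative assumptions, not the Web Service's normative data schema (which Table 1 defines):

```java
// Illustrative sketch of the enumerated status values described in the text.
// Type and field names are assumptions, not the paper's normative API.
public class StatusData {
    public enum EmmStatus { EMM_DEREGISTERED, EMM_REGISTERED }
    public enum EcmStatus { ECM_IDLE, ECM_CONNECTED }
    public enum RrcStatus { RRC_IDLE, RRC_CONNECTED }
    public enum ResultStatus { RETRIEVED, NOT_RETRIEVED, ERROR }

    public ResultStatus result;
    public EmmStatus emm;
    public EcmStatus ecm;
    public RrcStatus rrc;
    public String cellId; // uniquely identifies the serving eNodeB cell
    public int cRnti;     // C-RNTI identifying the RRC connection

    /** A registered device with no signaling connection is in the idle states. */
    public boolean isIdle() {
        return emm == EmmStatus.EMM_REGISTERED
            && ecm == EcmStatus.ECM_IDLE
            && rrc == RrcStatus.RRC_IDLE;
    }
}
```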
Fig. 1. Sequence diagram for on demand access to device context and triggered noti-
fications
The Application queries for the device context and receives it. The
Application generates a correlator and starts triggered notifications. The Web
Service sets up a notification to monitor changes in the device context. A notification
is delivered to the Application when the device context changes. When
the notifications are completed, the Application is notified.
The Device Bearer service provides access to information about the active Radio Access
Bearers (RABs) of a device and allows applications to dynamically manipulate
device RABs. The information about a device's RABs is provided on demand,
periodically, or upon event occurrence. An authorized Application may request
RAB establishment, modification or release.
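A minimal in-memory sketch of these bearer operations (the class name and id-allocation scheme are our assumptions; the real service maps the calls onto RAN procedures through the MEC server) could look like:

```java
import java.util.*;

// Illustrative in-memory sketch of the Device Bearer service operations.
// Names and the id allocation are assumptions for illustration only.
public class DeviceBearerService {
    /** One active Radio Access Bearer of a device: id and QoS Class Identifier. */
    public static class BearerInfo {
        public final int bearerId;
        public int qci;
        public BearerInfo(int id, int qci) { this.bearerId = id; this.qci = qci; }
    }

    private final Map<String, List<BearerInfo>> bearers = new HashMap<>();
    private int nextId = 1;

    /** Application-requested RAB establishment; returns the new bearer id. */
    public int establishBearer(String deviceId, int qci) {
        BearerInfo b = new BearerInfo(nextId++, qci);
        bearers.computeIfAbsent(deviceId, k -> new ArrayList<>()).add(b);
        return b.bearerId;
    }

    /** Application-requested RAB release. */
    public void releaseBearer(String deviceId, int bearerId) {
        bearers.getOrDefault(deviceId, Collections.emptyList())
               .removeIf(b -> b.bearerId == bearerId);
    }

    /** On-demand query of the device's active bearers. */
    public List<BearerInfo> getBearers(String deviceId) {
        return bearers.getOrDefault(deviceId, Collections.emptyList());
    }
}
```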
3 Implementation Issues
As a mediation point between MEC applications and the RAN, the MEC server,
which provides the Web Services, needs to maintain the network and the application
views of the device status. These views need to be synchronized. Furthermore,
the MEC server needs to translate the Web Service interface operations into the
respective events in the network and vice versa.
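In code, such a translation layer can be sketched as a lookup from interface operations to the network procedures they trigger; the mapping below is an illustrative assumption in the spirit of Table 2, not its literal content (the S1-AP procedure names on the right are standard, the operation names on the left are hypothetical):

```java
import java.util.Map;

// Illustrative sketch of the operation-to-network-event translation performed
// by the MEC server. The mapping is an assumption, not Table 2 verbatim.
public class OperationMapper {
    private static final Map<String, String> OP_TO_EVENT = Map.of(
        "establishBearer", "E-RAB SETUP REQUEST",   // S1-AP procedure
        "modifyBearer",    "E-RAB MODIFY REQUEST",
        "releaseBearer",   "E-RAB RELEASE COMMAND"
    );

    public static String toNetworkEvent(String operation) {
        String event = OP_TO_EVENT.get(operation);
        if (event == null) throw new IllegalArgumentException("unknown operation: " + operation);
        return event;
    }
}
```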
Figure 2 shows the device state model as seen by the MEC server.
Table 2 provides a mapping between Web Services operations and network
events.
Figure 3 shows the device state model as seen by the application.
The proposed state model representing the Application view on the device
state includes the following states.
In the AppDeregistered state, the device is not registered to the network. The
respective states in the network are RRC-idle, EMM-deregistered and ECM-idle.
During attachment to the network (Attach), the network notifies the Application
about the change in the device context (N_UEcontext), the device moves to the
AppActive state, and the respective network states are RRC-connected, EMM-registered
and ECM-connected. After a successful mobility management event
without data transfer activity, e.g. TAU_Accept, the network notifies the Application
about the change in the device context (N_UEcontext), and the Application
considers the device to be in the AppIdle state, where the respective network states
are RRC-idle, EMM-registered and ECM-idle. In case of an unsuccessful mobility
management event (e.g. Attach_Reject, TAU_Reject), the Application is notified
(N_MMrej) and the device moves to the AppDeregistered state.
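The transitions just described can be sketched as a small state machine; the states follow the text, while the event names are simplified assumptions:

```java
// Illustrative sketch of the Application-side device state model described in
// the text: AppDeregistered, AppActive and AppIdle, driven by the
// notifications the network sends. Event names are simplified assumptions.
public class AppDeviceStateModel {
    public enum State { APP_DEREGISTERED, APP_ACTIVE, APP_IDLE }
    public enum Event { ATTACH, TAU_ACCEPT, ATTACH_REJECT, TAU_REJECT, DETACH, NEW_TRAFFIC }

    private State state = State.APP_DEREGISTERED;

    public State getState() { return state; }

    public State onEvent(Event e) {
        switch (e) {
            case ATTACH:        // successful attach: device becomes active
            case NEW_TRAFFIC:   // activity moves an idle device back to active
                state = State.APP_ACTIVE; break;
            case TAU_ACCEPT:    // mobility event without data transfer activity
                if (state == State.APP_ACTIVE) state = State.APP_IDLE; break;
            case ATTACH_REJECT:
            case TAU_REJECT:
            case DETACH:        // unsuccessful mobility management or detach
                state = State.APP_DEREGISTERED; break;
        }
        return state;
    }
}
```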
2. In case of detach, or attach reject, or TAU reject, or radio link failure, or
device power off, for Connected ∃ τ11^MEC ∨ τ12^MEC ∨ τ13^MEC ∨ τ14^MEC ∨ τ15^MEC
that leads to Deregistered, and for AppActive ∃ τ6^A ∨ τ7^A ∨ τ8^A that leads
to AppDeregistered.
3. In case of device-initiated bearer establishment/modification/release, for
Connected ∃ τ3^MEC, τ4^MEC, τ5^MEC that leads to Connected, and for the
state AppActive ∃ τ5^A that leads to AppActive.
4. In case of Application-initiated bearer establishment/modification/release,
for Connected ∃ τ6^MEC, τ7^MEC, τ8^MEC, τ9^MEC that leads to Connected, and for
AppActive ∃ τ2^A, τ3^A, τ4^A that leads to AppActive.
5. In case of handover, for Connected ∃ τ10^MEC that leads to Connected, and
for AppActive ∃ τ5^A that leads to AppActive.
6. In case of device inactivity detection, or TAU accept, or Application-initiated
disconnect, for Connected ∃ τ15^MEC ∨ τ16^MEC ∨ τ17^MEC that leads to Idle, and
for AppActive ∃ τ9^A ∨ τ10^A ∨ τ11^A that leads to AppIdle.
7. In case of device-initiated new traffic, or TAU request, or Application-initiated
bearer establishment, for Idle ∃ τ18^MEC ∨ τ19^MEC ∨ τ20^MEC ∨ τ21^MEC that leads
to Connected, and for AppIdle ∃ τ12^A ∨ τ13^A ∨ τ14^A that leads to AppActive.
8. In case of device power off or radio link failure, for Idle ∃ τ22^MEC ∨ τ23^MEC that
leads to Deregistered, and for AppIdle ∃ τ15^A ∨ τ16^A that leads to AppDeregistered.
Therefore D_App and D_N are weakly bisimilar.
4 Conclusion
In this paper we propose an approach to design APIs of Web Services for MEC.
The approach is based on the RNIS provided by the MEC server. Two Web Services
are proposed: the Device Context Web Service and the Device Bearers Web Service.
The Device Context Web Service provides applications with information about
device connectivity, mobility and data transfer activity. The Device Bearers Web
Service provides applications with information about the device's active bearers and
allows dynamic control of the QoS available on the device's data sessions. Web Service
data structures, interfaces and interface operations are defined. Some issues
related to the deployment of MEC service APIs are presented. The MEC server
functionality includes the translation between Web Service operations and network events
(signaled by the respective protocol messages) and the maintenance of device state
models, which have to be synchronized with the Application view of the device state.
A method for formal model verification is proposed.
Following the same approach, other Web Services that use radio network
information may be designed. Examples include access to appropriate up-to-date
information about radio network conditions, which may be used by applications that
minimize round-trip time and maximize throughput for optimum quality of
experience, and access to measurement and statistics information related to the
user plane for video management applications.
References
1. Nunna, S., Ganesan, K.: Mobile Edge Computing. In: Thuemmler, C., Bai, C. (eds.)
Health 4.0: How Virtualization and Big Data are Revolutionizing Healthcare, pp.
187–203. Springer, Cham (2017). doi:10.1007/978-3-319-47617-9_9
2. Gupta, L., Jain, R., Chan, H.A.: Mobile edge computing - an important ingredient
of 5G networks. In: IEEE Softwarization Newsletter (2016)
3. Chen, Y., Ruckenbusch, L.: Mobile edge computing: brings the value back to net-
works. In: IEEE Software Defined Networks Newsletter (2016)
4. Roman, R., Lopez, J., Mambo, M.: Mobile edge computing, fog et al.: a survey
and analysis of security threats and challenges. CoRR abs/1602.00484 (2016)
5. Beck, M.T., Feld, S., Linnhoff-Popien, C., Pützschler, U.: Mobile edge computing.
Informatik-Spektrum 39(2), 108–114 (2016)
6. Ahmed, A., Ahmed, E.: A survey on mobile edge computing. In: 10th IEEE Interna-
tional Conference on Intelligent Systems and Control (ISCO 2016), pp. 1–8 (2016)
7. Brown, G.: Mobile edge computing use cases and deployment options. In: Juniper
White Paper, pp. 1–10 (2016)
8. ETSI GS MEC 003, Mobile Edge Computing (MEC); Framework and Reference
Architecture, v1.1.1 (2016)
9. Satria, D., Park, D., Jo, M.: Recovery for overloaded mobile edge computing.
Future Gener. Comput. Syst. 70, 138–147 (2017)
10. Beck, M., Werner, M., Feld, S., Schimper, T.: Mobile edge computing: a taxon-
omy. In: Sixth International Conference on Advances in Future Internet, pp. 48–54
(2014)
11. 3GPP. TS 36.300 Evolved Universal Terrestrial Radio Access (EUTRA) and
Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall descrip-
tion; Stage 2, Release 14, v14.0.0 (2016)
12. Fuchun, L., Qiansheng, Z., Xuesong, C.: Bisimilarity control of decentralized nonde-
terministic discrete-event systems. In: International Control Conference, pp. 3898–
3903 (2014)
Big Data HIS of the IRCCS-ME Future:
The Osmotic Computing Infrastructure
1 Introduction
Nowadays, the healthcare industry is facing many challenges, such as waste reduction,
the integration of a new generation of electronic medical systems, and the collection
and communication of a huge amount of clinical data in a quick and safe fashion.
Up to now, it has not been easy for the healthcare industry to introduce new
technological improvements into the daily work of the clinical personnel but, currently,
medical and governmental authorities of many countries encourage the adoption
of cutting-edge information technology solutions in healthcare. Typical examples
of famous initiatives include Electronic Health Records (EHR), Remote Patient
Monitoring (RPM) and tools for medical decision making.
The healthcare industry is looking at modern Big Data storage, processing,
and analytics technologies. An analysis by the McKinsey Global Institute [1]
studied Big Data penetration in healthcare, highlighting a good potential
for achieving insights. In the period between 2010 and 2015, it measured that the
2 Related Work
Nowadays, bringing Cloud computing into the healthcare domain is a really challenging
topic. To demonstrate this, in the following we report several scientific works
that aim to improve HIS solutions. In [9] the authors focused on Digital Imaging and
Communications in Medicine (DICOM), a standard for storing and managing
medical images. They proposed a hybrid model for Cloud-based HIS built on
public and private Clouds. The private Cloud was able to manage and store
computer-based patient records and other important information, while the public
Cloud was able to handle management and business data. In order to share
information among different hospitals, the authors adopted a VPN (Virtual Private
Network). In [10] the authors proposed a Cloud-based Tele-Medicine system that,
thanks to wearable sensors, allows patients to send eHealth data, such as Blood
Pressure (BP) and Electrocardiogram (ECG), to specific gateways that
forward them to the Cloud. Here, the data are processed and compared with
existing results. If the system finds matching results, it sends back automatic
feedback; otherwise, the appropriate physician is notified via phone call or
SMS (Short Message Service). Using their PDA/smartphone, physicians can
get patients' data in order to diagnose the disease and send back reports. Four
services for a Healthcare as a Service (HaaS) model have been proposed in [11]:
In [12] the authors presented a hybrid storage solution for the management of
eHealth data, which exploited the synergetic utilization of SQL-like strategies and
NoSQL document-based approaches. Specifically, this scientific work adopted the
proposed solution for the neurologic Tele-Rehabilitation (TR) of patients at home.
An Open Archival Information System (OAIS) for Cloud-based HIS able to manage
Big Clinical Data through a NoSQL column-oriented approach was presented
in [13]. Finally, a Cloud-based next-generation sequencing (NGS) tool has been
proposed in [14]. The authors investigated the existing NGS solutions, highlighting
the missing features necessary to move toward the achievement of
an ecosystem of biotechnology Clouds.
3 Motivation
Nowadays, traditional HIS are composed of several independent subsystems that
perform specific tasks and store personal and medical patients' data in local
repositories. Nevertheless, this kind of configuration presents several issues for
patients, physicians, technicians and administrative staff. From the patient's point
of view, the management of exam outcomes is difficult: patients cannot retrieve
the results of their analyses. Indeed, at present, they have to request them from the
administrative staff of each ward. The mismanagement of the HIS causes the loss of a
huge amount of clinical data generated by both human and machine sources. In this
way, only a small quantity of the gathered data can be analysed. Moreover, this
quantity is managed through spreadsheets, making it difficult to correlate clinical
data and find insights. As mentioned above, the HIS is composed of several
black-box subsystems that require specific servers and hardware/software configurations.
This kind of infrastructure is really expensive in terms of hardware, cooling and
power costs. Furthermore, it is difficult to manage, because update operations
require system administrators to replicate the same tasks several times.
The objective of this scientific work is to describe a Cloud-based HIS, which integrates
daily clinical activities along with digitalization and analysis processes in
order to support healthcare professionals. Our case study is the IT infrastructure
of the IRCCS Centro Neurolesi “Bonino Pulejo” located in Messina (Italy). It
is a scientific institute for recovery and care whose mission in the field of neuroscience
is the prevention, recovery and treatment of individuals with severe
acquired brain injuries, as well as spinal cord and neurodegenerative diseases, by
integrating highly specialized healthcare, technological innovation and higher
education. To this end, the clinical activities need to be divided into two categories:
Production and Research. With reference to Fig. 1, the production side
includes all services that facilitate the administrative and healthcare personnel;
on the other hand, the research side includes all innovative services that support
the IRCCS' healthcare research activities. Moreover, for each category, we
identified several thematic areas, such as Frontend & Communication, Security &
Privacy, Microservices, Big Data and Storage.
All these services are supported by a powerful physical infrastructure. With
reference to Fig. 2, the infrastructure is largely built using traditional systems
because it is required to build a solid foundation for the production line. At the
same time, the research line is supported by the same infrastructure. It is composed
of three layers (Storage, Network and Computation) linked together. Storage
disks are available over the network thanks to the iSCSI protocol. Computation,
instead, is provided with different technologies. Xen Server is the hypervisor
used to provide the traditional virtual environment, and it allows several Virtual
Machines to run for different purposes. Docker and Kubernetes provide a lighter,
container-based virtualization environment.
Following this section, we deepen the implemented services for both the production
and research sides. The only service shared by both sides is the
Identity and Access Management (IAM). It manages users' authentication
and authorization for the whole healthcare system. Indeed, thanks to the
Lightweight Directory Access Protocol (LDAP), our IAM system provides unique
credentials for all users of both production and research Cloud services. In the
following, we discuss in detail both production and research services.
The characteristic shared by these phases is that each of them can be handled
as a stand-alone system but, at the same time, as part of a well-defined healthcare
workflow. Thus, it is easy to think of the phases as microservices that are
dynamically tailored to the hosting smart environments. Osmotic Computing
was born from the dynamic movement of microservices that individually perform
their tasks but together complete an ensemble action. Indeed, like the movement
of solvent molecules through a semipermeable membrane into a region of higher
solute concentration to equalize the solute concentrations on the two sides of the
membrane - that is, osmosis (in the context of chemistry) - in Osmotic Computing
the dynamic management of resources in Cloud and Edge datacenters
evolves toward the balanced deployment of microservices satisfying well-defined
low-level constraints and high-level needs. However, unlike the chemical osmotic
process, Osmotic Computing allows a tunable configuration of the involved resources,
following resource availability and application requirements.
The advent of Osmotic Computing as a management paradigm for microservices
has been enabled by the proliferation of light virtualization technologies (such
as Docker and Kubernetes), as an alternative to traditional hypervisor-based
approaches (such as Xen and VMware). By adapting microservices to the physical
characteristics of the underlying infrastructure through decision-making strategies
that map them onto it, Osmotic Computing reduces waste in terms of systems
administration and energy costs.
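As a toy illustration of the osmotic balancing idea (all names and the rule are our own assumptions, not the IRCCS-Me implementation), a placement decision that lets latency-sensitive microservices "flow" to the Edge while its capacity allows could be sketched as:

```java
// Illustrative sketch of an "osmotic" placement decision: microservices flow
// between Cloud and Edge depending on latency needs and Edge capacity.
// Names and the balancing rule are assumptions for illustration only.
public class OsmoticPlacement {
    /** Returns "edge" or "cloud" for a microservice, preferring the Edge for
     *  latency-sensitive services as long as the Edge has spare capacity. */
    public static String place(boolean latencySensitive, double edgeLoad, double edgeCapacity) {
        if (latencySensitive && edgeLoad < edgeCapacity) return "edge";
        return "cloud";
    }
}
```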
References
1. Big Data: The next frontier for innovation, competition, and productivity. McK-
insey Global Institute, June 2011
2. OECD, Health spending (indicator) (2016). http://dx.doi.org/10.1787/8643de7e-en
3. How big data is changing healthcare. https://goo.gl/R0RIOb
4. Celesti, A., Peditto, N., Verboso, F., Villari, M., Puliafito, A.: DRACO PaaS: a dis-
tributed resilient adaptable cloud oriented platform. In: 2013 IEEE International
Symposium on Parallel Distributed Processing, Workshops and PhD Forum, pp.
1490–1497, May 2013
5. Celesti, A., Fazio, M., Giacobbe, M., Puliafito, A., Villari, M.: Characterizing cloud
federation in IoT. In: 2016 30th International Conference on Advanced Information
Networking and Applications Workshops (WAINA), pp. 93–98, March 2016
6. Villari, M., Fazio, M., Dustdar, S., Rana, O., Ranjan, R.: Osmotic computing: a
new paradigm for edge/cloud integration. IEEE Cloud Comput. 3(6), 76–83 (2016)
7. Enabling microservices: containers & orchestration explained, July 2016. https://
www.mongodb.com/collateral/microservices-containers-and-orchestration-
explained
8. Microservices: The evolution of building modern applications, July 2016. https://
www.mongodb.com/collateral/microservices-the-evolution-of-building-modern-
applications
9. He, C., Jin, X., Zhao, Z., Xiang, T.: A cloud computing solution for hospital infor-
mation system. In: 2010 IEEE International Conference on Intelligent Computing
and Intelligent Systems, vol. 2, pp. 517–520, October 2010
10. Parane, K.A., Patil, N.C., Poojara, S.R., Kamble, T.S.: Cloud based intelligent
healthcare monitoring system. In: 2014 International Conference on Issues and
Challenges in Intelligent Computing Techniques (ICICT), pp. 697–701, February
2014
11. John, N., Shenoy, S.: Health cloud - healthcare as a service (HaaS). In: 2014 Inter-
national Conference on Advances in Computing, Communications and Informatics
(ICACCI), pp. 1963–1966, September 2014
12. Fazio, M., Bramanti, A., Celesti, A., Bramanti, P., Villari, M.: A hybrid storage
service for the management of big e-health data: a tele-rehabilitation case of study.
In: Proceedings of the 12th ACM Symposium on QoS and Security for Wireless
and Mobile Networks, pp. 1–8 (2016)
13. Celesti, A., Fazio, M., Romano, A., Bramanti, A., Bramanti, P., Villari, M.: An
OAIS-based hospital information system on the cloud: analysis of a NoSQL column-
oriented approach. IEEE J. Biomed. Health Inform. 99, 1 (2017)
14. Celesti, A., Celesti, F., Fazio, M., Bramanti, P., Villari, M.: Are next-generation
sequencing tools ready for the cloud? In: Trends in Biotechnology (2017)
15. ownCloud. http://www.owncloud.org
16. The Big Big Data Workbook, Informatica (2016)
17. Swift documentation. http://www.docs.openstack.org/developer/swift
Dynamic Identification of Participatory
Mobile Health Communities
1 University of Bologna, Bologna, Italy
{isam.aljawarneh3,paolo.bellavista,luca.foschini}@unibo.it
2 Universidade do Estado de Santa Catarina, Florianópolis, Brazil
rolt@udesc.br
Abstract. Today’s spread of chronic diseases and the need to control infectious
diseases outbreaks have raised the demand for integrated information systems
that can support patients while moving anywhere and anytime. This has been
promoted by recent evolution in telecommunication technologies, together with
an exponential increase in using sensor-enabled mobile devices on a daily basis.
The construction of Mobile Health Communities (MHC) supported by Mobile
CrowdSensing (MCS) is essential for mobile healthcare emergency scenarios. In
a previous work, we have introduced the COLLEGA middleware, which inte‐
grates modules for supporting mobile health scenarios and the formation of MHCs
through MCS. In this paper, we extend the COLLEGA middleware to address the
need in real time scenarios to handle data arriving continuously in streams from
MHC’s members. In particular, this paper describes the novel COLLEGA support
for managing the real-time formation of MHCs. Experimental results are also
provided that show the effectiveness of our identification solution.
1 Introduction
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 208–217, 2018.
https://doi.org/10.1007/978-3-319-67636-4_22
volunteers and professionals, who are willing to provide instantaneous assistance in case
of emergency, and also patients who are geographically co-located. Participants may
include health staff, friends, neighbors, and passers-by who can provide instant
aid or contribute to the patient's rescue. As a simplified scenario, patients with illnesses
such as cardiopathy and asthma, who may need to raise an alert anytime and anywhere, would
wear sensor-enabled devices equipped with some kind of network connectivity
(Wi-Fi, Bluetooth, LTE, etc.) in order to be able to send notifications in case of emer-
gency. On the other side, passing-by volunteers, who are relatively close to
that patient, and for whom specific constraints apply, would be notified to provide
suitable assistance. Participants can either be trained enough to give
instantaneous medical care, or act as collectors of relevant information to hand over
to the official medical staff upon arrival.
In the last decade, the use of Mobile CrowdSensing (MCS) has increased vastly,
promoting a participatory approach to the management of MHCs. MCS is a community
sensing method that collects information using people's sensor-equipped devices [1].
The purpose of these sensing technologies is not only to send emergency alerts, but also
to feed specialized database servers with information necessary for disease diagnosis
and management. For instance, MCS users can feed the system with measurements
of air pollution or allergens in certain locations of a city, which,
in turn, can alert patients with asthma to avoid those areas. MCS support can also be
useful for proposing sensing activities to users belonging to MHCs based on their loca-
tion, like taking photos or checking the availability of defibrillators, pharmacies and
medical facilities within a city.
However, despite its increased adoption, many technological challenges hinder a
proper implementation of participatory MHCs: wireless-enabled monitoring of
health indicators, location discovery systems for identifying the patient's position
when help is urgently needed, and appropriate crowdsensing platforms for supporting
participatory mobile healthcare, to name but a few.
Specifically, today it is essential to find solutions that can dynamically form appro-
priate MHCs nearly on-the-fly, upon arrival of an emergency event; these offer
a foundation for dynamic interaction among participants, which in turn facilitates the quick
decisions that emergency situations require. We define the main requirements that
should be met in order to speed up the decision-making process. Current
methods fall short of achieving these requirements, which calls for a novel contribution
to accomplish dynamic healthcare scenario-specific requirements.
Recently, a novel approach for analyzing large streams of data has emerged: the
Apache Spark platform, referred to as Spark for short hereafter. Spark has
many features, including fast processing of streaming data and parallel dynamic anal-
ysis [2]. These features make Spark an excellent candidate for
addressing the requirements of our specific mobile healthcare scenarios.
However, our main focus in this paper is not Spark as such. Rather, we are
interested in a platform that has been built on top of Spark specifically for
graph processing, the so-called GraphX. Its authors claim that it outperforms
traditional graph-processing paradigms and integrates easily
with other platforms, which may need to act on the constructed graphs for more
210 I.M. Aljawarneh et al.
specific analysis [3]. Spark, through its GraphX implementation, facilitates the dynamic
construction of MHCs. Compared to traditional programming paradigms, it is the
best choice currently available for forming MHCs, for several reasons. First, it enables near
real-time formation of MHCs by applying GraphX functionalities. Second, it easily
enables the system to apply intelligent algorithms to the constructed MHCs and then
find the best rescue plan for the specific situation of a patient in an
emergency. For example, this includes, but is not limited to, discovering the most suitable
passing-by participant, and discovering the best hospital near the patient's location
that hosts medical staff and equipment appropriate for the patient's specific
case.
As far as we know, there are currently no integrated platforms capable of
achieving all of the requirements for dynamic mobile healthcare. In this paper, we extend
our middleware architecture, called COLLEGA (COLLaborative Emergency Group
Assistant) [4], with an innovative MHC formation support. Based on the analysis of
requirements for dynamic MHC creation, we have developed and integrated a Spark-
based community detection support for MHCs.
The rest of the paper is divided into the following sections. First, we provide some
background on participatory MCS and its application in the construction of MHCs. This
is followed by detailed explanations of the COLLEGA middleware and our community
detection support based on Spark. In the last sections, we present experimental results and
conclude the paper, reporting future development directions.
2 Background
have wireless sensors attached to their clothes or implanted under the skin for observing
vital body health indicators, constructing a Wireless Body Area Network (WBAN).
Patients' smart devices send alarms in case of emergency while moving.
On the other side, COLLEGA enables potential passing-by participants to receive those
alarms, with information including the type of emergency and helpful indications for
providing first aid.
COLLEGA exploits our previous experience with the ParticipAct middleware [16] and
the Proteus access control model described in [17]. In addition, new mobile healthcare-
specific functions are added here, as detailed in the following.
The Monitoring System (MOS) is a module interfaced with a patient's WBAN
for collecting and analyzing sensor data, thereby sending emergency alerts and coor-
dinating communications with the patient. In addition, MOS is responsible for
suggesting first-aid actions to the patient according to off-the-shelf control plans.
The Emergency Context Analyzer (ECA) is responsible for combining data received
from the MOS module with the patient's medical history, possibly retrieved from a remote
database, and then comparing them with similar cases stored in a distributed knowledge base,
to analyze and classify the severity degree of the emergency situation. Thereafter, ECA
selects the most suitable control plan, which contains tolerance-limit values for each
event severity degree. It also chooses the most appropriate actions to be performed for
a specific level of severity, the equipment required to accomplish every action, and the set
of skills that a potential participant should have.
The CrowdSensing module (CSP) encapsulates a variety of mobile crowdsensing-
related functionalities. More specifically, CSP is responsible for discovering the user's
status, such as walking or running. It also detects the user's location through geolocali-
zation functions. Furthermore, it uses geofences to detect user proximity relative to
a specific geographical point, and assigns tasks to users entering given
geofences.
The Participant Context Analyzer (PCA) dynamically selects potential participants
from an established MHC. PCA is also responsible for dynamically detecting commun-
ities through the Dynamic Community Detection (DCD) sub-module. DCD receives
appropriate data from CSP, joins them with corresponding data in the knowledge base,
and then applies a community detection algorithm to construct MHCs. PCA
is also responsible for choosing the most appropriate participant depending on several
factors, including the control plan to be performed.
The Virtual Community Manager (VCM) distributes tasks to the identified partici-
pants, obtains their acceptance, provides a set of instructions, and asks the security support
for permission to access the patient's medical data. VCM also gathers data throughout the
whole cycle of an emergency situation in order to update the user's medical history and the
knowledge base.
The core task of the Security Framework (SFW) is to allow the patient to set access
control policies, and thereby to enforce mechanisms for accessing the patient's personal data.
SFW also ensures the confidentiality of the user's health data during transmission to
participants' devices.
In order to realize the modules of the COLLEGA middleware, it is essential to
implement and deploy related software packages on both the client and server sides. In this paper,
we focus on MHC construction and detection using Spark GraphX.
According to the main requirements detailed in Sect. 2.1, we have designed the DCD
sub-module and integrated it with the PCA module, aiming to meet those requirements
as detailed in the following sub-section.
finding the nearest hospital that contains medical equipment relevant to assist that
patient, and possibly finding the best way to move the patient from the current location to
the identified hospital. This requires applying data mining and machine learning
techniques and obtaining results within minimal timeframes.
The responsibility of the DCD module is to discover potential nearby participants who
can provide first aid. The module comprises three sub-modules deployed on a distributed
system. The first sub-module, deployed on the patient's device, is responsible for generating an
alert signal when an emergency is detected by the MOS module. The alert is
implemented as a formatted packet that encapsulates information such as an identifier
of the health problem type and the GPS coordinates of the current patient location. The second
sub-module is deployed on a remote server running Spark and implements a community
detection algorithm (e.g., LPA). The server-side module receives the alert, analyzes
its components, forms MHCs, and applies data mining and machine learning algorithms on
them to discover the best rescue plan. The third sub-module is deployed on the partici-
pant's device and is always listening for alerts. In fact, the listening mode is adaptable
according to the participant's preferences. For example, a participant may prefer to
receive alerts through messaging services, in order to avoid exhausting the battery and
the mobile device's resources. After completion of the analysis, the server sends the
potential participant the information necessary to commence first aid.
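The alert packet exchanged between the first and second sub-modules could, for example, be encoded as follows; the field names and the JSON encoding are assumptions for illustration, not COLLEGA's actual wire format.

```python
# Illustrative sketch of the alert packet described above: an identifier
# of the health-problem type plus the GPS coordinates of the patient.
import json

def build_alert(problem_type_id, lat, lon):
    """Patient side: serialize an emergency alert for the server-side DCD."""
    return json.dumps({"problem_type": problem_type_id,
                       "gps": {"lat": lat, "lon": lon}})

def parse_alert(packet):
    """Server side: recover the alert components for analysis."""
    return json.loads(packet)

pkt = build_alert(problem_type_id=3, lat=44.4949, lon=11.3426)  # illustrative coordinates
assert parse_alert(pkt)["problem_type"] == 3
```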
4 Experimental Results
In this section, we present the results obtained from applying a community detection algo-
rithm using Spark's GraphX. Static LPA has been tested using two implementations.
The first is a conventional (vanilla) implementation of LPA that uses traditional
programming modules implemented with Java 1.8 libraries. The
second, instead, is purely based on Spark's GraphX. We have generated synthetic
data that model the relationships between participants and patients. In particular, the gener-
ated data consisted of tables; each table contains two columns, and each column contains
a set of nodes. This means that each row in a table models a relationship on a patient-
participant or participant-participant basis.
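The synthetic edge tables and the LPA baseline can be sketched in a few lines; this pure-Python version (asynchronous updates, illustrative tie-breaking) is only a conceptual illustration, not the Java or GraphX code used in the experiments.

```python
# Minimal, single-machine sketch of label propagation (LPA) run on
# synthetic two-column edge tables like those described above.
from collections import Counter

def lpa(edges, iterations=10):
    """Each node repeatedly adopts the most frequent label among its neighbors."""
    neighbors = {}
    for u, v in edges:
        neighbors.setdefault(u, []).append(v)
        neighbors.setdefault(v, []).append(u)
    labels = {node: node for node in neighbors}          # start: own label
    for _ in range(iterations):
        for node in neighbors:                           # asynchronous updates
            labels[node] = Counter(
                labels[n] for n in neighbors[node]).most_common(1)[0][0]
    return labels

# Two internally connected patient-participant groups -> two communities.
edges = [("p1", "a"), ("p1", "b"), ("a", "b"),
         ("p2", "c"), ("p2", "d"), ("c", "d")]
communities = lpa(edges)
assert communities["a"] == communities["b"] == communities["p1"]
assert communities["c"] == communities["d"] == communities["p2"]
```

The GraphX version parallelizes the same per-node update across a cluster, which is what makes the near real-time formation of MHCs feasible at scale.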
Our testing environment consists of one machine hosting a 64-bit Linux Ubuntu oper-
ating system, with four Intel Core 2.20 GHz processors and 4 GB of RAM. In order for the
test to be fair with respect to the centralized vanilla LPA implementation, we
decided to test on a single machine, taking into consideration that the conventional
libraries cannot run on a parallel and distributed platform. The goal of our tests is to
evaluate the scalability of the two LPA implementations, namely, the evolution of the
running time as we increase the network size.
As depicted in Fig. 2, GraphX-based LPA implementation outperforms vanilla LPA
implementation in terms of running time, even though both implementations act in a
near linear fashion.
We also believe that applying caching mechanisms to the GraphX-based LPA will
further reduce the running time, which means that the formation of MHCs can rely
effectively on caching to improve overall performance and meet the near-
real-time objective. The fact that any application built with Spark libraries
can be executed in a parallel fashion encouraged us to adopt GraphX. The running time
shown in the figure should decrease dramatically when the algorithm is executed
on a cluster or in a cloud.
This simplified scenario exhibits the efficiency of adopting Spark and GraphX for
community detection in mobile healthcare scenarios. Moreover, it is then a
streamlined process to apply a machine learning algorithm to the obtained result in order
to identify the most suitable participant in case of emergency. For instance, algorithms
can be applied to discover the shortest path to the most suitable hospital.
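As an illustration of that last step, a plain Dijkstra search over a weighted graph can select the closest suitable hospital; the graph, edge weights (e.g., travel times), and hospital names below are purely hypothetical.

```python
# Dijkstra-based selection of the nearest target among candidate hospitals.
import heapq

def nearest_target(graph, source, targets):
    """Return (best_target, cost) among `targets` reachable from `source`."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, w in graph.get(node, []):
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))
    reachable = [(dist[t], t) for t in targets if t in dist]
    if not reachable:
        return None, None
    cost, best = min(reachable)
    return best, cost

graph = {"patient": [("junction", 2)],
         "junction": [("hospital_A", 5), ("hospital_B", 1)]}
best, cost = nearest_target(graph, "patient", ["hospital_A", "hospital_B"])
assert (best, cost) == ("hospital_B", 3)
```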
In this paper, we have introduced the COLLEGA middleware and presented its
novel function to dynamically discover potential participants who can provide first aid to
co-located patients. Further, we have incorporated a recent technology for the construc-
tion of MHCs into COLLEGA. To validate this incorporation, we have tested the
middleware with synthetic data and compared the performance of MHC construction
against a conventional paradigm.
Encouraged by the obtained results, we are currently exploring several ongoing work direc-
tions. First, despite the fact that the incorporated method for MHC formation outperforms its
predecessors, it is worth noticing that only synthetic testing scenarios have been consid-
ered; hence, we plan to test with benchmark data in order to strengthen the evaluation. In
addition, data mining and machine learning algorithms will be applied to the constructed
MHCs.
Acknowledgments. This research was supported by the program CAPES- Pesquisador Visitante
Especial - 3º Cronograma - Chamadas de Projetos nº 09/2014 and by the Sacher project
(no. J32I16000120009) funded by the POR-FESR 2014-20 through CIRI.
References
1. Ganti, R.K., Ye, F., Lei, H.: Mobile crowdsensing: current state and future challenges. IEEE
Commun. Magaz. 49, 32–39 (2011)
2. Zaharia, M., Chowdhury, M., Franklin, M.J., Shenker, S., Stoica, I.: Spark: cluster computing
with working sets. In: Presented at the Proceedings of the 2nd USENIX Conference on Hot
Topics in Cloud Computing, Boston, MA (2010)
3. Xin, R.S., Gonzalez, J.E., Franklin, M.J., Stoica, I.: GraphX: a resilient distributed graph
system on Spark. In: Presented at the First International Workshop on Graph Data
Management Experiences and Systems, New York (2013)
4. Rolt, C.R.D., Montanari, R., Brocardo, M.L., Foschini, L., Dias, J.D.S.: COLLEGA
middleware for the management of participatory mobile health communities. In: 2016 IEEE
Symposium on Computers and Communication (ISCC), pp. 999–1005 (2016)
5. Alali, H., Salim, J.: Virtual communities of practice success model to support knowledge
sharing behaviour in healthcare sector. Procedia Technol. 11, 176–183 (2013)
6. El Morr, C.: Mobile virtual communities in healthcare: the chronic disease management
case. In: Sabah, M., Jinan, F. (eds.) Ubiquitous Health and Medical Informatics: The Ubiquity
2.0 Trend and Beyond, pp. 258–274. IGI Global, Hershey (2010)
7. Chorbev, I., Sotirovska, M., Mihajlov, D.: Virtual communities for diabetes chronic disease
healthcare. Int. J. Telemed. Appl. 2011, 11 (2011)
8. Morr, C.E.: Mobile virtual communities in healthcare: self-managed care on the move. In:
Presented at the Third IASTED International Conference on Telehealth, Montreal, Quebec,
Canada (2007)
9. Zhao, Z., Feng, S., Wang, Q., Huang, J.Z., Williams, G.J., Fan, J.: Topic oriented community
detection through social objects and link analysis in social networks. Knowl.-Based Syst.
26, 164–173 (2012)
10. Akoglu, L., Tong, H., Koutra, D.: Graph based anomaly detection and description: a survey.
Data Mining Knowl. Disc. 29, 626–688 (2015)
11. Raghavan, U.N., Albert, R., Kumara, S.: Near linear time algorithm to detect community
structures in large-scale networks. Phys. Rev. E 76(3 Pt 2), 036106 (2007)
12. Yang, J., McAuley, J., Leskovec, J.: Community detection in networks with node attributes.
In: 2013 IEEE 13th International Conference on Data Mining, pp. 1151–1156 (2013)
13. Lei, T., Huan, L.: Community Detection and Mining in Social Media. Morgan & Claypool,
San Rafael (2010)
14. Malewicz, G., Austern, M.H., Bik, A.J.C., Dehnert, J.C., Horn, I., Leiser, N., et al.: Pregel: a
system for large-scale graph processing. In: Presented at the Proceedings of the 2010 ACM
SIGMOD International Conference on Management of data, Indianapolis, Indiana, USA
(2010)
15. Lan, S., He, G., Yu, D.: Relationship analysis of network virtual identity based on spark. In:
2016 8th International Conference on Intelligent Human-Machine Systems and Cybernetics
(IHMSC), pp. 64–68 (2016)
16. Cardone, G., Cirri, A., Corradi, A., Foschini, L.: The participact mobile crowd sensing living
lab: the testbed for smart cities. IEEE Commun. Magaz. 52, 78–85 (2014)
17. Toninelli, A., Montanari, R., Kagal, L., Lassila, O.: Proteus: a semantic context-aware
adaptive policy model. In: Eighth IEEE International Workshop on Policies for Distributed
Systems and Networks (POLICY 2007), pp. 129–140 (2007)
Securing Cloud-Based IoT Applications
with Trustworthy Sensing
1 Introduction
Ubiquitous sensor networks together with cloud computing and storage have
played a vital role in enabling numerous IoT applications permeating in domains
such as health-care, environmental monitoring, natural disaster detection, and
urban planning. A generic IoT application infrastructure comprises spatially dis-
tributed sensor nodes and a cloud-based server that collects the sensed data from
the nodes and processes it to extract contextual information, which
is offered as a service to the end-users. The growing software stack on today's
sensor nodes and widespread use of public networks (e.g., Internet) for communi-
cations make these cloud-based applications more vulnerable and more attractive
for attackers. Thus, the reliability of the services offered by these applications
critically depends on the “trustworthiness” of the sensed data.
A typical sensor node comprises a sensing unit (also referred to as a sensor)
that is connected to a host processor. A sensing unit typically consists of sensing
circuitry and a lightweight MCU (known as the sensor controller), whereas a host
processor is mostly a powerful processor that runs an operating system (OS).
On top of the OS, the specific applications are deployed. Sensed data pollution
attacks [1–3], which aim to manipulate and fabricate sensor readings, can be
launched using either the software or the hardware of the sensor node, as illustrated
in Fig. 1. An adversary may exploit security bugs (e.g., the Android Fake ID and
Master Key vulnerabilities) to inject malware into the host OS, physically tamper
with the sensor hardware, or falsify sensor data by modifying the sensor firmware.
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 218–227, 2018.
https://doi.org/10.1007/978-3-319-67636-4_23
Fig. 1. A high-level sensor node stack, depicting sensed data pollution attacks due to
vulnerable host OS, hardware tampering and firmware modification of the sensors.
220 I. Haider and B. Rinner
The rest of the paper is organized as follows: Sect. 2 reviews the state of the
art in trustworthy sensing. Section 3 introduces the proposed scheme and provides
detailed constructions of the building blocks to achieve non-repudiation
of sensed data and integrity of sensor hardware and firmware. Section 4 evaluates
our scheme in terms of required logic area, latency and storage. Section 5
concludes this paper.
2 Related Work
Research on trustworthy sensing has mainly focused on the integration of trusted
platform modules (TPM) and other secure cryptoprocessors to the sensors or
host devices. The anonymous attestation feature of TPM is used to attest the
sensed data. Early work on trustworthy sensing [2] was motivated by participatory
sensing. TPM was proposed for mobile devices to attest the sensed data.
Saroiu and Wolman [1] proposed the integration of TPM into the mobile device
sensors which may not be an economical solution for resource constrained embed-
ded applications. Moreover, TPMs are vulnerable to physical attacks. Winkler
et al. [5] used TPM to secure embedded camera nodes. Potkonjak et al. [6]
proposed an alternative approach for the trusted flow of information in remote
sensing scenarios that employed public physically unclonable functions (PPUFs).
Despite similar names, PUFs and PPUFs are fundamentally different primitives.
PPUFs are hardware security primitives which can be modeled by algorithms
of high complexity, whereas PUFs cannot. The security of a PPUF relies on the
fact that the PPUF hardware output is many orders of magnitude faster than its software
counterpart. The main drawback of this approach is that current PPUF designs
involve complex circuits requiring high measurement accuracy, which slows down
the authentication process; therefore it is not a scalable solution. Interest-
ingly, some recent research efforts have led to the successful identification of PUF
behavior on some sensors. Rosenfeld et al. [7] introduced the idea of a sensor
PUF, whereby the PUF response depends on the applied challenge as well as
the sensor reading.
The incorporation of TPMs into sensors and host devices requires extensive
hardware modifications and introduces significant overhead. Despite the wide-
spread deployment of TPMs in laptops, desktops, and servers for over a decade,
TPMs have not yet found their way into sensors or embedded host devices. Pro-
tocols based on complex PPUF primitives are slow, and have limited scalability.
Fig. 2. Our scheme for trustworthy sensing. The readings are signed inside the sensor
using a key that is inseparably bound to the sensor by the on-chip PUF. The sensor-
based security modules are marked in green. (Color figure online)
key to each sensor. All three components of our scheme - the PUF-based Secure
Key Generation & Storage Framework, the PUF-based Cert-IBS, and Verified Boot -
are realized on the sensor as depicted in Fig. 2, and we refer to the resulting
sensor as a PUF-based Trusted Sensor. Each component requires an enrollment
phase, in which an interactive protocol is performed between a trusted authority
and the sensor before the sensor is deployed in the field.
the sensor (ii) determines the helper data W by executing Gen(r, sk) of Table 1
and (iii) creates a certificate on the identity and public key of the sensor using
signing algorithm Sign of the SS i.e., cert ← (pk, Signmsk (pk, I)). Helper data
W and the certificate cert are stored in the sensor memory. The user key usk is
given by the PUF-bound secret key sk and the certificate cert (see Fig. 3 (left)).
Fig. 3. Enrollment (left) and sensed data attestation (right) phases of PUF-based
Cert-IBS scheme
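The Gen/Rep interplay behind the enrollment phase can be illustrated with a toy code-offset construction; the 5x repetition code below stands in for the BCH and concatenated codes evaluated later in the paper, and all parameters are illustrative, not those of the actual framework.

```python
# Toy sketch of binding a key to a noisy PUF response via helper data:
# Gen computes W = response XOR Encode(key); Rep decodes the key back
# from a noisy re-measurement of the same response. Bits are int lists.
import secrets

REP = 5  # each key bit repeated 5 times; majority vote corrects <= 2 flips

def gen(puf_response, key):
    """Enrollment: helper data W = PUF response XOR Encode(key)."""
    codeword = [b for bit in key for b in [bit] * REP]
    return [r ^ c for r, c in zip(puf_response, codeword)]

def rep(noisy_response, helper):
    """Reconstruction: decode (noisy response XOR W) by majority vote."""
    codeword = [r ^ w for r, w in zip(noisy_response, helper)]
    return [int(sum(codeword[i:i + REP]) > REP // 2)
            for i in range(0, len(codeword), REP)]

key = [1, 0, 1, 1]
response = [secrets.randbelow(2) for _ in range(len(key) * REP)]
W = gen(response, key)
noisy = list(response)
noisy[0] ^= 1
noisy[7] ^= 1            # two bit flips in different repetition blocks
assert rep(noisy, W) == key
```

Note that W reveals nothing about the key on its own without the PUF response, which is the property that lets it be stored in plain sensor memory.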
In our scheme, sensed data attestation is performed in the firmware of the
sensor controller (see Fig. 2). Therefore, during the enrollment phase of the Ver-
ified Boot, the trusted authority binds the legitimate firmware (also responsible
for the sensed data attestation) to the sensor I using the on-chip PUF as fol-
lows: given a sensor controller with a two-stage boot chain, i.e., the bootloader
and the firmware, the bootloader is modified to additionally compute the message
authentication code (MAC) of the legitimate firmware (hFW) and store it in the
immutable memory available on the sensor controller. The scheme assumes that
one-time programmable memory such as OTPROM, MROM or PROM is avail-
able on the sensor controller. After the sensor is deployed for sensing, Verified
Boot verifies the integrity of the sensor firmware at every power-up as follows: the
bootloader generates the key, calculates a fresh hash value of the firmware and
compares it with the reference hash value stored in the ROM during enrollment.
Verified Boot thus resists sensor firmware modification attacks: if a modifi-
cation of the sensor firmware is detected during power-up, the boot process is
aborted.
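The power-up check can be sketched as follows; plain SHA-256 stands in for the PUF-keyed digest of the actual design, so this is only an illustration of the control flow.

```python
# Minimal sketch of Verified Boot: at power-up the bootloader recomputes
# the firmware hash and compares it with the reference value written to
# one-time-programmable memory during enrollment.
import hashlib

def enroll(firmware: bytes) -> bytes:
    """Enrollment: compute the reference hash h_FW to burn into OTP ROM."""
    return hashlib.sha256(firmware).digest()

def verified_boot(firmware: bytes, reference: bytes) -> bool:
    """Power-up: boot only if the fresh hash matches the stored reference."""
    return hashlib.sha256(firmware).digest() == reference

h_fw = enroll(b"legitimate firmware image")
assert verified_boot(b"legitimate firmware image", h_fw)
assert not verified_boot(b"tampered firmware image", h_fw)   # boot aborted
```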
The correctness of the trustworthy sensing scheme follows from the correctness of
the PUF-based Cert-IBS scheme. Since any compromise of the trusted authority
nullifies the trust and non-repudiation guarantees on the sensed data, we empha-
size that the offline nature of the authority in our scheme greatly reduces the risk of
compromise. The scheme withstands sensed data corruption attacks due to
(i) a compromised host device OS, (ii) tampered sensor hardware and (iii) modi-
fied sensor firmware. The host OS receives signed sensor readings and the corre-
sponding certificate. In order to inject fabricated data at the OS level, an attacker
has to produce a valid signature-certificate pair, i.e., a valid PUF-based Cert-IBS
signature. A uf-cma secure PUF-based Cert-IBS implies that there is only negligible
probability that an attacker produces a valid PUF-based Cert-IBS signature.
Since the PUF behavior corresponds to the underlying hardware, tampered
sensor hardware results in the generation of an invalid secret key, leading to
invalid signatures. Lastly, offline attacks that modify the sensor firmware are
detected by the Verified Boot.
Various PUF sources are inherent to a typical sensor, including the SRAM PUF, the ring
oscillator (RO) PUF, and sensor-specific PUFs [7,11,12]. We aim to identify PUF
sources that are commonly available on most sensors, e.g., SRAM and
RO PUFs. We implemented PUFs on three platforms of varying complexity:
(i) Atmel ATMEGA328P, a lightweight 8-bit MCU, (ii) ARM Cortex M4, a
32-bit MCU, and (iii) Xilinx Zynq7010 SoC with re-programmable logic and
a dual-core ARM Cortex A9. These platforms are perfectly suitable as sensor
controllers for a wide range of sensors. The power-up states of the SRAM cells on
the ATMEGA328P and the ARM Cortex M4 show PUF behavior. Figures 4(a)
and (b) depict the error rate (measured as intra-Hamming distance (HDintra))
and the non-uniform distribution (measured as Hamming weight (HW)) of 100
PUF responses obtained at room temperature. We implemented the RO PUF in
the FPGA area of Xilinx's Zynq7010 SoC. We characterized the RO PUF for HDintra
and HW from 800 responses obtained over a temperature range of 0–60 °C,
as depicted in Fig. 4(c). The maximum error rates HDintra(max) for the three
PUFs are approximately 7.2%, 9.16%, and 6.97%, respectively. We therefore designed
the framework of Table 1 to correct a 10% error rate. The error-correcting code
determines the number of required PUF response bits and hence the size of the PUF,
so we experimented with two codes: (i) a simple code, BCH(492, 57, 171), and (ii) a
concatenated code, Reed-Muller(16, 5, 8) combined with Repetition(5, 1, 5). The
resources consumed by the SRAM PUF-based framework on the ATMEGA328P MCU
(Arduino board) and by the RO PUF-based framework on Xilinx's Zynq7010 SoC
(MicroZed board) for a 128-bit key are summarized in Table 2.
Fig. 4. HDintra (%) and HW (%) of the PUF responses on the three platforms (plot data not recoverable as text).
Table 2. Implementation results of the PUF-based 128-bit key generation and storage
framework for the sensors
Our prototype PUF-based trusted sensor of Fig. 2 comprises an OV5642 image sensor and a Zynq7010 SoC as the sensor controller. We evaluate the storage, logic-area, and latency overheads and summarize the results in Table 3.
Storage Requirements. The sensor I needs to store (i) the helper data W and the certificate cert of the PUF-based Cert-IBS scheme and (ii) the firmware hash value hFW for the Verified Boot. For the RO PUF-based framework, the helper data W ≈ 1105 bits. The cert comprises pk and Signmsk(I, pk). We implemented the BLS signature scheme, where pk ≈ 320 bits and Signmsk(I, pk) ≈ 160 bits, which amounts to 480 bits. hFW is a 256-bit hash value computed using SHA-256. Therefore, the total storage requirement on a sensor for the asymmetric version of our scheme is not more than 1841 bits (≈230 bytes).
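As a sanity check, the per-sensor storage budget adds up as follows (all values taken from the text):

```python
# Non-volatile storage budget per sensor (asymmetric scheme).
helper_data_W = 1105   # RO PUF helper data, bits
pk            = 320    # BLS public key, bits
sign_msk      = 160    # Sign_msk(I, pk), bits
firmware_hash = 256    # SHA-256 of the firmware, bits

cert  = pk + sign_msk                      # 480-bit certificate
total = helper_data_W + cert + firmware_hash
print(f"{total} bits ≈ {total / 8:.0f} bytes")   # 1841 bits ≈ 230 bytes
```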
Latency. PUF-based key generation and Verified Boot are performed at start-up, so run-time delay overhead is incurred only by the sensed-data attestation phase of the PUF-based Cert-IBS scheme. For sensed-data attestation, we used the open-source pairing-based cryptography library and measured a latency of 6.27 ms on the ARM Cortex-A9 core of the Zynq7010 SoC. At 640 × 480 resolution, the OV5642 can provide up to 15 FPS, which implies that a new frame is available for signing every 66.7 ms, well above the 6.27 ms signing latency.
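The headroom between the frame period and the signing latency can be checked directly (numbers from the text):

```python
fps = 15
frame_period_ms = 1000 / fps       # 66.7 ms between frames at 640x480
signing_latency_ms = 6.27          # measured on the Cortex-A9 core
headroom = frame_period_ms / signing_latency_ms
print(f"{frame_period_ms:.1f} ms per frame, {headroom:.1f}x the signing latency")
```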
Logic-Area Overhead. Logic area is required only if the RO PUF is implemented. From Table 2, 2210 logic gates are used to generate a 128-bit key.
Securing Cloud-Based IoT Applications with Trustworthy Sensing 227
5 Conclusion
Security concerns in cloud-based IoT applications present severe obstacles to their widespread adoption. In this paper, we presented a PUF-based scheme to secure sensed data at its source, in order to improve the trust in the services provided by these applications. The scheme is lightweight, incurring very low storage (≈230 bytes) and latency (6.27 ms) overheads. The logic-area overhead can be avoided by choosing a PUF source inherent to the sensor (e.g., an SRAM PUF or a sensor PUF). This is a significant improvement over the TPM-based approach [2], which incurs the overhead of a secure co-processor chip on each sensor and takes 1.92 s for attestation.
Acknowledgment. This research has been funded by the Austrian Research Promotion Agency (FFG) under grant number 842432. Michael Höberl implemented the RO PUF and was supported by the FP7 research project MATTHEW under grant number 610436.
References
1. Saroiu, S., Wolman, A.: I am a sensor, and I approve this message. In: Proceedings of Mobile Computing Systems & Applications, pp. 37–42. ACM (2010)
2. Dua, A., Bulusu, N., Feng, W.-C., Hu, W.: Towards trustworthy participatory sensing. In: Proceedings of Hot Topics in Security, p. 8. USENIX (2009)
3. Kapadia, A., Kotz, D., Triandopoulos, N.: Opportunistic sensing: security chal-
lenges for the new paradigm. In: Proceedings of Communication Systems and Net-
works and Workshops, pp. 1–10. IEEE (2009)
4. Haider, I., Höberl, M., Rinner, B.: Trusted sensors for participatory sensing and IoT applications based on physically unclonable functions. In: Proceedings of Workshop on IoT Privacy, Trust, and Security, pp. 14–21. ACM (2016)
5. Winkler, T., Rinner, B.: Securing embedded smart cameras with trusted comput-
ing. EURASIP J. Wirel. Commun. Netw. 2011, 530354 (2011)
6. Potkonjak, M., Meguerdichian, S., Wong, J.L.: Trusted sensors and remote sensing.
In: Proceedings on Sensors. IEEE (2010)
7. Rosenfeld, K., Gavas, E., Karri, R.: Sensor physical unclonable functions. In: Pro-
ceedings on Hardware-Oriented Security and Trust (HOST). IEEE (2010)
8. Maes, R.: Physically unclonable functions: constructions, properties and applications. Ph.D. dissertation, KU Leuven (2012)
9. Tuyls, P., Batina, L.: RFID-tags for anti-counterfeiting. In: Pointcheval, D. (ed.) CT-RSA 2006. LNCS, vol. 3860, pp. 115–131. Springer, Heidelberg (2006). doi:10.1007/11605805_8
10. Bellare, M., Namprempre, C., Neven, G.: Security proofs for identity-based iden-
tification and signature schemes. J. Cryptol. 22(1), 1–61 (2009)
11. Cao, Y., Zhang, L., Zalivaka, S.S., Chang, C., Chen, S.: CMOS image sensor based physical unclonable function for coherent sensor-level authentication. IEEE Trans. Circuits Syst. I Regul. Pap. 62(11), 2629–2640 (2015)
12. Rajendran, J., Tang, J., Karri, R.: Securing pressure measurements using Sensor-
PUFs. In: Proceedings of Circuits and Systems, pp. 1330–1333. IEEE (2016)
Secure Data Sharing and Analysis
in Cloud-Based Energy Management Systems
1 Introduction
Smart grids can be defined as networks of intelligent entities that are capable of bidirectional communication and can autonomously operate and interact with each other to deliver power to end users. Over the years, smart grids have been used to address the high energy consumption of commercial buildings. The United Nations Environment Programme reports that residential and commercial buildings consume approximately 60% of the world's electricity, in addition to using 40% of global energy, 25% of global water, and 40% of global resources. Because of this high energy consumption, buildings are also one of the major contributors to greenhouse gas production [22,38], but they also offer the greatest potential for achieving significant greenhouse gas emission reductions,
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 228–242, 2018.
https://doi.org/10.1007/978-3-319-67636-4_24
with numbers projected to increase [34,52]. For these reasons, improving the energy efficiency of buildings has received a lot of attention globally [53]. Smart-grid-based energy management systems have been used to reduce the energy demand of a building or set of buildings; however, these systems have their own challenges, such as a central point of failure and scalability issues due to limited memory [3]. Researchers have therefore suggested cloud-based energy management systems, which are scalable, avoid a single point of failure, and, thanks to on-demand allocation of resources, use only the energy the management system actually requires. With these challenges in mind, a cloud-based demand response system was proposed that introduced data-centric and topic-based communication models [20]. The model follows a master-slave architecture, in which the smart meters and the home energy management system act as slaves whereas the utility acts as master. The authors argued that a reliable and scalable energy management system can be built on their model. Energy pricing is one of the most relevant factors, as it determines the energy consumption cost. Taking this into consideration, an energy management system was built that treats energy pricing as dynamic [23]. In this model, the authors considered the peak demand of the building and incorporated dynamic pricing while handling customer requests. For cloud-based energy management, [39] proposed an architecture for the control, storage, power management, and resource allocation of micro-grids, and for integrating cloud-based micro-grid applications with external ones. The bigger and more distributed the smart-grid infrastructure becomes, the more difficult it is to analyse real-time data from smart meters. Yang et al. [54] suggested that a cloud-based system is the most appropriate way to handle the analysis of real-time energy data from smart meters. In another approach, power monitoring and early-warning facilities were provided through a cloud platform [17]. A mobile-agent architecture for cloud-based energy management was proposed to handle customer requests more efficiently [47]. Focusing on energy demand, a dynamic cloud-based demand response model was proposed that periodically forecasts demand and dynamically manages available resources to reduce the peak demand [43]. The shift from micro-grid-based to cloud-based energy management does overcome many challenges faced by conventional smart-grid-based systems. However, whenever a system is exposed to the Internet, security and privacy concerns arise. In this paper, we address these issues for cloud-based energy management systems, and particularly for the Internet of Things (IoT) devices that are integrated into them. The analysis is carried out on a live example of a cloud-based Local Energy Management System (LEMS) and later extended to general cloud-based energy management systems. The LEMS is developed and deployed on the cloud (i) to flatten the demand profile of the building facility and reduce its peak, based on analysis that can be carried out at the building or in its vicinity (rather than at a data center); and (ii) to enable the participation of the building manager in the grid-balancing services market through demand-side management and response.
230 E. Anthi et al.
IoT Devices at the Network Edge: The cloud-based energy management system, and our LEMS in particular, depend on smart devices such as smart meters, chargers, electric vehicles, and energy storage units to gather information from the environment/buildings and to control the energy flow. Smart meters measure the energy consumption of the commercial building at 15-min intervals. The chargers are capable of both charging and discharging an electric vehicle in order to efficiently manage the energy demand of the connected buildings. The electric vehicles and energy storage units store energy that can be supplied to the buildings whenever it is needed to reduce the energy cost or demand.
The LEMS Algorithm and the GUI: The heart of the LEMS consists of a demand forecasting tool and a scheduling algorithm. The rationale for adding a forecasting tool was to predict the building's energy demand in advance, so that a schedule can be created to reduce this expected demand. The demand forecasting tool estimates the electricity demand of the building for a particular time period; it uses a neural network (from the Weka toolkit [32]) trained on historical data (collected from actual building use) and weather data from the proximity of the building. The LEMS scheduling algorithm operates in timesteps during which the system is considered static (changes are only discovered at the end of the timestep). A timestep is defined as the interval after which the LEMS reads the data from each component, such as the building, EVs/ESUs, etc. For our case study, we kept the timestep duration at 15 min. This duration is an acceptable trade-off between dynamic (semi-real-time) and reliable operation: it allows the frequent capture of the building conditions and minimizes the risk of communication lags. Data about EVs located at the building, such as their battery capacity, state of charge (SoC), expected disconnection times, charging/discharging power rate, charging/discharging schedule, and available discharge capacity, is requested from the EV charging stations upon the connection of every EV. Information regarding the available capacity, state of charge (SoC), charging/discharging power rate, and charging/discharging schedule is requested from every ESU. This information is stored in a database and is accessed by the LEMS on a regular basis (every 15 min) in order to define the future power set points for the chargers.
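The timestep cycle above can be sketched as a simple control loop. The function and class names below are hypothetical stand-ins for the database reads and the charger interface, not the actual LEMS code:

```python
import time
from dataclasses import dataclass

TIMESTEP_S = 15 * 60   # the system is treated as static within a timestep

@dataclass
class AssetState:          # minimal per-EV/ESU snapshot read each timestep
    soc_kwh: float         # state of charge
    max_power_kw: float    # charging/discharging power rate limit

def run_lems(read_assets, read_demand, compute_schedule, send_setpoints,
             n_steps, sleep_s=TIMESTEP_S):
    """Hypothetical LEMS loop: read state, schedule, push power set points."""
    for _ in range(n_steps):
        assets = read_assets()                 # EV/ESU data from the database
        demand_kw = read_demand()              # latest smart-meter reading
        send_setpoints(compute_schedule(assets, demand_kw))
        time.sleep(sleep_s)                    # changes surface next timestep

# Stub run: one EV, a fixed demand, a schedule that discharges 7 kW.
sent = []
run_lems(lambda: [AssetState(20.0, 7.0)], lambda: 120.0,
         lambda assets, demand: {"charger-1": -7.0},
         sent.append, n_steps=2, sleep_s=0)
print(sent)   # one set-point dict per timestep
```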
The LEMS is deployed on the CometCloud [5] system. CometCloud enables federation between heterogeneous computing platforms that can support the complete LEMS workload, such as a local computational cluster with a public cloud (such as Amazon AWS). There are two main components in CometCloud: a master and (potentially multiple) worker node(s). In its software architecture, CometCloud comprises three main layers: a programming layer, an automatic management layer, and a federation/infrastructure layer. The programming layer defines the tasks that need to be executed and the dependencies between them, and enables a user to define the number of available resources along with any constraints on their usage. Each task in this instance is associated with one of the supported types of LEMS operation, such as whether a demand forecast needs to be carried out. In the automatic management layer, the user specifies the policies and objectives that guide the allocation of resources to tasks. In addition to allocating resources, this layer also keeps track of whether the tasks generated for workers have been executed [6]. In the federation layer, a look-up system maintains content locality and allows searches using wildcards [30]. Furthermore, a "CometSpace" is defined that can be accessed by all resources in the federation [26]. Essentially, CometCloud uses this physically distributed, but logically shared, tuple space to share tasks between the different resources included in a resource federation. The main task of the master node is to prepare a task for execution and provide information about the data required to process it. The second component is the worker, which receives requests from the master, executes the job, and sends the results to the place specified by the master. In our framework there are two workers: one runs the LEMS algorithm that generates the schedule, and the other forecasts the energy demand for the next day in order to generate the charging and discharging schedules of the electric vehicles.
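The master/worker interaction over a shared task space can be sketched with a plain Python queue standing in for the tuple space; the two task kinds mirror the two workers described above, but none of this is the real CometCloud API:

```python
import queue
import threading

task_space = queue.Queue()   # stands in for the shared tuple space ("CometSpace")
results = {}

def master(tasks, n_workers):
    """Prepare tasks (with the data needed to process them) and stop markers."""
    for t in tasks:
        task_space.put(t)
    for _ in range(n_workers):
        task_space.put(None)          # one shutdown marker per worker

def worker(handlers):
    """Pull tasks from the shared space, execute them, record the results."""
    while (task := task_space.get()) is not None:
        kind, payload = task
        results[kind] = handlers[kind](payload)

handlers = {"schedule": lambda p: f"charging schedule for {p}",
            "forecast": lambda p: f"next-day demand forecast for {p}"}
workers = [threading.Thread(target=worker, args=(handlers,)) for _ in range(2)]
for w in workers:
    w.start()
master([("schedule", "building-A"), ("forecast", "building-A")], n_workers=2)
for w in workers:
    w.join()
print(results)
```

The queue gives the same decoupling the tuple space provides: workers never talk to the master directly, they only consume whatever tasks appear in the shared space.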
There are two cloud-hosted servers that receive requests from a graphical user interface and, based on the requests, call the appropriate function via the master. The second server manages a database which contains building data, EV data, and weather attributes around the building. The database stores historic data about power consumption, energy pricing, and more, for each building. Weather information is also used to forecast the energy demand for the next day. An intermediary gateway intercepts all signals from the cloud server and forwards the requests to the EVs to either charge or discharge.
The energy management system is designed for various purposes, such as reducing demand, reducing energy cost, etc. The LEMS that we developed maximizes its utility to the building manager by adjusting its operational target (objective) according to the system status and conditions. Furthermore, it was designed around two scheduling algorithms for the management of the EVs and the ESUs, namely the Peak Shaving Schedule and the Demand Response Schedule. Each algorithm serves one objective, and the LEMS shifts from one scheduling strategy to the other depending on the objective of the building manager. The peak shaving algorithm aims to flatten the aggregate demand profile of the building facility. This is achieved by filling the valleys and shaving the peaks of the demand profile using the controllable loads (EVs, ESUs) of the building facility. The LEMS calculates the charging/discharging schedules of the EVs and ESUs and sends them the corresponding power set points at the beginning of every timestep. For the demand response algorithm, a demand response signal is sent by the building manager to either reduce or increase the aggregate demand in the next timestep (of 15 min). Triggered by the arrival of such a request, the LEMS overrides the charging/discharging schedules of the available controllable assets.
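The valley-filling/peak-shaving idea can be illustrated with a toy version that assumes a single aggregate battery with a symmetric power limit and ignores SoC constraints (a deliberate simplification of the actual LEMS algorithm):

```python
def peak_shaving_schedule(demand_kw, p_max_kw):
    """Charge in the valleys, discharge at the peaks, pulling each timestep
    toward the mean demand. Positive set point = charging (adds load),
    negative = discharging (reduces load). SoC limits are ignored here."""
    target = sum(demand_kw) / len(demand_kw)
    return [max(-p_max_kw, min(p_max_kw, target - d)) for d in demand_kw]

demand = [80, 120, 150, 100, 50]          # forecast kW per 15-min timestep
sched = peak_shaving_schedule(demand, p_max_kw=30)
flattened = [d + s for d, s in zip(demand, sched)]
print(sched)       # [20.0, -20.0, -30, 0.0, 30]
print(flattened)   # peak drops from 150 to 120 kW
```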
Data Leakage: As one of the key features of a smart-grid-based energy management system is bidirectional communication, an attacker could eavesdrop on the communication channels between (a) the gateway and the LEMS, (b) the gateway and the IoT system, and (c) the IoT devices themselves (e.g., between the smart charger and the electric vehicle) [10], as per Fig. 2a. If the Secure Sockets Layer (SSL)/Transport Layer Security (TLS) protocol [7,12] is not employed, and the transmitted data is therefore not encrypted, an unauthorized party could simply intercept it by performing passive network sniffing on the operating channel [2,19]. If SSL/TLS is employed, and the transmitted data is therefore encrypted, an eavesdropping attacker can still observe the traffic to identify patterns and hence gain information about the functionality of the system. For example, the smart energy storage units used in the LEMS send information about their energy status to the gateway every 15 min. An adversary could use this information to identify when and how the energy management system is going to adjust the energy requirements of the buildings, and could therefore alter the scheduling algorithm sent to the energy storage units. Once the scheduling algorithm is altered, the cyber criminal can create a situation where the ESUs and EVs charge at peak hours,
resulting in an increased energy demand at those hours. This increases the energy demand and cost for the company and defeats the purpose of deploying an energy management system.
Additionally, an attacker can perform a Man-In-The-Middle (MITM) attack.
With this attack the original connection between the two parties gets split into
two new ones: one connection between the first party/device and the attacker
and another one between the attacker and the second party/device. When the
original connection is finally compromised, the attacker is able to act as a proxy
and therefore read, insert, and modify data in the intercepted communication
[33]. If the attacker manages to compromise the communication channels using any of the methods discussed above, they could gain access to important information such as the IDs of the electric vehicles, electrical signals/pulses from the batteries, meter readings, etc. This could significantly impact the energy management functionality of the LEMS and the energy cost. For instance, if an unauthorised party interfered (spoofed, manipulated, inserted, or deleted) with the unique IDs of the batteries of the electric vehicles, the LEMS would receive false information from the gateway and would not be able to correctly adjust the building's energy demand.
To defend the energy management system, the SSL/TLS protocols should always be used to establish secure communication channels among all parties/devices in the LEMS. Nevertheless, this protocol alone is not enough to prevent MITM attacks. Consequently, techniques such as certificate pinning [16] should also be employed to authenticate the devices on the grid. With pinning, each device checks the server's certificate against a known copy stored in its firmware [11]. Although this is an effective way of preventing MITM attacks, it is not completely immune: an adversary could disable the certificate-pinning procedure and still manage to intercept the communication [31]. As an alternative to SSL/TLS, managed certificate whitelisting was recently proposed [9] to authenticate devices, specifically in energy automation systems. Even though this approach appears promising, its security aspects have not been fully explored. Finally, to protect against traffic analysis, [27] proposes a re-encryption algorithm that randomises the transmitted ciphertext without affecting the decryption process. This prevents the attacker from linking inbound and outbound data by comparing the transmitted packets.
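The firmware-side check that certificate pinning adds can be sketched as follows. The certificate bytes below are placeholders; in a real client the DER-encoded certificate would be obtained from the TLS socket after the handshake (e.g., via `ssl.SSLSocket.getpeercert(binary_form=True)`):

```python
import hashlib

def certificate_is_pinned(cert_der: bytes, pinned_fingerprints: set) -> bool:
    """Compare the SHA-256 fingerprint of the peer's DER-encoded certificate
    against the known copies stored in device firmware; on a mismatch the
    device should abort the TLS session."""
    return hashlib.sha256(cert_der).hexdigest() in pinned_fingerprints

# Provisioning time: the legitimate server certificate is hashed and the
# fingerprint baked into the firmware (placeholder bytes, not a real cert).
server_cert = b"<DER bytes of the LEMS server certificate>"
pins = {hashlib.sha256(server_cert).hexdigest()}

# Handshake time: accept the pinned certificate, reject any other.
print(certificate_is_pinned(server_cert, pins))          # True
print(certificate_is_pinned(b"<attacker cert>", pins))   # False
```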
Spoofing: The Sybil attack is a type of spoofing attack to which IoT devices on the LEMS are particularly vulnerable [56]. During such attacks, attackers manipulate fake identities to compromise the effectiveness of the IoT, as per Fig. 2b. For instance, in the energy management system, such an attack could forge a massive number of identities that act as legal nodes and request more energy from the LEMS. This could severely affect the energy cost and latency of the system. Various methods to detect and defend against Sybil attacks have already been implemented and can be employed on the LEMS. For instance, SVELTE is an Intrusion Detection System designed specifically for IoT devices which is inherently protected against Sybil attacks [41]. Alternatively, Zhang et al. [56]
is connected to the National Grid, the hacker could potentially take control of it too. This would lead to serious financial losses, and not only that: a recent study by Dlamini et al. [8] showed that this scenario could also result in loss of lives. Although various mechanisms against DoS attacks have been proposed, none of them provides full protection against them all. Raymond et al. [40] discuss currently used protection methods against DoS in Wireless Sensor Networks. Moreover, Garcia et al. [13,14] present various DoS countermeasures, such as DTLS, IKEv2, HIP, and Diet HIP, for the IP-based Internet of Things. Finally, [19] proposes a promising DoS detection architecture for 6LoWPAN networks. It is necessary to underline that, due to the severity of DoS attacks, better preventive measures and defensive mechanisms still need to be researched [29].
Energy Bleeding: In sensor networks like the cloud-based energy management system, the ability of devices to enter power-saving modes (e.g., various sleep and hibernation modes) is important to preserve the network's longevity and the lifetime of the devices, and to reduce the overall power consumption [15,36,37]. In this section, two major attacks that target this functionality are discussed: the sleep deprivation attack and the barrage attack. An attacker can use them to prevent devices from going into power-saving mode by continually sending traffic to them, thereby exhausting their battery resources [21], as per Fig. 3b. These are also known as sleep deprivation torture attacks [45]. Both of them, if used against the LEMS, could cause severe energy and therefore financial losses. During the barrage attack, the targeted device is bombarded with requests that seem legitimate. The goal is to waste the device's limited power supply by preventing it from going into sleep mode and by making it perform complex, energy-demanding operations. In the sleep deprivation attack, malicious nodes on the network send requests to the victim device only as often as necessary to keep it switched on. Although the goal of this attack is the same as that of the barrage attack, the sleep deprivation attack does not make the target nodes perform energy-intensive operations [36]. The barrage attack has been shown to exhaust the battery resources of the targeted nodes faster [36], but at the same time it is much easier to detect than the sleep deprivation attack. For this reason, we consider sleep deprivation to be the more serious threat [21].
Pirretti et al. [36] showed that the sleep deprivation attack can severely impact networks like the LEMS. They demonstrated that if an attacker manages to compromise as few as 20 devices in a 400-node network, they can double its power consumption. Additionally, they showed that a single malicious node can attack approximately 150 devices at the same time. This attack can therefore significantly affect the energy consumption levels of the system. To protect the energy management system from sleep deprivation attacks, any of the currently implemented mechanisms can be used. For instance, Pirretti et al. [36] extensively compare and evaluate three different defence schemes against sleep deprivation attacks that can be applied in sensor networks: the random vote scheme, the round robin scheme, and the hash-based scheme.
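A simple sliding-window rate check illustrates how a device might flag suspected sleep-deprivation traffic; the class, window size, and threshold below are illustrative stand-ins, not one of the cited defence schemes:

```python
from collections import deque

class RequestRateMonitor:
    """Flag nodes whose request rate keeps a device awake: more than
    `max_requests` within any sliding `window_s`-second window is suspicious."""
    def __init__(self, window_s=60.0, max_requests=10):
        self.window_s, self.max_requests = window_s, max_requests
        self.times = {}   # node_id -> deque of recent request timestamps

    def observe(self, node_id, t):
        q = self.times.setdefault(node_id, deque())
        q.append(t)
        while q and t - q[0] > self.window_s:
            q.popleft()                       # drop events outside the window
        return len(q) > self.max_requests     # True -> possible sleep deprivation

mon = RequestRateMonitor(window_s=60, max_requests=3)
flags = [mon.observe("node-7", t) for t in (0, 10, 20, 30)]
print(flags)   # the fourth request within one minute trips the detector
```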
4 Conclusion
With the advancement of technology, a gradual shift of energy management systems to the cloud has been observed, to overcome the computational challenges faced by conventional energy management systems. However, as each component is exposed to the Internet in the move to the cloud, the complexity of the system increases and its security becomes harder to guarantee. In this paper, we have given an overview of a cloud-based energy management system using a live example of a cloud-based Local Energy Management System (LEMS). The aim of the LEMS is to reduce the aggregate energy demand of a commercial building by using a set of electric vehicles and energy storage units available at the building site. Furthermore, we have addressed security concerns for
Table 1. LEMS vulnerabilities
Risks Attacks Target
Data leakage Data sniffing and MITM Transmitted data
Spoofing Sybil attack System blackout
Disruption of service DoS/DDoS System availability
Energy bleeding Barrage and sleep deprivation System’s energy resources
Hardware issues Faulty gateways Transmitted data
the algorithm in the cloud as well as for the attached IoT devices. Each concern is explored by creating an attack scenario to identify vulnerabilities, and the best countermeasures for that scenario are presented. In a cloud-based energy management system, five major risks were identified: Data Leakage, Spoofing, Disruption of Service, Energy Bleeding, and Hardware Issues, as per Table 1. For each of these risks, we described in detail the associated attacks and the current defence mechanisms. We concluded that, although various measures to defend against these attacks have been proposed, none can fully guarantee the protection of the system. We nevertheless hope that this paper will act as a guideline for building a robust and lightweight security architecture to secure it.
References
1. Ashton, K.: That 'internet of things' thing. RFID J. 22(7), 97–114 (2009)
2. Barcena, M.B., Wueest, C.: Insecurity in the internet of things. In: Security
Response, Symantec (2015)
3. Bera, S., Misra, S., Rodrigues, J.J.: Cloud computing applications for smart grid:
a survey. IEEE Trans. Parallel Distrib. Syst. 26(5), 1477–1494 (2015)
4. Bhattasali, T., Chaki, R., Sanyal, S.: Sleep deprivation attack detection in wireless
sensor network. arXiv preprint arXiv:1203.0231 (2012)
5. Diaz-Montes, J., AbdelBaky, M., Zou, M., Parashar, M.: CometCloud: enabling
software-defined federations for end-to-end application workflows. IEEE Internet
Comput. 19(1), 69–73 (2015)
6. Diaz-Montes, J., Xie, Y., Rodero, I., Zola, J., Ganapathysubramanian, B.,
Parashar, M.: Exploring the use of elastic resource federations for enabling large-
scale scientific workflows. In: Proceedings of Workshop on Many-Task Computing
on Clouds, Grids, and Supercomputers (MTAGS), pp. 1–10 (2013)
7. Dierks, T.: The transport layer security (TLS) protocol version 1.2 (2008)
8. Dlamini, M., Eloff, M., Eloff, J.: Internet of things: emerging and future scenarios
from an information security perspective. In: Southern Africa Telecommunication
Networks and Applications Conference (2009)
240 E. Anthi et al.
9. Falk, R., Fries, S.: Managed certificate whitelisting-a basis for internet of things
security in industrial automation applications. In: SECURWARE 2014, p. 178
(2014)
10. Farooq, M., Waseem, M., Khairi, A., Mazhar, S.: A critical analysis on the security
concerns of internet of things (IoT). Int. J. Comput. Appl. 111(7), 1–6 (2015)
11. Fossati, T., Tschofenig, H.: Transport layer security (TLS)/datagram transport
layer security (DTLS) profiles for the internet of things. Transport (2016)
12. Frier, A., Karlton, P., Kocher, P.: The SSL 3.0 protocol, vol. 18, p. 2780. Netscape Communications Corporation (1996)
13. Garcia-Morchon, O., Kumar, S., Struik, R., Keoh, S., Hummen, R.: Security con-
siderations in the IP-based internet of things (2013)
14. Heer, T., Garcia-Morchon, O., Hummen, R., Keoh, S.L., Kumar, S.S., Wehrle, K.:
Security challenges in the IP-based internet of things. Wirel. Personal Commun.
61(3), 527–542 (2011)
15. Hummen, R., Wirtz, H., Ziegeldorf, J.H., Hiller, J., Wehrle, K.: Tailoring end-to-
end IP security protocols to the internet of things. In: 21st IEEE International
Conference on Network Protocols (ICNP), pp. 1–10. IEEE (2013)
16. Jha, A., Sunil, M.: Security considerations for internet of things. L&T Technology
Services (2014)
17. Ji, L., Lifang, W., Li, Y.: Cloud service based intelligent power monitoring and
early-warning system. In: Innovative Smart Grid Technologies-Asia (ISGT Asia),
pp. 1–4. IEEE (2012)
18. Jing, Q., Vasilakos, A.V., Wan, J., Lu, J., Qiu, D.: Security of the internet of
things: perspectives and challenges. Wirel. Netw. 20(8), 2481–2501 (2014)
19. Kasinathan, P., Pastrone, C., Spirito, M.A., Vinkovits, M.: Denial-of-service detec-
tion in 6LoWPAN based internet of things. In: IEEE 9th International Conference
on Wireless and Mobile Computing, Networking and Communications (WiMob),
pp. 600–607. IEEE (2013)
20. Kim, H., Kim, Y.-J., Yang, K., Thottan, M.: Cloud-based demand response for
smart grid: architecture and distributed algorithms. In: IEEE International Con-
ference on Smart Grid Communications (SmartGridComm), pp. 398–403. IEEE
(2011)
21. Krishnaswami, J.: Denial-of-service attacks on battery-powered mobile computers.
Ph.D. thesis, Virginia Polytechnic Institute and State University (2004)
22. Laustsen, J.: Energy efficiency requirements in building codes, energy efficiency
policies for new buildings. Int. Energy Agency (IEA) 2, 477–488 (2008)
23. Li, X., Lo, J.-C.: Pricing and peak aware scheduling algorithm for cloud computing.
In: Innovative Smart Grid Technologies (ISGT), IEEE PES, pp. 1–7. IEEE (2012)
24. Li, X., Lu, R., Liang, X., Shen, X.: Side channel monitoring: packet drop attack
detection in wireless ad hoc networks. In: IEEE International Conference on Com-
munications (ICC), pp. 1–5. IEEE (2011)
25. Li, X., Lu, R., Liang, X., Shen, X., Chen, J., Lin, X.: Smart community: an internet
of things application. IEEE Commun. Mag. 49(11) (2011)
26. Li, Z., Parashar, M.: A computational infrastructure for grid-based asynchronous
parallel applications. In: Proceedings of the 16th International Symposium on High
Performance Distributed Computing, pp. 229–230. ACM (2007)
27. Lin, X., Lu, R., Shen, X., Nemoto, Y., Kato, N.: SAGE: a strong privacy-preserving
scheme against global eavesdropping for eHealth systems. IEEE J. Sel. Areas Com-
mun. 27(4), 365–378 (2009)
Secure Data Sharing and Analysis 241
28. Maheshwari, K., Lim, M., Wang, L., Birman, K., van Renesse, R.: Toward a reli-
able, secure and fault tolerant smart grid state estimation in the cloud. In: Innov-
ative Smart Grid Technologies (ISGT), IEEE PES, pp. 1–6. IEEE (2013)
29. Mayer, C.P.: Security and privacy challenges in the internet of things. Electron.
Commun. EASST 17, 1–12 (2009)
30. Montes, J.D., Zou, M., Singh, R., Tao, S., Parashar, M.: Data-driven workflows in
multi-cloud marketplaces. In: IEEE 7th International Conference on Cloud Com-
puting, pp. 168–175. IEEE (2014)
31. Moonsamy, V., Batten, L.: Mitigating man-in-the-middle attacks on smartphones-
a discussion of SSL pinning and DNSSec. In: Proceedings of the 12th Australian
Information Security Management Conference, pp. 5–13. Edith Cowan University
(2014)
32. University of Waikato: Weka 3 - data mining with open source machine learning
software in Java (2017). http://www.cs.waikato.ac.nz/ml/weka/. Accessed 13 Jan
2017
33. OWASP: Man-in-the-middle attack (2016). https://www.owasp.org/index.php/Man-in-the-middle_attack/. Accessed 18 Apr 2016
34. Pérez-Lombard, L., Ortiz, J., Pout, C.: A review on buildings energy consumption
information. Energy Build. 40(3), 394–398 (2008)
35. Perrig, A., Stankovic, J., Wagner, D.: Security in wireless sensor networks. Com-
mun. ACM 47(6), 53–57 (2004)
36. Pirretti, M., Zhu, S., Vijaykrishnan, N., McDaniel, P., Kandemir, M., Brooks, R.:
The sleep deprivation attack in sensor networks: analysis and methods of defense.
Int. J. Distrib. Sens. Netw. 2(3), 267–287 (2006)
37. Poslad, S., Hamdi, M., Abie, H.: Adaptive security and privacy management for
the internet of things (ASPI 2013). In: Proceedings of the 2013 ACM Conference
on Pervasive and Ubiquitous Computing Adjunct Publication, pp. 373–378. ACM
(2013)
38. United Nations Environment Programme: Why buildings (2016). http://www.
unep.org/sbci/AboutSBCI/Background.asp. Accessed 11 Jan 2017
39. Rajeev, T., Ashok, S.: A cloud computing approach for power management of
microgrids. In: Innovative Smart Grid Technologies-India (ISGT India), IEEE PES,
pp. 49–52. IEEE (2011)
40. Raymond, D.R., Midkiff, S.F.: Denial-of-service in wireless sensor networks: attacks
and defenses. IEEE Pervasive Comput. 7(1), 74–81 (2008)
41. Raza, S., Wallgren, L., Voigt, T.: SVELTE: real-time intrusion detection in the
internet of things. Ad Hoc Netw. 11(8), 2661–2674 (2013)
42. Saxena, M.: Security in wireless sensor networks-a layer based classification.
Department of Computer Science, Purdue University (2007)
43. Simmhan, Y., Aman, S., Kumbhare, A., Liu, R., Stevens, S., Zhou, Q., Prasanna,
V.: Cloud-based software platform for big data analytics in smart grids. Comput.
Sci. Eng. 15(4), 38–47 (2013)
44. Simmhan, Y., Kumbhare, A.G., Cao, B., Prasanna, V.: An analysis of security and
privacy issues in smart grid software architectures on clouds. In: IEEE International
Conference on Cloud Computing (CLOUD), pp. 582–589. IEEE (2011)
45. Stajano, F., Anderson, R.: The resurrecting duckling: security issues for ubiquitous
computing. Computer 35(4), supl22–supl26 (2002)
46. Suo, H., Wan, J., Zou, C., Liu, J.: Security in the internet of things: a review.
In: International Conference on Computer Science and Electronics Engineering
(ICCSEE), vol. 3, pp. 648–651. IEEE (2012)
242 E. Anthi et al.
47. Tang, L., Li, J., Wu, R.: Synergistic model of power system cloud computing based
on mobile-agent. In: 3rd IEEE International Conference on Network Infrastructure
and Digital Content (IC-NIDC), pp. 222–226. IEEE (2012)
48. Ugale, B.A., Soni, P., Pema, T., Patil, A.: Role of cloud computing for smart grid
of India and its cyber security. In: Nirma University International Conference on
Engineering (NUiCONE), pp. 1–5. IEEE (2011)
49. Wang, Y., Attebury, G., Ramamurthy, B.: A survey of security issues in wireless
sensor networks. IEEE Commun. Surv. Tutor. 8(2), 2–23 (2006)
50. Wang, Y., Deng, S., Lin, W.-M., Zhang, T., Yu, Y.: Research of electric power
information security protection on cloud security. In: International Conference on
Power System Technology (POWERCON), pp. 1–6. IEEE (2010)
51. Wen, M., Lu, R., Zhang, K., Lei, J., Liang, X., Shen, X.: PaRQ: a privacy-
preserving range query scheme over encrypted metering data for smart grid. IEEE
Trans. Emerg. Top. Comput. 1(1), 178–191 (2013)
52. Weng, T., Agarwal, Y.: From buildings to smart buildings—sensing and actuation
to improve energy efficiency. IEEE Des. Test 29(4), 36–44 (2012)
53. Wijayasekara, D., Linda, O., Manic, M., Rieger, C.: Mining building energy man-
agement system data using fuzzy anomaly detection and linguistic descriptions.
IEEE Trans. Ind. Inform. 10(3), 1829–1840 (2014)
54. Yang, C.-T., Chen, W.-S., Huang, K.-L., Liu, J.-C., Hsu, W.-H., Hsu, C.-H.: Imple-
mentation of smart power management and service system on cloud computing.
In: 9th International Conference on Ubiquitous Intelligence & Computing and 9th
International Conference on Autonomic & Trusted Computing (UIC/ATC), pp.
924–929. IEEE (2012)
55. Zanella, A., Bui, N., Castellani, A., Vangelista, L., Zorzi, M.: Internet of things for
smart cities. IEEE Internet Things J. 1(1), 22–32 (2014)
56. Zhang, K., Liang, X., Lu, R., Shen, X.: Sybil attacks and their defenses in the
internet of things. IEEE Internet Things J. 1(5), 372–383 (2014)
57. Zhang, Y.: Technology framework of the internet of things and its application.
In: International Conference on Electrical and Control Engineering (ICECE), pp.
4109–4112. IEEE (2011)
58. Zhao, K., Ge, L.: A survey on the internet of things security. In: 9th International
Conference on Computational Intelligence and Security (CIS), pp. 663–667. IEEE
(2013)
59. Zia, T., Zomaya, A.: Security issues in wireless sensor networks. In: International
Conference on Systems and Networks Communications (ICSNC 2006), p. 40. IEEE
(2006)
IoT and Big Data: An Architecture
with Data Flow and Security Issues
1 School of Computing and Communications, University of Technology Sydney,
Ultimo, Australia
deepak.puthal@gmail.com
2 School of Computing Science, Newcastle University, Newcastle upon Tyne, UK
rranjans@gmail.com
3 CSIRO Data61, Canberra, Australia
Surya.Nepal@data61.csiro.au
4 Swinburne Data Science Research Institute, Swinburne University of Technology,
Melbourne, Australia
jinjun.chen@gmail.com
Abstract. The Internet of Things (IoT) introduces a future vision in which users,
computers, computing devices and everyday objects possessing sensing and actuating
capabilities cooperate with unprecedented convenience and benefit. We are moving
towards this IoT vision, with the number of smart sensing devices deployed around
the world growing rapidly. Given the number of sources and the variety of data they
produce, the sensed data leads to a new trend of research, namely big data. Security
will be a fundamental enabling factor for most IoT and big data applications, so
mechanisms must be designed to protect the communications these technologies
enable. This paper analyses existing protocols and mechanisms to secure the IoT and
big data, as well as security threats in the domain. We broadly divide the IoT
architecture into several layers and, for each layer, define its properties, its security
issues, and the related work that addresses them.
1 Introduction
IoT is a widely used expression but still a fuzzy one, owing to the large number of
concepts it brings together. The IoT offers a vision of a future source of data in which
sensing devices, possessing computing and sensorial capabilities, communicate with
other devices using Internet protocols. Such applications are expected to involve very
large numbers of sensing and actuating devices, and consequently their cost will be a
major factor. In turn, cost restrictions dictate constraints on the resources available in
sensing platforms, such as memory and computational power. Overall, such factors
motivate the design and adoption of communication and security mechanisms suited
to such resource-constrained devices.
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 243–252, 2018.
https://doi.org/10.1007/978-3-319-67636-4_25
244 D. Puthal et al.
2 IoT Architecture
The connection of physical things to the Internet makes it possible to access remote
sensor data and to control the physical world from a distance. The IoT is based on
this vision. A smart object, which is the building block of the IoT, is just another
name for an embedded system that is connected to the Internet [9]. Al-Fuqaha et al.
in [10] clearly defined the individual elements of the IoT, which include identification,
sensing, communication, computation, services, and semantics. RFID is another
technology that points in the same direction. The novelty of the
IoT is not in any new disruptive technology, but in the pervasive deployment of
smart objects. IoT system architecture must guarantee the operations of IoT, which
bridges the gap between the physical and the virtual worlds. Since things may move
geographically and need to interact with others in real-time mode, IoT architecture
should be adaptive to make devices interact with other things dynamically and
support unambiguous communication of events [11]. We broadly divide the complete
IoT architecture into three layers: source smart sensing devices, the communication
(network) layer, and the cloud data centre, as shown in Fig. 1.
These layers can be related to the service-level architecture of the IoT; in our
architecture, the service layer and interface layer are integrated into the data centre.
The service-level architecture of the IoT consists of four layers: the sensing layer,
network layer, service layer, and interfaces layer [11, 12].
• Sensing layer: This layer is integrated with available hardware objects (sensors,
RFID, etc.) to sense/control statuses of things.
• Network layer: This layer supports the infrastructure for networking over wireless
or wired connections.
• Service layer: This layer creates and manages service requirements according to the
user’s needs.
• Interfaces layer: This layer provides interaction methods to users and applications.
Fig. 1. Layer-wise IoT architecture, from IoT device to cloud data centre.
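The four service-level layers above can be sketched as a minimal data-flow pipeline. This is an illustrative sketch under assumed names (`Reading`, `sensing_layer`, and so on), not code from the paper:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical reading produced by the sensing layer (field names are illustrative).
@dataclass
class Reading:
    device_id: str
    kind: str      # e.g. "temperature", "rfid"
    value: float

def sensing_layer(raw: List[tuple]) -> List[Reading]:
    """Sense/control statuses of things: wrap raw hardware samples."""
    return [Reading(device_id=d, kind=k, value=v) for d, k, v in raw]

def network_layer(readings: List[Reading]) -> List[Reading]:
    """Transport readings over a (here simulated) wireless or wired link."""
    return list(readings)  # in reality: 802.15.4 / IP transport

def service_layer(readings: List[Reading], need: str) -> List[Reading]:
    """Create and manage services according to the user's need."""
    return [r for r in readings if r.kind == need]

def interface_layer(readings: List[Reading]) -> List[str]:
    """Expose results to users and applications."""
    return [f"{r.device_id}: {r.value}" for r in readings]

def pipeline(raw: List[tuple], need: str) -> List[str]:
    """Chain the four layers bottom-up, as in the service-level architecture."""
    return interface_layer(service_layer(network_layer(sensing_layer(raw)), need))
```

In this sketch each layer is a pure function, so a layer can be replaced (e.g. a real radio transport for `network_layer`) without touching the others, which mirrors the separation of concerns the four-layer model is meant to give.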
The role of the networking layer is to connect all things together and allow things to
share information with other connected things. In addition, the networking layer is
capable of aggregating information from existing IT infrastructures [4]; data can then
be transmitted to the cloud data centre for high-level, complex services. Communication
in the network might involve Quality of Service (QoS) guarantees to ensure reliable
services for different users or applications [5]. Automatic assignment of devices in an
IoT environment is one of the major tasks, as it enables devices to perform tasks
collaboratively. Some issues related to the networking layer are listed below [11]:
• Network management technologies including managing fixed, wireless, mobile
networks
• Network energy efficiency
• Requirements of QoS
• Technologies for mining and searching
• Data and signal processing
• Security and privacy
Among these issues, information confidentiality and human privacy are critical
because of IoT device deployment, mobility, and complexity. For information
confidentiality, the existing encryption technology used in WSNs can be extended and
deployed in the IoT. Granjal et al. [3] divided the communication layer for IoT
applications into five different parts: the physical layer, MAC layer, adaptation layer,
network/routing layer, and application layer. They also mentioned the associated
protocols for energy efficiency, as shown in Fig. 2.
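As a rough illustration (an assumption of this edit, not a table from [3]), the five parts can be paired with the representative standards cited in this paper's references (IEEE 802.15.4, 6LoWPAN, RPL, CoAP):

```python
# One representative protocol per part of the five-part IoT communication
# stack of Granjal et al. [3]; the pairing follows the standards cited in
# this paper's references [15-18].
IOT_STACK = {
    "physical":    "IEEE 802.15.4 PHY",
    "mac":         "IEEE 802.15.4 MAC",
    "adaptation":  "6LoWPAN",
    "routing":     "RPL (RFC 6550)",
    "application": "CoAP",
}

def stack_path() -> list:
    """Return the five layers in bottom-up order, as a packet traverses them."""
    order = ["physical", "mac", "adaptation", "routing", "application"]
    return [IOT_STACK[layer] for layer in order]
```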
A main activity in the service layer involves the service specifications for middleware,
which are being developed by various organisations. A well-designed service layer will
be able to identify common application requirements.
The service layer relies on the middleware technology, which provides functionali‐
ties to integrate services and applications in IoT. The middleware technology provides
a cost-effective platform, where the hardware and software platforms can be reused. The
services in the service layer run directly on the network to effectively locate new services
for an application and retrieve metadata about services dynamically. Most
specifications are developed as standards by different organisations.
However, a universally accepted service layer is important for IoT. A practical service
layer consists of a minimum set of the common requirements of applications, application
programming interfaces (APIs), and protocols supporting required applications and
services.
In the IoT, a large number of devices are involved; these devices can be provided by
different vendors and hence do not always comply with the same standards. The
compatibility issue among heterogeneous things must be addressed to enable
interactions among them. Compatibility involves information exchange,
communication, and event processing. There is a strong need for an effective interface
mechanism to simplify the management of such heterogeneous devices.
This subsection lists the security threats and security issues in each individual layer,
as divided in the above subsections.
The sensing layer is responsible for frequency selection, carrier frequency generation,
signal detection, modulation, and data encryption [3, 14]. An adversary may possess a
broad range of attack capabilities. A physically damaged or manipulated node used for
attack may be less powerful than a normally functioning node. IoT devices use wireless
communication because the network’s ad hoc, large-scale deployment makes anything
else impractical. As with any radio-based medium, there exists the possibility of
jamming in IoT. In addition, devices may be deployed in hostile or insecure environ‐
ments where an attacker has easy physical access. Network jamming and source device
tampering are the major types of possible attack in the sensing layer. The features of
the sensing layer follow from Fig. 2:
• Jamming: interference with the radio frequencies that nodes are using.
• Tampering: physical compromise of nodes.
MAC Layer. The MAC layer manages, besides the data service, other operations,
namely accesses to the physical channel, validation of frames, guaranteed time slots,
node association, and security. The standard distinguishes sensing devices by their
capabilities and roles in the network. A full-function device (FFD) can coordinate a
network of devices, while a reduced-function device (RFD) is only able to communicate
with other devices (of RFD or FFD type). By combining RFDs and FFDs, IEEE
802.15.4 supports topologies such as peer-to-peer, star and cluster networks [15].
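A minimal sketch of the FFD/RFD distinction and the star topology it enables; the `Device` type and function names are illustrative assumptions, not part of the IEEE standard:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Device:
    addr: int
    full_function: bool   # FFD can coordinate a network; RFD can only communicate

def can_coordinate(dev: Device) -> bool:
    """Only a full-function device (FFD) may act as network coordinator."""
    return dev.full_function

def build_star(coordinator: Device, leaves: List[Device]) -> Dict[int, int]:
    """Sketch of an IEEE 802.15.4 star topology: every leaf links to the
    coordinator. Leaves may be RFDs; the centre must be an FFD."""
    if not can_coordinate(coordinator):
        raise ValueError("star centre must be an FFD")
    return {leaf.addr: coordinator.addr for leaf in leaves}
```

A peer-to-peer or cluster topology would differ only in which links `build_star` emits; the FFD-at-the-centre constraint is the part the standard's device roles impose.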
Routing Layer. The Routing Over Low-power and Lossy Networks (ROLL) working
group of the IETF was formed with the goal of designing routing solutions for IoT
applications. The current approach to routing in 6LoWPAN environments is
materialized in the Routing Protocol for Low-power and Lossy Networks (RPL) [16].
The information in the Security field indicates the level of security and the cryptographic
algorithms employed to process security for the message. What this field doesn’t include
is the security-related data required to process security for the message, for example a
Message Integrity Code (MIC) or a signature. Instead, the security transformation itself
states how the cryptographic fields should be employed in the context of the protected
message.
Due to the very large number of technologies normally in place within the IoT
paradigm, a middleware layer is employed to enable seamless integration of devices
and data within the same information network. Within such middleware, data must be
exchanged while respecting strict protection constraints. IoT applications are
vulnerable to security attacks for several reasons: first, devices are physically
vulnerable and often left unattended; second, it is difficult to implement security
countermeasures due to the large scale and decentralised paradigm; finally, most IoT
components are devices with limited resources that cannot support complex security
schemes [19]. The major security challenge in IoT middleware is to protect data
against integrity, authenticity, and confidentiality attacks [20].
Both the networking and security issues have driven the design and the development
of the VIRTUS Middleware, an IoT middleware relying on the open XMPP protocol to
provide secure event-driven communications within an IoT scenario [19]. Leveraging
the standard security features provided by XMPP, the middleware offers a reliable and
secure communication channel for distributed applications, protected with both
authentication (through the SASL protocol) and encryption (TLS) mechanisms.
Security and privacy mechanisms are responsible for confidentiality, authenticity, and
non-repudiation. Security can be implemented in two ways: (i) secure high-level peer
communication, which enables higher layers to communicate among peers in a secure
and abstract way, and (ii) secure topology management, which deals with the
authentication of new peers, permissions to access the network, and protection of the
routing information exchanged in the network [21]. The major IoT security
requirements are data authentication, access control, and client privacy [8]. Several
recent works have tried to address these issues; for example, [22] deals with the
problem of task allocation in the IoT.
Applications dealing with large data sets obtained via simulation or from real-time
sensor and social networks are increasingly abundant [23]. The data obtained from
real-time sources may contain discrepancies that arise from the dynamic nature of the
source. Furthermore, certain computations may not require all the data, so the data
must be filtered before it can be processed. By installing adaptive filters that can be
controlled in real time, we can pass through only the relevant parts of the data, thereby
improving the overall computation speed.
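The adaptive-filter idea can be sketched as a predicate over the stream that is swappable at run time. This is a hypothetical illustration, not the design of [23]; all names are assumptions:

```python
from typing import Callable, Iterable, Iterator

class AdaptiveFilter:
    """Minimal sketch of an adaptive stream filter: a predicate that can be
    swapped while the stream runs, so only the relevant part of the data
    reaches the (expensive) downstream computation."""

    def __init__(self, predicate: Callable[[float], bool]):
        self.predicate = predicate

    def retune(self, predicate: Callable[[float], bool]) -> None:
        """Controlled in real time: change what counts as 'relevant'."""
        self.predicate = predicate

    def apply(self, stream: Iterable[float]) -> Iterator[float]:
        """Lazily yield only the items the current predicate accepts."""
        return (x for x in stream if self.predicate(x))

f = AdaptiveFilter(lambda x: x > 0)          # start by dropping non-positives
positives = list(f.apply([-1.0, 2.0, 3.0]))  # [2.0, 3.0]
f.retune(lambda x: x > 2.5)                  # tighten the filter on the fly
large = list(f.apply([-1.0, 2.0, 3.0]))      # [3.0]
```

Because `apply` is lazy, retuning takes effect on the very next item drawn from the stream, which is what "controlled in real time" requires.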
Nehme et al. [24] proposed a system, StreamShield, designed to address the problem
of security and privacy in data streams. They clearly highlight the need for two types
of security in a data stream: (1) “data security punctuations” (dsps), which describe
the data-side security policies, and (2) “query security punctuations” (qsps), which
describe the query-side ones. The advantages of such a stream-centric security model include
flexibility, dynamicity and speed of enforcement. A stream processor can adapt not
only to data-related but also to security-related selectivity, which helps reduce the
waste of resources when few subjects have access to streaming data.
There are several applications in which sensor nodes act as the source of the data
stream, such as real-time health monitoring (healthcare), industrial monitoring,
geo-social networking, home automation, war-front monitoring, smart city monitoring,
SCADA, event detection, disaster management and emergency management.
From all the above applications, we find that data needs to be protected from
malicious attacks to maintain its originality before it reaches a data processing centre
[25]. As the data sources are sensor nodes, it is always important to propose
lightweight security solutions for data streams [25].
These applications require real-time processing of very high-volume data streams
(also known as big data streams). The complexity of big data is defined through the
5Vs: volume, variety, velocity, variability, and veracity. These features present
significant opportunities and challenges for big data stream processing. A big data
stream is continuous in nature, and real-time analysis is essential because the lifetime
of the data is often very short (applications can access the data only once) [1, 2]. It is
therefore important to perform security verification of big data streams prior to data
evaluation. The following points should be considered during the security evaluation
of data streams:
• Security verification is important in a data stream to avoid processing malicious data.
• Security verification should be performed in near real time.
• Security verification should not degrade the performance of the stream processing
engine (SPE), i.e. the verification speed should keep pace with the SPE.
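The three requirements above can be illustrated with an on-the-fly verification loop. The cited schemes [1, 2] use dynamic key-management techniques; the sketch below substitutes a plain fixed-key HMAC purely for illustration, and all names are assumptions:

```python
import hashlib
import hmac
from typing import Iterable, Iterator, Tuple

def sign_block(key: bytes, block: bytes) -> bytes:
    """Source side: tag each stream block with a lightweight MAC."""
    return hmac.new(key, block, hashlib.sha256).digest()

def verified_blocks(key: bytes,
                    stream: Iterable[Tuple[bytes, bytes]]) -> Iterator[bytes]:
    """SPE side: verify each (block, tag) pair on the fly and drop blocks
    that fail, so security checking keeps pace with stream processing
    instead of buffering the whole stream first."""
    for block, tag in stream:
        if hmac.compare_digest(sign_block(key, block), tag):
            yield block   # hand the verified block to the SPE
        # else: discard; the data can be accessed only once, so no retry

key = b"shared-secret"
good = (b"t=21.5", sign_block(key, b"t=21.5"))
bad = (b"t=99.9", b"\x00" * 32)   # forged tag: fails verification
assert list(verified_blocks(key, [good, bad])) == [b"t=21.5"]
```

A single HMAC per block is cheap relative to typical SPE operators, which is the sense in which verification can "synchronize" with the engine; the dynamic-key schemes in [1, 2] further reduce and rotate this cost.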
5 Conclusion
A glimpse of the IoT may already be visible in current deployments, where networks
of smart sensing devices are interconnected over a wireless medium, and IP-based
standard technologies will be fundamental in providing a common and well-accepted
ground for the development and deployment of new IoT applications. In line with the
5V features of big data, current data streams are heading towards the new notion of
big data streams, whose sources are IoT smart sensing devices. Considering that
security may be an enabling factor for many IoT applications, mechanisms to secure
data streams while the data is in flow will be fundamental. With such aspects in mind,
this paper presented an exhaustive analysis of the security protocols and mechanisms
available to protect big data streams in IoT applications.
References
1. Puthal, D., Nepal, S., Ranjan, R., Chen, J.: A dynamic prime number based efficient security
mechanism for big sensing data streams. J. Comput. Syst. Sci. 83(1), 22–42 (2017)
2. Puthal, D., Nepal, S., Ranjan, R., Chen, J.: DLSeF: a dynamic key length based efficient real-
time security verification model for big data stream. ACM Trans. Embedded Comput. Syst.
16(2), 51 (2016)
3. Granjal, J., Monteiro, E., Sá Silva, J.: Security for the internet of things: a survey of existing
protocols and open research issues. IEEE Commun. Surv. Tutor. 17(3), 1294–1312 (2015)
4. Tien, J.: Big data: unleashing information. J. Syst. Sci. Syst. Eng. 22(2), 127–151 (2013)
5. Boldyreva, A., Fischlin, M., Palacio, A., Warinschi, B.: A closer look at PKI: security and
efficiency. In: Okamoto, T., Wang, X. (eds.) PKC 2007. LNCS, vol. 4450, pp. 458–475.
Springer, Heidelberg (2007). doi:10.1007/978-3-540-71677-8_30
6. Puthal, D., Nepal, S., Ranjan, R., Chen, J.: A dynamic key length based approach for real-
time security verification of big sensing data stream. In: Wang, J., Cellary, W., Wang, D.,
Wang, H., Chen, S.-C., Li, T., Zhang, Y. (eds.) WISE 2015. LNCS, vol. 9419, pp. 93–108.
Springer, Cham (2015). doi:10.1007/978-3-319-26187-4_7
7. Puthal, D., Nepal, S., Ranjan, R., Chen, J.: DPBSV- an efficient and secure scheme for big
sensing data stream. In: 14th IEEE International Conference on Trust, Security and Privacy
in Computing and Communications, pp. 246–253 (2015)
8. Weber, R.: Internet of things-new security and privacy challenges. Comput. Law Secur. Rev.
26(1), 23–30 (2010)
9. Kopetz, H.: Internet of things. In: Kopetz, H. (ed.) Real-Time Systems. Real-Time Systems
Series. Springer, Boston (2011). doi:10.1007/978-1-4419-8237-7_13
10. Al-Fuqaha, A., et al.: Internet of things: a survey on enabling technologies, protocols, and
applications. IEEE Commun. Surv. Tutor. 17(4), 2347–2376 (2015)
11. Li, S., Xu, L., Zhao, S.: The internet of things: a survey. Inf. Syst. Front. 17(2), 243–259
(2015)
12. Xu, L., He, W., Li, S.: Internet of things in industries: a survey. IEEE Trans. Industr. Inf.
10(4), 2233–2243 (2014)
13. Ilie-Zudor, E., et al.: A survey of applications and requirements of unique identification
systems and RFID techniques. Comput. Ind. 62(3), 227–252 (2011)
14. Wang, Y., Attebury, G., Ramamurthy, B.: A survey of security issues in wireless sensor
networks. IEEE Commun. Surv. Tutor. 8(2), 2–23 (2006)
15. IEEE Standard for Local and Metropolitan Area Networks—Part 15.4: Low-Rate Wireless
Personal Area Networks (LR-WPANs) Amendment 1: MAC Sublayer, IEEE Std.
802.15.4e-2012 (Amendment to IEEE Std. 802.15.4-2011), pp. 1–225 (2012)
16. Thubert, P.: Objective function zero for the routing protocol for low-power and lossy networks
(RPL). RFC 6552 (2012)
17. Bormann, C., Castellani, A., Shelby, Z.: CoAP: an application protocol for billions of tiny
internet nodes. IEEE Internet Comput. 16(2), 62 (2012)
18. Zheng, T., Ayadi, A., Jiang, X.: TCP over 6LoWPAN for industrial applications: an
experimental study. In: 4th IFIP International Conference on New Technologies, Mobility
and Security (NTMS), pp. 1–4 (2011)
19. Conzon, D., Bolognesi, T., Brizzi, P., Lotito, A., Tomasi, R., Spirito, M.: The VIRTUS
middleware: an XMPP-based architecture for secure IoT communications. In: 21st
International Conference on Computer Communications and Networks, pp. 1–6 (2012)
20. Sicari, S., Rizzardi, A., Grieco, L., Coen-Porisini, A.: Security, privacy and trust in internet
of things: the road ahead. Comput. Netw. 76, 146–164 (2015)
21. Bandyopadhyay, S., Sengupta, M., Maiti, S., Dutta, S.: A survey of middleware for internet
of things. In: Özcan, A., Zizka, J., Nagamalai, D. (eds.) CoNeCo/WiMo -2011. CCIS, vol.
162, pp. 288–296. Springer, Heidelberg (2011). doi:10.1007/978-3-642-21937-5_27
22. Colistra, G., Pilloni, V., Atzori, L.: The problem of task allocation in the internet of things
and the consensus-based approach. Comput. Netw. 73, 98–111 (2014)
23. Fox, G., et al.: High performance data streaming in service architecture. Technical report,
Indiana University and University of Illinois at Chicago (2004)
24. Nehme, R., Lim, H., Bertino, E., Rundensteiner, E.: StreamShield: a stream-centric approach
towards security and privacy in data stream environments. In: ACM SIGMOD International
Conference on Management of Data, pp. 1027–1030 (2009)
25. Chen, P., Wang, X., Wu, Y., Su, J., Zhou, H.: POSTER: iPKI: identity-based private key
infrastructure for securing BGP protocol. In: ACM CCS, pp. 1632–1634 (2015)
IoT Data Storage in the Cloud: A Case Study
in Human Biometeorology
1 Introduction
The emergence of WSNs (Wireless Sensor Networks) enabled the pervasive
monitoring of environments. However, the main weakness of WSNs is that
communications are restricted to the monitoring site (due to short-range radios and
energy constraints). The modifications WSNs need in order to be adopted on a large
scale by the IT (Information Technology) industry are to connect them to the Internet
and to extend their limited computation and storage capabilities. The IoT (Internet of
Things) has thus emerged to fill this gap and provide interconnected devices able to
interact with the environment [1].
The implementation and integration of IoT devices, data storage and application
development is very challenging. This paper presents an infrastructure for aggregating
and storing data collected from different IoT devices in the cloud. It makes use of
consolidated technologies, such as ZigBee, to interconnect monitoring devices, and
Azure, to implement a NoSQL Database (DB) in the cloud.
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
A. Longo et al. (Eds.): IISSC 2017/CN4IoT 2017, LNICST 189, pp. 253–262, 2018.
https://doi.org/10.1007/978-3-319-67636-4_26
254 B. Vanelli et al.
In order to evaluate the proposed infrastructure and the quality of the storage service,
the paper deals with an AAL (Ambient Assisted Living) scenario and, in particular,
addresses a Human Biometeorology application as a use case. The evaluation of
metrics related to sending, receiving and storing data demonstrates that the
experimental environment is reliable and appropriate for the case study in question.
This paper is organized as follows: Sect. 2 introduces the reference scenario
and the motivations at the base of this work. Section 3 discusses the state of the
art on the main topic related to our work. Section 4 presents the architecture we
propose and related technologies. We describe our evaluation results in Sect. 5.
Finally, Sect. 6 presents our conclusions and future work.
3 Related Work
The search for related work was conducted on two main topics: (1) the adoption of
ubiquitous computing to monitor environmental conditions and correlate
meteorological variables with human biometeorology, and (2) data storage solutions
for IoT data in the cloud.
On the first topic, we noticed that many works monitoring biomedical signals with
body sensor networks, or monitoring elderly activities in AAL environments, use
ZigBee technology. Indeed, ZigBee shows good performance in monitoring ambient
air quality in order to improve and support users’ health [8,9,11–13]. For these
reasons, we adopted ZigBee in our experimentation (as we will discuss later).
On the second topic, we noticed great interest from the scientific community in
designing new solutions for IoT and cloud integration. The work in [14] presents a
two-layer architecture based on a hybrid storage system able to support a Platform as
a Service (PaaS) federated Cloud scenario. Generalized architectures that use Cloud
computing and Big Data for effective storage and analysis of the generated data are
discussed in [10,15]. The work in [16] proposes a parallel storage algorithm for the
classification of data. The experiment shows that it classifies the original
heterogeneous data flow according to the data type to realize parallel processing,
which greatly improves storage and access efficiency. In [17], the two technologies,
cloud computing and IoT, are introduced and analyzed; then, an intelligent storage
management system is designed combining cloud computing and the IoT. The
designed system is divided into four layers: perception layer, network layer, service
layer, and application layer. The system’s function modules and database design are
also described. The system possesses strong applicability and expansion functions,
all of which can be extended to other intelligent management systems based on cloud
computing and the IoT. Our solution is mainly focused on a storage service for IoT
data exploiting consolidated technologies, such as ZigBee and Microsoft Azure.
Fig. 2. Reference architecture for IoT data storage service in the cloud
5 Experimental Results
The AAL (Ambient Assisted Living) setting comprises the ZigBee sensor network and
biomedical sensors. In order to send data to the cloud, a TP-Link Home Gateway with
a 50 Mbps/4 Mbps Internet link was used to guarantee access to the cloud. The ZigBee
network was configured to the 802.15.4 standard with a star topology consisting of 12
slave nodes and a coordinator node. The slave nodes comprise 6 DHT11 sensors
(humidity and temperature), 3 LDR sensors (light) and 3 PIR sensors (presence). The
ZigBee network is composed of 12 XBee Antenna 1 mW Series 1 modules. Each
module is connected to an XBee shield, which in turn is mounted on an Arduino Uno
board.
In the experiment, we used sensors for pulse and blood oxygen, body temperature,
blood pressure, airflow and an electrocardiogram (ECG) sensor. The collection of data
from the sensors was done through the open-source Arduino Software (IDE), version
1.0.6. The data are captured and transmitted via the serial communication of the user
terminal to the Home Gateway, which in turn sends the data to the web server for
storage in the cloud. The frequency of data transmission depends on the type of
sensing: pulse, blood oxygen and body temperature, every 5 min; blood pressure,
every 2 h; airflow and ECG, continuously for set periods of time.
After establishing the connection, the host (i.e., the IoT gateway) needs to
authenticate to the web server. The authentication process assigns the credentials of
the host and grants permission for the storage table(s). To store the data, each host
can invoke the storage functions. For each data type sensed there is a corresponding
storage function, and each host must pass the right parameters according to the type
of data sensed. After this process, the script automatically sends a data storage
request to the respective table in the database. Considering the organization of the
data schema defined in Sect. 4, searches by device type, device identifier, or time
become easier, and there is better use of the Azure Table mandatory keys, since these
fields carry the most frequently used information, serving as filters in the queries to be
performed. Authentication and storage services were deployed in Azure on a Standard
DS1 v2 virtual machine (1 core, 3.5 GB memory) running Linux. For this scenario,
several tables were implemented, the main ones being those for host authentication,
storage of environmental conditions and storage of biomedical signals.
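A hedged sketch of the key layout this paragraph implies, using a plain Python dictionary in place of an actual Azure Table entity; the field names and RowKey format are assumptions of this edit, not the paper's actual schema:

```python
from datetime import datetime, timezone

def make_entity(device_type: str, device_id: str, value: float,
                ts: datetime) -> dict:
    """Build a storage entity whose two mandatory Azure Table keys carry
    the most-queried fields: PartitionKey groups by device type, and
    RowKey combines device identifier and timestamp, so lookups by device
    type, device, or time stay within one partition."""
    return {
        "PartitionKey": device_type,                        # e.g. "dht11"
        "RowKey": f"{device_id}_{ts.strftime('%Y%m%d%H%M%S')}",
        "Value": value,
        "CapturedAt": ts.isoformat(),
    }

entity = make_entity("dht11", "node07", 23.4,
                     datetime(2017, 1, 13, 10, 0, 0, tzinfo=timezone.utc))
# A real client would then persist the entity with the Azure Tables SDK
# (e.g. TableClient.create_entity); that call is omitted here so the
# sketch stays self-contained and offline.
```

Sorting RowKeys lexicographically by timestamp is what makes time-range queries over one device a contiguous scan, which matches the paper's claim that these key choices make the most common queries easier.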
Consistent with the monitoring metrics of Fig. 4, Fig. 5 shows the
percentages of successful and failed requests to the storage service. In the
monitored period, the percentage of successful requests had a minimum of
98.91%, a maximum of 100%, and an average of 99.99%. The percentage of
requests that failed with a timeout error reached a maximum of 0.07%; this
figure includes both client-side and server-side timeouts. The percentage of
requests that failed with a ClientOtherError reached a maximum of 1.09%.
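The availability figures above can be reproduced from per-interval request counters. The sketch below is a minimal illustration; the field names mirror the error categories discussed in the text, not the actual Azure metrics schema.

```python
def request_percentages(intervals):
    """Return (min, max, avg) success percentage over monitoring intervals.

    Each interval is a dict counting total requests and requests failed with
    a timeout or a ClientOtherError, the two error categories reported in
    the monitored period.
    """
    rates = []
    for iv in intervals:
        failed = iv["timeout"] + iv["client_other_error"]
        rates.append(100.0 * (iv["total"] - failed) / iv["total"])
    return min(rates), max(rates), sum(rates) / len(rates)
```

Feeding the function one dict per monitoring interval yields the minimum, maximum, and average success percentages reported by the storage metrics.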
Figure 6 shows the latency (in milliseconds) of successful requests made to
the storage service. This value includes the processing time required by
Azure storage to read the request, send the response, and receive the
response acknowledgment.
IoT Data Storage in the Cloud: A Case Study in Human Biometeorology 261
6 Conclusions
This paper presented an infrastructure for IoT data gathering and a storage
service in a NoSQL cloud database. To evaluate the proposed solution, we
implemented the system considering an AAL reference scenario. The setting was
applied to human biometeorology, where a remotely assisted patient frequently
sends environmental, presence, and biomedical signals to the cloud. The data
available in the cloud can be consumed by third-party applications (health
caregivers, family members, equipment maintenance operators, or the users
themselves). However, in order to achieve the desired behavior for monitoring
applications, the quality of both transmission and data storage must be
verified. The storage service is based on Azure.
To evaluate this quality, several metrics were selected to show the number
and percentage of successful requests to the storage service, the possible
errors, and the response time of successfully completed storage operations.
The results show that the experimental environment is reliable and
appropriate for the considered case study.
As future work, we intend to scale up the AAL equipment, implement new
functions in the storage service, and apply machine learning to support
caregivers' analysis of the persisted data.