R18 IT - Internet of Things (IoT) Unit-IV

The document discusses the relationship between cloud computing and the Internet of Things (IoT). It describes how cloud computing provides scalable infrastructure and resources to support the massive amounts of data generated by IoT devices. Specifically, cloud computing offers elastic scaling, automatic provisioning and deprovisioning of resources, billing based on usage, and other advantages. The document also discusses different cloud service models like Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) and how they relate to supporting IoT applications and data. Converging IoT with cloud computing provides opportunities for cost-effective scaling of IoT systems but also poses challenges due to differences in the properties of IoT and cloud infrastructures.

Unit IV :

Services and attributes for IoT: Cloud Computing and IoT, Big-Data Analytics
and Visualization, Dependability, Security, Localization, Maintainability

UNIT-IV
Services and Attributes for IoT

CLOUD COMPUTING AND IOT:


Cloud computing and the IoT have a complementary relationship.
The IoT generates massive amounts of data, whereas cloud computing aids in
offering a pathway for that data to travel to its destination, thus helping to
increase efficiency in our work. There is no need for you to guess your
infrastructure capacity needs. Cloud computing increases speed and agility while
making resources available to developers. You can save money on operating data
centers and can deploy your applications worldwide in a matter of minutes.

Cloud Computing Basics


Cloud computing is the next evolutionary step in Internet-based computing,
which provides the means for delivering ICT resources as a service. The ICT
resources that can be delivered through cloud computing model include
computing power, computing infrastructure (e.g., servers and/or storage
resources), applications, business processes and more. Cloud computing
infrastructures and services have the following characteristics, which typically
differentiate them from similar (distributed computing) technologies:
 Elasticity and the ability to scale up and down: Cloud computing services
can scale upwards during high periods of demand and downward during
periods of lighter demand. This elastic nature of cloud computing facilitates
the implementation of flexibly scalable business models, e.g., through enabling
enterprises to use more or less resources as their business grows or shrinks.
 Self-service provisioning and automatic deprovisioning: Contrary to
conventional web-based Application Service Providers (ASP) models (e.g., web
hosting), cloud computing enables easy access to cloud services without a
lengthy provisioning process. In cloud computing, both provisioning and de-
provisioning of resources can take place automatically.
 Application programming interfaces (APIs): Cloud services are accessible via
APIs, which enable applications and data sources to communicate with each
other.
 Billing and metering of service usage in a pay-as-you-go model: Cloud
services are associated with a utility-based pay-as-you-go model. To this end,
they provide the means for metering resource usage and subsequently issuing
bills.
 Performance monitoring and measuring: Cloud computing infrastructures
provide a service management environment along with an integrated approach
for managing physical environments and IT systems.
 Security: Cloud computing infrastructures offer security functionalities
towards safeguarding critical data and fulfilling customers’ compliance
requirements.
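The elastic scaling described above can be sketched as a simple threshold-driven autoscaling rule. Everything here (the function name, target utilization, and bounds) is an illustrative assumption, not any provider's API:

```python
import math

def desired_instances(current, cpu_utilization, target=0.6, minimum=1, maximum=20):
    """Threshold-driven scaling decision: grow the pool when average CPU
    utilization is above the target, shrink it when demand falls off.
    All names and thresholds are illustrative, not any vendor's API."""
    # Proportional rule: size the pool so utilization lands near the target.
    # round() guards against floating-point noise before taking the ceiling.
    needed = math.ceil(round(current * cpu_utilization / target, 6))
    return max(minimum, min(maximum, needed))
```

For example, 4 instances running at 90% average CPU against a 60% target would scale up to 6 instances; the same 4 instances at 30% would scale down to 2.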
The two main business drivers behind the adoption of a cloud computing model
and associated services include:
 Business Agility: Cloud computing alleviates tedious IT procurement
processes, since it facilitates flexible, timely and on-demand access to
computing resources (i.e. compute cycles, storage) as needed to meet business
targets.
 Reduced Capital Expenses: Cloud computing holds the promise to lead to
reduced capital expenses (i.e. IT capital investments) (CAPEX), through
enabling conversion of CAPEX to operational expenses (i.e. paying per month,
per user for each service) (OPEX). This is due to the fact that cloud computing
enables flexible planning and elastic provisioning of resources instead of
upfront overprovisioning.
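The CAPEX-to-OPEX conversion can be made concrete with a toy cost comparison. All figures below are invented for illustration only:

```python
def upfront_capex(peak_servers, server_cost):
    """On-premise model: buy enough hardware for the anticipated peak, up front."""
    return peak_servers * server_cost

def pay_as_you_go_opex(monthly_usage_hours, hourly_rate):
    """Cloud model: pay only for the instance-hours actually consumed each month."""
    return sum(hours * hourly_rate for hours in monthly_usage_hours)

# Illustrative numbers only: 10 servers at $5,000 each, vs. variable usage at $0.10/hour.
capex = upfront_capex(10, 5_000)                           # $50,000 before launch
opex = pay_as_you_go_opex([500, 800, 1_200, 2_000], 0.10)  # $450 over four months
```

The elastic model lets spend grow with actual usage instead of the anticipated peak, which is exactly the "pay-as-you-grow" property discussed above.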
Depending on the types of resources that are accessed as a service, cloud
computing is associated with different service delivery models.
 Infrastructure as a Service (IaaS): IaaS deals with the delivery of storage and
computing resources towards supporting custom business solutions.
Enterprises opt for an IaaS cloud computing model in order to benefit from
lower prices, the ability to aggregate resources, accelerated deployment, as well
as increased and customized security. The most prominent example of an IaaS
service is Amazon’s Elastic Compute Cloud (EC2), which uses the Xen open-
source hypervisor to create and manage virtual machines.
 Platform as a Service (PaaS): PaaS provides development environments for
creating cloud-ready business applications. It provides a deeper set of
capabilities compared to IaaS, including development, middleware, and
deployment capabilities. PaaS services create and encourage a deep ecosystem of
partners who commit to this environment. Typical examples of PaaS services
are Google’s App Engine and Microsoft’s Azure cloud environment, which both
provide a workflow engine, development tools, a testing environment, database
integration functionalities, as well as third-party tools and services.
 Software as a Service (SaaS): SaaS services enable access to purpose-built
business applications in the cloud. Such services provide the pay-as-you-go,
reduced CAPEX and elastic properties of cloud computing infrastructures.
Cloud services can be offered through infrastructures (clouds) that are publicly
accessible (i.e. public cloud services), but also by privately owned infrastructures
(i.e. private cloud services). Furthermore, it is possible to offer services supported
by both public and private clouds, which are characterized as hybrid cloud
services.

IoT / Cloud Convergence


Internet-of-Things can benefit from the scalability, performance and pay-as-you-
go nature of cloud computing infrastructures. Indeed, as IoT applications produce
large volumes of data and comprise multiple computational components (e.g., data
processing and analytics algorithms), their integration with cloud computing
infrastructures could provide them with opportunities for cost-effective on-demand
scaling. As prominent examples consider the following settings:
 A Small Medium Enterprise (SME) developing an energy management IoT
product, targeting smart homes and smart buildings. By streaming the
data of the product (e.g., sensors and WSN data) into the cloud it can
accommodate its growth needs in a scalable and cost-effective fashion. As
the SME acquires more customers and performs more deployments of its
product, it is able to collect and manage growing volumes of data in a
scalable way, thus taking advantage of a “pay-as-you-grow” model.
Moreover, cloud integration allows the SME to store and process massive
datasets collected from multiple (rather than a single) deployments.
 A smart city can benefit from the cloud-based deployment of its IoT
systems and applications. A city is likely to deploy many IoT applications,
such as applications for smart energy management, smart water
management, smart transport management, urban mobility of the citizens
and more. These applications comprise multiple sensors and devices,
along with computational components. Furthermore, they are likely to
produce very large data volumes. Cloud integration enables the city to
host these data and applications in a cost-effective way. Furthermore, the
elasticity of the cloud can directly support expansions to these
applications, but also the rapid deployment of new ones without major
concerns about the provisioning of the required cloud computing
resources.
 A cloud computing provider offering public cloud services can extend them
to the IoT area, through enabling third-parties to access its infrastructure
in order to integrate IoT data and/or computational components operating
over IoT devices. The provider can offer IoT data access and services in a
pay-as-you-go fashion, through enabling third-parties to access resources of
its infrastructure and accordingly to charge them in a utility-based
fashion.
These motivating examples illustrate the merit and need for converging IoT and
cloud computing infrastructure. Despite these merits, this convergence has
always been challenging mainly due to the conflicting properties of IoT and cloud
infrastructures. In particular, IoT devices tend to be location specific, resource
constrained, expensive (in terms of development/ deployment cost) and generally
inflexible (in terms of resource access and availability). On the other hand, cloud
computing resources are typically location independent and inexpensive, while at
the same time providing rapid and flexible elasticity. In order to alleviate these
incompatibilities, sensors and devices are virtualized prior to integrating their data
and services in the cloud, in order to enable their distribution across any cloud
resources. Furthermore, service and sensor discovery functionalities are
implemented in the cloud in order to enable the discovery of services and sensors
that reside in different locations.
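The virtualization and discovery principles above can be sketched as a minimal in-memory model. The class and method names here are hypothetical, chosen only to illustrate how a virtual sensor decouples cloud consumers from the physical device:

```python
class VirtualSensor:
    """Location-tagged proxy for a physical sensor, decoupling cloud
    consumers from the device itself (a simplified illustration)."""
    def __init__(self, sensor_id, kind, location, read_fn):
        self.sensor_id, self.kind, self.location = sensor_id, kind, location
        self._read_fn = read_fn  # callable standing in for real device I/O

    def read(self):
        return self._read_fn()

class SensorRegistry:
    """Cloud-side discovery service: find registered sensors by kind and location."""
    def __init__(self):
        self._sensors = []

    def register(self, sensor):
        self._sensors.append(sensor)

    def discover(self, kind=None, location=None):
        return [s for s in self._sensors
                if (kind is None or s.kind == kind)
                and (location is None or s.location == location)]
```

A consumer can then discover, say, all temperature sensors in "building-A" without knowing where the physical hardware resides, which is the point of virtualizing devices before cloud integration.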
Based on these principles, IoT/cloud convergence efforts started over a decade
ago, i.e., in the very early days of IoT and cloud computing. Early
efforts in the research community (i.e. during 2005-2009) have focused on
streaming sensor and WSN data in a cloud infrastructure. Since 2007 we have
also witnessed the emergence of public IoT clouds, including commercial efforts.
One of the earliest efforts has been the famous Pachube.com infrastructure (used
extensively for radiation detection and production of radiation maps during
earthquakes in Japan). Pachube.com has evolved (following several evolutions and
acquisitions of this infrastructure) to Xively.com, which is nowadays one of the
most prominent public IoT clouds. Nevertheless, there are tens of other public IoT
clouds as well, such as ThingWorx, ThingSpeak, Sensor-Cloud, Realtime.io and
more. The list is certainly non-exhaustive. These public IoT clouds offer
commercial pay-as-you-go access to end-users wishing to deploy IoT
applications on the cloud. Most of them come with developer friendly tools, which
enable the development of cloud applications, thus acting like a PaaS for IoT in the
cloud.
Similarly to cloud computing infrastructures, IoT/cloud infrastructures and
related services can be classified into the following models:
 Infrastructure-as-a-Service (IaaS) IoT/Clouds: These services provide
the means for accessing sensors and actuators in the cloud. The associated
business model involves the IoT/cloud provider acting either as a data or
sensor provider. IaaS services for IoT provide access control to resources
as a prerequisite for the offering of related pay-as-you-go services.
 Platform-as-a-Service (PaaS) IoT/Clouds: This is the most widespread
model for IoT/cloud services, given that it is the model provided by all
public IoT/cloud infrastructures outlined above. As already illustrated,
most public IoT clouds come with a range of tools and related
environments for applications development and deployment in a cloud
environment. A main characteristic of PaaS IoT services is that they
provide access to data, not to hardware. This is a clear differentiator
compared to IaaS.
 Software-as-a-Service (SaaS) IoT/Clouds: SaaS IoT services are the ones
enabling their users to access complete IoT-based software applications
through the cloud, on-demand and in a pay-as-you-go fashion. When the
sensors and IoT devices are not visible, SaaS IoT applications closely resemble
conventional cloud-based SaaS applications. There are
however cases where the IoT dimension is strong and evident, such as
applications involving selection of sensors and combination of data from
the selected sensors in an integrated application. Several of these
applications are commonly called Sensing-as-a-Service, given that they
provide on-demand access to the services of multiple sensors. Note that
SaaS IoT applications are typically built over a PaaS infrastructure and
enable utility based business models involving IoT software and services.
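The PaaS differentiator described above, exposing data rather than hardware, can be illustrated with a toy in-memory platform. The class name and message shape are assumptions for illustration, not any vendor's API:

```python
import time

class IoTDataPlatform:
    """Toy PaaS-style store: devices push readings in, applications query
    data out; the hardware itself is never exposed (illustrative only)."""
    def __init__(self):
        self._readings = []

    def ingest(self, device_id, value, timestamp=None):
        """Device-facing side: accept a reading from any registered device."""
        self._readings.append({"device": device_id, "value": value,
                               "ts": timestamp or time.time()})

    def query(self, device_id):
        # Application-facing side sees data, not devices: the PaaS contract.
        return [r["value"] for r in self._readings if r["device"] == device_id]
```

An IaaS IoT offering would instead grant controlled access to the sensors and actuators themselves; here applications can only reach the ingested readings.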
These definitions and examples provide an overview of IoT and cloud convergence
and why it is important and useful. More and more IoT applications are nowadays
integrated with the cloud in order to benefit from its performance, business agility
and pay-as-you-go characteristics. In following chapters of the tutorial, we will
present how to maximize the benefits of the cloud for IoT, through ensuring
semantic interoperability of IoT data and services in the cloud, thus enabling
advanced data analytics applications, but also integration of a wide range of
vertical (silo) IoT applications that are nowadays available in areas such as smart
energy, smart transport and smart cities. We will also illustrate the benefits of
IoT/cloud integration for specific areas and segments of IoT, such as IoT-based
wearable computing.
IoT cloud platforms bring together capabilities of IoT devices and cloud computing
delivered as an end-to-end service. They are also referred to by other terms such
as Cloud Service IoT Platform. In this age, where billions of devices are connected
to the Internet, we see increasing potential of tapping big data acquired from these
devices and processing them efficiently through various applications.

IoT devices are devices with multiple sensors connected to the cloud, typically via
gateways. There are several IoT Cloud Platforms in the market today provided by
different service providers that host wide ranging applications. These can also be
extended to services that use advanced machine learning algorithms for predictive
analysis, especially in disaster prevention and recovery planning using data from
the edge devices.
What are the key features of an IoT cloud platform?
 An IoT cloud platform may be built on top of generic clouds such as those
from Microsoft, Amazon, Google or IBM. Network operators such as AT&T,
Vodafone and Verizon may offer their own IoT platforms with stronger focus
on network connectivity. Platforms could be vertically integrated for specific
industries such as oil and gas, logistics and transportation, etc. Device
manufacturers such as Samsung (ARTIK Cloud) are also offering their
own IoT cloud platforms.

In most cases, typical features include connectivity and network
management, device management, data acquisition, processing, analysis and
visualization, application enablement, integration and storage.

Cloud for IoT can be employed in three ways: Infrastructure-as-a-Service
(IaaS), Platform-as-a-Service (PaaS) or Software-as-a-Service (SaaS).
Examples of PaaS include GE's Predix, Honeywell's Sentience, Siemens's
MindSphere, Cumulocity, Bosch IoT, and Carriots. Developers can deploy,
configure and control their apps on PaaS. Predix is built on top of Microsoft
Azure (PaaS). Likewise, MindSphere is built on top of SAP Cloud (PaaS).
Siemens's Industrial Machinery Catalyst on the Cloud is an example
of SaaS, which is a ready-to-use app with minimal maintenance.
 Where does cloud fit in with the overall architecture of IoT?
Google Trends comparison of interest between IoT and Cloud
Computing during the past 5 years. Source: Google Trends 2018.
In general, there are two kinds of IoT software architectures:

o Cloud-centric: Data from IoT devices such as sensors are streamed to
a data centre where all the applications that do the analytics and
decision making are executed, using real-time and past data from one
or more sources. Servers in the cloud control the edge devices too.
o Device-centric: All the data is processed in the device (sensor nodes,
mobile devices, edge gateways), with only some minimal interactions
with the cloud for firmware updates or provisioning. Terms such as Edge
Computing and Fog Computing are used in this case.
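The device-centric pattern just described can be sketched as an edge-side summarization step: raw samples are reduced on the device and only compact summaries (or alerts) are sent to the cloud. The threshold and summary fields are illustrative assumptions:

```python
def edge_summarize(samples, report_threshold=30.0):
    """Device-centric processing: reduce a raw sample window to a compact
    summary, and flag it for cloud upload only when a threshold is crossed.
    The threshold value and message shape are illustrative assumptions."""
    summary = {
        "count": len(samples),
        "mean": sum(samples) / len(samples),
        "max": max(samples),
    }
    # Only exceed-threshold windows need to reach the cloud at all.
    summary["upload"] = summary["max"] >= report_threshold
    return summary
```

A window of hundreds of raw readings thus shrinks to a handful of fields, which is what makes minimal cloud interaction feasible on constrained links.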

Today, for IoT Cloud Platforms, the goal is to stretch the analytics and data
processing across Cloud and Device, leveraging the resources at each end
seamlessly. In general, we are beginning to see a shift towards leveraging the
compute and service capabilities of the cloud to manage IoT devices better.
This is also quite evident from a snapshot of the Google Trends showing
increasing interest in Cloud compute compared to just IoT.
 How is an IoT cloud platform different from traditional cloud infrastructure?
The traditional cloud infrastructure focuses on a model of cloud computing
where a shared pool of hardware and software resources are made available
for on-demand access in such a way that they can be easily and rapidly
provisioned and released with minimal effort. IoT Cloud Platform extends
this capability to resources that are more user-centric, which increases the
count and scale of data and devices. The cloud platform services can not
only process big data from a wider set of IoT devices, but also provide a
smart way to provision and manage each of them in an efficient manner.
This also includes fine-grained control, configuration and management
of IoT devices.

One of the IoT Cloud platform differentiators is the ability of the engine to
massively scale to handle real-time event processing of large volumes of data
generated by various devices and applications. The providers of IoT Cloud
Platforms typically work with multiple parties such as hardware vendors
(both for cloud services and IoT devices), telecommunication providers,
software service providers and system integrators to build the platform.
 What exactly is meant by Application Enablement Platform (AEP)?
The world of IoT is one of variety: many hardware platforms, many
communication technologies, many data formats, many verticals, and so
on. AEP is a platform that caters to this variety by providing basic
capabilities from which developers can build complete end-to-
end IoT solutions. For example, AEP might offer location-tracking feature
rather than a more restrictive fleet tracking feature. The former is more
generic and therefore can be used in a number of use cases.
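The generic location-tracking capability mentioned above can be sketched as a small, hypothetical building block; a fleet-tracking product would layer vehicle-specific semantics on top of it rather than reimplementing position storage:

```python
class LocationTracker:
    """Generic AEP-style primitive: record positions for any entity.
    Class and method names are hypothetical, for illustration only."""
    def __init__(self):
        self._history = {}

    def record(self, entity_id, lat, lon, ts):
        """Append a timestamped (lat, lon) fix for any kind of entity."""
        self._history.setdefault(entity_id, []).append((ts, lat, lon))

    def last_position(self, entity_id):
        # Tuples sort by timestamp first, so max() yields the latest fix.
        ts, lat, lon = max(self._history[entity_id])
        return lat, lon
```

Because the primitive knows nothing about vehicles, the same code serves asset tracking, wearables, or logistics, which is exactly the reuse argument for an AEP.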

AEP gives faster time to market without sacrificing on customization and
product differentiation. The disadvantage is that users of AEP must have the
skill sets to develop the solution. The solution might also suffer from vendor
lock-in.

With AEP, app developers need not worry about scaling up. AEP will take care
of communication, data storage, management, application building and
enablement, user interface security, and analytics. When selecting an AEP,
developers should consider developer usability including good
documentation and modular architecture, flexible and scalable deployment,
good operational capability, and a mature partnership strategy and
ecosystem.

Examples of AEP include ThingWorx Foundation Core, Aeris' AerCloud, and
deviceWISE IoT Platform.

 Could you list some IoT cloud platforms out there in the industry today?
A selection of cloud platforms for Industrial IoT. Source: Newark 2016.
With the advent of IoT, billions of devices that compute, store and run
applications are getting connected to the internet, and there is also a need
to handle the large amounts of data coming into the system via various
interfaces such as sensors and user inputs.

Here are some IoT cloud platforms:

o Amazon Web Services IoT
o IBM Watson IoT Platform
o Microsoft Azure IoT Hub
o Google Cloud IoT
o Oracle Integrated Cloud for IoT
o SAP Cloud Platform for the Internet of Things
o Cisco Jasper Control Center
o PTC ThingWorx Industrial IoT Platform
o Salesforce IoT
o Xively
o Carriots
 What factors should I consider when comparing different IoT cloud
platforms?
Comparison across different platforms depends on both business and
technical factors: Scalability, Reliability, Customization, Operations,
Protocols, Hardware agnostic, Cloud agnostic, Support, Architecture and
Technology Stack, Security and Cost. For example, a comparison
of AWS IoT (serverless) and open-source IoT deployed on AWS showed that
the former reduces time to market but is expensive at scale.

The end-to-end requirements and cost-benefit analysis between commercial
and open-source solutions need to be considered while choosing the right
platform. One way to compare platforms is to look at their best fit across
areas such as management of devices, systems, heterogeneity, data, deployment
and monitoring, and the fields of analytics, research and visualization.

Each of these sectors has its own performance criteria such as real time
data capture capability, data visualization, cloud model type, data analytics,
device configuration, API protocols, and usage cost. Data analytics
performance and outcome also depends on factors such as device ingress
and device egress, intermediate connectivity network latencies and speeds
and support for optimized protocol translations. Visualization of data,
filtering of large masses of data and configurability of the millions of devices
using smart application tools are further differentiating factors.
 What are some challenges of adopting IoT cloud platforms?
Security and privacy are the main concerns delaying the adoption
of IoT Cloud Platforms. Cloud providers typically will not own the data and
are only authorized to do the analytics and control of systems as permitted
by the owner of the data. Any breach of data access either during transit or
from storage is a concern from privacy and security perspective. Also, since
the value of IoT data is immense, proper legal agreements and mechanisms
must be in place to ensure the data or outcome of data analysis is only used
for the intended purpose by the authorised personnel.

Existing IoT cloud platforms may not always conform to standards, thereby
causing interoperability issues. They may also not support heterogeneous
modules or communication technologies. When there's too much data,
context awareness can help, including decisions of what needs to be done at
the edge. Vertical silos continue to exist and this prevents horizontal flow of
information. Middleware can solve this problem. Many systems continue to
use IPv4 and this could be a problem as devices run out of
unique IP addresses.
BIG DATA ANALYTICS AND DATA VISUALIZATION:
Big Data visualization calls to mind the old saying: “a picture is worth a thousand
words.” That’s because an image can often convey “what’s going on” more quickly,
more efficiently, and often more effectively than words. Big Data visualization
techniques exploit this fact: they are all about turning data into pictures by
presenting data in pictorial or graphical format. This makes it easy for decision-
makers to take in vast amounts of data at a glance and to “see” what is going on –
what it is that the data has to say.

What Is Big Data Visualization?


Big Data visualization involves the presentation of data of almost any type in a
graphical format that makes it easy to understand and interpret. But it goes far
beyond typical corporate graphs, histograms and pie charts to more complex
representations like heat maps and fever charts, enabling decision makers to
explore data sets to identify correlations or unexpected patterns.
A defining feature of Big Data visualization is scale. Today’s enterprises collect and
store vast amounts of data that would take years for a human to read, let alone
understand. But researchers have determined that the human retina can transmit
data to the brain at a rate of about 10 megabits per second. Big Data visualization
relies on powerful computer systems to ingest raw corporate data and process it to
generate graphical representations that allow humans to take in and understand
vast amounts of data in seconds.
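The processing step behind such graphical representations can be illustrated with a plain 2-D binning routine, the numeric backbone of a heat map; the actual rendering is left to whatever charting library is in use:

```python
def heatmap_bins(points, x_bins, y_bins, x_range, y_range):
    """Aggregate raw (x, y) observations into a 2-D grid of counts --
    the data-reduction step a heat map is drawn from. Millions of raw
    points collapse into a grid small enough for a human to take in."""
    (x_min, x_max), (y_min, y_max) = x_range, y_range
    grid = [[0] * x_bins for _ in range(y_bins)]
    for x, y in points:
        # Map each coordinate to a bin index, clamping the upper edge.
        xi = min(int((x - x_min) / (x_max - x_min) * x_bins), x_bins - 1)
        yi = min(int((y - y_min) / (y_max - y_min) * y_bins), y_bins - 1)
        grid[yi][xi] += 1
    return grid
```

This is the kind of aggregation that lets a visualization system turn a dataset far too large to read into a picture that can be absorbed in seconds.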

Importance Of Big Data Visualization


The amount of data created by corporations around the world is growing every
year, and thanks to innovations such as the Internet of Things this growth shows
no sign of abating. The problem for businesses is that this data is only useful if
valuable insights can be extracted from it and acted upon.
To do that decision makers need to be able to access, evaluate, comprehend and
act on data in near real-time, and Big Data visualization promises a way to be able
to do just that. Big Data visualization is not the only way for decision makers to
analyze data, but Big Data visualization techniques offer a fast and effective way
to:

 Review large amounts of data – data presented in graphical form
enables decision makers to take in large amounts of data and gain an
understanding of what it means very quickly – far more quickly than
poring over spreadsheets or analyzing numerical tables.
 Spot trends – time-sequence data often captures trends, but spotting
trends hidden in data is notoriously hard to do – especially when the
sources are diverse and the quantity of data is large. But the use of
appropriate Big Data visualization techniques can make it easy to spot
these trends, and in business terms a trend that is spotted early is an
opportunity that can be acted upon.
 Identify correlations and unexpected relationships – One of the huge
strengths of Big Data visualization is that it enables users to explore data
sets – not to find answers to specific questions, but to discover what
unexpected insights the data can reveal. This can be done by adding or
removing data sets, changing scales, removing outliers, and changing
visualization types. Identifying previously unsuspected patterns and
relationships in data can provide businesses with a huge competitive
advantage.
 Present the data to others – An oft-overlooked feature of Big Data
visualization is that it provides a highly effective way to communicate any
insights that it surfaces to others. That’s because it can convey meaning
very quickly and in a way that is easy to understand: precisely what is
needed in both internal and external business presentations.
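The trend-spotting point above rests on smoothing time-sequence data so an underlying trend stands out from noise. A minimal sketch is the moving average; the window size is an illustrative choice:

```python
def moving_average(series, window=3):
    """Smooth a time series with a sliding-window mean so an underlying
    trend stands out from short-term noise (simplest possible smoother)."""
    if window > len(series):
        raise ValueError("window longer than series")
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]
```

Plotting the smoothed series next to the raw one is often enough for a viewer to spot a rising or falling trend that the raw, noisy points obscure.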

The Challenges Of Big Data Visualization


Big Data visualization can be an extremely powerful business capability,
but before an organization can take advantage of it some key issues need to be
addressed. These include:

 Availability of visualization specialists: Many Big Data visualization
tools are designed to be easy enough for anyone in an organization to use,
often suggesting appropriate Big Data visualization examples for the data
sets under analysis. But to get the most out of some tools it may be
necessary to employ a specialist in big data visualization techniques who
can select the best data sets and visualization styles to ensure the data is
exploited to the maximum.
 Visualization hardware resources: Under the hood, Big Data
visualization is essentially a computing task, and the ability to carry out
this task quickly – to enable organizations to make decisions in a timely
manner using real-time data – may require powerful computer hardware,
fast storage systems, or even a move to the cloud. That means Big Data
visualization initiatives are as much an IT project as a management
project.
 Data quality: The insights that can be drawn from Big Data visualization
are only as accurate as the data that is being visualized: if it is inaccurate
or out of date then the value of any insights is questionable. That means
people and processes need to be put in place to manage corporate data,
metadata, data sources, and any transformations or data cleaning that
are performed before storage.

Big Data Visualization Tools


A quick survey of the Big Data tools marketplace reveals the presence of big
names including Microsoft, SAP, IBM and SAS. But there are plenty of specialist
software vendors offering leading big data visualization tools, and these include
Tableau Software, Qlik and TIBCO Software. Leading data visualization products
include those offered by:
Zoho Analytics: Focusing on ease of use – a particularly key attribute as data tools
grow – Zoho Analytics is a self-service option, meaning that users will not need the
assistance of IT staff or professional data scientists to glean insight from data.
IBM Cognos Analytics: Driven by their commitment to Big Data, IBM’s analytics
package offers a variety of self service options to more easily identify insight.
QlikSense and QlikView: The Qlik solution touts its ability to perform the more
complex analysis that finds hidden insights.
Microsoft PowerBI: The Power BI tool enables you to connect with hundreds of
data sources, then publish reports on the Web and across mobile devices.
Oracle Visual Analyzer: A web-based tool, Visual Analyzer allows creation of
curated dashboards to help discover correlations and patterns in data.
SAP Lumira: Calling it “self service data visualization for everyone,” Lumira allows
you to combine your visualizations into storyboards.
SAS Visual Analytics: The SAS solution promotes its “scalability and governance,”
along with dynamic visuals and flexible deployment options.
Tableau Desktop: Tableau’s interactive dashboards allow you to “uncover hidden
insights on the fly,” and power users can manage metadata to make the most of
disparate data sources.
TIBCO Spotfire: Offers analytics software as a service, and touts itself as a
solution that “scales from a small team to the entire organization.”
DEPENDABILITY:
In systems engineering, dependability is a measure of a system's availability,
reliability, maintainability and maintenance support performance, and, in some
cases, other characteristics such
as durability, safety and security. In software engineering, dependability is the
ability to provide services that can be trusted within a time-period. The service
guarantees must hold even when the system is subject to attacks or natural
failures.
The International Electrotechnical Commission (IEC), via its Technical
Committee TC 56 develops and maintains international standards that provide
systematic methods and tools for dependability assessment and management of
equipment, services, and systems throughout their life cycles. The IFIP Working
Group 10.4 on "Dependable Computing and Fault Tolerance" plays a role in
synthesizing the technical community's progress in the field and organizes two
workshops each year to disseminate the results.
Dependability can be broken down into three elements:
 Attributes - a way to assess the dependability of a system
 Threats - an understanding of the things that can affect the
dependability of a system
 Means - ways to increase a system's dependability
Elements of dependability:
Attributes
Taxonomy showing the relationship between Dependability & Security and
Attributes, Threats and Means (after Laprie et al.)
Attributes are qualities of a system. These can be assessed to determine its overall
dependability using Qualitative or Quantitative measures. Avizienis et al. define
the following Dependability Attributes:
 Availability - readiness for correct service
 Reliability - continuity of correct service
 Safety - absence of catastrophic consequences on the user(s) and the
environment
 Integrity - absence of improper system alteration
 Maintainability - ability for easy maintenance (repair)
As these definitions suggest, only Availability and Reliability are
quantifiable by direct measurement, whilst the others are more subjective.
For instance, Safety cannot be measured directly via metrics; it is a
subjective assessment that requires judgmental information to give a level of
confidence, whilst Reliability can be measured as failures over time.
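The quantifiable attributes can be made concrete with the standard availability formula, Availability = MTBF / (MTBF + MTTR). The Python sketch below uses hypothetical figures purely to illustrate the arithmetic:

```python
# Availability and failure rate from hypothetical field data.
# Availability = MTBF / (MTBF + MTTR): the fraction of time the
# system is ready for correct service.

mtbf_hours = 980.0   # mean time between failures (assumed value)
mttr_hours = 2.5     # mean time to repair (assumed value)

availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"Availability: {availability:.4%}")

# Reliability measured as failures over time (here, per operating hour).
failures = 12
operating_hours = 10_000
print(f"Failure rate: {failures / operating_hours} per hour")
```

With these assumed values the system is available about 99.75% of the time; real assessments would draw MTBF and MTTR from recorded maintenance data.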
Confidentiality, i.e. the absence of unauthorized disclosure of information,
is also used when addressing security. Security is a composite
of Confidentiality, Integrity, and Availability. Security is sometimes classed as an
attribute [7] but the current view is to aggregate it together with dependability and
treat Dependability as a composite term called Dependability and Security.
Practically, applying security measures to the appliances of a system generally
improves the dependability by limiting the number of externally originated errors.
Threats
Threats are things that can affect a system and cause a drop in Dependability.
There are three main terms that must be clearly understood:

 Fault: A fault (usually referred to as a bug for historic reasons) is a
defect in a system. The presence of a fault in a system may or may not
lead to a failure. For instance, although a system may contain a fault,
its input and state conditions may never cause this fault to be executed
so that an error occurs; thus that particular fault never exhibits as a
failure.
 Error: An error is a discrepancy between the intended behaviour of a
system and its actual behaviour inside the system boundary. Errors
occur at runtime when some part of the system enters an unexpected
state due to the activation of a fault. Since errors are generated from
invalid states they are hard to observe without special mechanisms, such
as debuggers or debug output to logs.
 Failure: A failure is an instance in time when a system displays
behaviour that is contrary to its specification. An error may not
necessarily cause a failure, for instance an exception may be thrown by a
system but this may be caught and handled using fault tolerance
techniques so the overall operation of the system will conform to the
specification.
It is important to note that Failures are recorded at the system boundary. They are
basically Errors that have propagated to the system boundary and have become
observable. Faults, Errors and Failures operate according to a mechanism. This
mechanism is sometimes known as a Fault-Error-Failure chain. As a general rule
a fault, when activated, can lead to an error (which is an invalid state) and the
invalid state generated by an error may lead to another error or a failure (which is
an observable deviation from the specified behaviour at the system boundary).
Once a fault is activated an error is created. An error may act in the same way as
a fault in that it can create further error conditions, therefore an error may
propagate multiple times within a system boundary without causing an observable
failure. If an error propagates outside the system boundary a failure is said to
occur. A failure is basically the point at which it can be said that a service is
failing to meet its specification. Since the output data from one service may be fed
into another, a failure in one service may propagate into another service as a fault
so a chain can be formed of the form: Fault leading to Error leading to Failure
leading to Error, etc.
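The Fault-Error-Failure chain can be sketched in Python. The divisor table and fallback value below are invented purely to show how a dormant fault becomes an error when activated, and how handling the error inside the system boundary prevents an observable failure:

```python
# A dormant fault (a wrong table entry) is activated only by particular
# input, producing an error (invalid internal state). The error becomes a
# failure only if it propagates unhandled past the service boundary.

DIVISORS = {"a": 2, "b": 0}   # "b": 0 is the fault (a latent defect)

def scale(key: str, value: float) -> float:
    return value / DIVISORS[key]    # activating the fault raises an error

def service(key: str, value: float) -> float:
    """Service boundary: fault tolerance stops the error becoming a failure."""
    try:
        return scale(key, value)
    except ZeroDivisionError:       # error detected inside the boundary
        return 0.0                  # degraded but specified fallback value

print(service("a", 10.0))  # fault never activated -> 5.0
print(service("b", 10.0))  # fault activated, error handled -> 0.0, no failure
```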
Means
Since the mechanism of the Fault-Error-Failure chain is understood, it is
possible to construct means to break these chains and thereby increase the
dependability of a system. Four means have been identified so far:
1. Prevention
2. Removal
3. Forecasting
4. Tolerance
Fault Prevention deals with preventing faults being introduced into a system. This
can be accomplished by use of development methodologies and good
implementation techniques.
Fault Removal can be sub-divided into two sub-categories: Removal During
Development and Removal During Use.
Removal during development requires verification so that faults can be detected
and removed before a system is put into production. Once systems have been put
into production a system is needed to record failures and remove them via a
maintenance cycle.
Fault Forecasting predicts likely faults so that they can be removed or their effects
can be circumvented.
Fault Tolerance deals with putting mechanisms in place that will allow a system to
still deliver the required service in the presence of faults, although that service
may be at a degraded level.
Dependability means are intended to reduce the number of failures made visible to
the end users of a system.
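As one concrete illustration of fault tolerance, redundancy with majority voting lets a system mask a single faulty replica so the failure is not visible to end users. The sensor readings below are made up for the example:

```python
# Triple-modular redundancy sketch: three replicated readings are compared
# and the majority value masks a single faulty replica.

from collections import Counter

def majority_vote(readings):
    """Return the most common reading; masks a single disagreeing replica."""
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise RuntimeError("no majority - fault cannot be masked")
    return value

print(majority_vote([21, 21, 87]))  # the faulty 87 is outvoted -> 21
```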
Persistence
Based on how faults appear or persist, they are classified as:

 Transient: They appear without apparent cause and disappear again
without apparent cause
 Intermittent: They appear multiple times, possibly without a discernible
pattern, and disappear on their own
 Permanent: Once they appear, they do not get resolved on their own
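These three persistence classes can be sketched as a simple classifier over a log of periodic health-check observations. The classification rules below are a rough illustration, not a standard algorithm:

```python
def classify_persistence(observations):
    """Classify a fault from a per-interval log (True = fault observed)."""
    if not any(observations):
        return "none"
    first = observations.index(True)
    if all(observations[first:]):
        return "permanent"      # once present, it never clears on its own
    if observations.count(True) > 1:
        return "intermittent"   # recurs, but also clears on its own
    return "transient"          # appeared once, disappeared without cause

print(classify_persistence([False, True, False, False]))        # transient
print(classify_persistence([False, True, False, True, False]))  # intermittent
print(classify_persistence([False, False, True, True, True]))   # permanent
```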

Security:
Security is freedom from, or resilience against, potential harm (or other
unwanted coercive change) caused by others. Beneficiaries (technically,
referents) of security may be persons and social groups, objects and
institutions, ecosystems, or any other entity or phenomenon vulnerable to
unwanted change.

Security mostly refers to protection from hostile forces, but it has a wide range of
other senses: for example, as the absence of harm (e.g. freedom from want); as the
presence of an essential good (e.g. food security); as resilience against potential
damage or harm (e.g. secure foundations); as secrecy (e.g. a secure telephone line);
as containment (e.g. a secure room or cell); and as a state of mind (e.g. emotional
security).
The term is also used to refer to acts and systems whose purpose may be to
provide security (e.g.: security companies, security forces, security guard, cyber
security systems, security cameras, remote guarding).
Security is not only physical; it can also be virtual.

OVERVIEW:
Referent
A security referent is the focus of a security policy or discourse; for example, a
referent may be a potential beneficiary (or victim) of a security policy or system.
Security referents may be persons or social groups, objects, institutions,
ecosystems, or any other phenomenon vulnerable to unwanted change by the
forces of its environment. The referent in question may combine many referents, in
the same way that, for example, a nation state is composed of many individual
citizens.
Context
The security context is the relationships between a security referent and its
environment.[2] From this perspective, security and insecurity depend first on
whether the environment is beneficial or hostile to the referent, and also on
how capable the referent is of responding to its environment in order to
survive and thrive.
Capabilities
The means by which a referent provides for security (or is provided for) vary
widely. They include, for example:

 Coercive capabilities, including the capacity to project coercive power
into the environment (e.g. aircraft carrier, handgun, firearms);
 Protective systems (e.g. lock, fence, wall, antivirus software, air
defence system, armour);
 Warning systems (e.g. alarm, radar);
 Diplomatic and social action intended to prevent insecurity from
developing (e.g. conflict prevention and transformation strategies); and
 Policy intended to develop the lasting economic, physical, ecological and
other conditions of security (e.g. economic reform, ecological
protection, progressive demilitarization, militarization).
Effects
Any action intended to provide security may have multiple effects. For example, an
action may have wide benefit, enhancing security for several or all security
referents in the context; alternatively, the action may be effective only temporarily,
or benefit one referent at the expense of another, or be entirely ineffective or
counterproductive.
Contested approaches
Approaches to security are contested and the subject of debate. For example, in
debate about national security strategies, some argue that security depends
principally on developing protective and coercive capabilities in order to protect the
security referent in a hostile environment (and potentially to project that power
into its environment, and dominate it to the point of strategic supremacy). Others
argue that security depends principally on building the conditions in which
equitable relationships can develop, partly by reducing antagonism between
actors, ensuring that fundamental needs can be met, and also that differences of
interest can be negotiated effectively.
The range of security contexts is illustrated by the following examples (in
alphabetical order):

Computer security
Computer security, also known as cybersecurity or IT security, refers to the
security of computing devices such as computers and smartphones, as well
as computer networks such as private and public networks, and the Internet. The
field has growing importance due to the increasing reliance on computer systems
in most societies.[9] It concerns the protection of hardware, software, data, people,
and also the procedures by which systems are accessed. The means of computer
security include the physical security of systems and security of information held
on them.
Corporate security
Corporate security refers to the resilience of corporations against espionage, theft,
damage, and other threats. The security of corporations has become more complex
as reliance on IT systems has increased, and their physical presence has become
more highly distributed across several countries, including environments that are,
or may rapidly become, hostile to them.

Ecological security
Ecological security, also known as environmental security, refers to the integrity
of ecosystems and the biosphere, particularly in relation to their capacity to
sustain a diversity of life-forms (including human life). The security of ecosystems
has attracted greater attention as the impact of ecological damage by humans has
grown.[10]
Food security
Food security refers to the ready supply of, and access to, safe
and nutritious food.[11] Food security is gaining in importance as the world's
population has grown and productive land has diminished through overuse
and climate change.[12][13]

Home security
Home security normally refers to the security systems used on a property used as
a dwelling (commonly including doors, locks, alarm systems, lighting, fencing);
and personal security practices (such as ensuring doors are locked, alarms
activated, windows closed etc.)

Human security
Human security is the name of an emerging paradigm which, in response to
traditional emphasis on the right of nation states to protect themselves,[14] has
focused on the primacy of the security of people (individuals and
communities).[15] The concept is supported by the United Nations General
Assembly, which has stressed "the right of people to live in freedom and dignity"
and recognized "that all individuals, in particular vulnerable people, are entitled
to freedom from fear and freedom from want".[16]
National security
National security refers to the security of a nation state, including its people,
economy, and institutions. In practice, state governments rely on a wide range of
means, including diplomacy, economic power, and military capabilities.

Perceptions of Security:
Since it is not possible to know with precision the extent to which something is
'secure' (and a measure of vulnerability is unavoidable), perceptions of security
vary, often greatly.[3][17] For example, a fear of death by earthquake is common in
the United States (US), but slipping on the bathroom floor kills more
people;[17] and in France, the United Kingdom and the US there are far fewer
deaths caused by terrorism than there are women killed by their partners in the
home.[18][19][20][21]
Another problem of perception is the common assumption that the mere presence
of a security system (such as armed forces, or antivirus software) implies security.
For example, two computer security programs installed on the same device can
prevent each other from working properly, while the user assumes that he or she
benefits from twice the protection that only one program would afford.
Security theater is a critical term for measures that change perceptions of security
without necessarily affecting security itself. For example, visual signs of security
protections, such as a home that advertises its alarm system, may deter
an intruder, whether or not the system functions properly. Similarly, the increased
presence of military personnel on the streets of a city after a terrorist attack may
help to reassure the public, whether or not it diminishes the risk of further
attacks.

Security Concepts (examples):

Certain concepts recur throughout different fields of security:

 Access control - the selective restriction of access to a place or other
resource.
 Assurance - an expression of confidence that a security measure will
perform as expected.
 Authorization - the function of specifying access rights/privileges to
resources related to information security and computer security in
general and to access control in particular.
 Countermeasure - a means of preventing an act or system from having
its intended effect.
 Defense in depth - a school of thought holding that a wider range of
security measures will enhance security.
 Exploit (noun) - a means of capitalizing on a vulnerability in a security
system (usually a cyber-security system).
 Identity management - enables the right individuals to access the right
resources at the right times and for the right reasons.
 Resilience - the degree to which a person, community, nation or system
is able to resist adverse external forces.
 Risk - a possible event which could lead to damage, harm, or loss.
 Security management - identification of an organization's assets
(including people, buildings, machines, systems and information assets),
followed by the development, documentation, and implementation of
policies and procedures for protecting these assets.
 Threat - a potential source of harm.
 Vulnerability - the degree to which something may be changed (usually
in an unwanted manner) by external forces.
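Several of these concepts (access control, authorization) can be illustrated with a minimal access-control list; the identities and resource names below are invented for the example:

```python
# A minimal access-control list (ACL): each resource maps to the set of
# identities authorized to use it. Denial is the default for anything
# not explicitly granted.

ACL = {
    "thermostat/config":  {"admin", "installer"},
    "thermostat/reading": {"admin", "installer", "resident"},
}

def is_authorized(identity: str, resource: str) -> bool:
    """Authorization decision: selective restriction of access to a resource."""
    return identity in ACL.get(resource, set())

print(is_authorized("resident", "thermostat/reading"))  # True
print(is_authorized("resident", "thermostat/config"))   # False
```

Real systems layer this idea with roles, groups, and audit logging (identity management and security management from the list above), but the core decision is the same membership test.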

LOCALIZATION:

Localization is often confused with translation, but these terms actually mean two
different things. Localization is the entire process of adapting a product or content
to a specific location or market, according to the Globalization and Localization
Association.

Translation is the process of converting text from one language to another.
Translation is one aspect of localization, but localization is more extensive.

Localization also involves adapting other elements to a target market, including:

 Modifying graphics and design to properly display translated text
 Changing content to suit local preferences
 Converting to local currencies and units of measurement
 Using proper formatting for elements like dates, addresses and phone
numbers
 Addressing local regulations and legal requirements
In short, localization gives something the look and feel expected by the target
audience.
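Two of the adaptations listed above, date formats and currency formatting, can be sketched in Python. The per-market rules below are hand-rolled for illustration only; a real project would rely on a localization library (such as Babel) or platform locale data:

```python
# Locale-dependent formatting sketch for two hypothetical target markets.
from datetime import date

def localize(market: str, when: date, amount: float) -> str:
    if market == "en-US":
        return f"{when:%m/%d/%Y} | ${amount:,.2f}"
    if market == "de-DE":
        # German swaps the thousands and decimal separators
        num = f"{amount:,.2f}".replace(",", "_").replace(".", ",").replace("_", ".")
        return f"{when:%d.%m.%Y} | {num} EUR"
    raise ValueError(f"no localization rules for {market}")

print(localize("en-US", date(2024, 3, 9), 1499.5))  # 03/09/2024 | $1,499.50
print(localize("de-DE", date(2024, 3, 9), 1499.5))  # 09.03.2024 | 1.499,50 EUR
```

Even this tiny example shows why localization goes beyond translation: nothing here involves language, yet each market sees a different result.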

Localization makes content more appealing, which in turn makes the audience
more likely to buy. In fact, according to a 2014 Common Sense Advisory
report, 75 percent of consumers said they were more likely to purchase goods
and services if the corresponding product information is in their native
language.

By that logic, anyone who is trying to reach a global audience should consider
localization as well as translation. To truly expand your audience, however, you’ll
need to localize more than just your website. Localization should also include:

 Marketing materials, including TV, radio, and print ads
 Product manuals
 Training materials
 Online help
 User interfaces
 Quick-start guides
 Service materials
 Product warranty materials
 Disclosure documents, such as terms and conditions

Localizing your content allows your organization to expand its reach to a new
audience, build credibility, and increase sales. It also helps you build loyalty
among existing customers. In another Common Sense Advisory survey, half of
senior executives said they believed localization leads to profitability and
growth.
Choosing a Localization Provider

Localizing a product or website can seem intimidating, but it doesn’t have to be. A
full-service language services provider (LSP) has all the resources necessary to
produce high-quality localization on time and on budget, thereby reducing your
need to be involved in the day-to-day execution of the project.

When aggressive timelines are required, these LSPs have the ability to build and
manage large teams. They are also able to perform both linguistic and functional
quality assurance to ensure the localization is correct. They probably also
have an engineering team that can work with, and extract text from, any file
type. Perhaps most significantly, most firms will be able to use
sophisticated localization tools that will yield significant savings on
future projects.

In the end, the most important factor in determining whether your localization
project is successful is the skill of the team who actually works on your materials.
When delivering fully localized materials, it should not be apparent to the
audience that the content they are reading or the product they are holding has
been localized from another language.
The fact that your product or content was originally created in English (for
example) and then localized into the consumer’s native language should be
undetectable. A properly localized product should have the look and feel of having
been created specifically for the target market.

LanguageLine has the expertise and technology to handle any type or size of
localization project, from websites, to software, to technical documentation,
multimedia, training and eLearning, and marketing materials.

Localization can help you expand your reach and make the customers you serve
feel more at home. Start reaching new audiences with localization.

MAINTAINABILITY:
In engineering, maintainability is the ease with which a product can be
maintained in order to:

 correct defects or their cause,
 repair or replace faulty or worn-out components without having to
replace still-working parts,
 prevent unexpected working conditions,
 maximize a product's useful life,
 maximize efficiency, reliability, and safety,
 meet new requirements,
 make future maintenance easier, or
 cope with a changing environment.
In some cases, maintainability involves a system of continuous improvement -
learning from the past in order to improve the ability to maintain systems, or
improve the reliability of systems based on maintenance experience.
In telecommunication and several other engineering fields, the
term maintainability has the following meanings:

 A characteristic of design and installation, expressed as the probability
that an item will be retained in or restored to a specified condition
within a given period of time, when the maintenance is performed in
accordance with prescribed procedures and resources.
 The ease with which maintenance of a functional unit can be performed
in accordance with prescribed requirements.
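The probabilistic definition above can be illustrated under the common assumption of exponentially distributed repair times, where M(t) = 1 - e^(-t/MTTR); the MTTR figure below is hypothetical:

```python
# Maintainability as the probability that repair completes within t hours,
# assuming exponentially distributed repair times: M(t) = 1 - exp(-t/MTTR).

import math

def maintainability(t_hours: float, mttr_hours: float) -> float:
    """Probability that a repair is completed within t_hours."""
    return 1.0 - math.exp(-t_hours / mttr_hours)

mttr = 4.0  # assumed mean time to repair, in hours
for t in (1, 4, 8):
    print(f"P(restored within {t} h) = {maintainability(t, mttr):.3f}")
```

At t equal to the MTTR itself, the probability of completed repair is about 0.632 under this model; shorter MTTR shifts the whole curve toward 1.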
