
Chapter Two

Data Science

By: Asfaw k (MSc).

Ethiopia.
Introduction
 In this chapter, you are going to learn more about data science, data vs.
information, data types and representation, data value chain, and basic
concepts of big data.
After completing this chapter, the students will be able to:
 Describe what data science is and the role of data scientists.
 Differentiate data and information.
 Describe the data processing life cycle.
 Understand different data types from diverse perspectives.
 Describe the data value chain in the emerging era of big data.
 Understand the basics of Big Data.
 Describe the purpose of the Hadoop ecosystem components.
Data Science

Definition: Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured, semi-structured, and unstructured data. Data science is much more than simply analyzing data.

• It combines expertise from various domains, including statistics, computer science, mathematics, and domain-specific knowledge, to analyze and interpret complex data sets.
Some examples of data science and its applications
 Applications in digital devices
 Identifying and predicting disease
 Personalized healthcare recommendations
 Optimizing shipping and transportation routes in real time, reducing fuel consumption and improving efficiency.
Let’s take an everyday example

 I’m sure you have seen smart watches — or maybe you use one, too. These
smart gadgets can measure your sleep quality, how much you walk, your heart
rate, etc. Let’s take sleep quality, for instance!
 If you check every single day how you slept the night before, that's one data point per day. Let's say that you enjoyed excellent sleep last night: you slept 8 hours, you didn't move too much, and you didn't have short awakenings. That's a data point. The day after, you slept slightly worse: only 7 hours. That's another data point.
Cont…
 By collecting these data points for a whole month, you can start to draw trends
from them. Maybe, on the weekends, you sleep better and longer. Maybe if you
go to bed earlier, your sleep quality is better. Or you recognize that you have
short awakenings around 2 am every night…
 By collecting the data for a year, you can create more complex analyses. You can learn the best times for you to go to bed and wake up. You can identify the more stressful parts of the year (when you worked too much and slept too little). Even more, you might be able to predict these stressful parts of the year and prepare yourself! So, we are getting closer and closer to data science…
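
To make the idea of drawing trends from collected data points concrete, here is a minimal Python sketch; the sleep log and its values are purely hypothetical. It averages sleep duration per weekday from a month of daily data points:

# A minimal sketch: averaging hypothetical daily sleep records per weekday.
from collections import defaultdict

sleep_log = [
    ("Mon", 7.0), ("Tue", 6.5), ("Wed", 7.2), ("Thu", 6.8),
    ("Fri", 7.5), ("Sat", 8.4), ("Sun", 8.1),
    # ... one (weekday, hours_slept) entry per night for a whole month
]

hours_by_day = defaultdict(list)
for day, hours in sleep_log:
    hours_by_day[day].append(hours)

# Average hours of sleep per weekday: a first, very simple trend.
for day, hours in hours_by_day.items():
    print(f"{day}: {sum(hours) / len(hours):.1f} hours on average")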
What are data and information?
 Data can be defined as a representation of facts, concepts, or instructions in
a formalized manner, which should be suitable for communication,
interpretation, or processing by humans or electronic machines.
 It can be described as unprocessed facts and figures. It is represented with the help of characters such as alphabets (A-Z, a-z), digits (0-9), or special characters (+, -, /, *, <, >, =, etc.).
Cont…
Information is the processed data on which decisions and actions are based.
 It is data that has been processed into a form that is meaningful to the
recipient and is of real or perceived value in the current or the prospective
action or decision of recipient.
 Furthermore, information is interpreted data, created from organized, structured, and processed data in a particular context.
• Examples of Data and Information
• The history of temperature readings all over the world for the past 100 years is data. If this data is organized
and analyzed to find that global temperature is rising, then that is information.
• The number of visitors to a website by country is an example of data. Finding out that traffic from the U.S. is
increasing while that from Australia is decreasing is meaningful information.
Data Processing Cycle
 Data processing is the re-structuring or re-ordering of data by people or machines to increase its usefulness and add value for a particular purpose.
 Data processing consists of the following basic steps - input, processing,
and output.

Figure 2.1 Data Processing Cycle


Cont…

Input − in this step, the input data is prepared in some convenient form for processing. This can be text, voice, etc. Devices such as the keyboard, mouse, scanner, and digital camera are considered input devices.

Processing − in this step, the input data is changed to produce data in a more useful form. For example, interest can be calculated on a deposit to a bank, or a summary of sales for the month can be calculated from the sales orders.
Cont…

 Output − at this stage, the result of the preceding processing step is collected. The particular form of the output data depends on the use of the data. For example, output data may be payroll for employees.
 Data storage − the final stage of the data processing cycle is storage. After all of the data is processed, it is then stored for future use.
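
As a simple illustration, the following Python sketch walks through the input, processing, output, and storage stages using the monthly sales summary example; the sales figures and the output file name are hypothetical.

# A minimal sketch of the input -> processing -> output -> storage cycle.
import json

# Input: raw, unprocessed sales orders (data).
sales_orders = [120.0, 75.5, 210.0, 99.9]

# Processing: transform the data into a more useful form.
monthly_total = sum(sales_orders)
average_order = monthly_total / len(sales_orders)

# Output: the result of the processing step (information).
print(f"Monthly sales total: {monthly_total:.2f}")
print(f"Average order value: {average_order:.2f}")

# Storage: keep the processed result for future use.
with open("monthly_summary.json", "w") as f:
    json.dump({"total": monthly_total, "average": average_order}, f)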
Data types and their representation
 A data type constrains the values that an expression, such as a variable or a function, might take.
 This data type defines the operations that can be done on the data, the meaning
of the data, and the way values of that type can be stored.
 Data types can be described from diverse perspectives.

Data types from computer programming perspective

 From this perspective, common data types include: integers, Booleans, characters, floating-point numbers, and alphanumeric strings. Let's discuss each of these.
Cont…
 Integers (int) − used to store whole numbers, mathematically known as integers.
 Booleans (bool) − used to represent values restricted to one of two states: true or false.
 Characters (char) − used to store a single character.
 Floating-point numbers (float) − used to store real numbers.
 Alphanumeric strings (string) − used to store a combination of characters and numbers.
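
For illustration, here is a minimal sketch of these data types as Python values with type annotations (Python has no separate character type, so a single character is shown as a one-character string):

count: int = 42            # Integer: whole numbers
is_valid: bool = True      # Boolean: one of two values, true or false
grade: str = "A"           # Character: a single character (1-character string in Python)
price: float = 19.99       # Floating-point number: real numbers
label: str = "Order2024"   # Alphanumeric string: letters and digits combined

print(type(count), type(is_valid), type(grade), type(price), type(label))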
Data types from Data Analytics perspective

 From this perspective, there are three common types of data or structures: structured, semi-structured, and unstructured data.

Figure 2.2 Data types from a data analytics perspective


Structured Data
 Structured data is data that adheres to a pre-defined data model and is therefore straightforward to analyze.
 It has a tabular format with a relationship between the different rows and
columns.
Common examples of structured data are Excel files or SQL databases. Each
of these has structured rows and columns that can be sorted.
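
As a minimal illustration of tabular, structured data, the following Python sketch uses the built-in sqlite3 module; the table, columns, and rows are hypothetical.

# A minimal sketch: structured data as rows and columns in an SQL table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER, name TEXT, score REAL)")
conn.executemany(
    "INSERT INTO students VALUES (?, ?, ?)",
    [(1, "Abebe", 87.5), (2, "Sara", 92.0), (3, "Kedir", 78.0)],
)

# Because the data follows a pre-defined model (rows and columns),
# it is straightforward to query and sort.
for row in conn.execute("SELECT * FROM students ORDER BY score DESC"):
    print(row)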
Unstructured Data
 Unstructured data is information that either does not have a predefined data
model or is not organized in a pre-defined manner.
 Unstructured information is typically text-heavy but may contain data such
as dates, numbers, and facts as well.
 This results in irregularities and ambiguities that make it difficult to
understand using traditional programs as compared to data stored in
structured databases.

Common examples of unstructured data include audio files, video files, or NoSQL databases.
Semi-structured Data
 Semi-structured data is a form of structured data that does not conform
with the formal structure of data models associated with relational
databases or other forms of data tables, but nonetheless, contains tags or
other markers to separate semantic elements and enforce hierarchies of
records and fields within the data.
 Therefore, it is also known as a self-describing structure.

JSON and XML are common examples of semi-structured data.
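
As an illustration, here is a minimal Python sketch that parses a small, hypothetical JSON document; the tags (field names) are what make the structure self-describing.

# A minimal sketch: semi-structured data carries its own tags (field names).
import json

record = json.loads("""
{
  "name": "Sensor-01",
  "location": {"city": "Addis Ababa"},
  "readings": [21.5, 22.0, 21.8]
}
""")

# The embedded tags let us navigate the hierarchy without a fixed table schema.
print(record["name"], record["location"]["city"], len(record["readings"]))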
Metadata – Data about Data
 It is one of the most important elements for big data analysis and big data
solutions. Metadata is data about data. It provides additional information
about a specific set of data.
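
For example, the following Python sketch shows metadata for a photo file; the field names and values are hypothetical, and none of them are the image data itself.

# A minimal sketch: metadata describes the data without containing it.
photo_metadata = {
    "file_name": "IMG_2041.jpg",
    "size_bytes": 2457600,
    "created": "2024-05-12T09:30:00",
    "resolution": "4032x3024",
    "camera": "Phone camera",
}

for key, value in photo_metadata.items():
    print(f"{key}: {value}")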
Data value Chain
 The Data Value Chain is introduced to describe the information flow
within a big data system as a series of steps needed to generate value and
useful insights from data.
Cont…
The Big Data Value Chain identifies the following key high-level activities:

Figure 2.3 Data Value Chain


Data Acquisition
 It is the process of gathering, filtering, and cleaning data before it is put in a
data warehouse or any other storage solution on which data analysis can be
carried out.
Data Analysis
 Data analysis involves exploring, transforming, and modeling data with the
goal of highlighting relevant data, synthesizing and extracting useful hidden
information.
 The main purpose of data analysis is to support decision-making as well as domain-specific usage.
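
As a small illustration of the data analysis step, the following Python sketch, assuming the pandas library is available, summarizes the hypothetical website-traffic example used earlier (visitors by country):

# A minimal sketch: summarizing data to surface a hidden pattern.
import pandas as pd

visits = pd.DataFrame({
    "country":  ["US", "US", "Australia", "Australia"],
    "month":    ["Jan", "Feb", "Jan", "Feb"],
    "visitors": [1200, 1500, 800, 650],
})

# Reshaping the data highlights the trend: US traffic is rising
# while Australian traffic is falling.
print(visits.pivot(index="country", columns="month", values="visitors"))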
Data Curation
 Data curation processes can be categorized into different activities such as
content creation, selection, classification, transformation, validation, and
preservation.
 Data curation is performed by expert curators who are responsible for improving the accessibility and quality of data.
 Data curators (also known as scientific curators, or data annotators) hold the
responsibility of ensuring that data are trustworthy, discoverable, accessible, reusable,
and fit their purpose.


Data Usage
 Data usage in business decision-making can enhance competitiveness through
the reduction of costs, increased added value, or any other parameter that can
be measured against existing performance criteria.

What Is Big Data?


 Big data is a collection of data sets so large and complex that it becomes
difficult to process using on-hand database management tools or traditional
data processing applications.
 In this context, a “large dataset” means a dataset too large to reasonably
process or store with traditional tooling or on a single computer.
Cont…
Big data is characterized by 3V and more:
 Volume: large amounts of data (zettabytes/massive datasets)
 Velocity: data is live streaming or in motion
 Variety: data comes in many different forms from diverse sources
 Veracity: can we trust the data? How accurate is it? etc.
Cont…

Figure 2.4 Characteristics of big data


Clustered Computing and Hadoop Ecosystem

Clustered Computing

A cluster is simply a combination of many computers designed to work together as one system. A Hadoop cluster is, therefore, a cluster of computers used to run Hadoop. Hadoop clusters are designed specifically for storing and analyzing large amounts of unstructured data in distributed file systems.
 To better address the high storage and computational needs of big data, computer clusters are a
better fit.
 Using clusters requires a solution for managing cluster membership, coordinating resource sharing, and scheduling actual work on individual nodes. Cluster membership and resource allocation can be handled by software like Hadoop's YARN (Yet Another Resource Negotiator).
 Big data clustering software combines the resources of many smaller machines, seeking to provide a number of benefits:
Cont…

o Resource Pooling: Combining the available storage space to hold data is a clear
benefit, but CPU and memory pooling are also extremely important. Processing
large datasets requires large amounts of all three of these resources.

o High Availability: Clusters can provide varying levels of fault tolerance and
availability guarantees to prevent hardware or software failures from affecting
access to data and processing.

o Easy Scalability: Clusters make it easy to scale horizontally by adding additional


machines to the group. This means the system can react to changes in resource
requirements without expanding the physical resources on a machine.
Hadoop and its Ecosystem
 Hadoop is an open-source framework intended to make interaction with big
data easier. It is a framework that allows for the distributed processing of
large datasets across clusters of computers using simple programming
models.
The four key characteristics of Hadoop are:
 Economical: Its systems are highly economical as ordinary computers can
be used for data processing.
 Reliable: It is reliable as it stores copies of the data on different machines
and is resistant to hardware failure.
 Scalable: It is easily scalable, both horizontally and vertically. A few extra nodes help in scaling up the framework.
 Flexible: It is flexible, and you can store as much structured and unstructured data as you need and decide how to use it later.
Cont…

Hadoop has an ecosystem that has evolved from its four core components: data management, data access, data processing, and data storage. It is continuously growing to meet the needs of Big Data.

It comprises the following components and many others:


 HDFS: Hadoop Distributed File System
 YARN: Yet Another Resource Negotiator
 MapReduce: Programming based Data Processing
Cont…

 Spark: In-Memory data processing

 PIG, HIVE: Query-based processing of data services

 HBase: NoSQL Database

 Mahout, Spark MLLib: Machine Learning algorithm libraries

 Solr, Lucene: Searching and Indexing

 Zookeeper: Managing cluster

 Oozie: Job Scheduling


Figure 2.5 Hadoop Ecosystem components
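
To illustrate the MapReduce programming model listed above, here is a minimal sketch in plain Python (no Hadoop cluster involved) that counts words in the map, shuffle, and reduce style; the input documents are hypothetical.

# A minimal sketch of the MapReduce model: word count in plain Python.
from collections import defaultdict

documents = ["big data is big", "data science uses big data"]

# Map: emit a (word, 1) pair for every word in every input record.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group the pairs by key (the word).
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce: sum the counts for each word.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)  # {'big': 3, 'data': 3, 'is': 1, 'science': 1, 'uses': 1}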
Big Data Life Cycle with Hadoop

Ingesting data into the system (First Stage)


 Data is ingested or transferred to Hadoop from various sources such as
relational databases, systems, or local files. Sqoop transfers data from
RDBMS to HDFS, whereas Flume transfers event data.

Processing the data in storage (Second Stage)


 The data is stored and processed. The data is stored in the distributed file system, HDFS, and in the NoSQL distributed database, HBase. Spark and MapReduce perform the data processing.
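
As a small sketch of this stage, assuming PySpark is installed and that a CSV file with region and amount columns exists at the hypothetical HDFS path below, the data could be read from HDFS and aggregated across the cluster like this:

# A minimal PySpark sketch: read from HDFS, then process in a distributed way.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sales-summary").getOrCreate()

# Read data from the distributed file system (the HDFS path is hypothetical).
df = spark.read.csv("hdfs:///data/sales.csv", header=True, inferSchema=True)

# Distributed processing: aggregate the sales amount per region.
df.groupBy("region").sum("amount").show()

spark.stop()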
Cont…

Computing and analyzing data (Third Stage)


 In this stage, data is analyzed by processing frameworks such as Pig, Hive, and Impala. Pig converts the data using map and reduce operations and then analyzes it. Hive is also based on map and reduce programming and is most suitable for structured data.

Visualizing the results (Fourth Stage)


 Performed by tools such as Hue and Cloudera Search. In this stage, the
analyzed data can be accessed by users.
Chapter Two Review Questions

1) Define data science; what are the roles of a data scientist?

2) Discuss data and its types from computer programming and data analytics
perspectives?

3) Discuss a series of steps needed to generate value and useful insights from
data?

4) What is the principal goal of data science?

5) List out and discuss the characteristics of Big Data?

6) How do we ingest streaming data into a Hadoop cluster?


End of Chapter Two
