Chapter - 2 Hadoop

Hadoop is an open-source framework for storing and processing large datasets in a distributed computing environment. It uses MapReduce for parallel processing across clusters of computers. The core components of Hadoop are the Hadoop Distributed File System (HDFS) for storage, and MapReduce for processing. HDFS stores data across clusters as blocks and replicates them for reliability. MapReduce processes data in parallel by splitting jobs into tasks executed on individual machines, then combining the results.

Uploaded by Rahul Pawar

Chapter - 2
Introduction to Hadoop
2.1 What is Hadoop?
2.2 Core Hadoop Components
2.3 Hadoop Ecosystem
2.4 Physical Architecture
2.5 Hadoop limitations
Hadoop is an open-source framework that allows users to
store and process big data in a distributed
environment across clusters of computers using
simple programming models. It is designed to scale
up from single servers to thousands of machines, each
offering local computation and storage.
This chapter provides a quick introduction to
Big Data, the MapReduce algorithm, and the Hadoop
Distributed File System.
What is Big Data?
Big data is a collection of large datasets that cannot
be processed using traditional computing techniques.
It is not a single technique or tool; rather, it has
become a complete subject involving various tools,
techniques, and frameworks.
 What Comes Under Big Data?
 Big data involves the data produced by different devices and applications.
Given below are some of the fields that come under the umbrella of Big Data.
 Black Box Data − The black box is a component of helicopters, airplanes, jets, etc. It
captures the voices of the flight crew, recordings of microphones and earphones,
and the performance information of the aircraft.
 Social Media Data − Social media such as Facebook and Twitter hold
information and the views posted by millions of people across the globe.
 Stock Exchange Data − The stock exchange data holds information about
the ‘buy’ and ‘sell’ decisions that customers make on the shares of different
companies.
 Power Grid Data − The power grid data holds information about the power consumed by a
particular node with respect to a base station.
 Transport Data − Transport data includes model, capacity, distance and
availability of a vehicle.
 Search Engine Data − Search engines retrieve lots of data from different
databases.
Thus, Big Data involves huge volume, high velocity,
and an extensible variety of data. This data is of
three types:
Structured data − Relational data.
Semi Structured data − XML data.
Unstructured data − Word, PDF, Text, Media Logs.
Benefits of Big Data
Using the information kept in social networks like
Facebook, marketing agencies learn about the
response to their campaigns, promotions, and other
advertising media.
Using information from social media, such as the preferences
and product perception of their consumers, product
companies and retail organizations plan their
production.
Using data on patients' previous medical histories,
hospitals provide better and quicker service.
Big Data Technologies
Big data technologies are important in providing more accurate
analysis, which may lead to more concrete decision-making
resulting in greater operational efficiencies, cost reductions, and
reduced risks for the business.
To harness the power of big data, you would require an
infrastructure that can manage and process huge volumes of
structured and unstructured data in real-time and can protect
data privacy and security.
There are various technologies in the market from different
vendors including Amazon, IBM, Microsoft, etc., to handle big
data. While looking into the technologies that handle big data,
we examine the following two classes of technology −
Operational Big Data
 These include systems like MongoDB that provide operational capabilities for
real-time, interactive workloads where data is primarily captured and stored.
 NoSQL Big Data systems are designed to take advantage of new cloud
computing architectures that have emerged over the past decade to allow
massive computations to be run inexpensively and efficiently. This makes
operational big data workloads much easier to manage, cheaper, and faster to
implement.
 Some NoSQL systems can provide insights into patterns and trends based on
real-time data with minimal coding and without the need for data scientists
and additional infrastructure.
Analytical Big Data
 These include systems like Massively Parallel Processing (MPP) database
systems and MapReduce that provide analytical capabilities for retrospective
and complex analysis that may touch most or all of the data.
 MapReduce provides a new method of analyzing data that is complementary to
the capabilities provided by SQL, and a system based on MapReduce can
be scaled up from single servers to thousands of high- and low-end machines.
 These two classes of technology are complementary and frequently deployed
together.
Big Data Challenges
The major challenges associated with big data are as follows −
Capturing data
Curation
Storage
Searching
Sharing
Transfer
Analysis
Presentation
Traditional Approach
In this approach, an enterprise has a computer to store and
process big data. For storage, programmers take the
help of their choice of database vendor, such as Oracle or IBM.
The user interacts with the application,
which in turn handles data storage and analysis.
Limitation
This approach works fine with applications that process
less voluminous data that can be accommodated by standard
database servers, or up to the limit of the processor that is
processing the data. But when it comes to dealing with huge
amounts of scalable data, a single database becomes
a bottleneck.
Google’s Solution
Google solved this problem using an algorithm called
MapReduce. This algorithm divides the task into
small parts, assigns them to many computers, and
then collects and integrates the results to
form the result dataset.
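The divide-assign-collect idea can be sketched in a few lines. This is a toy Python illustration of the algorithm's shape (here, summing a list of numbers split across simulated workers), not Google's implementation; all function names are invented for the example.

```python
def split_into_parts(data, n_parts):
    """Divide the task's input into roughly equal parts."""
    size = (len(data) + n_parts - 1) // n_parts
    return [data[i:i + size] for i in range(0, len(data), size)]

def worker(part):
    """Each 'computer' processes its own small part independently."""
    return sum(part)

def integrate(partial_results):
    """Collect the partial results and combine them into the final result."""
    return sum(partial_results)

data = list(range(1, 101))
parts = split_into_parts(data, 4)          # 4 simulated machines
result = integrate(worker(p) for p in parts)
# result == 5050, the same answer as processing the data on one machine
```

The point is that each part can be processed with no knowledge of the others, which is what lets the real system fan the parts out to many machines.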
Hadoop
Using the solution provided by Google, Doug
Cutting and his team developed an Open Source
Project called HADOOP.
Hadoop runs applications using the MapReduce
algorithm, where the data is processed in parallel on
different nodes. In short, Hadoop is used to develop
applications that can perform complete statistical
analysis on huge amounts of data.
Hadoop is an Apache open-source framework written
in Java that allows distributed processing of large
datasets across clusters of computers using simple
programming models. A Hadoop framework
application works in an environment that provides
distributed storage and computation across clusters of
computers. Hadoop is designed to scale up from
single servers to thousands of machines, each offering
local computation and storage.
Hadoop Architecture
At its core, Hadoop has two major layers, namely −
Processing/Computation layer (MapReduce), and
Storage layer (Hadoop Distributed File System).
MapReduce
MapReduce is a parallel programming model for
writing distributed applications devised at Google for
efficient processing of large amounts of data (multi-
terabyte data-sets), on large clusters (thousands of
nodes) of commodity hardware in a reliable, fault-
tolerant manner.
The MapReduce program runs on Hadoop which is
an Apache open-source framework.
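The model can be illustrated with the classic word-count example. What follows is a plain-Python simulation of the map, shuffle, and reduce phases, not a Hadoop MapReduce job (a real job would be written in Java against the Hadoop API); the input splits and counts are made up for the example.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in this input split
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle/sort: group all values by key, as Hadoop does
    # between the map and reduce stages
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: combine all values for one key into a single result
    return key, sum(values)

splits = ["big data big clusters", "data clusters data"]
mapped = [pair for split in splits for pair in map_phase(split)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
# counts == {"big": 2, "data": 3, "clusters": 2}
```

Because every map call is independent and every reduce call sees only one key's values, both phases parallelize naturally across the nodes of a cluster.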
Hadoop Distributed File System
 The Hadoop Distributed File System (HDFS) is based on the Google
File System (GFS) and provides a distributed file system that is
designed to run on commodity hardware. It has many similarities with
existing distributed file systems. However, the differences from other
distributed file systems are significant. It is highly fault-tolerant and is
designed to be deployed on low-cost hardware. It provides high
throughput access to application data and is suitable for applications
having large datasets.
 Apart from the above-mentioned two core components, the Hadoop
framework also includes the following two modules −
 Hadoop Common − These are Java libraries and utilities required by
other Hadoop modules.
 Hadoop YARN − This is a framework for job scheduling and cluster
resource management.
How Does Hadoop Work?
 It is quite expensive to build bigger servers with heavy configurations to
handle large-scale processing. As an alternative, you can tie together many
commodity single-CPU computers as a single functional distributed
system; practically, the clustered machines can read the dataset in parallel
and provide much higher throughput, and the cluster is cheaper than one
high-end server. This is the first motivation for using Hadoop:
it runs across clustered, low-cost machines.
 Hadoop runs code across a cluster of computers. This process includes the
following core tasks that Hadoop performs −
 Data is initially divided into directories and files. Files are divided into
uniform-sized blocks of 128 MB or 64 MB (preferably 128 MB).
 These files are then distributed across various cluster nodes for further
processing.
 HDFS, being on top of the local file system, supervises the processing.
 Blocks are replicated for handling hardware failure.
 Checking that the code was executed successfully.
 Performing the sort that takes place between the map and reduce stages.
 Sending the sorted data to a certain computer.
 Writing the debugging logs for each job.
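The first two steps above, splitting files into blocks and replicating them across nodes, can be sketched in miniature. This is a toy Python simulation, not HDFS code; the node names and round-robin placement are invented for illustration (real HDFS uses rack-aware replica placement).

```python
BLOCK_SIZE = 128 * 1024 * 1024   # 128 MB, the preferred block size above
REPLICATION = 3                  # a common HDFS replication factor

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Number of uniform-sized blocks a file of file_size bytes occupies."""
    return (file_size + block_size - 1) // block_size  # last block may be partial

def place_replicas(block_id, nodes, replication=REPLICATION):
    """Toy placement: assign each block's replicas to distinct cluster nodes."""
    return [nodes[(block_id + r) % len(nodes)] for r in range(replication)]

nodes = ["node1", "node2", "node3", "node4"]
n_blocks = split_into_blocks(700 * 1024 * 1024)   # a 700 MB file
placement = {b: place_replicas(b, nodes) for b in range(n_blocks)}
# n_blocks == 6: five full 128 MB blocks plus one 60 MB block,
# and each block ends up on three different nodes
```

With replicas on distinct nodes, the loss of any single machine leaves at least two copies of every block, which is how the cluster survives the hardware failures listed above.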
Advantages of Hadoop
The Hadoop framework allows the user to quickly write and test
distributed systems. It is efficient, and it automatically distributes
the data and work across the machines, in turn utilizing the
underlying parallelism of the CPU cores.
Hadoop does not rely on hardware to provide fault tolerance
and high availability (FTHA); rather, the Hadoop library itself is
designed to detect and handle failures at the application
layer.
Servers can be added to or removed from the cluster dynamically,
and Hadoop continues to operate without interruption.
Another big advantage of Hadoop is that, apart from being open
source, it is compatible with all platforms since it is Java
based.
Hadoop - HDFS Overview

Hadoop File System was developed using distributed


file system design. It is run on commodity hardware.
Unlike other distributed systems, HDFS is highly
faulttolerant and designed using low-cost hardware.
HDFS holds very large amount of data and provides
easier access. To store such huge data, the files are
stored across multiple machines. These files are
stored in redundant fashion to rescue the system from
possible data losses in case of failure. HDFS also
makes applications available to parallel processing.
Features of HDFS

It is suitable for the distributed storage and


processing.
Hadoop provides a command interface to interact
with HDFS.
The built-in servers of namenode and datanode help
users to easily check the status of cluster.
Streaming access to file system data.
HDFS provides file permissions and authentication.
HDFS Architecture
HDFS follows the master-slave architecture
and it has the following elements.
Namenode
The namenode is the commodity hardware that
contains the GNU/Linux operating system and the
namenode software. It is software that can be run
on commodity hardware. The system hosting the
namenode acts as the master server, and it performs the
following tasks −
Manages the file system namespace.
Regulates clients’ access to files.
Executes file system operations such as
renaming, closing, and opening files and directories.
Datanode
The datanode is commodity hardware running the
GNU/Linux operating system and the datanode software.
For every node (commodity hardware/system) in a
cluster, there will be a datanode. These nodes manage
the data storage of their system.
Datanodes perform read-write operations on the file
systems, as per client request.
They also perform operations such as block creation,
deletion, and replication according to the instructions
of the namenode.
Block
Generally, the user data is stored in the files of HDFS.
A file in the file system is divided into one or
more segments and/or stored on individual data
nodes. These file segments are called blocks. In
other words, the minimum amount of data that HDFS
can read or write is called a block. The default block
size is 64 MB (128 MB in Hadoop 2 and later), but it can be
changed as needed in the HDFS configuration.
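The effect of the block size is simple arithmetic. The sketch below is our own illustration (the function name is invented), comparing the 64 MB default with a raised 128 MB block size:

```python
def blocks_needed(file_size_mb, block_size_mb=64):
    """Blocks required for a file; the last block may be smaller
    than block_size_mb, but still occupies its own block entry."""
    return -(-file_size_mb // block_size_mb)  # ceiling division

# A 200 MB file with the 64 MB default:
four = blocks_needed(200)         # 4 blocks: 64 + 64 + 64 + 8 MB
# The same file after raising the block size to 128 MB:
two = blocks_needed(200, 128)     # 2 blocks: 128 + 72 MB
```

Fewer, larger blocks mean less metadata for the namenode to track, which is one reason larger block sizes are preferred for big files.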
Goals of HDFS
Fault detection and recovery − Since HDFS includes a
large number of commodity hardware components, failure of
components is frequent. Therefore, HDFS should have
mechanisms for quick, automatic fault detection and
recovery.
Huge datasets − HDFS should have hundreds of nodes
per cluster to manage the applications having huge
datasets.
Hardware at data − A requested task can be done
efficiently when the computation takes place near the
data. Especially where huge datasets are involved, this
reduces network traffic and increases throughput.
Quiz
1.Hadoop is an _______________that allows to store and
process big data in a distributed environment.
2.______ is a collection of large datasets that cannot be
processed using traditional computing techniques.
3._________ divides the task into small parts and assigns
them to many computers, and collects the results from
them which when integrated, form the result dataset.
4._______ is used to develop applications that could perform
complete statistical analysis on huge amounts of data.
5.Hadoop is an Apache open source framework written in
java that allows ______________of large datasets across
clusters of computers using simple programming models.
Quiz
6.The Hadoop framework application works in an
environment that provides
distributed _______ and _________ across clusters of
computers.
7.Map Reduce is a ______________for writing distributed
applications devised at Google for efficient processing of
large amounts of data.
8.The built-in servers of _______and ________ help users to
easily check the status of cluster.
9.Generally the user data is stored in the files of HDFS. The
file in a file system will be divided into one or more
segments and/or stored in individual data nodes. These
file segments are called as _______.
10.They also perform operations such as block creation,
deletion, and replication according to the instructions of
the _________.
Answers:
1. Open source framework
2. Big data
3. Map Reduce
4. Hadoop
5. Distributed processing
6. Storage and Computation
7. Parallel Programming model
8. Name node and Data node
9. Blocks
10. Name node