
MAP REDUCE EXAMPLES

WHAT IS MAPREDUCE?
• MapReduce is a processing technique and a programming model for distributed computing based on Java. The MapReduce algorithm contains two important tasks, namely Map and Reduce. Map takes a set of data and converts it into another set of data, where individual elements are broken down into tuples (key/value pairs). The Reduce task then takes the output from a map as its input and combines those data tuples into a smaller set of tuples. As the name MapReduce implies, the reduce task is always performed after the map job.
• The major advantage of MapReduce is that it is easy to scale data processing over multiple computing nodes. Under the MapReduce model, the data processing primitives are called mappers and reducers. Decomposing a data processing application into mappers and reducers is sometimes nontrivial. But once we write an application in the MapReduce form, scaling the application to run over hundreds, thousands, or even tens of thousands of machines in a cluster is merely a configuration change. This simple scalability is what has attracted many programmers to the MapReduce model.
THE ALGORITHM
• Generally, the MapReduce paradigm is based on sending the computation to where the data resides.
• A MapReduce program executes in three stages, namely the map stage, the shuffle stage, and the reduce stage.
• Map stage − The map or mapper's job is to process the input data. Generally, the input data is in the form of a file or directory and is stored in the Hadoop Distributed File System (HDFS). The input file is passed to the mapper function line by line. The mapper processes the data and creates several small chunks of data.
• Reduce stage − This stage is the combination of the Shuffle stage and the Reduce stage. The Reducer's job is to process the data that comes from the mapper. After processing, it produces a new set of output, which is stored in HDFS.
• During a MapReduce job, Hadoop sends the Map and Reduce tasks to the appropriate servers in the cluster.
• The framework manages all the details of data-passing, such as issuing tasks, verifying task completion, and copying data around the cluster between the nodes.
• Most of the computing takes place on nodes with the data on local disks, which reduces network traffic.
• After completion of the given tasks, the cluster collects and reduces the data to form an appropriate result and sends it back to the Hadoop server.
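
• As a concrete trace of the three stages, using the word-count example that appears later in these slides, take the single input line "Big data comes in various formats.":

  Map stage     − emits <Big, 1>, <data, 1>, <comes, 1>, <in, 1>, <various, 1>, <formats., 1>
  Shuffle stage − groups the intermediate pairs by key across all mappers, e.g. <data, [1, 1, 1]> once the full two-sentence text has been mapped
  Reduce stage  − sums each list and writes <data, 3>, <in, 2>, ... back to HDFS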
INPUTS AND OUTPUTS (JAVA PERSPECTIVE)
• The MapReduce framework operates on <key, value> pairs, that is, the framework views the
input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the
output of the job, conceivably of different types.

• The key and value classes have to be serializable by the framework and hence need to implement the Writable interface. Additionally, the key classes have to implement the WritableComparable interface to facilitate sorting by the framework. Input and output types of a MapReduce job − (Input) <k1, v1> → map → <k2, v2> → reduce → <k3, v3> (Output).

• Input and output types of each phase:

  Phase     Input              Output
  Map       <k1, v1>           list(<k2, v2>)
  Reduce    <k2, list(v2)>     list(<k3, v3>)
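
• To make the notation concrete, here is a minimal Java sketch of how <k1, v1> → <k2, v2> → <k3, v3> map onto the generic parameters of the standard org.apache.hadoop.mapreduce API. The class names and the concrete Writable types are illustrative assumptions, typical for line-oriented text input:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

//                         k1            v1    k2    v2
class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    // map(k1, v1) emits list(<k2, v2>) through context.write(...)
}

//                          k2    v2           k3    v3
class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    // reduce(k2, Iterable<v2>) emits list(<k3, v3>) through context.write(...)
}

Note that LongWritable and Text implement WritableComparable (so they can serve as keys), while IntWritable implements Writable, consistent with the bullet above.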
TERMINOLOGY
• Mapper − Maps the input key/value pairs to a set of intermediate key/value pairs.
• NameNode − Node that manages the Hadoop Distributed File System (HDFS).
• DataNode − Node where the data is present in advance, before any processing takes place.
• MasterNode − Node where the JobTracker runs and which accepts job requests from clients.
• SlaveNode − Node where the Map and Reduce programs run.
• JobTracker − Schedules jobs and tracks the assigned jobs to the TaskTracker.
• TaskTracker − Tracks the tasks and reports status to the JobTracker.
• Job − An execution of a Mapper and Reducer across a dataset.
• Task − An execution of a Mapper or a Reducer on a slice of data.
• Task Attempt − A particular instance of an attempt to execute a task on a SlaveNode.
STEPS IN MAP REDUCE

• The MapReduce algorithm contains two important tasks, namely Map and Reduce.
• The Map task takes a set of data and converts it into another set of
data, where individual elements are broken down into tuples (key-value
pairs).
• The Reduce task takes the output from the Map as an input and
combines those data tuples (key-value pairs) into a smaller set of tuples.
• The reduce task is always performed after the map job.
LET US NOW TAKE A CLOSE LOOK AT EACH OF THE PHASES AND TRY TO UNDERSTAND THEIR SIGNIFICANCE.
• Input Phase − Here we have a Record Reader that translates
each record in an input file and sends the parsed data to the
mapper in the form of key-value pairs.
• Map − Map is a user-defined function, which takes a series of
key-value pairs and processes each one of them to generate
zero or more key-value pairs.
• Intermediate Keys − The key-value pairs generated by the mapper are known as intermediate keys.
• Combiner − A combiner is a type of local Reducer that groups similar data from the map phase into identifiable sets. It takes the intermediate keys from the mapper as input and applies a user-defined code to aggregate the values in the small scope of one mapper. It is not a part of the main MapReduce algorithm; it is optional. (A driver sketch showing how a combiner is registered follows this list.)
• Shuffle and Sort − The Reducer task starts with the Shuffle and Sort
step. It downloads the grouped key-value pairs onto the local
machine, where the Reducer is running. The individual key-value
pairs are sorted by key into a larger data list. The data list groups the
equivalent keys together so that their values can be iterated easily
in the Reducer task.
• Reducer − The Reducer takes the grouped key-value paired data as input and runs a Reducer function on each one of them. Here, the data can be aggregated, filtered, and combined in a number of ways, which may require a wide range of processing. Once the execution is over, it gives zero or more key-value pairs to the final step.
• Output Phase − In the output phase, we have an output formatter
that translates the final key-value pairs from the Reducer function
and writes them onto a file using a record writer.
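
• To show where these phases hook into a real job, here is a minimal driver sketch against the standard Hadoop API. PhaseDriver is an illustrative name, and WordCountMapper / WordCountReducer are the word-count classes sketched with the first example later in these slides; reusing the reducer as the combiner is safe here because summing counts is associative and commutative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PhaseDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "phase demo");
        job.setJarByClass(PhaseDriver.class);

        // Map phase: user-defined function over input key-value pairs.
        job.setMapperClass(WordCountMapper.class);
        // Combiner: optional local Reducer that aggregates within one mapper.
        job.setCombinerClass(WordCountReducer.class);
        // Reduce phase: runs after shuffle and sort have grouped the keys.
        job.setReducerClass(WordCountReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input and Output phases: the Record Reader and Record Writer are
        // supplied by the input/output formats behind these two calls.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}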
LET US TRY TO UNDERSTAND THE TWO TASKS MAP & REDUCE WITH THE HELP OF A SMALL DIAGRAM −
1. MapReduce example to count the frequency of each word in a given input text. Our input text is: "Big data comes in various formats. This data can be stored in multiple data servers." (A mapper/reducer sketch follows after the data below.)
2. Find the top 3 salaried employees in the following data using MapReduce. (A top-N sketch also follows below.)
George Vetticaden 3300
Jamie Engesser 3300
Paul Coddin 2800
Joe Niemiec 3100
Adis Cesir 3200
Rohit Bakshi 3300
Tom McCuch 3000
Eric Mizell 3300
Grant Liu 3200
Ajay Singh 2500
Chris Harris 2900
Jeff Markham 3100
Nadeem Asghar 3300
Adam Diaz 3300
Don Hilborn 3300
Jean-Philippe Playe 3400
Michael Aube 3300
Mark Lochbihler 3300
Olivier Renault 3300
Teddy Choi 1200
Dan Rice 2500
Rommel Garcia 3300
Ryan Templeton 3300
Sridhara Sabbella 3300
Frank Romano 3300
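
For exercise 1, here is a minimal word-count sketch against the standard Hadoop MapReduce API (class names such as WordCountMapper are illustrative assumptions, not taken from these slides). The mapper emits <word, 1> for every token and the reducer sums the counts per word:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Split the line into tokens and emit <word, 1> for each one.
        StringTokenizer tokens = new StringTokenizer(line.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}

class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        // Sum all the 1s shuffled to this word.
        int sum = 0;
        for (IntWritable c : counts) {
            sum += c.get();
        }
        context.write(word, new IntWritable(sum));
    }
}

On the given input text this produces, among others, <data, 3> and <in, 2>; note that the naive StringTokenizer keeps punctuation attached, so "formats." and "servers." count as their own tokens.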
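
For exercise 2, one common top-N pattern (a sketch, not necessarily the solution these slides intend) keeps a bounded min-heap in each mapper, emits only that mapper's local top 3 from cleanup(), and routes every candidate under a single key to one reducer that picks the global top 3. It assumes each input line ends with the salary as its last whitespace-separated field:

import java.io.IOException;
import java.util.Comparator;
import java.util.PriorityQueue;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class Top3SalaryMapper
        extends Mapper<LongWritable, Text, NullWritable, Text> {
    // Min-heap of records ordered by salary; the lowest of the
    // current top 3 sits at the head and is evicted first.
    private final PriorityQueue<String> top =
            new PriorityQueue<>(Comparator.comparingInt(Top3SalaryMapper::salaryOf));

    static int salaryOf(String record) {
        // Assumes the salary is the last whitespace-separated field.
        return Integer.parseInt(record.substring(record.lastIndexOf(' ') + 1).trim());
    }

    @Override
    protected void map(LongWritable offset, Text line, Context context) {
        String record = line.toString().trim();
        if (record.isEmpty()) {
            return;
        }
        top.add(record);
        if (top.size() > 3) {
            top.poll();  // evict the lowest salary seen so far
        }
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        // Emit only this mapper's local top 3 under a single key so
        // that one reduce call sees every surviving candidate.
        for (String record : top) {
            context.write(NullWritable.get(), new Text(record));
        }
    }
}

class Top3SalaryReducer
        extends Reducer<NullWritable, Text, NullWritable, Text> {
    @Override
    protected void reduce(NullWritable key, Iterable<Text> records, Context context)
            throws IOException, InterruptedException {
        PriorityQueue<String> top =
                new PriorityQueue<>(Comparator.comparingInt(Top3SalaryMapper::salaryOf));
        for (Text t : records) {
            top.add(t.toString());
            if (top.size() > 3) {
                top.poll();
            }
        }
        // Drain the heap; the three remaining records are the global top 3.
        while (!top.isEmpty()) {
            context.write(NullWritable.get(), new Text(top.poll()));
        }
    }
}

On the given data the top salary is Jean-Philippe Playe at 3400, followed by two of the many employees tied at 3300; with ties, any of those employees is a valid answer for the remaining slots.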
