
Business Research Methods

Chapter Seven
7. Data Analysis and Interpretation
Introduction
A researcher's important function is the appropriate interpretation of different types of statistical data with the help of statistical tools. The preliminary statistical work consists of the collection, classification, tabulation, presentation and analysis of data. The most important part of the statistical work lies in the proper use of statistical tools in the interpretation of data. The most commonly used tools are the mean, median, mode and geometric mean; measures of dispersion such as the range, mean deviation and standard deviation; and other measures such as the coefficient of correlation and index numbers. It is necessary to note that the technical interpretation of data has to be combined with a high degree of sound judgment, statistical experience, skill and accuracy. After all, figures do not lie in themselves; they are innocent. But figures obtained haphazardly, compiled unscientifically and analyzed incompetently lead to a general distrust in statistics among the public. It should be understood that statistical methods are dangerous tools in the hands of the inexpert.
7.1 Data Analysis
Data remain in raw form unless and until they are processed and analyzed. Processing is the operation by which the collected data are organized so that further analysis and interpretation of the data become easy. It is an intermediary stage between the collection of data and their analysis and interpretation. There are four important stages in the processing of data: editing, coding, classification and tabulation.
1. Editing: As soon as the researcher receives the data, s/he should screen them for accuracy. Editing is the
   process of examining the data collected through various methods to detect errors and omissions and to
   correct them for further analysis. Through editing, it is ensured that the collected data are accurate,
   consistent with other facts gathered, uniformly entered and well arranged, so that further analysis is
   made easier.
2. Coding: Coding is the process by which response categories are summarized by numerals or other symbols
   so that subsequent operations of data analysis can be carried out. This process of assigning numerals or
   symbols to the responses is called coding. It facilitates efficient analysis of the collected data and helps
   in reducing several replies to a small number of classes which contain the critical information required
   for analysis. In general, it reduces the huge amount of information collected into a form that is amenable
   to analysis.
3. Classification: Classification is the process of reducing a large mass of data into homogeneous groups for
   meaningful analysis. It converts data from complex to understandable and from unintelligible to intelligible
   forms, dividing the data into different groups or classes according to their similarities and dissimilarities.
   When the data are classified, they give a summary of the whole information.
Objectives of classification
 To organize data into concise, logical and intelligible form.
 To make the similarities and dissimilarities between various classes clear.
 To facilitate comparison between various classes of data.
 To help the researcher in understanding the significance of various classes of data.
 To facilitate analysis and the formulation of generalizations.
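As a minimal sketch of classification by a measurable characteristic, the following groups a raw variable such as age into homogeneous classes; the data and class boundaries are illustrative assumptions, not taken from the chapter.

```python
# Classify raw ages into class intervals (the data and the
# class boundaries below are illustrative assumptions).
ages = [19, 23, 35, 42, 27, 31, 55, 48, 22, 39]

# Class intervals as [lower, upper) pairs.
classes = [(15, 25), (25, 35), (35, 45), (45, 55), (55, 65)]

grouped = {f"{lo}-{hi - 1}": [a for a in ages if lo <= a < hi]
           for lo, hi in classes}
for label, members in grouped.items():
    print(f"{label}: {len(members)} observations")
```

Each observation falls into exactly one class, so the group sizes sum back to the number of raw observations.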
Types of classification
A) Classification according to external characteristics: In this classification, data may be classified
   on either a geographical or a periodical basis. On a geographical basis, data collected from
   different places are placed in different classes, while on a periodical (chronological) basis, the
   data belonging to a particular time or period are put under one class.
B) Classification according to internal characteristics: Data may be classified either according to
   attributes or according to the magnitude of variables. Under classification by attributes, data are
   classified on the basis of some attribute or characteristic. If the classification is based on one
   particular attribute only, it is called simple classification, e.g. classification on the basis of sex;
   whereas if the classification is based on several attributes, it is called manifold or multiple
   classification, and the data are divided into several groups. Under classification by variables, data
   are classified according to characteristics that can be measured, i.e. quantitative characteristics
   such as age, height and weight. Quantitative variables are grouped into two kinds: a variable that
   can take only exact values is called a discrete variable, while a variable that can take any
   numerical value within a specified range is called a continuous variable.

WCU, CBE, Dep’t of Mgt. Page 1
4. Tabulation: Tabulation is the step next to classification. It is an orderly arrangement of data in rows
   and columns. Data presented in tabular form are much easier to read and understand than data presented
   in running text. The main purpose of tabulation is to prepare the data for final analysis; it is a stage
   between the classification of data and the final analysis.
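The coding and tabulation steps described above can be sketched together in a few lines; the response categories and code values here are illustrative assumptions, not a standard scheme.

```python
from collections import Counter

# Raw survey responses (illustrative data).
responses = ["Agree", "Disagree", "Neutral", "Agree", "Agree", "Disagree"]

# Coding: a codebook assigns a numeral to each response category.
codebook = {"Disagree": 1, "Neutral": 2, "Agree": 3}
coded = [codebook[r] for r in responses]
print(coded)  # [3, 1, 2, 3, 3, 1]

# Tabulation: arrange the coded data as a small frequency table
# with rows (codes) and columns (counts).
freq = Counter(coded)
print("Code  Frequency")
for code in sorted(freq):
    print(f"{code:>4}  {freq[code]:>9}")
```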
Types of Tables
Simple table: Here the data are presented for only one variable or characteristic; any frequency
distribution of a single variable is a simple table.
Complex table: In a complex table, two or more characteristics are shown. If the study relates to
more than two variables, the analysis is called multivariate analysis. Complex tables may take the
following forms.
a) One-way table: In this type of table, data on only one characteristic are shown. When one type
   of information is secured about different groups or individuals, it can be displayed with the help
   of a one-way table.
b) Two-way table: When mutually related attributes of a phenomenon are to be displayed, two-
   way tables are used. In other words, this table shows two types of characteristics.
c) Three-way table: It displays three types of attributes. It is used when three inter-related or
   mutually related attributes or characteristics of a phenomenon are to be displayed.
d) Manifold tables: When information about many mutually related attributes or characteristics of a
   phenomenon is to be displayed, manifold tables are used. Such tables display information about
   numerous characteristics or attributes.
Common Descriptive Techniques
The most common descriptive statistics used in research consist of percentages and frequency tables.
Percentages: Percentages are a popular method of displaying a distribution, and they are powerful
in making comparisons: percentages simplify the data by reducing all numbers to a common base
ranging from 0 to 100.
Frequency Tables: One of the most common ways to describe a single variable is with a frequency
distribution. A frequency distribution can be depicted in two ways: as a table or as a graph. If the
frequency distribution is depicted in the form of a table, we call it a frequency table.
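As a sketch of a frequency table built from a single variable, with absolute frequencies and percentages side by side (the data are illustrative assumptions):

```python
from collections import Counter

# Preferred payment method of respondents (illustrative data).
data = ["cash", "card", "cash", "mobile", "card", "cash", "card", "cash"]

freq = Counter(data)                  # absolute frequency of each category
total = sum(freq.values())
print(f"{'Category':<10}{'Frequency':>10}{'Percent':>10}")
for category, count in freq.most_common():
    print(f"{category:<10}{count:>10}{100 * count / total:>9.1f}%")
```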
Contingency Tables: A contingency table shows the relationship between two variables in tabular
form. The term "contingency table" was first used by the statistician Karl Pearson in 1904.
Contingency tables are especially used in the Chi-square test.
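A contingency table can be cross-tabulated from paired observations of two categorical variables; a minimal sketch, with hypothetical (sex, response) pairs as illustrative data:

```python
from collections import defaultdict

# Paired observations of two categorical variables (illustrative data).
pairs = [("M", "yes"), ("F", "no"), ("M", "yes"), ("F", "yes"),
         ("M", "no"), ("F", "no"), ("F", "yes"), ("M", "yes")]

# Cross-tabulate: rows are the first attribute, columns the second.
table = defaultdict(lambda: defaultdict(int))
for row, col in pairs:
    table[row][col] += 1

for row in sorted(table):
    print(row, dict(sorted(table[row].items())))
```

The resulting cell counts are exactly what a Chi-square test of independence would take as its observed frequencies.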
Graphs and Diagrams: In research, the data collected may be complex in nature. Diagrams and
graphs are among the methods that simplify the complexity of quantitative data and make them
easily intelligible. They present dry and uninteresting statistical facts in the shape of attractive and
appealing pictures, and they leave a more lasting impression on the human mind than conventional
numbers. The following graphs are commonly used to represent data.
1) Line Graphs, or Charts: A line graph displays information as a series of data points, each of which
   represents an individual measurement or piece of data. The points are then connected by a line, drawn
   through each point chronologically, to show a visual trend in the data over a period of time.
2) The Bar Graph: is a common type of graph consisting of parallel bars or rectangles whose lengths are
   proportional to the quantities in a given data set. The bars can be presented vertically or horizontally
   to show contrasts and record information. Bar graphs are used for plotting discontinuous (discrete)
   data, which take distinct values and are not continuous.
3) Circle charts or pie diagrams: A pie graph is a circle divided into sectors, each displaying the size
   of a related piece of information; the sections together form a whole. In a pie graph, the angle (and
   hence the area) of each sector is proportional to the percentage it represents. Pie graphs work
   particularly well when each slice of the pie represents 25 to 50 percent of the given data.
4) Pictograms: A pictogram, also called a pictograph, is an ideogram that conveys its meaning through
   its pictorial resemblance to a physical object. Pictographs are often used in writing and graphic
   systems in which the characters are to a considerable extent pictorial in appearance. Pictography is a
   form of writing which uses representational, pictorial drawings. It is the basis of cuneiform and, to
   some extent, hieroglyphic writing, which also uses drawings as phonetic letters or determinatives.
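For the circle chart described above, each sector's angle is its category's share of 360 degrees; a minimal sketch, with illustrative regional sales figures as the assumed data:

```python
# Sector angles for a pie chart: each category's angle is
# proportional to its share of the total (figures are illustrative).
sales = {"North": 40, "South": 30, "East": 20, "West": 10}

total = sum(sales.values())
angles = {region: 360 * value / total for region, value in sales.items()}
print(angles)  # {'North': 144.0, 'South': 108.0, 'East': 72.0, 'West': 36.0}
```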


After the above comprehensive, step-wise processing of data, the analysis of data can begin. Analysis is
considered to be a highly skilled and technical job which should be carried out only by the researcher
himself or under his close supervision. Analysis of data means the critical examination of the data for
studying the characteristics of the object under study and for determining the patterns of relationship
among the variables relating to it, using both quantitative and qualitative methods.
Purpose of Analysis
Statistical analysis of data serves several major purposes.
 It summarizes large masses of data into understandable and meaningful form.
 It makes descriptions exact.
 It aids the drawing of reliable inferences from observational data.
 It facilitates the identification of the causal factors underlying complex phenomena.
 It helps in making estimations or generalizations from the results of sample surveys.
 Inferential analysis is useful for assessing the significance of specific sample results under assumed
   population conditions.
Steps in Analysis
The different steps in research analysis consist of the following.
1. The first step involves the construction of statistical distributions and the calculation of simple
   measures like averages and percentages.
2. The second step is to compare two or more distributions, or two or more subgroups within a distribution.
3. The third step is to study the nature of relationships among variables.
4. The next step is to find out the factors which affect the relationship between a set of variables.
5. The last step is testing the validity of inferences drawn from sample surveys by using parametric tests
   of significance.
Types of Analysis
Statistical analysis may be broadly classified as descriptive analysis and inferential analysis.
i) Descriptive Analysis: Descriptive statistics are used to describe the basic features of the data in a
   study. They provide simple summaries about the sample and the measures. Descriptive statistics is the
   discipline of quantitatively describing the main features of a collection of data, or the quantitative
   description itself. Such analysis comprises univariate analysis, bivariate analysis and multivariate
   analysis.
 Univariate analysis: Univariate analysis involves describing the distribution of a single
variable, including its central tendency (including the mean, median, and mode) and dispersion
(including the range and quartiles of the data-set, and measures of spread such as the variance
and standard deviation). The shape of the distribution may also be described via indices such as
skewness and kurtosis. Characteristics of a variable's distribution may also be depicted in
graphical or tabular format, including histograms and stem-and-leaf display.
 Bivariate analysis: Bivariate analysis is one of the simplest forms of quantitative
   (statistical) analysis. It involves the analysis of two variables (often denoted X and Y) for the
   purpose of determining the empirical relationship between them. Common forms of bivariate
   analysis involve creating a percentage table or a scatter plot and computing a simple
   correlation coefficient.
 Multivariate analysis: In multivariate analysis multiple relations between multiple variables
are examined simultaneously. Multivariate analysis (MVA) is based on the statistical principle
of multivariate statistics, which involves observation and analysis of more than one statistical
outcome variable at a time. In design and analysis, the technique is used to perform trade studies
across multiple dimensions while taking into account the effects of all variables on the
responses of interest.
ii) Inferential Analysis: Inferential statistics is concerned with making predictions or inferences about
   a population from observations and analyses of a sample. That is, we can take the results of an
   analysis using a sample and generalize them to the larger population that the sample represents.
   There are two areas of statistical inference: (a) statistical estimation and (b) the testing of hypotheses.
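The univariate descriptive measures above (central tendency and dispersion) can be computed with Python's standard statistics module; the sample of ages below is an illustrative assumption:

```python
import statistics

# A single variable: ages of sampled respondents (illustrative data).
ages = [21, 25, 25, 28, 30, 32, 35, 40]

print("mean:  ", statistics.mean(ages))          # central tendency
print("median:", statistics.median(ages))
print("mode:  ", statistics.mode(ages))
print("range: ", max(ages) - min(ages))          # dispersion
print("stdev: ", round(statistics.stdev(ages), 2))
```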

Tools and Statistical Methods for Analysis
The tools and techniques of statistics can be studied under the two divisions of statistics.
(A)Descriptive Statistics
In descriptive statistics we develop certain indices and measures of raw data. They are;
1) Measures of Central Tendency: include estimates such as the mean, median, mode, geometric
   mean and harmonic mean.
2) Measures of Dispersion: the common measures of dispersion are the range and the standard
   deviation; they can be used to compare the variability of two statistical series.
3) Measures of skewness and kurtosis: A fundamental task in many statistical analyses is to
characterize the location and variability of a data set. A further characterization of the data
includes skewness and kurtosis. Skewness is a measure of symmetry, or more precisely, the lack of
symmetry. A distribution, or data set, is symmetric if it looks the same to the left and right of the
center point. Kurtosis is a measure of whether the data are peaked or flat relative to a normal
distribution.
4) Measures of correlation: simple correlation (two variables), partial correlation (more than two
   variables, where we study the relation between two of them only, treating the others as constant)
   and multiple correlation (more than two variables, where we study the relation of one variable
   with all the other variables together).
5) Regression analysis: is a statistical process for estimating the relationships among variables. It
includes many techniques for modeling and analyzing several variables, when the focus is on the
relationship between a dependent variable and one or more independent variables.
6) Index numbers: An index is a statistical measure of changes in a representative group of
individual data points. Index numbers are designed to measure the magnitude of economic changes
over time.
7) Time series analysis: a time series is a sequence of data points, typically measured at successive
   points in time spaced at uniform intervals; time series analysis studies such sequences to describe
   patterns over time.
8) Coefficient of association: Coefficient of association like, Yule’s coefficient, Pearson Coefficient,
etc. measures the extent of association between two attributes.
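The correlation and regression measures above can be computed by hand from their defining formulas; a hedged sketch on illustrative paired data (chosen to lie exactly on a line so the results are easy to check):

```python
import math

# Illustrative paired data.
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]   # lies exactly on y = 2x + 1

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # co-deviation sum
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

r = sxy / math.sqrt(sxx * syy)        # simple correlation coefficient
slope = sxy / sxx                     # least-squares regression of y on x
intercept = my - slope * mx
print(r, slope, intercept)  # 1.0 2.0 1.0
```

Because the points are perfectly collinear, r comes out as 1.0 and the fitted line recovers y = 2x + 1 exactly.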
(B) Inferential Statistics
Inferential statistics deals with forecasting, estimating or judging results for the universe on the basis of
some units selected from it; this process of selection is called sampling. Inferential statistics facilitates the
estimation of certain population values known as parameters. It also deals with the testing of hypotheses
to determine the validity of the conclusions drawn.
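As a sketch of statistical estimation, the following computes a 95% confidence interval for a population mean from a sample, using the normal approximation; the sample data and the 1.96 critical value are illustrative assumptions:

```python
import math
import statistics

# A sample of units selected from the universe (illustrative data).
sample = [52, 48, 55, 50, 49, 53, 51, 47, 54, 50]

n = len(sample)
mean = statistics.fmean(sample)                  # point estimate of the parameter
se = statistics.stdev(sample) / math.sqrt(n)     # standard error of the mean
low, high = mean - 1.96 * se, mean + 1.96 * se   # 95% CI (normal approximation)
print(f"estimate {mean:.1f}, 95% CI ({low:.2f}, {high:.2f})")
```

For small samples a t critical value would normally replace 1.96; the structure of the estimate is the same.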
7.2 Interpretation
Interpretation refers to the technique of drawing inferences from the collected facts and explaining the
significance of those inferences after an analytical and experimental study. It is a search for the broader
and more abstract meaning of the research findings. If the interpretation is not done carefully, misleading
conclusions may be drawn. The interpreter must be creative in ideas and free from bias and prejudice.
Fundamental principles of interpretation
Sound interpretation involves willingness on the part of the interpreter to see what is in the data.
Sound interpretation requires that the interpreter knows something more than the mere figures.
Sound interpretation demands logical thinking.
Clear and simple language is necessary for communicating the interpretation.
Errors of interpretation
The errors of interpretation can be classified into two groups.
1) Errors due to false generalizations:
Errors occur when (i) unwarranted conclusions are drawn from the facts available; (ii) conclusions
are drawn from an argument running from effect to cause; (iii) comparisons are made between two
sets of data with unequal bases; (iv) conclusions are drawn from data irrelevant to the problem; and
(v) false generalizations are made through faulty statistical methods.
2) Errors due to misuse of statistical measures
Errors occur when (i) conclusions are based only on what is true on an average; (ii) percentages are
used for comparisons when the total numbers are different; (iii) index numbers are used without
proper care; and (iv) casual (chance) correlation is treated as real correlation.
