18ME505 M&M Teaching Notes Unit-1 &2
Timetable: seven periods per day (09:30-10:20, 10:20-11:20, 11:30-12:20, 12:20-01:10, 02:00-02:50, 02:50-03:40, 03:40-04:30). M&M classes are scheduled on Monday, Tuesday, Wednesday, Friday and Saturday.
Types of Metrology:
1.Legal Metrology. 'Legal metrology' is that part of metrology which treats units of
measurements, methods of measurements and the measuring instruments, in relation to the
technical and legal requirements.
1. The importance of the science of measurement as a tool for scientific research (by which
accurate and reliable information can be obtained) was emphasized by Galileo and Goethe.
This is essential for solving almost all technical problems in the field of engineering in
general, and in production engineering and experimental design in particular. The design
engineer should not only check his design from the point of view of strength and economical
production, but should also keep in mind how the specified dimensions can be checked or
measured. Unfortunately, a considerable amount of engineering work is still executed without
realizing the importance of inspection and quality control for improving the function of the
product and achieving economical production.
2. Present manufacturing techniques call for higher productivity and accuracy. This cannot
be achieved unless the science of metrology is understood, introduced and applied in
industries. Improving the quality of production necessitates a proportional improvement in
measuring accuracy, the marking out of components before machining, and the in-process and
post-process control of the dimensional and geometrical accuracies of the product. Proper
gauges should be designed and used for rapid and effective inspection. Also, automation and
automatic control, which are the modern trends for future developments, are based on
measurement. Means for automatic gauging as well as for position and displacement
measurement with feedback control have to be provided.
NEED OF INSPECTION
In order to determine the fitness of anything made, man has always used inspection. But
industrial inspection is of recent origin and has a scientific approach behind it. It came into
being because of mass production, which involved interchangeability of parts. In the old craft
system, the same craftsman used to be both producer and assembler, so separate inspection
was not required. If any component part did not fit properly at the time of assembly, the
craftsman would make the necessary adjustments in either of the mating parts so that each
assembly functioned properly. Strictly speaking, no two parts were alike, and there was
practically no reason why they should be.

Now new production techniques have been developed, and parts are manufactured on a large
scale by low-cost methods of mass production, so hand-fit methods can no longer serve the
purpose. When a large number of components of the same part are produced, any part must
fit properly into any mating component. This requires specialisation of men and machines for
the performance of certain operations. It has, therefore, been considered necessary to divorce
the worker from all-round craft work and to supplant hand-fit methods with interchangeable
manufacture. Modern production techniques require that the production of a complete article
be broken up into various component parts, so that the production of each component part
becomes an independent process. The various parts to be assembled together in the assembly
shop come from various shops; some parts are even manufactured in other factories and then
assembled at one place. So it is essential that parts be fabricated such that satisfactory mating
of any pair chosen at random is possible. For this to be possible, the dimensions of each
component part must be confined within prescribed limits, which are such as to permit
assembly with a predetermined fit. Thus industrial inspection assumed its importance due to
the necessity of suitable mating of various components manufactured separately.

When large quantities of workpieces are manufactured on the basis of interchangeability, it is
not necessary to actually measure the important features; much time can be saved by using
gauges which determine whether or not a particular feature is within the prescribed limits.
Gauging methods, therefore, determine the dimensional acceptability of a feature without
reference to its actual size. The purpose of dimensional control is, however, not to strive for
the exact size, as it is impossible to produce all parts of exactly the same size due to the many
inherent and random sources of error in machines and men. The principal aim is to control
and restrict the variations within the prescribed limits. Since we are interested in producing
parts such that the assembly meets the prescribed work standard, we must not aim at accuracy
beyond the set limits, which would otherwise lead to wastage of time and uneconomical
results.

Lastly, inspection led to the development of precision inspection instruments, which caused
the transition from crude machines to better designed and precision machines. It also led to
improvements in metallurgy and raw-material manufacture due to the demands of high
accuracy and precision. Inspection has also introduced a spirit of competition and led to the
production of quality products in volume by eliminating tooling bottlenecks and through
better processing techniques.
INTRODUCTION TO MEASUREMENT
ACCURACY OF MEASUREMENTS:
No measurement can be made absolutely exact; factors such as temperature variations always
introduce some error. Thus, the true dimension of a part cannot be determined but can only be
approximated. The agreement of the measured value with the true value of the measured
quantity is called accuracy. If the measurement of a dimension of a part approximates very
closely to the true value of that dimension, it is said to be accurate. Thus the term accuracy
denotes the closeness of the measured value to the true value. The difference between the
measured value and the true value is the error of measurement. The smaller the error, the
greater the accuracy.
Accuracy: Accuracy is the degree to which the measured value of the quality characteristic
agrees with the true value. The difference between the true value and the measured value is
known as error of measurement. It is practically difficult to measure exactly the true value
and therefore a set of observations is made whose mean value is taken as the true value of the
quality measured.
Precision: The terms precision and accuracy are used in connection with the performance of
the instrument. Precision is the repeatability of the measuring process. It refers to the group
of measurements for the same characteristics taken under identical conditions. It indicates to
what extent the identically performed measurements agree with each other. If the instrument
is not precise it will give different (widely varying) results for the same dimension when
measured again and again. The set of observations will scatter about the mean. The scatter of
these measurements is designated as σ, the standard deviation, and is used as an index of
precision. The less the scattering, the more precise the instrument: the lower the value of σ,
the more precise the instrument.
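As a rough illustration, the mean and standard deviation of a set of repeated readings can be computed as below (the readings are hypothetical values in mm, used only to show the calculation):

```python
import statistics

# Hypothetical repeated readings (mm) of the same dimension
readings = [25.01, 25.03, 24.99, 25.02, 25.00]

mean = statistics.mean(readings)     # best estimate of the dimension
sigma = statistics.pstdev(readings)  # population standard deviation, the index of precision

print(f"mean = {mean:.3f} mm, sigma = {sigma:.4f} mm")
```

The smaller the value of sigma printed here, the more precise the instrument that produced the readings.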
Accuracy is very often confused with precision, though the two are quite different. The
distinction between precision and accuracy becomes clear from the following example.
Several measurements are made on a component by different types of instruments (A, B and
C respectively) and the results are plotted. In any set of measurements, the individual
measurements are scattered about the mean, and precision signifies how well the various
measurements performed by the same instrument on the same quality characteristic agree
with each other. The difference between the mean of a set of readings on the same quality
characteristic and the true value is called the error; the less the error, the more accurate the
instrument. The figure shows that instrument A is precise, since the results of a number of
measurements are close to the average value; however, there is a large difference (error)
between the true value and the average value, hence it is not accurate. The readings taken by
instrument B are scattered widely about the average value, hence it is not precise; but it is
accurate, as there is only a small difference between the average value and the true value.
METHODS OF MEASUREMENTS:
1. Direct method
2. Indirect method
3. Absolute or fundamental method
4. Comparative method
5. Transposition method
6. Coincidence method
7. Deflection method
8. Complementary method
9. Contact method
10. Contactless method
11. Composite method
9. Contact method: In this method, the surface to be measured is touched by the sensor or
measuring tip of the instrument. Care needs to be taken to apply a constant contact pressure
in order to avoid errors due to excess contact pressure. Examples of this method include
measurements using a micrometer, vernier calliper, and dial indicator.
10.Contactless method: As the name indicates, there is no direct contact with the surface to
be measured. Examples of this method include the use of optical instruments, tool maker’s
microscope, and profile projector.
11.Composite method: The actual contour of a component to be checked is compared with
its maximum and minimum tolerance limits. Cumulative errors of the interconnected
elements of the component, which are controlled through a combined tolerance, can be
checked by this method. This method is very reliable to ensure interchangeability and is
usually effected through the use of composite GO gauges. The use of a GO screw plug gauge
to check the thread of a nut is an example of this method.
12. Method of measurement by substitution: It is a method of direct comparison in which
the value of a quantity to be measured is replaced by a known value of the same quantity, so
selected that the effects produced in the indicating device by these two values are the same.
13. Method of null measurement: It is a method of differential measurement. In this method
the difference between the value of the quantity to be measured and the known value of the
same quantity with which it is compared is brought to zero
When the value of the measured quantity remains the same irrespective of whether the
measurements have been obtained in an ascending or a descending order, a system is said to
be free from hysteresis. Many instruments do not reproduce the same reading due to the
presence of hysteresis. Slack motion in bearings and gears, storage of strain energy in the
system, bearing friction, residual charge in electrical components, etc., are some of the
reasons for the occurrence of hysteresis.
Figure shows a typical hysteresis loop for a pressure gauge.
If the width of the hysteresis band formed is appreciably large, the average of the two
measurements (obtained in the ascending and descending orders) is used. However, the
presence of some hysteresis in measuring systems is normal, and it affects the repeatability
of the system.
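The averaging of ascending- and descending-order readings described above can be sketched as follows (the gauge readings are hypothetical):

```python
# Hypothetical pressure-gauge readings (kPa) taken at the same input points,
# once with the input increasing and once with it decreasing.
ascending = [10.0, 20.2, 30.5, 40.6]
descending = [10.4, 20.8, 31.1, 40.6]

# Average the two branches to reduce the hysteresis error
corrected = [(a + d) / 2 for a, d in zip(ascending, descending)]

# Width of the hysteresis band: the widest gap between the branches
hysteresis_band = max(abs(a - d) for a, d in zip(ascending, descending))

print(corrected, hysteresis_band)
```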
It is desirable to design instruments having a linear relationship between the applied static
input and the indicated output values, as shown in Fig.
Best-fit line The plot of the output values versus the input values with the best-fit line is
shown in Fig.
The line of best fit is the most common way to show the correlation between two variables.
This line, which is also known as the trend line, is drawn through the centre of a group of
data points on a scatter plot. The best-fit line may pass through all the points, some of the
points, or none of the points.
End point line This is employed when the output is bipolar. It is the line drawn by joining
the end points of the data plot without any consideration of the origin. This is represented in
Fig.
Terminal line When the line is drawn from the origin to the data point at full scale output, it
is known as terminal line. The terminal line is shown in Fig. 12.6.
Threshold
If the input to an instrument is gradually increased from zero, a minimum value of the input
is required before any change in the output can be detected. This minimum value of the input
is defined as the threshold of the instrument.
Drift
Drift can be defined as the variation caused in the output of an instrument, which is not
caused by any change in the input. Drift in a measuring instrument is mainly caused by
internal temperature variations and lack of component stability. A change in the zero output
of a measuring instrument caused by a change in the ambient temperature is known as
thermal zero shift. Thermal sensitivity is defined as the change in the sensitivity of a
measuring instrument because of temperature variations. These errors can be minimized by
maintaining a constant ambient temperature during the course of a measurement and/or by
frequently calibrating the measuring instrument as the ambient temperature changes.
Zero Stability
It is defined as the ability of an instrument to return to the zero reading after the input signal
or measurand comes back to the zero value and other variations due to temperature, pressure,
vibrations, magnetic effect, etc., have been eliminated.
Loading Effects
Any measuring instrument generally consists of different elements that are used for sensing,
conditioning, or transmitting purposes. Ideally, when such elements are introduced into the
measuring system, there should not be any distortion in the original signal. However, in
practice, whenever any such element is introduced into the system, some amount of distortion
occurs in the original signal, making an ideal measurement impossible. The distortion may
result in wave form distortion, phase shift, and attenuation of the signal (reduction in
magnitude); sometimes, all these undesirable features may combine to affect the output of the
measurement. Hence, loading effect is defined as the incapability of a measuring system to
faithfully measure, record, or control the measurand in an undistorted form. It may occur in
any of the three stages of measurement or sometimes it may be carried right down to the
basic elements themselves.
System Response
One of the essential characteristics of a measuring instrument is to transmit and present
faithfully all the relevant information included in the input signal and exclude the rest. We
know that during measurements, the input rapidly changes with time, and hence, the output.
The behaviour of the measuring system under the varying conditions of input with respect to
time is known as the dynamic response.
Measuring lag
It is the delay in the response of an instrument to a change in the measured quantity. This lag
is normally due to the natural inertia of the measuring system.
Measuring lag is of two types:
Retardation type In this case, the measurement system instantaneously begins to respond
after the changes in the input have occurred.
Time delay type In this type, the measuring system begins to respond after a dead time to
the applied input. Dead time is defined as the time required by the measuring system to begin
its response to a change in the quantity to be measured. Dead time simply transfers the
response of the system along the time scale, thereby causing a dynamic error. This type of
measurement lag can often be ignored, as it is very small, of the order of a fraction of a
second. However, if the variation in the measured quantity occurs at a faster rate, the dead
time will have an adverse effect on the performance of the system.
Fidelity
It is defined as the degree to which a measurement system indicates the changes in the
measured quantity without any dynamic error.
Dynamic error
It is also known as a measurement error. It can be defined as the difference between the true
value of a physical quantity under consideration that changes with time and the value
indicated by the measuring system if no static error is assumed. It is to be noted here that
speed of response and fidelity are desirable characteristics, whereas measurement lag and
dynamic error are undesirable.
ERRORS IN MEASUREMENTS:
It is never possible to measure the true value of a dimension; there is always some error. The
error in measurement is the difference between the measured value and the true value of the
measured dimension.
Error in measurement = Measured value - True value
The error in measurement may be expressed or evaluated either as an absolute error or as a
relative error.
Absolute Error:
True absolute error: It is the algebraic difference between the result of measurement and the
conventional true value of the quantity measured.
Apparent absolute error: If the series of measurement are made then the algebraic
difference between one of the results of measurement and the arithmetical mean is known as
apparent absolute error.
Relative Error:
It is the quotient of the absolute error and the value of comparison used for the calculation of
that absolute error. This value of comparison may be the true value, the conventional true
value or the arithmetic mean of a series of measurements. The accuracy of measurement, and
hence the error, depends upon many factors, such as the calibration standard, the workpiece,
the instrument, the person, and the environment.
No matter how modern the measuring instrument, how skilful the operator, or how accurate
the measurement process, there will always be some error; the attempt, therefore, is to
minimize it. To minimize the error, usually a number of observations are made and their
average is taken as the value of that measurement. If these observations are made under
identical conditions, i.e., the same observer, the same instrument and similar working
conditions (except for time), the procedure is called a 'Single Sample Test'.
If, however, repeated measurements of a given property are made using alternate test
conditions, such as a different observer and/or a different instrument, the procedure is called
a 'Multi-Sample Test'. The multi-sample test avoids many controllable errors, e.g., personal
error, instrument zero error, etc. The multi-sample test is costlier than the single sample test,
and hence the latter is in wide use. In practice, a good number of observations are made
under a single sample test and statistical techniques are applied to get results which
approximate those obtainable from a multi-sample test.
Types of Errors
There are three types of errors that are classified on the basis of the source they arise from;
They are:
1.Systematic Errors
2.Random Errors
3.Gross Errors
1. Systematic Errors: These errors include calibration errors, errors due to variation in
atmospheric conditions, variation in contact pressure, etc. If properly analyzed, these errors
can be determined and reduced or even eliminated; hence they are also called controllable
errors. All systematic errors can be controlled in magnitude and sense except personal error.
These errors result from an irregular procedure that is consistent in action; they are repetitive
in nature and of constant and similar form.
Systematic errors can be better understood if we divide it into subgroups; They are:
Environmental Errors: This type of error arises in the measurement due to the effect
of the external conditions on the measurement. The external condition includes
temperature, pressure, and humidity and can also include an external magnetic field.
For example, if you measure your body temperature under the armpit and, during the
measurement, the electricity goes out and the room gets hot, the change will affect your body
temperature and thereby the reading.
Observational Errors: These are the errors that arise due to an individual's bias, improper
setting of the apparatus, or carelessness in taking observations. Measurement errors also
include wrong readings due to parallax error.
Instrumental Errors: These errors arise due to faulty construction and calibration of the
measuring instruments. Such errors can arise due to hysteresis of the equipment or due to
friction. Often, the equipment being used is faulty due to misuse or neglect, which changes
its reading. The zero error is a very common type of error; it occurs in devices like vernier
calipers and screw gauges, and it can be either positive or negative. Sometimes the markings
of the scale are worn off, and this can also lead to a bad reading.
Such errors are inherent in instruments because of their features, namely the mechanical
arrangement. They may arise from the operation of the instrument as well as from the
computations involved, and they can cause readings to be too low or too high. For instance,
if the instrument uses a delicate spring, it may give a high value for the measured quantity.
Such errors can also arise in an instrument due to hysteresis or friction.
Misuse of Apparatus
Errors can also occur due to the operator's fault. A good instrument used in an unintelligent
way may give erroneous results. For instance, misuse of the apparatus may involve failure to
adjust the zero of the instrument, poor initial adjustment, or using leads of too high a
resistance. Such improper practices may not cause permanent harm to the device, but all the
same, they cause errors.
Effect of Loading
The most frequent type of this error is caused by the act of measurement on the system being
measured. For instance, when a voltmeter is connected to a high-resistance circuit it gives a
misleading reading, whereas when it is connected to a low-resistance circuit it gives a
dependable reading; the voltmeter is then said to have a loading effect on the circuit. The
error caused by this effect can be overcome by using the meters intelligently. For example,
when measuring a low resistance by the ammeter-voltmeter method, a voltmeter of very high
resistance should be used.
2.Random Errors
Random errors are those which occur irregularly and hence are random. They can arise from
random and unpredictable fluctuations in experimental conditions (for example, fluctuations
in temperature or voltage supply, or mechanical vibrations of the experimental set-up) and
from variations in the observer's readings. For example, when the same person repeats the
same observation, it is very likely that he may get different readings every time.
3.Gross Errors
This category basically takes into account human oversight and other mistakes while reading,
recording, and calculating readings. The most common error, human error in measurement,
falls under this category. For example, the person taking a reading from the meter of an
instrument may read 23 as 28. Gross errors can be avoided by two suitable measures:
Proper care should be taken in reading and recording the data. Also, the calculation of
error should be done accurately.
By increasing the number of experimenters, we can reduce gross errors. If each
experimenter takes a different reading at different points, then by taking the average of
more readings we can reduce the gross errors.
ERRORS CALCULATION
Absolute Error
The difference between the measured value of a quantity and its actual value gives the
absolute error. It is the variation between the actual and measured values, and is given by:
Absolute error = |Measured value − True value|
Percent Error
It is another way of expressing the error in measurement. This calculation allows us to gauge
how accurate a measured value is with respect to the true value. Percent error is given by the
formula:
Percent error = (|Measured value − True value| / True value) × 100
Relative Error
The ratio of the absolute error to the accepted (true) value gives the relative error. Relative
error is given by the formula:
Relative error = Absolute error / True value
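A minimal sketch of the three error measures, using assumed values for the true and measured dimensions (the numbers are illustrative only):

```python
# Assumed values (mm) for illustration only
true_value = 25.00
measured_value = 25.08

absolute_error = abs(measured_value - true_value)  # |measured - true|
relative_error = absolute_error / true_value       # absolute error / true value
percent_error = relative_error * 100               # relative error as a percentage

print(absolute_error, relative_error, percent_error)
```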
Keeping an eye on the procedure and following the points listed below can help to reduce
the error.
Make sure the formulas used for measurement are correct.
Cross check the measured value of a quantity for improved accuracy.
Use the instrument that has the highest precision.
It is suggested to pilot test measuring instruments for better accuracy.
Use multiple measures for the same construct.
Note the measurements under controlled conditions.
FACTORS AFFECTING THE ACCURACY OF THE MEASURING SYSTEM:
The basic components of an accuracy evaluation are the five elements of a measuring system
such as:
Factors affecting the calibration standards.
Factors affecting the work piece.
Factors affecting the inherent characteristics of the instrument.
Factors affecting the person, who carries out the measurements,
Factors affecting the environment.
1. Factors affecting the Standard: It may be affected by: - coefficient of thermal expansion,
- calibration interval, - stability with time, - elastic properties, - geometric compatibility
2. Factors affecting the Work piece: These are: - cleanliness, surface finish, waviness,
scratch, surface defects etc., - hidden geometry, - elastic properties, - adequate datum on the
work piece, - arrangement of supporting work piece, - thermal equalization etc.
3. Factors affecting the inherent characteristics of Instrument: - adequate amplification
for accuracy objective, - scale error, - effect of friction, backlash, hysteresis, zero drift error, -
deformation in handling or use, when heavy work pieces are measured, - calibration errors, -
mechanical parts (slides, guide ways or moving elements), - repeatability and readability, -
contact geometry for both work piece and standard.
4. Factors affecting person : - training, skill, - sense of precision appreciation, - ability to
select measuring instruments and standards, - sensible appreciation of measuring cost, -
attitude towards personal accuracy achievements, - planning measurement techniques for
minimum cost, consistent with precision requirements etc.
5. Factors affecting Environment: - temperature, humidity etc., - clean surrounding and
minimum vibration enhance precision, - adequate illumination, - temperature equalization
between standard, work piece, and instrument, - thermal expansion effects due to heat
radiation from lights, - heating elements, sunlight and people, - manual handling may also
introduce thermal expansion.
Higher accuracy can be achieved only if all the sources of error due to the above five
elements in the measuring system are analyzed and steps taken to eliminate them. The above
five basic metrology elements can be combined into the acronym SWIPE for convenient
reference,
where
S - STANDARD
W- WORKPIECE
I - INSTRUMENT
P-PERSON
E – ENVIRONMENT
MEASUREMENT UNCERTAINTY
By international agreement, measurement uncertainty has a probabilistic basis and reflects
incomplete knowledge of the quantity value. It is a non-negative parameter.
The measurement uncertainty is often taken as the standard deviation of a state-of-knowledge
probability distribution over the possible values that could be attributed to a measured
quantity.
Uncertainty in Measurement
Let’s say we want to measure the length of a room with tape or by pacing it. We are likely to
have different counts each time if we pace it off, or we will have a fraction of a pace left over.
As a result, the measurement’s result isn’t entirely correct. The method of measurement has
an impact on accuracy. The measure is more exact when using a tape than when pacing off a
length. Repeating a measurement is one way to assess its quality. Take the average figure
because each measurement is likely to yield a somewhat different result.
If the average of the different measurements is close to the correct value, the measurement is
accurate (the individual measurements may not agree closely with each other).
If the different measurement values are near to one another and hence near to their mean
value, the estimation is said to be precise. (The average value of different measurements may
not be close to the correct value). The precision depends upon the measuring device as well
as the skill of the operator.
CALCULATE UNCERTAINTY
Whenever you make a measurement while collecting data, you can assume that there's a "true
value" that falls within the range of the measurements you made.
To calculate the uncertainty of your measurements, you'll need to find the best estimate of
your measurement and consider the results when you add or subtract the measurement of
uncertainty.
If you want to know how to calculate uncertainty, just follow these steps.
1. State uncertainty in its proper form. Let's say you're measuring a stick that falls near 4.2
cm, give or take one millimeter. This means that you know the stick falls almost on 4.2 cm,
but that it could actually be just a bit smaller or larger than that measurement, with the error
of one millimeter.
State the uncertainty like this: 4.2 cm ± 0.1 cm.
You can also rewrite this as 4.2 cm ± 1 mm, since 0.1 cm = 1 mm.
2.Always round the experimental measurement to the same decimal place as the
uncertainty. Measurements that involve a calculation of uncertainty are typically rounded to
one or two significant digits. The most important point is that you should round your
experimental measurement to the same decimal place as the uncertainty to keep your
measurements consistent.
3.Calculate uncertainty from a single measurement. Let's say you're measuring the
diameter of a round ball with a ruler. This is tricky because it'll be difficult to say exactly
where the outer edges of the ball line up with the ruler since they are curved, not straight.
Let's say the ruler can find the measurement to the nearest .1 cm -- this does not mean that
you can measure the diameter to this level of precision.
Study the edges of the ball and the ruler to get a sense of how reliably you can
measure its diameter. In a standard ruler, the markings at .5 cm show up clearly -- but
let's say you can get a little bit closer than that. If it looks like you can get about
within .3 cm of an accurate measurement, then your uncertainty is .3 cm.
Now, measure the diameter of the ball. Let's say you get about 7.6 cm. Just state the
estimated measurement along with the uncertainty. The diameter of the ball is 7.6 cm
± .3 cm.
4. Calculate the uncertainty of one measurement from a group of identical objects. Let's say
that, using a ruler, you can't measure to much better than .2 cm, so your uncertainty is ± .2
cm. Suppose you measure a stack of 10 identical CD cases and find the total thickness to be
22 cm. Now just divide both the measurement and the uncertainty by 10, the number of CD
cases: 22 cm/10 = 2.2 cm and .2 cm/10 = .02 cm. This means that the thickness of one CD
case is 2.20 cm ± .02 cm.
1.Take several measurements. Let's say you want to calculate how long it takes a ball to
drop to the floor from the height of a table. To get the best results, you'll have to measure the
ball falling off the table top at least a few times -- let's say five. Then, you'll have to find the
average of the five measured times and then add or subtract the standard deviation from that
number to get the best results.
Let's say you measured the five following times: 0.43 s, 0.52 s, 0.35 s, 0.29 s, and 0.49 s.
2.Find the average of the measurements. Now, find the average by adding up the five
different measurements and dividing the result by 5, the amount of measurements. 0.43 s +
0.52 s + 0.35 s + 0.29 s + 0.49 s = 2.08 s. Now, divide 2.08 by 5. 2.08/5 = 0.42 s. The average
time is 0.42 s.
3. Find the variance of these measurements. To do this, first find the difference between
each of the five measurements and the average: subtract 0.42 s from each measurement. Here
are the five differences: 0.43 − 0.42 = 0.01 s; 0.52 − 0.42 = 0.10 s; 0.35 − 0.42 = −0.07 s;
0.29 − 0.42 = −0.13 s; 0.49 − 0.42 = 0.07 s.
Now, add up the squares of these differences: (0.01 s)² + (0.10 s)² + (−0.07 s)² + (−0.13 s)²
+ (0.07 s)² = 0.037 s².
Find the average of these squared differences by dividing the result by 5: 0.037 s²/5 =
0.0074 s². This is the variance.
4. Find the standard deviation. To find the standard deviation, simply find the square root of
the variance. The square root of 0.0074 s² is about 0.086 s, so the standard deviation is 0.09 s.
5. State the final measurement. To do this, simply state the average of the measurements
along with the added and subtracted standard deviation. Since the average of the
measurements is .42 s and the standard deviation is .09 s, the final measurement is .42 s ± .09
s.
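The five steps above can be reproduced in a few lines of Python; `statistics.pstdev` is the population standard deviation used here (dividing by N):

```python
import statistics

# Step 1: the five measured fall times, in seconds
times = [0.43, 0.52, 0.35, 0.29, 0.49]

# Step 2: the average
mean = sum(times) / len(times)                 # 0.416 s, stated as 0.42 s

# Step 3: the variance (average of squared differences from the mean)
variance = sum((t - mean) ** 2 for t in times) / len(times)

# Step 4: the standard deviation
std_dev = variance ** 0.5                      # same as statistics.pstdev(times)

# Step 5: state the final measurement
print(f"{mean:.2f} s ± {std_dev:.2f} s")       # 0.42 s ± 0.09 s
```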
(5 cm ± .2 cm) + (3 cm ± .1 cm) =
(5 cm + 3 cm) ± (.2 cm + .1 cm) =
8 cm ± .3 cm
Therefore:
(2.0 cm ± 1.0 cm)³ = (2.0 cm)³ ± (50% × 3) = 8.0 cm³ ± 150%, or 8.0 cm³ ± 12 cm³
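These rules (add absolute uncertainties when adding, divide the uncertainty when dividing by an exact constant, multiply the relative uncertainty by the exponent when raising to a power) can be checked with a short sketch:

```python
# Addition: (5 cm ± .2 cm) + (3 cm ± .1 cm)
total = 5 + 3                           # 8 cm
total_unc = 0.2 + 0.1                   # 0.3 cm

# Division by an exact constant: the CD-case example
thickness = 22 / 10                     # 2.2 cm per case
thickness_unc = 0.2 / 10                # 0.02 cm per case

# Power: (2.0 cm ± 1.0 cm)^3
value, unc, n = 2.0, 1.0, 3
rel_unc = unc / value                   # 0.5, i.e. 50%
cubed = value ** n                      # 8.0 cm^3
cubed_rel_unc = rel_unc * n             # 1.5, i.e. 150%
cubed_abs_unc = cubed * cubed_rel_unc   # 12.0 cm^3
```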
STATISTICS
Statistics is the discipline that concerns the collection, organization, analysis, interpretation,
and presentation of data. In applying statistics to a scientific, industrial, or social problem, it
is conventional to begin with a statistical population or a statistical model to be studied.
Populations can be diverse groups of people or objects such as "all people living in a country"
or "every atom composing a crystal". Statistics deals with every aspect of data, including the
planning of data collection in terms of the design of surveys and experiments.
METHODS OF STATISTICS
Descriptive statistics are most often concerned with two sets of properties of a distribution
(sample or population): central tendency seeks to characterize the distribution's central or
typical value, while dispersion or variability characterizes the extent to which members of the
distribution depart from its centre and from each other.
Inferential statistics: these draw conclusions from data that are subject to random variation
(e.g., observational errors, sampling variation).
Inferences on mathematical statistics are made under the framework of probability theory,
which deals with the analysis of random phenomena.
There are four major types of descriptive statistics used to measure a given set of data
characteristics.
A) Measures of Frequency
This measures how often a particular variable occurs in the distribution. It can be measured in
numbers or percentages and shows how frequently a response or variable occurs.
B) Measures of Central Tendency
Measures of central tendency indicate the average or the most common variable in the data
set. They identify certain points by computing the mean, median, and mode.
C) Measures of Variation
This shows how spread out the responses in the data set are. It helps identify the gap between
the highest and lowest values and how far apart individual values are from the mean or the
average. Measures of variation are calculated using the range, standard deviation, and
variance.
D) Measure of Position
This measures how individual values are positioned with one another. This method of
calculation relies on a standardized value. Percentiles and quartile ranks indicate the
measures of position.
The various descriptive statistics methods used to arrive at the characteristics of the data set
include:
A) Mean: Mean is the average of all the values and can be calculated by adding up all the
values and dividing the total sum by the number of values.
B) Median: The median of the set is the value that is at the exact center of the set. If there are
two values at the center, their mean is calculated to find the median.
C) Mode: The mode is the value that appears most frequently in the set. Arranging the values
in order from lowest to highest helps identify the mode. Any data set can have no mode, one
mode, or multiple modes.
D) Range: The range is the difference between the highest value of the data set and the
lowest value. It can be calculated by subtracting the lowest value from the highest value. The
range indicates how far apart the values are.
E) Standard Deviation: Standard deviation measures the average variability of the values in
the data set or how far individual values are from the mean. A large value of the standard
deviation indicates high variability.
σ = √[ (1/N) Σ (xi − μ)² ], where the sum runs over i = 1 to N.
F) Variance: Variance measures the degree of spread in the data set and is the average of
squared deviations from the mean.
The univariate analysis considers only one variable at a particular time. This allows the
examination of each variable in the data set using different measures of frequency, variation,
and central tendency.
The bivariate analysis identifies any available relationship between two different variables.
The frequency and variability of the two variables are measured together to see if they vary
together. The measure of central tendency can also be taken during bivariate analysis.
The multivariate analysis is similar to bivariate analysis, with the exception that it takes
more than two variables into account to identify any relationship between them.
Example: Sam grows 20 rose bushes, and the numbers of flowers on them are 9, 2, 5, 4, 12, 7,
8, 11, 9, 3, 7, 4, 12, 5, 4, 10, 9, 6, 9, and 4.
Step 1. Work out the mean:
μ = (9+2+5+4+12+7+8+11+9+3+7+4+12+5+4+10+9+6+9+4)/20 = 140/20 = 7
And so μ = 7
Step 2. Then for each number: subtract the mean and square the result.
This is the part of the formula that says:
(xi − μ)²
(9 − 7)² = (2)² = 4
(2 − 7)² = (−5)² = 25
(5 − 7)² = (−2)² = 4
(4 − 7)² = (−3)² = 9
(12 − 7)² = (5)² = 25
(7 − 7)² = (0)² = 0
(8 − 7)² = (1)² = 1
and so on for the remaining values.
We already calculated (x1 − 7)² = 4 etc. in the previous step, so just sum them up:
= 4+25+4+9+25+0+1+16+4+16+0+9+25+4+9+9+4+1+4+9 = 178
But that isn't the mean yet, we need to divide by how many, which is done by multiplying by
1/N (the same as dividing by N):
Mean of squared differences = (1/20) × 178 = 8.9
σ = √(8.9) = 2.983
Example: Sam has 20 rose bushes, but only counted the flowers on 6 of them! The population
is all 20 rose bushes, and the "sample" is the 6 bushes that Sam counted the flowers of:
9, 2, 5, 4, 12, 7
σ = √[ (1/(N − 1)) Σ (xi − μ)² ], where the sum runs over i = 1 to N.
The important change is "N-1" instead of "N" (which is called "Bessel's correction").
Using the sampled values 9, 2, 5, 4, 12, and 7, the sample mean is 6.5 and the sum of squared
differences is 65.5, so the variance is 65.5/(6 − 1) = 13.1 and
σ = √(13.1) = 3.619
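Both calculations can be verified with Python's `statistics` module, where `pstdev` divides by N (population) and `stdev` divides by N − 1 (sample, with Bessel's correction):

```python
import statistics

population = [9, 2, 5, 4, 12, 7, 8, 11, 9, 3, 7, 4, 12, 5, 4, 10, 9, 6, 9, 4]
sample = [9, 2, 5, 4, 12, 7]    # the 6 bushes Sam actually counted

print(statistics.mean(population))               # 7
print(round(statistics.pstdev(population), 3))   # divides by N:     2.983
print(round(statistics.stdev(sample), 3))        # divides by N - 1: 3.619
```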
Various descriptive statistics tools can be called on for specific scenarios. Choosing the right
tool depends entirely on the objective of the analysis and the type and number of variables at
hand.
Statistical Tools: These summarize the data numerically:
Mean
Median
Mode
Standard deviation
Variance
Range
Coefficient of variation
Percentiles
Contingency tables
Frequency tables
Correlation
RV coefficient
Graphic Tools: These allow the representation of various data points as graphs or tables:
Box plots
Scatter plots
Whisker plots
Bar chart
Pie chart
Histogram
Ternary diagram
Correlation map
Probability plot
Strip plot
THE DATA ANALYSIS PROCESS
The first step in any data analysis process is to define your objective. In data analytics jargon,
this is sometimes called the ‘problem statement’.
Defining your objective means coming up with a hypothesis and figuring out how to test it. Start
by asking: What business problem am I trying to solve? While this might sound
straightforward, it can be trickier than it seems. For instance, your organization’s senior
management might pose an issue, such as: “Why are we losing customers?” It’s possible,
though, that this doesn’t get to the core of the problem. A data analyst’s job is to understand
the business and its goals in enough depth that they can frame the problem the right way.
Once you’ve established your objective, you’ll need to create a strategy for collecting and
aggregating the appropriate data. A key part of this is determining which data you need. This
might be quantitative (numeric) data, e.g. sales figures, or qualitative (descriptive) data, such
as customer reviews. All data fit into one of three categories: first-party data (data your
organization collects directly), second-party data (another organization's first-party data),
and third-party data (data aggregated from many sources by an outside organization).
Once you’ve collected your data, the next step is to get it ready for analysis. This means
cleaning, or ‘scrubbing’ it, and is crucial in making sure that you’re working with high-
quality data. Key data cleaning tasks include:
Removing major errors, duplicates, and outliers—all of which are inevitable problems when
aggregating data from numerous sources.
Bringing structure to your data—general ‘housekeeping’, i.e. fixing typos or layout issues,
which will help you map and manipulate your data more easily.
Filling in major gaps—as you’re tidying up, you might notice that important data are missing.
Once you’ve identified gaps, you can go about filling them.
A good data analyst will spend around 70-90% of their time cleaning their data. This might
sound excessive, but focusing on the wrong data points (or analyzing erroneous data) will
severely impact your results. It might even send you back to square one, so don't rush it!
You’ll find a step-by-step guide to data cleaning here.
Finally, you’ve cleaned your data. Now comes the fun bit—analyzing it! The type of data
analysis you carry out largely depends on what your goal is. But there are many techniques
available. Univariate or bivariate analysis, time-series analysis, and regression analysis are
just a few you might have heard of. More important than the different types, though, is how
you apply them. This depends on what insights you’re hoping to gain.
The last ‘step’ in the data analytics process is to embrace your failures. The path we’ve
described above is more of an iterative process than a one-way street. Data analytics is
inherently messy, and the process you follow will be different for every project. For instance,
while cleaning data, you might spot patterns that spark a whole new set of questions. This
could send you back to step one (to redefine your objective). Equally, an exploratory analysis
might highlight a set of data points you’d never considered using before. Or maybe you find
that the results of your core analyses are misleading or erroneous. This might be caused by
mistakes in the data, or human error earlier in the process.
MEASUREMENT SYSTEM ANALYSIS (MSA)
Measurement Systems Analysis (MSA) is a tool for analyzing the variation present in each
type of inspection, measurement, and test equipment. It is the system to assess the quality of
the measurement system. In other words, it allows us to make sure that the variation in our
measurement is minimal compared to the variation in our process.
Measurement is key and essential in six sigma. Measurement System Analysis (MSA) is an
experimental and mathematical method of determining how much the variation within the
measurement process contributes to overall process variability.
Determine the type of data collection. Identify whether the data is continuous or
discrete.
Determine the number of appraisers, number of sample parts, and also the number of
repeat readings.
Larger numbers of parts and repeat readings give results with a higher confidence
level. But also consider the time, cost, and disruption.
Use appraisers who normally perform the measurement and who are familiar with the
equipment and procedures.
In particular, make sure that all the appraisers follow the measurement procedures.
Select the sample parts to represent the entire process spread. This is a very critical
point.
If applicable, mark the exact measurement location on each part to minimize the
impact of within-part variation (e.g. out-of-round).
Furthermore, ensure that the measurement device has adequate
discrimination/resolution, as discussed in the Requirements section.
Parts should be numbered, and the measurements should be taken in random order so
that the appraisers do not know the number assigned to each part or any previous
measurement value for that part. Also, a third party should record the measurements,
the appraiser, the trial number, and the number for each part on a table.
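The blinding and randomization advice above can be sketched as a run-order generator; the study size (3 appraisers, 10 parts, 3 trials) is an assumed example:

```python
import random

# Hypothetical study size: 3 appraisers, 10 parts, 3 repeat readings
appraisers, parts, trials = ["A", "B", "C"], list(range(1, 11)), 3

random.seed(42)          # fixed seed so the plan is reproducible
run_order = []
for trial in range(1, trials + 1):
    for appraiser in appraisers:
        shuffled = parts[:]          # each appraiser measures the parts...
        random.shuffle(shuffled)     # ...in a fresh random order
        for part in shuffled:
            run_order.append((trial, appraiser, part))

print(len(run_order))    # 90 planned measurements
```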
Measurement System Analysis aims to qualify a measurement system for use by quantifying
its accuracy, precision, and stability.
1. Measurements are said to be accurate if they tend to center around the actual value of
the entity being measured. Measurement accuracy is attained when the measured value deviates
only slightly from the actual value.
2. Measurements are precise if they differ from one another by only a small amount.
Accuracy: Accuracy is the difference between the true average and the observed average. If
the observed average differs from the true average, the system is inaccurate.
Precision: The precision of the measurement system is the degree to which repeated
measurement under unchanged conditions show the same result. In other words, precision
refers to the closeness of two or more measurements to each other.
Bias: Bias is the difference between the observed average measurement and the true or reference
value. To measure bias, first measure the same part a number of times and then calculate the
average of the measurements.
Linearity: Linearity is the difference in Bias value over the normal operating range of the
measuring instrument. In other words, it is the change in Bias over the operating range of the
measurement equipment.
Stability: Stability refers to the capacity of the measurement system to produce the same
values over time when measuring the same sample. In other words, it is the difference in the
average of at least 2 sets of measurements with a gage over time. A measurement system is
stable if there is no special cause of variation affecting the measurement system bias over
time.
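Bias and linearity as defined above reduce to simple arithmetic; the readings below are invented for illustration:

```python
# Repeated measurements of one reference part (invented numbers, mm)
reference = 10.000
readings = [10.002, 10.001, 10.003, 10.002, 10.002]

# Bias: observed average minus the reference value
bias = sum(readings) / len(readings) - reference      # about 0.002 mm

# Linearity: change in bias over the operating range
low_reference = 5.000
low_readings = [5.001, 5.000, 5.002]
low_bias = sum(low_readings) / len(low_readings) - low_reference
linearity = bias - low_bias                           # about 0.001 mm
```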
Consistency:
(i) Consistency is another characteristic of the measuring instrument: the agreement of the
readings on the instrument scale when the same dimension is measured a number of times.
(ii) It affects the performance of the measuring instrument and the confidence in the
accuracy of the measuring process.
Dirt Error: Sometimes dirt particles enter the inspection room through the doors and
windows. These particles can create small errors at the time of measurement. Such errors
can be reduced by making the laboratories dust proof.
Contact Error: Consider a ring, as shown in the figure, whose thickness is to be measured.
The contact of the jaws with the workpiece plays an important role when measuring in a
laboratory or workshop. The following example shows the contact error: if the jaws of the
instrument are placed as shown in the figure, an error 'e' develops, which is due to poor
contact alone.
Parallax Error (Reading Error): The position of the observer at the time of taking a
reading (on a scale) can create errors in measurement. Two positions of the observer (X and
Y) are shown; these are the positions that generate the defect. Position Z shows the correct
position of the observer, i.e., readings should be taken with the eye exactly perpendicular
to the scale.
According to AIAG (2002), a general rule of thumb for measurement system acceptability is: a gage R&R under 10% of total variation is acceptable; between 10% and 30% it may be acceptable for some applications, depending on the importance of the measurement and the cost of the gage; over 30% it is considered unacceptable, and the measurement system should be improved.
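A common reading of the AIAG guideline is: gage R&R below 10% of total variation is acceptable, 10-30% may be acceptable depending on the application, and above 30% is unacceptable. As a small helper (the function name is ours):

```python
def grr_acceptability(grr_percent: float) -> str:
    """Classify a %GRR value using the AIAG rule of thumb."""
    if grr_percent < 10:
        return "acceptable"
    if grr_percent <= 30:
        return "may be acceptable, depending on the application"
    return "unacceptable - improve the measurement system"

print(grr_acceptability(8.5))    # acceptable
```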
ROLE OF STANDARDS
The role of standards is to achieve uniform, consistent, and repeatable measurements
throughout the world. Today our entire industrial economy is based on the
interchangeability of parts as the method of manufacture. To achieve this, a measuring
system adequate to define the features to the accuracy required, and standards of
sufficient accuracy to support the measuring system, are necessary.
STANDARDS
The term standard is used to denote universally accepted specifications for devices,
components, or processes which ensure conformity and interchangeability throughout a
particular industry. A standard provides a reference for assigning a numerical value to a
measured quantity. Each basic measurable quantity has an ultimate standard associated with
it. Working standards are those used in conjunction with the various measuring instruments.
The National Institute of Standards and Technology (NIST), formerly called the National
Bureau of Standards (NBS), was established by an act of Congress in 1901; the need for such
a body had been noted by the founders of the Constitution. In order to maintain accuracy,
standards in a vast industrial complex must be traceable to a single source, which may be
the national standards.
The following is the generalization of echelons of standards in the national measurement
system.
1. Calibration standards: Working standards of industrial or governmental laboratories.
2. Metrology standards: Reference standards of industrial or Governmental laboratories.
3. National standards: These include the prototypes and natural phenomena of SI (Système
International), the worldwide system of weights and measures standards. The application of
precise measurement has increased so much that a single national laboratory cannot perform
directly all the calibrations and standardization required by a large country with a high
level of technical development. This has led to the establishment of a considerable number of
standardizing laboratories in industry and in various other areas. A standard provides a
reference or datum for assigning a numerical value to a measured quantity. The two standard
systems of linear measurement are the yard (English) and the metre (metric). For linear
measurements, various standards are used.
Line standard: The measurement of distance may be made between two parallel lines or two
surfaces. When the length being measured is expressed as the distance between the centres of
two engraved lines, as in a steel rule, it is known as line measurement. Line standards are used
for direct length comparison and they have no auxiliary devices. The yard and the metre are
line standards, defined as the distance between scribed lines on a bar of metal under certain
environmental conditions.
These are the legal standards.
Meter: It is the distance between the centre portions of two lines etched on the polished
surface of a bar of an alloy of platinum (90%) and iridium (10%). The bar has an overall
width and depth of 16 mm each and is kept at 0 °C and under normal atmospheric pressure.
The bar has a wing-like section, with a web whose surface lines are on the neutral axis. The
relationship between the metre and the yard is given by: 1 metre = 1.09361 yards
Yard: The yard is a bronze bar of square cross-section, 38 inches long. The bar has a round
recess of 0.5 inch diameter and 0.5 inch depth at each end, located 1 inch from the end
faces. A gold plug of 0.1 inch diameter, having three lines engraved transversely and two
lines engraved longitudinally, is inserted into each of these holes. The yard is then the
distance between the two central transverse lines on the plugs when the temperature of the
bar is at 62 °F. 1 yard = 0.9144 metre
CHARACTERISTICS OF LINE STANDARDS:
End Standard:
End standards, in the form of the bars and slip gauges, are in general use in precision
engineering as well as in standard laboratories such as the N.P.L (National Physical
Laboratory). Except for applications where microscopes can be used, scales are not generally
convenient for the direct measurement of engineering products, whereas slip gauges are in
everyday use in tool-rooms, workshops, and inspection departments throughout the world. A
modern end standard consists fundamentally of a block or bar of steel, generally hardened,
whose end faces are lapped flat and parallel to within a few millionths of a centimetre. By
the process of lapping, its size too can be controlled very accurately. Although, from time to time,
various types of end bar have been constructed, some having flat and some spherical faces,
the flat, parallel faced bar is firmly established as the most practical method of end
measurement.
End bars: Primary end standards usually consist of bars of carbon steel about 20 mm in
diameter and made in sizes varying from 10 mm to 1200 mm. These are hardened at the ends
only. They are used for the measurement of work of larger sizes.
Slip gauges: Slip gauges are used as standards of measurement in practically every precision
engineering works in the world. They were invented by C.E. Johansson of Sweden early in
the twentieth century. They are made of high-grade cast steel and are hardened throughout.
With a set of slip gauges, combinations of gauges enable measurements to be made in the
range of 0.0025 to 100 mm, and in combination with end/length bars a measurement range
up to 1200 mm is possible.
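Selecting gauges to build a required dimension follows the usual rule of eliminating the smallest decimal place first. The sketch below works in integer microns against a simplified, hypothetical M-112-style set (1.001-1.009 mm in 0.001 mm steps, 1.01-1.49 mm in 0.01 mm steps, 0.5-24.5 mm in 0.5 mm steps, plus 25, 50, 75, and 100 mm bars); it is an illustration, not a complete selection algorithm:

```python
def build_stack(target_um):
    """Greedy slip gauge selection, working in integer microns."""
    stack, rem = [], target_um
    if rem % 10:                 # cancel the 0.001 mm digit
        g = 1000 + rem % 10      # a 1.001-1.009 mm gauge
        stack.append(g)
        rem -= g
    if rem % 500:                # cancel the remaining 0.01 mm part
        g = 1000 + rem % 500     # a 1.01-1.49 mm gauge
        stack.append(g)
        rem -= g
    # The remainder is now a multiple of 0.5 mm: use the larger gauges
    for g in [100000, 75000, 50000, 25000] + list(range(24500, 0, -500)):
        if rem == 0:
            break
        if g <= rem and g not in stack:
            stack.append(g)
            rem -= g
    return stack

stack = build_stack(29758)               # build 29.758 mm
print([g / 1000 for g in stack])         # [1.008, 1.25, 25.0, 2.5]
```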
SUBDIVISIONS OF STANDARDS
The imperial standard yard and metre are master standards that cannot be used for daily
measurement purposes. In order to facilitate measurement at different locations depending
upon the relative importance of standard, they are subdivided into the following four groups:
Primary standards For defining the unit precisely, there shall be one and only one material
standard. Primary standards are preserved carefully and maintained under standard
atmospheric conditions so that they do not change their values. This has no direct application
to a measuring problem encountered in engineering. These are used only for comparing with
secondary standards. International yard and international metre are examples of standard
units of length.
Secondary standards These are derived from primary standards and resemble them very
closely with respect to design, material, and length. Any error existing in these bars is
recorded by comparison with primary standards after long intervals. These are kept at
different locations under strict supervision and are used for comparison with tertiary
standards (only when it is absolutely essential). These safeguard against the loss or
destruction of primary standards.
Tertiary standards Primary and secondary standards are the ultimate controls for standards;
these are used only for reference purposes and that too at rare intervals. Tertiary standards are
reference standards employed by NPL and are used as the first standards for reference in
laboratories and workshops. These standards are replicas of secondary standards and are
usually used as references for working standards.
Working standards These are used more frequently in workshops and laboratories. When
compared to the other three standards, the materials used to make these standards are of a
lower grade and cost. These are derived from fundamental standards and suffer from loss of
instrumental accuracy due to subsequent comparison at each level in the hierarchical chain.
Working standards include both line and end standards
CALIBRATION OF STANDARDS: THE BROOKES LEVEL COMPARATOR
The Brookes level comparator is used to calibrate standards by comparing with a master
standard. End standards can be manufactured very accurately using a Brookes level
comparator. A.J.C. Brookes devised this simple method in 1920 and hence the name. The
Brookes level comparator has a very accurate spirit level. In order to achieve an accurate
comparison, the spirit level is supported on balls so that it makes only a point contact with the
gauges. The table on which the gauges are placed for comparison is first levelled properly
using the spirit level. The two gauges (the master standard gauge and the standard gauge) that
are to be compared are wrung on the table and the spirit level is properly placed on them. The
bubble reading is recorded at this position. The positions of the two gauges are interchanged
by rotating the table by 180°. The spirit level is again placed to note down the bubble reading
at this position. The arrangement is shown in Fig. The two readings will be the same if the
gauges are of equal length and different for gauges of unequal lengths. When the positions of
the gauges are interchanged, the level is tilted through an angle equal to twice the difference
in the height of gauges divided by the spacing of level supports. The bubble readings can be
calibrated in terms of the height difference, as the distance between the two balls is fixed. The
effect of the table not being levelled initially can be eliminated because of the advantage of
turning the table by 180°
DISPLACEMENT METHOD
The displacement method is used to compare an edge gauge with a line standard. The line
standard, which is placed on a carrier, is positioned such that line A is under the cross-wires
of a fixed microscope, as seen in Fig. (a). The spindle of the micrometer is rotated until it
comes in contact with the projection on the carrier and then the micrometer reading is
recorded. The carrier is moved again to position line B under the cross-wires of the
microscope. At this stage, the end gauge is inserted as shown in Fig. (b) and the micrometer
reading is recorded again. Then the sum of the length of the line standard and the difference
between the micrometer readings will be equal to the length of the end gauge
CALIBRATION OF END BARS
In order to calibrate two bars having a basic length of 500 mm with the help of a one piece
metre bar, the following procedure is adopted.
The metre bar to be calibrated is wrung to a surface plate. The two 500 mm bars to be
calibrated are wrung together to form a bar that has a basic length of 1 m, which in turn is
wrung to the surface plate beside the metre bar, as shown in Fig. (a). The difference in
height e1 is obtained. The two 500 mm bars are then compared to determine the difference
in length, as shown in Fig. (b).
Fig. Calibration of end bars (a) Comparison of metre bar and end bars wrung together (b)
Comparison of individual end bars
Let LX and LY be the lengths of the two 500 mm bars. Let e 1 be the difference in height
between the calibrated metre bar and the combined lengths of X and Y. Let the difference
between the lengths of X and Y be e2 . Let L be the actual length of the metre bar. Then the
first measurement gives a length of L ± e1 = LX + LY, depending on whether the combined
length of LX and LY is longer or shorter than L. The second measurement yields a length of LX
± e2 = LY, again depending on whether X is longer or shorter than Y. Then, substituting the
value of LY from the second measurement into the first measurement, L ± e1 = 2LX ± e2, so
that LX = (L ± e1 ∓ e2)/2, and LY follows from the second measurement.
For calibrating three, four, or any other number of length standards of the same basic size, the
same procedure can be followed. One of the bars is used as a reference while comparing the
individual bars and the difference in length of the other bar is obtained relative to this bar.
Example 1 It is required to obtain a metre standard from a calibrated line standard using a
composite line standard. The actual length of the calibrated line standard is 1000.015 mm.
The composite line standard comprises a length bar having a basic length of 950 mm and two
end blocks, (a + b) and (c + d), each having a basic length of 50 mm. Each end block contains
an engraved line at the centre.
Four different measurements were obtained when comparisons were made between the
calibrated line standard and the composite bar using all combinations of end blocks: L1 =
1000.0035mm, L2 = 1000.0030mm, L3 = 1000.0020mm, and L4 = 1000.0015mm.
Determine the actual length of the metre bar. Block (a + b) was found to be 0.001mm greater
than block (c + d) when two end blocks were compared with each other.
Solution
The sum of all the four measurements for different combinations is as follows:
4L = L1 + L2 + L3 + L4 = 4L1 + 4(a + b) + 2x
Example 2 A calibrated metre end bar, which has an actual length of 1000.0005mm, is to be
used in the calibration of two bars X and Y, each having a basic length of 500mm. When
compared with the metre bar, the sum of LX and LY is found to be shorter by 0.0003mm.
When X and Y are compared, it is observed that X is 0.0004mm longer than Y. Determine
the actual length of X and Y
Solution
The first comparison gives LX + LY = L − e1 = 1000.0005 − 0.0003 = 1000.0002 mm.
The second comparison gives LX = LY + e2, i.e., LX = LY + 0.0004.
Substituting, (LY + 0.0004) + LY = 1000.0002 mm, so 2LY = 999.9998 mm and LY = 499.9999 mm.
Hence LX = 499.9999 + 0.0004 = 500.0003 mm.
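The simultaneous equations of Example 2 can be checked numerically with a quick sketch:

```python
L = 1000.0005    # actual length of the calibrated metre bar, mm
e1 = 0.0003      # amount by which LX + LY falls short of L
e2 = 0.0004      # amount by which X is longer than Y

# LX + LY = L - e1 and LX = LY + e2
LY = (L - e1 - e2) / 2
LX = LY + e2

print(round(LX, 4), round(LY, 4))   # 500.0003 499.9999
```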
UNIT-II
MEASUREMENT OF LINEAR AND ANGULAR DIMENSIONS
Linear Measuring Instruments – Vernier caliper, Micrometer, Vernier height gauge, Depth
Micrometer, Bore gauge, Comparators – Working and advantages; Opto-mechanical
measurements using measuring microscope and Profile projector - Angular measuring
instruments – Bevel protractor, Clinometer, Angle gauges, Sine bar, Autocollimator, Angle
dekkor.
LINEAR MEASUREMENT
SCALES
• The most common tool for crude measurements is the scale (also known as rules, or rulers).
• Although plastic, wood and other materials are used for common scales, precision scales
use tempered steel alloys, with graduations scribed onto the surface.
• These are limited by the human eye. Basically, they are used to compare two dimensions.
• The metric scales use decimal divisions, and the imperial scales use fractional divisions.
• Some scales only use the fine scale divisions at one end of the scale. It is advised that the
end of the scale not be used for measurement, because as scales become worn with use,
the end of the scale will no longer be at a `zero' position.
• Instead the internal divisions of the scale should be used. Parallax error can be a factor
when making measurements with a scale
CALIPERS
Many types of calipers permit reading out a measurement on a ruled scale, a dial, or a digital
display. Some calipers can be as simple as a compass with inward or outward-facing points,
but no scale. The tips of the caliper are adjusted to fit across the points to be measured and
the dimension read by measuring between the tips with another measuring tool, such as a
ruler.
Inside caliper
The inside calipers are used to measure the internal size of an object.
The upper caliper in the image (at the right) requires manual adjustment prior to fitting.
Fine setting of this caliper type is performed by tapping the caliper legs lightly on a
handy surface until they will almost pass over the object. A light push against the
resistance of the central pivot screw then spreads the legs to the correct dimension and
provides the required, consistent feel that ensures a repeatable measurement.
The lower caliper in the image has an adjusting screw that permits it to be carefully
adjusted without removal of the tool from the workpiece.
Outside caliper
Outside calipers are used to measure the external size of an object.
The same observations and technique apply to this type of caliper, as for the above inside
caliper. With some understanding of their limitations and usage, these instruments can
provide a high degree of accuracy and repeatability. They are especially useful when
measuring over very large distances; consider if the calipers are used to measure a large
diameter pipe. A vernier caliper does not have the depth capacity to straddle this large
diameter while at the same time reach the outermost points of the pipe's diameter. They are
made from high carbon steel.
Divider caliper
In the metalworking field, a divider caliper, popularly called a compass, is used in the
process of marking out locations. The points are sharpened so that they act as scribers; one
leg can then be placed in the dimple created by a center or prick punch and the other leg
pivoted so that it scribes a line on the workpiece's surface, thus forming an arc or circle.
Oddleg caliper
Oddleg calipers, Hermaphrodite calipers, or Oddleg Jennys, as pictured on the left, are
generally used to scribe a line at a set distance from the edge of a workpiece. The bent leg is
used to run along the workpiece edge while the scriber makes its mark at a predetermined
distance, this ensures a line parallel to the edge.
In the diagram at left, the uppermost caliper has a slight shoulder in the bent leg allowing it to
sit on the edge more securely, the lower caliper lacks this feature but has a renewable scriber
that can be adjusted for wear, as well as being replaced when excessively worn.
VERNIER CALIPERS:
The vernier instruments generally used in workshop and engineering metrology have
comparatively low accuracy. The line of measurement of such instruments does not coincide
with the line of scale. The accuracy therefore depends upon the straightness of the beam and
the squareness of the sliding jaw with respect to the beam. To ensure the squareness, the
sliding jaw must be clamped before taking the reading. The zero error must also be taken into
consideration. Instruments are now available with a measuring range up to one meter with a
scale value of 0.1 or 0.2 mm. They are made of alloy steel, hardened and tempered (to about
58 Rockwell C), and the contact surfaces are lap-finished. In some cases stainless steel is
used.
Main Scale
Vernier scale
Thumbscrew
Lock screw
Depth Rod
Fixed jaw, and
Sliding jaw
A vernier caliper consists of two steel rules that can slide along each other.
One is a long rectangular metal strip with a fixed jaw at one end. It is graduated in
inches along its upper edge and in centimetres along its lower edge; this is called the
main scale. The main scale is marked on a solid L-shaped frame, on which each centimetre
graduation is divided into 20 parts, so that one small division equals 0.05 cm. This is an
improvement over direct measurement with a plain line-graduated rule.
There is another, smaller rectangular metal strip, graduated with a special
relation to the main scale, called the vernier scale. It slides over the long strip and
carries a jaw similar to that of the main scale.
There are two pairs of jaws on a vernier caliper: upper jaws and lower jaws. Together
these jaws hold the object firmly while its length is measured, which is not possible
with a metre scale.
The external (lower) jaws are generally used to measure the diameter of a
sphere or a cylinder. The internal (upper) jaws are generally used to
measure the internal diameter of a hollow cylinder.
There is also a metal strip attached at the back of the vernier calipers which is used to
measure the internal depth of a cylinder.
The principle of the vernier is that when two scales whose divisions differ slightly in size are
used together, the difference between them can be utilized to enhance the accuracy of
measurement.
A metre scale cannot resolve dimensions smaller than 1 mm, but a vernier caliper can
measure down to 0.1 mm. As noted above, the vernier caliper has two scales, the main
scale and the vernier scale; together this arrangement measures very small lengths
such as 0.1 mm.
Here one main scale division is 1 mm and one vernier scale division is 0.9 mm: ten
divisions of the main scale span 10 mm, whereas the ten divisions of the vernier scale
span only 9 mm.
The difference between one main scale division and one vernier scale division, 0.1 mm,
is the working principle of the vernier caliper.
The difference between the value of one main scale division and the value of one Vernier
scale division is known as the least count of the Vernier.
The least count of the vernier caliper is the smallest value that can be measured with the
instrument. It is calculated as the value of one main scale division divided by the total
number of divisions on the vernier scale.
For example, if the value of one main scale division is 1 mm and there are 10 divisions
on the vernier scale, the least count is 1/10 = 0.1 mm. Thus the least count is the
smallest distance that can be measured with the instrument.
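The least count calculation and the way a full caliper reading is built up can be sketched as follows (a minimal illustration; the 12 mm main scale reading and 4th coinciding line are invented example values, not from a specific instrument):

```python
# Least count and total reading of a vernier caliper (illustrative sketch).

def least_count(msd_mm, vernier_divisions):
    """Least count = value of one main scale division / number of vernier divisions."""
    return msd_mm / vernier_divisions

def total_reading(main_scale_mm, coinciding_division, lc_mm):
    """Reading = main scale reading + (coinciding vernier line number x least count)."""
    return main_scale_mm + coinciding_division * lc_mm

lc = least_count(1.0, 10)                    # 1 mm MSD, 10 vernier divisions
print(lc)                                    # 0.1 mm
print(round(total_reading(12.0, 4, lc), 2))  # 12.4 mm
```

The same two functions apply to any vernier instrument; only the main scale division value and the number of vernier divisions change.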
Zero error in the vernier caliper is an instrumental error that arises when the zero of the
vernier scale does not coincide with the zero of the main scale.
In other words, if the zero mark on the vernier scale does not coincide with the zero mark on
the main scale when the jaws are closed, the resulting error is called zero error. Three cases
arise:
1. No zero error
2. Positive zero error
3. Negative zero error
No Zero Error
When the two jaws are brought together and the zero of the main scale coincides exactly
with the zero of the vernier scale, the two zeros lie in a straight line and the caliper is free
from zero error.
Positive Zero Error
When the jaws are brought together and the zero of the vernier scale lies ahead of the zero
of the main scale, that is, to its right, the error is called positive zero error.
Negative Zero Error
When the jaws are brought together and the zero of the vernier scale lies behind the zero of
the main scale, that is, to its left, the error is called negative zero error.
2. With vernier caliper, always use the stationary caliper jaw on the reference point and
obtain the measured point by advancing or withdrawing the sliding jaw. For this purpose, all
vernier calipers are equipped with a fine adjustment attachment as a part of sliding jaw.
3. Grip the vernier calipers near or opposite the jaws; one hand for stationary jaw and the
other hand generally supporting the sliding jaw.
4. Before reading the vernier, try the calipers again for feel and location.
5. Where the vernier calipers are used for inside diameter measurement, even more than usual
precaution is needed: the instrument must be rocked to find the true diameter. This
technique is known as centralizing.
6. Don't use the vernier calipers as a wrench or hammer. They should be set down gently,
preferably in the box they come in, and never dropped or tossed aside.
7. The vernier caliper must be kept wiped free from grit, chips and oil.
Several types of vernier caliper are used in the physics laboratory to measure lengths of
small objects accurately, which would not be possible with a metre scale:
1. Flat edge vernier caliper (Type A)
2. Knife edge vernier caliper (Type B)
3. Flat and knife edge vernier caliper (Type C)
4. Vernier gear tooth caliper
5. Vernier depth gauge
6. Vernier height gauge
7. Vernier dial caliper
1. Flat Edge Vernier Caliper
This type of vernier caliper is used for normal work. With it we can take outer measurements
of a job's length, breadth, thickness, diameter, etc.
As its jaws have edges of a special type, inner measurements can also be taken with it, but
the combined width of the jaws has to be subtracted from the reading. This width is often
written on the jaw; otherwise it should be measured with a micrometer.
2. Knife Edge Vernier Caliper
The edge of this vernier caliper is like a knife; its other parts are like those of other
vernier calipers, as shown in the figure. It is used for measuring narrow spaces, distances
between holes, etc.
Its main shortcoming is that, because of the thin edge of its jaw, it wears out quickly and
starts giving inaccurate measurements. It should therefore be used sparingly and carefully.
3. Flat and Knife Edge Vernier Caliper
Some companies also make vernier calipers which have an ordinary jaw on one side and a
knife-edge jaw on the other, as shown in the figure. With this vernier caliper, all types of
jobs can be measured easily.
All parts of the vernier caliper should be of good quality steel and the measuring faces should
possess a minimum hardness of 650 HV. The recommended measuring ranges (nominal sizes)
of vernier calipers as per IS 3651-1947 are: 0-125, 0-200, 0-250, 0-300, 0-500, 0-750,
0-1000, 750-1500 and 750-2000 mm.
4. Vernier Gear Tooth Caliper
This is a special type of instrument, made as a combination of two vernier
calipers. It contains two separate scales, vertical and horizontal, as shown in the figure.
With this caliper, the thickness of a gear tooth can be measured at its pitch circle. In
other words, the caliper is used to measure different parts of a gear.
5. Vernier Depth Gauge
As is evident from its name, this instrument is used for measuring the depth of a slot,
hole or groove in a job. It is almost similar to the vernier caliper and its reading is taken in
the same way, but instead of a jaw a flat base is used, as shown in the figure.
This depth gauge is made of a thin beam like a narrow rule. Its main scale and vernier scale
are likewise graduated in the inch or metric system. Its speciality is that measurements of
three kinds can be taken with it:
Its main scale is marked in parts of an inch, divided into 64 sub-divisions.
The other edge is divided into 40 sub-divisions, with every fourth line slightly longer
and numbered 1, 2, 3, up to 9. On the same edge there is a vernier scale, with
whose help a minimum of 0.001″ can be read.
On its back the graduation is in millimetres, and a minimum of 0.02 mm can be read
with the help of a vernier scale.
• While using the vernier depth gauge, first of all make sure that the reference surface, on
which the depth gauge base is rested, is satisfactorily true, flat and square. Measuring depth is
a little like measuring an inside diameter: the gauge itself is true and square but can be
imperceptibly tipped or canted, perhaps because of the reference surface, and give an
erroneous reading.
• In using a depth gauge, press the base or anvil firmly on the reference surface and keep
several kilograms of hand pressure on it. Then, in manipulating the gauge beam to measure
depth, be sure to apply only the standard light measuring pressure of one to two kg, like
making a light dot on paper with a pencil.
Vernier Height Gauge
It is used for taking accurate measurements of the height of a job, or for marking. It is
almost similar to the vernier caliper but is used with some additional attachments. The beam
remains fitted lengthwise on a base. An offset scriber is fitted on the beam itself, with which
the height of a job is measured or marking is done. Its bases are of two types:
1. Solid Base
2. Moveable Base
In the solid base vernier height gauge, the base remains permanently joined with the beam, as
shown. In this type there is no facility to set the beam or base according to
the requirements of the job. In the moveable base vernier height gauge this facility exists.
This type comes in the form of a set, comprising a base, vernier caliper, offset scriber,
fixing screw, etc. All its parts are shown in the figure.
This type of vernier height gauge can be used as an ordinary vernier caliper by separating its
base. While using either type of vernier height gauge, it is necessary to keep in mind the
following points:
Base: It is made quite robust to ensure rigidity and stability of the instrument. The underside
of the base is relieved, leaving a surface round the outside edge of at least 7 mm width, and
air gap is provided across the surface to connect the relieved part with the outside. The base is
ground and lapped to an accuracy of 0.005 mm as measured over the total span of the surface
considered.
Beam: The section of the beam is so chosen as to ensure rigidity during use. The guiding
edge of the beam should be perfectly flat within tolerances of 0.02, 0.04, 0.06 and 0.08 mm
for measuring ranges of 250, 500, 750 and 1000 mm respectively. The faces of the beam
should also be flat within tolerances of 0.04, 0.06, 0.10 and 0.12 mm for vernier measuring
heights of 250, 500, 750 and 1000 mm respectively.
Measuring jaw and scriber: The clear projection of the measuring jaw from the edge of the
beam should be at least equal to the projection of the beam from the base. For all positions of
the slider, the upper and lower gauging surfaces of the measuring jaw should be flat and
parallel to the base to within 0.008 mm. The measuring faces of the scriber should be flat and
parallel to within 0.005 mm. The projection of the scriber beyond the jaw should be at least
25 mm. Vernier height gauges may also have an offset scriber, and the scale on the beam is
so positioned that when the scriber is co-planar with the base, the vernier reads zero.
Graduations: The following requirements should be fulfilled in respect of graduations on
scales :
• All graduations on the scale and vernier should be clearly engraved and the
thickness of graduation both on scale and vernier should be identical and should be in
between 0.05 mm and 0.1 mm.
• The visible length of the shortest graduation should be about 2 to 3 times the width
of the interval between the adjacent lines.
• The perpendicular distance between the graduations on scale and the graduations on
vernier should in no case be more than 0.01 mm.
• For easy reading, it is recommended that the surfaces of the beam and vernier should
have a dull finish and the graduation lines blackened in. Sometimes a magnifying lens
is also provided to facilitate taking the readings.
Slider: The slider has a good sliding fit along the full working length of the beam. A suitable
fitting is incorporated to give a fine adjustment of the slider and a suitable clamp provided so
that the slider could be effectively clamped to the beam after the fine adjustment has been
made. An important feature of the height gauge is that a special attachment can be fitted to
the part to which the scriber is normally fitted, to convert it, in effect, into a depth gauge.
Precautions:
1. When using any height gauge or surface gauge, care must be taken to ensure that the base
is clean and free from burrs. It is essential; too, that final setting of vernier height gauges be
made after the slider has been locked to the vertical column.
2. The height gauges are generally kept in their cases when not in use. Every care should be
taken, particularly in case of long height gauges, to avoid its heating by warmth from the
hands. The springing of measuring jaw should be always avoided.
Vernier Dial Caliper
With the ordinary vernier caliper there are chances of mistakes as far as clear reading is
concerned. For this reason, vernier dial calipers are nowadays used. In place of the vernier
scale, the dial caliper carries a graduated dial, as shown in the figure.
Like vernier calipers, it can measure in inches as well as in millimetres. As in a dial test
indicator, a rack and pinion are used: the rack runs along the main scale and engages
the pinion of the dial.
To use it, the movable jaw is moved by the thumb roller. For taking a reading, note how
many main scale divisions the bevel edge of the movable jaw has crossed, and add the
reading given by the needle on the dial.
Advantages
Amplification is achieved by the design itself and does not depend on parts that can
wear or go out of calibration.
No interpolation is required in reading.
Zero setting adjustment is easy.
There is no theoretical limit to the scale range.
Disadvantages
The main disadvantages lie in the instruments on which verniers are used.
The reliability of the reading depends more upon the observer than in most instruments.
There is no way to adjust for any errors other than zero setting.
The discrimination is limited.
MICROMETERS:
In general, the term "micrometer" refers to outside micrometers. A variety of other types of
micrometers also exist according to different measurement applications. Examples include
inside micrometers, bore micrometers, tube micrometers, and depth micrometers. The
measurable range differs every 25 mm—such as 0 to 25 mm and 25 to 50 mm—depending on
the size of the frame, so using a micrometer that matches the target is necessary.
Parts of a Micrometer
A micrometer is composed of the following parts:
U-shaped steel frame – The outside micrometer has a U-shaped or C-shaped frame. It
holds all the micrometer parts together. The gap of the frame determines the maximum
diameter or length of job that can be measured. The frame is generally made of steel,
cast iron, malleable cast iron, or light alloy. It is desirable that the frame of the
micrometer be provided with conveniently placed finger grips of heat-insulating
material.
Anvil – The shiny part the spindle moves toward and against which the sample rests.
The micrometer has a fixed anvil protruding 3 mm from the left-hand side of the frame.
The diameter of the anvil is the same as that of the spindle, whose front face forms the
second, movable measuring face. The anvils are accurately ground and
lapped, with their measuring faces flat and parallel to the spindle; they are also
available with tungsten carbide (WC) faces. The spindle is thus the movable measuring
face opposite the anvil. The spindle engages with the nut; it should run freely and smoothly
throughout the length of its travel, with no backlash between the spindle
screw and nut. There should be full engagement of nut and screw when the
micrometer is at its maximum reading.
The sleeve is accurately divided and clearly marked in 0.5 mm divisions along its
length, which serves as the main scale. It is chrome plated and adjustable for zero
setting.
Screw – Found inside the barrel and is considered the heart of the micrometer.
Thimble-The thimble can be moved over the barrel, it has 50 equal divisions around
its circumference.
Locknut – Component that one can tighten to hold the spindle stationary.
Ratchet Stop – The device on the end of the handle that limits applied pressure by
slipping at a calibrated torque.
o The ratchet is provided at the end of the thimble. It is used to ensure accurate
measurement and to prevent too much pressure from being applied to the
micrometer. When the spindle reaches near the surface to be measured, the
operator uses the ratchet screw to tighten the thimble. The ratchet
automatically slips when the correct (uniform) pressure is applied and prevents
the application of too much pressure.
Principle of micrometer
Micrometers work on the principle of a screw and nut. The screw is attached to a
concentric cylinder or thimble, the circumference of which is divided into a number of
equal parts. When a screw is turned through a nut by one revolution, its
axial movement is equal to the pitch of the thread of the screw.
If the pitch (lead) of the screw is l mm, each rotation of the screw advances it, relative to
the internal threads, a distance equal to l mm. If the circumference of the concentric
cylinder is divided into n equal divisions, movement (rotation) of the cylinder through
one division indicates 1/n of a rotation of the screw, or l/n mm of axial advance.
In millimetre micrometers the screw has a pitch (lead) of 0.5 mm and the
thimble has 50 divisions, so the least count of the micrometer is 0.5/50
= 0.01 mm. By reducing the pitch of the screw thread or by increasing the number of
divisions on the thimble, the axial advance per circumferential division can
be reduced and the accuracy of measurement increased.
Least count of micrometer = pitch of the spindle screw / number of divisions on the thimble
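The least count formula, and how a reading combines the barrel and thimble scales, can be sketched as follows (a minimal illustration; the 7.5 mm barrel reading and 12th thimble line are invented example values):

```python
# Least count of a metric micrometer: pitch / number of thimble divisions.

def micrometer_least_count(pitch_mm, thimble_divisions):
    return pitch_mm / thimble_divisions

lc = micrometer_least_count(0.5, 50)   # 0.5 mm pitch, 50 thimble divisions
print(lc)                              # 0.01 mm

# Total reading = barrel (main scale) reading + thimble line number x least count
reading = 7.5 + 12 * lc                # e.g. 7.5 mm on barrel, 12th line on datum
print(round(reading, 2))               # 7.62 mm
```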
Range Of Micrometer :
The micrometer usually has a maximum opening of 25 mm. They are available in measuring
ranges of 0 to 25 mm, 25 to 50 mm, 125 to 150 mm up to 575 to 600 mm.
HANDLING PRECAUTIONS
2. Lack of parallelism and squareness of the anvil or spindle at some or all parts of the scale.
5. Wear on the faces of the anvil and spindle, and wear in the threads of the spindle.
6. Error due to applying too much pressure on the thimble or not using the ratchet.
TYPES OF MICROMETERS
There are a number of different micrometers available for specific applications and accuracies.
1. Outside micrometer
2. Inside micrometer
3. Vernier micrometer
4. Depth micrometer
5. Bench micrometer
6. Digital micrometer
7. Differential screw micrometer
8. Micrometer with dial gauge
9. Screw thread micrometer
OUTSIDE MICROMETER
Just like the outside vernier caliper, the outside micrometer is designed to measure
the external dimensions of objects. This is the most common type of micrometer on the
market. An outside micrometer is very suitable for measuring outside diameter and thickness.
1. Flat Micrometer
This is the most widely used outside micrometer, and you can easily find it on any market.
It has flat measuring surfaces on both the anvil and the spindle. It is a standard micrometer
that belongs in every toolbox because of its wide range of uses; it is best for measuring
thickness and diameter.
2. V-Anvil Micrometer
It has a V-shaped anvil specifically designed to measure the outside diameter of a ball-shaped
object. The V-anvil micrometer locks the ball so that it cannot move even a little, ensuring
the object cannot slip during measurement. The V-anvil together with the
spindle secures the object at three contact points.
3. Blade Micrometer
4. Bench Micrometer
A bench micrometer is a micrometer mounted on a bench. It is typically so precise and
accurate that it is widely used for inspection work. Its least count reaches 0.000050″ or
0.002 mm. It is therefore well suited for laboratory use.
5. Laser-Scan Micrometer
A laser-scan micrometer employs a laser to measure the object with very high precision and
accuracy, down to 0.000002″ (0.00005 mm). Typically it is similar to the bench type because
it is set up on a bench. It has a digital display and needs additional equipment to work with.
A non-contact laser-scan micrometer is also expensive, but it works remarkably well.
6. Limit Micrometer
The limit micrometer has two sets of anvil and spindle. It is used to check whether an
object lies within a specified range. For example, suppose objects pass inspection only if
their diameter lies between 1.001″ and 1.005″. The upper section's anvil and spindle are then
set 1.001″ apart and the lower section's 1.005″ apart. Every object is checked against both
sections: if it enters the 1.005″ gap but not the 1.001″ gap it is within limits; otherwise it
is oversized or undersized.
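The go/no-go logic of a limit micrometer reduces to a simple interval check, sketched below using the 1.001″ to 1.005″ example from the text (the function name and test values are illustrative):

```python
# Go/no-go check of a limit micrometer (illustrative sketch).

def passes(diameter_in, lower=1.001, upper=1.005):
    """A part passes if its diameter enters the larger (go) gap but not the
    smaller (no-go) gap, i.e. it lies between the two settings."""
    return lower <= diameter_in <= upper

print(passes(1.003))  # True  (within limits)
print(passes(1.007))  # False (oversized)
print(passes(0.999))  # False (undersized)
```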
7. Ball Micrometer
As its name suggests, this micrometer's anvil is spherical, allowing it to touch
only one point on the object, which improves accuracy. The ball micrometer is
therefore very useful for measuring the wall thickness of tubes and other rounded surfaces.
It is not uncommon for people to use a ball micrometer for reloading.
8. Tube Micrometer
The tube micrometer is similar to the ball micrometer in that the anvil stands perpendicular
to the spindle; however, the anvil is shaped like a tube rather than a ball. The tube
micrometer is also well suited to checking whether the wall thickness of a cylindrical object
is uniform on all sides. It too is suitable for reloading purposes.
9. Uni Micrometer
"Uni" stands for universal micrometer. The anvil is interchangeable with others of different
shapes and uses. A uni micrometer is helpful if you need to measure various objects of
different shapes; however, its price is correspondingly high.
10. Depth Micrometer
As the name suggests, the depth micrometers are designed to measure the depth of different
holes and steps. They have different interchangeable rods which make them versatile enough
to measure different depths.
INSIDE MICROMETER
The inside micrometer is used to measure the internal dimensions of the workpiece. Fig.
shows the inside micrometer.
The construction is similar to that of the outside micrometer; however, the inside micrometer
has no U-shaped frame or spindle. The measuring tips are constituted by jaws whose faces
are hardened and ground to a radius.
One of the jaws is held stationary at the end and the second moves by rotation
of the thimble. A locking arrangement is provided on the fixed jaw. The figure shows another
inside micrometer used for larger internal dimensions.
It consists of two anvils, a sleeve, thimble, ratchet stop and extension rods.
The range of this micrometer is 50 mm to 210 mm; however, the range can be increased
by using any one of the extension rods provided with it. This micrometer has no frame or
spindle. The measuring points are the anvils at the extreme ends, and their axial
movement takes place through rotation of the thimble about the barrel axis.
A series of extension rods are provided in order to obtain a wide measuring range. Before
taking the measurement, the approximate internal dimension of a workpiece (whose
dimension is to be measured by inside micrometer) is measured by a scale.
The extension rod is then selected to the nearest one and inserted in a micrometer head.
Then, the micrometer is checked for zero error with the help of a standard-sized specimen
whose internal dimension is known. The micrometer is then adjusted at a dimension
slightly smaller than the internal (bore) diameter of the workpiece.
The micrometer head is then held firmly against the bore as shown in the figure, and the
other contact surface is adjusted by moving the thimble till the correct feel is sensed. The
micrometer is then removed and the reading taken. The lengths of the extension rod and
collar are added to the micrometer reading.
The inside micrometers are designed in a way to measure the internal dimension of an
object such as the inside diameter of a tube or hole.
There are other sub-divisions of inside micrometer such as tubular, caliper-type, and bore
micrometer.
1. Caliper-type Micrometer
At a glance it looks like a caliper, but this one is more accurate and more expensive: it can
measure down to the micron scale. Instead of an anvil and spindle, these micrometers have
jaws which are inserted inside the object. The jaws are adjusted with the ratchet according
to the width of the space.
2. Tubular Micrometer
These micrometers lack a C-frame and they are placed in the space to be measured. After
that, they are extended to the desired length so that the micrometers meet the two edges of the
object. Once they are secure, the readings can be taken.
3. Bore Micrometer
Bore micrometer or bore gauge is still the family of the inside micrometer. It’s useful to
quantify the internal diameter of a hole or a cylindrical object such as an engine cylinder. It
has no spindle, only anvil. The anvil extends until reaching the inside wall of the hole, then
the readout can be taken. There are sub-categories of bore micrometer. They are available in
the dial, mechanical, digital, one anvil, two anvils, and three anvils. Typically the range is 6
inches.
Vernier micrometer
1. The main scale is graduated on the barrel with two sets of division marks. The set below
the reference line reads in mm and the set above the line reads in 1/2 mm.
2. A thimble scale is graduated on the thimble with 50 equal divisions. Each small division of
the thimble represents 1/50 of the minimum main scale division, whose value is 1/2 mm.
3. A vernier scale is marked on the barrel. There are 10 divisions on this scale, equivalent
in length to 9 divisions on the thimble. Hence one vernier division equals 9/10 of a thimble
division. Since one thimble division equals 0.01 mm, one vernier division equals 0.009 mm.
The least count of the vernier micrometer is therefore
L.C. = value of smallest division on thimble - value of smallest division on the vernier scale
= 0.01 - 0.009
= 0.001 mm
Hence the accuracy of the vernier micrometer is 0.001 mm.
Thimble reading
= No. of thimble division coinciding with reference line x L.C. of a thimble
= 12 x 0.01
= 0.12 mm
If the 4th vernier scale line coincides with a division of the thimble, the vernier reading is
4 x 0.001 = 0.004 mm. If the vernier line coinciding with the reference line is 0, then no
vernier reading is added to the final reading.
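A full vernier micrometer reading combines the barrel, thimble and vernier contributions; the sketch below follows the scale values given in the notes (pitch 0.5 mm, 50 thimble divisions, 10 vernier divisions), with the 6.5 mm barrel reading and the coinciding line numbers as invented example values:

```python
# Reading a vernier micrometer (illustrative sketch).
pitch = 0.5            # mm, spindle screw pitch
thimble_div = 50       # divisions on the thimble
vernier_div = 10       # divisions on the barrel vernier

thimble_lc = pitch / thimble_div        # 0.01 mm per thimble division
vernier_lc = thimble_lc / vernier_div   # 0.001 mm per vernier division
print(vernier_lc)                       # 0.001 mm

# Reading = barrel + thimble-line x 0.01 + vernier-line x 0.001
barrel = 6.5                            # mm read on the barrel
reading = barrel + 12 * thimble_lc + 4 * vernier_lc
print(round(reading, 3))                # 6.624 mm
```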
Depth micrometer
Depth micrometer (micrometer depth gauge) is used to measure the depth of holes, slots
and recessed areas.
It consists of a base (measuring face) which is fixed on the barrel and measuring spindle
which is attached with thimble as shown in Fig. The axial movement of the spindle takes
place by rotation of thimble.
The measurement is made between the end face of the spindle and the measuring face of
the base. As the spindle moves away from the base the measurement increases; consequently
the scales on the barrel run in reverse of the normal direction.
The scale indicates zero when the spindle is flush with the face and maximum when the
spindle is fully extended from the base. The figure shows the use of the depth micrometer.
The main scale reading is 17 mm and the 14th division line of the thimble coincides with the
reference line, so the thimble reading is 14 x 0.01 = 0.14 mm.
Total reading = 17 + 0.14 = 17.14 mm
The depth micrometer is available in ranges of 0-25 mm or 0-50 mm. The range can be
increased up to 0-90 mm by using extension rods in steps of 25 mm. An extension rod
can easily be inserted by removing the spindle cap.
Differential screw micrometer
The differential screw micrometer uses the differential screw principle, and hence its
accuracy is greater than that of an ordinary micrometer.
In this micrometer the screw has two different pitches, as shown in the figure, one smaller
and one larger, instead of the single uniform pitch of an ordinary micrometer.
Both screws are right-handed and are so arranged that on rotation of the
thimble one screw moves forward and the other moves backward. The anvil is not
attached to the thimble, but slides inside the barrel.
The smaller-pitch nut is fixed in the anvil while the larger-pitch nut is fixed to the barrel;
hence the screw rotates with the thimble. In a metric micrometer, the pitches normally
employed are 0.4 mm and 0.5 mm.
Therefore, for one revolution of the thimble the measuring anvil advances by
0.5 - 0.4 = 0.1 mm. The thimble circumference is graduated in 100 equal divisions, so the
anvil moves axially by 0.1/100 = 0.001 mm per thimble division. This micrometer has a
smaller range because of the small total (differential) axial movement of the spindle.
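The differential principle can be checked numerically: the net advance per revolution is the pitch difference, and the least count divides it by the thimble divisions (a short sketch using the 0.5 mm and 0.4 mm pitches and 100 divisions from the text):

```python
# Differential screw micrometer: net advance and least count (illustrative sketch).
p_large, p_small = 0.5, 0.4   # mm, the two right-handed screw pitches
divisions = 100               # graduations around the thimble

advance_per_rev = p_large - p_small       # net axial advance per thimble revolution
lc = advance_per_rev / divisions          # least count per thimble division
print(round(advance_per_rev, 3))          # 0.1 mm per revolution
print(round(lc, 4))                       # 0.001 mm per division
```

Note the least count matches that of a vernier micrometer, but is obtained without a vernier scale, at the cost of a reduced measuring range.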
Digital micrometer
Mechanical measuring devices such as the micrometer and vernier are suitable for making
measurements accurate to within 0.001 mm. They are inexpensive, lightweight,
compact, and relatively rugged.
But when greater accuracy of measurement is desired, these mechanical devices are
inadequate. Measuring instruments with an electronic digital readout have therefore become
common in industry, offering superior precision and ease of reading.
A digital micrometer consists of the frame, anvil, spindle, locknut, barrel, thimble, ratchet,
LCD display, and ON/OFF ZERO key as shown in Fig. It has incorporated a digital
readout into the structure of the micrometer’s body.
The digital readout is integrated with a rotary encoder that is capable of reading the axial
displacement of a spindle which rotates as a thimble is rotated.
The digital micrometers are available in a large number of different sizes, normally 0-25
mm, 25-50 mm, 50- 75 mm, and 75-100 mm. They are used to measure length, diameter,
or thickness.
BORE GAUGE
A bore gauge is a tool used to measure the inside of a bore, or hole. Once a bore gauge is
inserted into the hole that needs measuring, small parts called anvils expand outward to
determine the diameter. Bore gauges are also known as cylinder tests, hole tests, bore mics,
holtests, internal micrometers, hold bore gauges, or telescoping gauges.
Bore gauges with three anvils are called internal micrometers or tri mics, and are calibrated
with setting rings.
An inside micrometer or vernier bore gauge measures a bore directly. The gauge has three
symmetrical anvils that protrude from the gauge body that are connected to the dial or
micrometer mechanism. As the knob is rotated it moves the anvils in or out with respect to
the measurements. The knob usually has a slipping mechanism to take the feel out of the
device and increase reliability between measurements. The measurement given is the mean
diameter of the three anvils, and is usually good to 0.001 mm (3.9×10^-5 in).
The more common, and less expensive, type of gauges feature two anvils and are calibrated
with gauge blocks. Plug gauges are the simplest type; they feature a plug of slightly different
size on each end. A correctly sized bore will not be able to fit the larger plug inside.
Both three and two anvil gauges can use a dial or digital readout to show the interior width of
a hole, though some gauges as mentioned below don’t use either.
Apart from these broad types, more specific types of bore gauges are suited for more
specialty measurements:
Telescopic Gauge
The gauge is locked by twisting the knurled end of the handle; this exerts a small amount of
friction on the telescopic portions of the gauge (the smaller-diameter rods at the T head).
To use it, the gauge is inserted at a slight angle to the bore and gently locked to a size
slightly larger than the bore while at that angle. Then, rocking the handle side to side,
slowly move it across the bore to the other side. The rocking first aligns the gauge with the
bore axis, and moving the handle to the other side brings it to the exact bore diameter. This
action compresses the two anvils, which remain locked at the bore's dimension after the
gauge is withdrawn.
The gauge is then removed and measured across its anvil heads with a micrometer. Move the
head of the gauge around while making the measurement to ensure you get the maximum
reading. Grasp the gauge near the head to aid in manoeuvring it while adjusting the
micrometer so that it just stops the gauge's motion at one spot only. A bit of practice
will quickly give you the idea.
Dial bore gauges are both easy to use and accurate, and are good for measuring how a deep
bore tapers. However, they need to be calibrated every time they are used.
A dial bore gauge is a comparative instrument similar to a telescoping gauge, but includes a
digital or analog readout. The dial bore gauge must be set to the nominal value of the bore,
and it will measure the variation and direction of the bore from nominal. There are multiple
ways to set this gauge to the nominal value. The most common method is using an outside
micrometer that is set to the nominal value. This is the quickest and least expensive way to
set the dial bore gauge, but it is not the most accurate, because human error can be high
and any variation in the micrometer is passed down into the dial bore gauge. The
more accurate setting options include ring gauges (also called master rings) and designated
bore gauge setting equipment that utilize gauge blocks or other standards. When using a
micrometer to set a dial bore gauge, the accuracy of the measurement will be 0.002 inches or
0.0508 millimeters. A ring gauge can be used to obtain higher accuracy at a higher cost and
higher time requirement. When a dial bore gauge is set using a ring gauge, overall accuracy
can be within 0.0001 inches or 0.00254 millimeter.
Small hole gauges, available in full ball and half ball types, are better suited to smaller bores,
and can be used to see if a bore’s shape is off. Half ball gauges are used when the
measurement needs to be made near the bottom of a hole.
Small-hole gauges require a slightly different technique from telescopic gauges: the small-
hole gauge is initially set smaller than the bore to be measured. It is then inserted into the
bore and adjusted by rotating the knurled knob at the base, until light pressure is felt when the
gauge is slightly moved in the bore. The gauge is then removed and measured with a caliper
or micrometer. To accurately detect the maximal distance between the two halves of the
gauge head, move the head of the gauge around while making the measurement to ensure you
get the maximal reading. Grasp the gauge near the head to aid in your maneuvering of the
gauge while adjusting the micrometer so it just stops the gauge's motion at one spot only.
There are two styles of small-hole gauges:
The full-ball gauges are easier to set correctly and maintain, under the pressure of
measurement, a better representation of the bore.
The Half-ball gauges tend to spring just a little bit, and this may be enough to make a
measurement incorrect. A lighter "touch" is required to accurately use the half-ball gauges.
Bore gauges are used in applications where holes, cylinders, and pipes need to be measured,
such as automotive, manufacturing, and inspection and calibration uses. Mechanics and
machinists use bore gauges to measure wear in cylinder heads. They also use them to
measure holes in an engine block so pistons fit tightly enough that they don’t leak the gases
they compress. Inspectors and maintenance employees also use bore gauges to check the
dimensions inside injection moldings for quality assurance, or in extruder barrels to track
wear over time for preventative maintenance.
COMPARATORS
Comparators are a form of linear measurement device that is quicker and more convenient
for checking a large number of identical dimensions. Comparators can give precision
measurements with consistent accuracy by eliminating human error. They are employed to
find out by how much the dimensions of the given component differ from those of a known
datum. If the indicated difference is small, a suitable magnification device is selected to
obtain the desired accuracy of measurement. A comparator is an indirect type of instrument
used for linear measurement. If the dimension is less or greater than the standard, the
difference is shown on the dial; it gives only the difference between the actual and standard
dimension of the workpiece.
CHARACTERISTICS OF GOOD COMPARATORS:
1. It should be compact.
2. It should be easy to handle.
3. It should give quick response or quick result.
4. It should be reliable, while in use.
5. There should be no effects of environment on the comparator.
6. Its weight must be less.
7. It must be cheaper.
8. It must be easily available in the market.
9. It should be sensitive as per the requirement.
10. The design should be robust.
11. It should be linear in scale so that it is easy to read and get uniform response.
12. It should have less maintenance.
13. It should have hard contact point, with long life.
14. It should be free from backlash and wear.
CLASSIFICATION OF COMPARATORS:
1. Mechanical Comparator: works by mechanical means such as levers, gears, or racks and
pinions.
2. Pneumatic Comparator: works by using high-pressure air, valves, back pressure, etc.
3. Optical Comparator: works by using lenses, mirrors, a light source, etc.
4. Electrical/Electronic Comparator: works by converting plunger movement into electrical
signals.
5. Fluid Displacement Comparator: works on the principle of liquid rise in a capillary tube.
6. Combined Comparator: the combination of any two of the above types can give the best
result.
Types of Comparators:
1. Mechanical Comparators
Dial Indicator
Reed Type Comparator
Johansson Mikrokator
Sigma Comparator
2. Optical Comparators
Optical Lever
Zeiss Optimeter
Zeiss Ultra Optimeter
Zeiss Optotest Comparators
3. Electrical and Electronic Comparators
4. Pneumatic Comparators
5. Fluid Displacement Comparators
6. Projection Comparators
7. Multicheck Comparators
8. Automatic Gauging
9. Electro-Mechanical Comparators
MECHANICAL COMPARATORS
Mechanical comparators are comparators built from mechanical elements such as levers, gears,
and racks and pinions. These elements magnify the small movement of the plunger and so
improve the readability of the instrument.
1. Dial Indicator
2. Johansson Mikrokator
3. Reed type Mechanical Comparator
4. Sigma Comparator
1. Dial Indicator
A dial indicator or dial gauge is used as a mechanical comparator. The essential parts of the
instrument resemble a small clock with a plunger projecting at the bottom, as shown in fig.
A very slight movement of the plunger is indicated by the dial pointer. The dial is graduated
into 100 divisions. A full revolution of the pointer about this scale corresponds to 1 mm
travel of the plunger; thus a turn of the pointer by one scale division represents a plunger
travel of 0.01 mm.
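The least-count arithmetic above can be checked with a short sketch (the 37-division pointer movement is an assumed example, not a figure from the notes):

```python
# Least count of the dial indicator described above: one revolution of the
# pointer equals 1 mm of plunger travel, and the dial carries 100 divisions.
plunger_travel_per_rev_mm = 1.0
dial_divisions = 100

least_count_mm = plunger_travel_per_rev_mm / dial_divisions
print(least_count_mm)  # 0.01 mm per division

# A pointer movement of, say, 37 divisions (assumed example) then represents:
divisions_moved = 37
print(divisions_moved * plunger_travel_per_rev_mm / dial_divisions)  # 0.37 mm
```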
Experimental setup
The whole setup consists of a worktable, dial indicator and vertical post. The dial indicator
is fitted to the vertical post by an adjusting screw, as shown in fig. The vertical post is
fitted on the worktable; the top surface of the worktable is finely finished. The dial gauge
can be adjusted vertically and locked in position by a screw.
Dial
Indicator
The plunger
Mini dial
Locking screw
Magnification Mechanism(Lever/Gear and Pinion)
1. It should give trouble free and dependable readings over a long period.
2. The pressure required on measuring head to obtain zero reading must remain constant over
the whole range.
3. The pointer should indicate the direction of movement of the measuring plunger.
4. The accuracy of the readings should be within close limits of the various sizes and ranges.
5. The movement of the measuring plunger should be in either direction without affecting the
accuracy.
6. The pointer movement should be damped, so that it will not oscillate when the readings are
being taken.
Applications:
2. To determine the errors in geometrical form such as ovality, roundness and taper.
5. To check the alignment of lathe centers by using suitable accurate bar between the centers.
6. To check trueness of milling machine arbours and to check the parallelism of shaper arm
with table surface or vice.
2. Johansson Mikrokator
The Johansson Mikrokator was first developed by C. E. Johansson, after whom it is named.
Instead of gears or a rack and pinion, it uses a twisted strip to magnify the small linear
movement of the plunger at the indicator (pointer), so it can also be called a twisted-strip
comparator. It is also known as the Abramson movement, because the mechanical magnification
system was designed by H. Abramson.
The main components are a twisted strip, bell-crank lever, cantilever strip and plunger.
Principle: It works on the principle of a button spinning on a loop of string, as in a
child's toy.
The magnification of the plunger movement is obtained by mechanical means: here the twisted
strip, rather than levers or a gear and pinion arrangement. The magnification is
approximately equal to the ratio of the change in pointer rotation to the change in length of
the strip, i.e. dθ/dL.
Even a minor twisting movement is magnified up to 4000X for better results.
3. Reed Type Mechanical Comparator
A reed is a long supporting strip. In the reed type mechanical comparator a number of reeds
are used to avoid frictional contacts; see the following image of a reed type mechanical
comparator.
Of the four reeds, two are placed vertically (D) and two horizontally, connecting the two
blocks (A and B). Block 'A' is the fixed block, whereas block 'B' is the floating block. Of
the two vertical reeds, the left one is fixed and the right one is movable and connected to
block 'B'.
Advantages
1) It is usually robust, compact and easy to handle.
2) There is no external supply such as electricity, air required.
3) It has very simple mechanism and is cheaper when compared to other types
4) It is suitable for ordinary workshop and also easily portable.
Disadvantages
The accuracy of the comparator depends mainly on the accuracy of the rack and pinion
arrangement; any slackness will reduce accuracy.
1. It has more moving parts, hence more friction and lower accuracy.
2. The range of the instrument is limited, since the pointer moves over a fixed scale.
4. Sigma Comparator
MECHANICAL-OPTICAL COMPARATOR
Principle:
In a mechanical-optical comparator, a small variation in the plunger movement is magnified
twice: first by a mechanical system and then by an optical system.
Construction: The movement of the plunger is magnified by the mechanical system using a
pivoted lever. From the Figure the mechanical magnification = x2/ x1 . High optical
magnification is possible with a small movement of the mirror. The important factor is that
the mirror used is of front reflection type only.
The back reflection type mirror will give two reflected images as shown in Figure, hence the
exact reflected image cannot be identified.
The second important factor is that when the mirror is tilted by an angle θ, the reflected
image is tilted by an angle 2θ, as shown in the Figure.
Advantages:
1. These Comparators are almost weightless and have less number of moving parts, due to
this there is less wear and hence less friction.
2. Higher range even at high magnification is possible as the scale moves past the index.
3. The scale can be made to move past a datum line without any parallax error.
4. They are used to magnify parts of very small size and of complex configuration such as
intricate grooves, radii or steps.
Disadvantages:
1. Projection type instruments occupy large space and they are expensive.
2. When the scale is projected on a screen, the instrument must be taken to a dark room in
order to take the readings easily.
Zeiss Ultra-Optimeter
The optical system of this instrument involves a double reflection of light and thus gives a
higher degree of magnification.
A lamp sends light rays through the green filter to filter all rays except green light, which
causes less fatigue to the eye.
The green light then passes through a condenser which via an index mark projects it on to
a movable mirror M1. It is then reflected to another fixed mirror M2 and back again to the
first movable mirror.
The objective lens brings the reflected beam from the movable mirror to a focus at a
transparent graticule containing a precise scale that is viewed by eye-piece.
The projected image of the index line on the graticule can be adjusted by means of a screw
in order to set the initial zero reading.
When correctly adjusted, the image of the index line is seen against that of the graticule
scale.
The end of the contact plunger rests against the other end of the first movable mirror so
that any vertical movement of the plunger will tilt the mirror.
This causes a shift in the position of the reflected index line on the eyepiece graticule
scale, which in turn measures the displacement of the plunger.
Advantages of optical comparators :
Optical comparators have few moving linkages and hence suffer little friction, wear,
and tear.
High accuracy of the measurement.
The magnification is usually high.
OPTICAL COMPARATOR
There are no pure optical comparators; the instruments classed as optical comparators obtain
part of their large magnification optically and part through mechanical magnification. All
optical comparators are capable of giving a high degree of measuring precision.
The operating principle of this type of comparator is based on the laws of light reflection
and refraction. The magnification system depends on the tilting of a mirror, which deflects a
beam of light and thus provides an optical lever.
The reflected light beam moves through an angle 2α, which is twice the angle of tilt produced
by the plunger movement. The illuminated dot moves to B; thus a linear movement h of the
plunger produces a movement of the dot equal to the distance OB on the screen. It is also
clear that the greater the distance (OC) of the screen from the tilting mirror, the greater
the magnification; this is called the principle of the enlarged image.
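The optical-lever magnification described above can be sketched numerically. All the dimensions below are assumed illustrative values, not figures from any particular instrument:

```python
# Optical lever, small-angle sketch: a plunger movement h tilts the mirror
# through theta, the reflected beam turns through 2*theta, and the spot on a
# screen at distance OC therefore moves by roughly OC * 2 * theta.
h_mm = 0.01           # plunger movement (assumed)
lever_ratio = 20      # mechanical magnification x2/x1 (assumed)
mirror_arm_mm = 25.0  # lever arm tilting the mirror (assumed)
OC_mm = 500.0         # mirror-to-screen distance (assumed)

theta_rad = (h_mm * lever_ratio) / mirror_arm_mm  # mirror tilt, small angle
spot_mm = OC_mm * 2 * theta_rad                   # beam deflects through 2*theta

magnification = spot_mm / h_mm
print(round(magnification, 6))  # 800.0 = 2 * lever_ratio * OC_mm / mirror_arm_mm
```

Doubling OC doubles the overall magnification, which is why projection-type optical comparators gain magnification simply from screen distance.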
PNEUMATIC COMPARATOR:
These instruments utilize variations in air pressure or velocity as the amplifying medium.
A jet or jets of air are applied to the surface being measured, and the variations in the
back pressure or velocity of air, caused by variations in the dimension being measured, are
used to amplify the output signals.
Based on the physical phenomena, the pneumatic comparators are classified into two types.
This instrument was first commercially introduced by Solex Air Gauges Ltd. It uses a
water manometer for the indication of back pressure.
It consists of a vertical metal cylinder filled with water up to a certain level and a dip tube
immersed into it up to a depth corresponding to the air pressure required.
A calibrated manometer tube is connected between the cylinder and control orifice as
shown in the fig.
If the pressure of the air supplied is higher than the desired pressure, some air will
bubble out from the bottom of the dip tube, and the air moving to the control orifice will
be at the desired constant pressure.
The constant pressure air then passes through the control orifice and escapes from the
measuring jets.
When there is no restriction to the escape of air, the level of water in the manometer tube
will coincide with that in the cylinder.
But, if there is a restriction to the escape of air through the jets, back pressure will be
induced in the circuit and level of water in the manometer tube will fall.
The restriction to the escape of air depends upon the variations in the dimensions to be
measured.
Thus the variations in the dimensions to be measured are converted into corresponding
pressure variations, which can be read from the calibrated scale provided with the
manometer.
Advantages of Pneumatic Comparators :
1. Very high magnification
2. Less friction, wear, and inertia
3. Less measuring pressure
4. Determines ovality and taper of circular bores
ELECTRICAL COMPARATOR
Electrical comparators convert the linear movement of the plunger into electrical signals,
which are then indicated, via a galvanometer, on a graduated scale. Electronic comparators,
by contrast, use frequency modulation as the magnification system and project the result on
a meter.
A Wheatstone bridge is a device used to measure an unknown resistance. Its construction is
quite simple: two series pairs of resistors are connected in parallel, with a galvanometer
connected between them; in other words, two legs of resistance, one of which contains the
unknown resistor to be calibrated. In the electrical comparator this circuit drives the
indicating element: the galvanometer is incorporated into the Wheatstone bridge circuit.
An armature supported on a thin steel strip is placed between the two coils A and B (as
shown in the diagram below).
When the armature is at its centre position, that is, equidistant from coils A and B, no
current passes through the Wheatstone bridge galvanometer. The galvanometer is an actuator
that produces a rotary movement of the pointer (deflection) in proportion to the electric
current passing through it.
The plunger captures the movement at the measuring tip and passes it to the thin steel strip.
The armature is then unbalanced and an electric current passes through the bridge, shown as
a deflection of the galvanometer.
An electrical comparator consists of three major parts: 1. Transducer, 2. Amplifier,
3. Display device (meter).
Transducer: An iron armature is provided between two coils, held by a leaf spring at one
end. The other end is supported against a plunger. The two coils act as two arms of an A.C.
Wheatstone bridge circuit.
Amplifier: The amplifier is a device which amplifies the given input signal into a
magnified output.
Display device or meter: The amplified input signal is displayed on some terminal stage
instruments. Here, the terminal instrument is a meter.
Working principle
If the armature is centrally located between the coils, the inductances of the two coils are
equal and opposite, so the A.C. Wheatstone bridge circuit is balanced and the meter reads
zero. In practice, however, the armature is lifted up or lowered down by the plunger during
measurement. This upsets the balance of the Wheatstone bridge circuit, inducing a
corresponding change in current or potential, and the meter then indicates a value of
displacement. The indicated value may correspond to either a larger or a smaller component.
As the induced current is very small, it must be suitably amplified before being displayed
on the meter.
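The balance/unbalance behaviour described above can be illustrated with a d.c. resistive analogy (a simplification of the actual A.C. coil circuit; all component values are assumed for illustration):

```python
# Wheatstone-bridge sketch: the two coils form two arms of the bridge, and
# armature movement changes their impedances in opposite senses. The arms
# are modelled here as plain resistances for simplicity.
def bridge_output(v_in, r1, r2, r3, r4):
    """Open-circuit voltage across the meter of an ideal Wheatstone bridge."""
    return v_in * (r2 / (r1 + r2) - r4 / (r3 + r4))

# Armature centred: both coil arms equal, bridge balanced, meter reads zero.
print(bridge_output(10.0, 100, 100, 100, 100))  # 0.0

# Plunger lifts the armature: one arm's impedance rises while the other
# falls, the bridge unbalances, and the meter deflects.
print(bridge_output(10.0, 100, 102, 100, 98))   # small non-zero deflection
```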
Checking of accuracy
To check the accuracy of a given specimen or work, first a standard specimen is placed under
the plunger. The resistance of the Wheatstone bridge is then adjusted so that the scale reads
zero, and the specimen is removed. Now the work is introduced under the plunger. Any height
variation in the work moves the plunger up or down; the corresponding movement is first
amplified by the amplifier and then transmitted to the meter to show the variation. The
least count of this electrical comparator is 0.001 mm (one micron).
ELECTRONIC COMPARATOR
Electronic comparators use frequency modulation, which is calibrated and projected onto the
scale. Their working principle is similar to that of electrical comparators, but where the
electrical comparator uses a galvanometer as the magnifying system, the electronic comparator
uses frequency modulation. Apart from the magnifying system, the working principle and
construction are the same.
A fluid displacement comparator is a type of comparator with very limited applications; it
works on the principle of liquid rise in a capillary tube.
Capillary tube
Graduated Scale
Fluid Chamber
Diaphragm
Plunger
Due to certain disadvantages, fluid displacement comparators have limited applications. The
major one is that the liquid expands with a rise in temperature, so the instrument is not
suitable for all temperature conditions.
MULTICHECK COMPARATORS
The multicheck comparator is one more type of comparator. It is meant to inspect multiple
dimensions of an object at the same time, which is very helpful when inspecting related
dimensions (for example, the diameter of a bore and its concentricity). A multicheck
comparator is thus used to take multiple measurements at once.
Different types of multicheck comparators are available; they are listed below.
Electrical multicheck comparators use a number of electric-check heads to check multiple
dimensions on a part.
Each measuring head has signal lights which indicate whether the dimension is the correct
size, oversized or undersized. A single master light collects the light signals of the
various dimensions: if there is any deviation from the working standard, the master light
indicates the deviation by examining the other light signals. This type of multicheck
comparator is widely used on mass-production lines.
Air multicheck comparators use a group of air comparators, similarly to the electrical
multicheck comparators above. With air multicheck comparators it is easy to check diameter
and concentricity at the same time; this is their advantage over electrical multicheck
comparators.
Combined multicheck comparators use both air comparators and electrical comparators to check
multiple dimensions at a time. When checking an object with multiple dimensions, the diameter
and concentricity are measured with the air comparators and the remaining dimensions are
checked by the electrical comparators.
A projector is an optical device which enlarges an image; this is the principle used in the
optical projection comparator. The optical profile projector is used to check relatively
small engineering components against the working standard.
1. Light source
2. Condenser Lens (C)
3. Projection Lens (P)
4. Screen
A beam of light from the light source passes through the condenser lens (C) and projection
lens (P) and falls on the screen. The workpiece is placed between the light source and the
condenser lens, so that a shadow image of the workpiece is created.
The magnified image is shown on the screen; the magnification ranges from 5X to 100X.
When an object is placed between the condenser lens and the light source, a shadow of its
profile is projected at an enlarged scale on the screen. This enlarged profile is then
compared with the working standard.
MICROSCOPE
It is scientific equipment that magnifies very small objects that are not visible to the naked
eyes.
1. Compound Microscope
It is an instrument that has a set of two lenses: the objective and the ocular.
Furthermore, they use visible light as a source of illumination.
2. Darkfield Microscope
These microscopes have a device that scatters light from the illuminator. In addition, it does this
to make the specimen appear white against the black background.
3. Electron Microscope
It is a scope that uses a flow of electrons instead of light to produce an image. This
microscope can resolve images of viruses, proteins, lipids, ribosomes, and even small
molecules.
4. Fluorescence Microscope
These scopes use ultraviolet light to illuminate specimens that fluoresce. Besides, mostly, a
fluorescent antibody or dye is added on the viewed specimen.
5. Contrast/Phase Microscope
This scope uses a special condenser that allows the examination of structures inside the cells.
Also, they use a compound light. Furthermore, these microscopes take advantage of different
refractive indexes for the examination of live organisms.
In addition, the final image produced by these microscopes is a combination of light
and dark.
1. Arm
It is in the back of the microscope and supports the objectives and ocular. Also, it is the part that
we use to carry or lift it.
2. Base
It’s the bottom of the scope. In addition, it houses the light source and the back section of base
acts as a handle to carry the scope.
5. Illuminator
It is the light source of the microscope.
6. Numerical Aperture or Objective lens
It is found in a compound scope and is the lens that is closest to the specimen.
7. Ocular Lens
This is the lens closest to the viewer in a compound light microscope.
Working: The component being measured is illuminated by the through-light method. A parallel
beam of light illuminates the lower side of the workpiece and is then received by the
objective lens on its way to a prism, which deflects the light rays in the direction of the
measuring ocular and the projection screen. Incident illumination can also be provided by an
extra attachment. Exchangeable objective lenses with magnifications of 1X, 1.5X, 3X and 5X
are available, so that total magnifications of 10X, 15X, 30X and 50X can be achieved with a
10X ocular. The direction of illumination can be tilted with respect to the workpiece by
tilting the measuring head and the whole optical system. This inclined illumination is
necessary in some cases, as in screw thread measurement.
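The total magnification figures quoted above follow directly from multiplying objective power by ocular power:

```python
# Total magnification of the measuring microscope = objective x ocular.
# The powers below are the ones quoted in the notes (10X ocular).
ocular = 10
objectives = [1, 1.5, 3, 5]

totals = [obj * ocular for obj in objectives]
print(totals)  # [10, 15.0, 30, 50] -> the 10X, 15X, 30X and 50X quoted above
```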
USES OF COMPARATORS
They are used as Working Gauges to maintain the tolerance in all stages of manufacturing.
For Final Inspection, Comparators are used after production of parts before assembly.
An angle is the measure of the opening between two lines which meet at a point. Angular
measurement plays a crucial role in metrology; a good example is that ships and aeroplanes
navigate with the help of precise angular directions.
Angular Measurement
As stated above, an angle is defined as the opening between two lines that meet at one
point.
The basic unit of angular measurement is the degree (°). If a circle is divided into 360
equal parts by lines passing through the centre, each part is one degree (°).
Each degree (°) is divided into 60 minutes (').
Each minute (') is divided into 60 seconds (").
Another unit for angle is the radian.
A radian is defined as the angle subtended at the centre of a circle by an arc whose
length is equal to the radius.
1 Radian = 57.2958°
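The unit relations above (degrees, minutes, seconds, radians) can be verified directly:

```python
import math

# Degree-minute-second and radian conversions used in angular metrology:
# 1 degree = 60 minutes, 1 minute = 60 seconds, 1 radian = 57.2958 degrees.
def dms_to_degrees(deg, minutes=0, seconds=0):
    """Convert a degree/minute/second angle to decimal degrees."""
    return deg + minutes / 60 + seconds / 3600

print(dms_to_degrees(1, 55))            # 1 deg 55 min in decimal degrees
print(math.degrees(1))                  # degrees in one radian, 57.2957...
print(round(math.radians(57.2958), 6))  # back to ~1.0 radian
```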
There are various types of Angular Measurements which are based on various standards, and
those are the following:
The basic difference between them is, the protractor is used to measure the angle of the flat
surfaces whereas the bevel protractor is used to measure the angles of inclined surfaces.
PROTRACTOR:
It is a simple piece of angular measuring equipment used for measuring angles from 0° to
180° with an accuracy of ±0.5°.
It is generally used for the measurement and setting of angles in drawings only.
BEVEL PROTRACTOR:
It is used to measure the angle of the given specimen by placing the component in between
the two blades of the bevel protractor.
It is calibrated in main scale divisions and vernier scale divisions. The main scale on the
protractor is divided into degrees from 0 to 90 each way, covering 180 degrees in total.
The vernier scale has 12 divisions spanning 0-60 minutes, so each division equals 1/12 of
60 minutes, i.e. 5 minutes.
VERNIER BEVEL PROTRACTOR
Vernier Bevel protractor is the simplest angular measuring device which is having a Vernier
Scale along with the acute angle attachment.
1. Main body
2. Base plate stock
3. Adjustable blade
4. Circular Plate with graduated vernier scale divisions.
5. Acute angle attachment
The base plate(Stock) consisting of the Working edge will be mounted on the Main
body.
And the Acute angle attachment is also mounted on the main body.
This acute angle attachment can be readily attachable/detachable with the Locking Nut.
A circular plate having a vernier scale in it, also mounted on the Main body frame.
This circular plate is carrying an adjustable blade which can travel along its length and
locked at any position with the help of the blade locking nut.
One end of the adjustable blade is bevelled at 45° and the other end at 60°.
The main body frame itself has a graduated scale called the main scale.
The circular plate can rotate freely on the main body.
There is a slow-motion device which helps to control the rotation of the circular plate
on the main body.
2. This Adjustment Blade can be rotated along with the circular plate on the main body.
3. This means the vernier scale on the circular plate rotates over the main scale, which is
graduated on the main body, as shown below.
4. The Vernier scale has 12 divisions on each side of the centre zero. (It means there are 24
divisions on the vernier scale)
5. The 12 divisions are denoted as 60 min on the vernier scale (Like this 15, 30, 45, 60). That
means 12 division = 60 minutes
7. On the Main scale, the same portion is represented as 23° (12 divisions on vernier scale =
23° on the main scale )
8. One division on the Vernier scale = 1.91666° = 1° 55′ (one degree 55 minutes).
9. The working principle is similar to that of the vernier calliper: the zero on the vernier
scale moves along the main scale.
10. While taking a measurement, The Zero line on the vernier scale shows the reading on the
main scale, called main scale reading.
11. At some point a division on the vernier scale will coincide with a division on the main
scale; this is noted as the vernier scale reading.
12. With these values along with the least count of the Vernier bevel protractor, we can
calculate the Reading.
The total reading =the main scale reading + the number of the division at which it
exactly coincides with any division on the main scale × least count of the vernier scale.
Vernier scale reading (the number of the division at which it exactly coincides with any
division on the main scale) = 3rd division
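The least count and a complete sample reading can be worked out as follows (the 41° main-scale reading with the 3rd vernier division coinciding is an assumed example):

```python
# Least count of the vernier bevel protractor, from the figures quoted above:
# 12 vernier divisions span 23 degrees on the main scale.
MIN_PER_DEG = 60
vernier_div_min = 23 * MIN_PER_DEG // 12           # 115 min = 1 deg 55 min
least_count_min = 2 * MIN_PER_DEG - vernier_div_min  # 2 MSD - 1 VSD
print(least_count_min)  # 5 (minutes)

# Sample reading (assumed): main scale shows 41 deg, 3rd vernier division
# coincides with a main-scale division.
main_scale_deg = 41
coinciding_division = 3
total_min = main_scale_deg * MIN_PER_DEG + coinciding_division * least_count_min
print(total_min // MIN_PER_DEG, "deg", total_min % MIN_PER_DEG, "min")  # 41 deg 15 min
```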
The Optical Bevel Protractor is a development of the Vernier Bevel Protractor which can
measure angles to an accuracy of 2 minutes.
1. Main body
2. Base plate stock
3. Adjustable blade
4. Circular Plate with graduated Vernier scale divisions
5. Acute angle attachment
6. Optical Magnifying device (Eyepiece)
The base plate(Stock) consisting of the Working edge will be mounted on the Main
body.
And the Acute angle attachment is also mounted on the main body.
This acute angle attachment can be readily attachable/detachable with the Locking Nut.
A circular plate having a vernier scale in it, also mounted on the Main body frame.
(The divisions on the vernier scale of the optical bevel protractor are much finer than
those on the vernier scale of the vernier bevel protractor.)
There is an Optical Magnifying system(EyePiece) which helps to read the reading on
the vernier scale in a more efficient way.
This circular plate is carrying an adjustable blade which can travel along its length and
locked at any position with the help of the blade locking nut.
One end of the adjustable blade is bevelled at 45° and the other end at 60°.
The main body frame itself has a graduated scale called the main scale.
The circular plate can rotate freely on the main body.
There is a slow-motion device which helps to control the rotation of the circular plate
on the main body.
SINE BAR:
It is used to measure the angle of the given specimen by the usage of Slip gauges. The sine
bar is made up of high carbon steel.
The name Sine bar has come into the picture because the operation of the sine bar is
completely based on Trigonometry.
i.e. Sin θ = Opposite/hypotenuse
Sin(θ) = h/L
where 'h' is the height of the slip gauges placed below the roller and 'L' is the length of Sine
bar i.e. L=200 mm or 300 mm.
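The slip-gauge height h for a given angle follows directly from sin(θ) = h/L. The 200 mm centre distance is the value given above; the 30° setting is an assumed example:

```python
import math

# Sine-bar setting: height of the slip-gauge stack needed to tilt a 200 mm
# sine bar to a chosen angle theta, from sin(theta) = h / L.
L_mm = 200.0      # centre distance between the rollers
theta_deg = 30.0  # angle to be set (assumed example)

h_mm = L_mm * math.sin(math.radians(theta_deg))
print(round(h_mm, 3))  # 100.0 -> build a 100 mm slip-gauge stack
```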
One roller of the sine bar rests on the datum surface, and the slip-gauge stack whose height is to be built is placed below the other roller.
To find the value of h, the value of θ must be known first, and this can be obtained using a bevel protractor.
A bevel protractor measures the angle of the specimen directly, whereas a sine bar measures it indirectly through slip gauges.
First, the angle of the specimen is measured with the bevel protractor, and from this the required value of h is calculated.
Note: It is not possible to measure the angle of a specimen directly with a sine bar. Instead, a reference value of θ is taken, the corresponding h is calculated, and a stack of that height is built from slip gauges.
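As a quick numerical sketch of the relation h = L sin θ (the 200 mm bar length and the 30° angle below are illustrative values, not taken from these notes):

```python
import math

# Height of the slip-gauge stack needed under one roller of a sine bar
# to tilt it to a given angle: h = L * sin(theta).
def slip_gauge_height(angle_deg, bar_length_mm=200.0):
    return bar_length_mm * math.sin(math.radians(angle_deg))

# For a 200 mm sine bar set to 30 degrees, h = 200 * sin(30) ~= 100 mm
print(slip_gauge_height(30.0))
```

The stack of this height would then be built from slip gauges by trial-and-error combination, as described below.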
A sine bar is specified by its length, i.e. the distance between the centres of its two rollers; for a 200 mm sine bar this distance is 200 mm.
The top and bottom surfaces of the sine bar must be parallel to the line joining the centres of the rollers.
Holes are provided in the body to reduce the weight of the sine bar and make it easier to handle.
A sine bar alone cannot measure the angles of a component; it is always used in association with slip gauges or a height gauge.
Any unknown projections present on the component will introduce errors into the measured angle.
There is no scientific approach for building the slip-gauge stack; it is built on a trial-and-error basis, which is time-consuming.
When measuring an angle with a sine bar, the length of the sine bar should be greater than or equal to the length of the component being inspected.
sin θ = h/L
If the component being inspected is longer than any available sine bar, the sine bar is used in association with a height gauge:
sin θ = (h2 − h1)/L
where h1 and h2 are the height-gauge readings over the two rollers.
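A minimal sketch of this height-gauge variant, assuming hypothetical readings h1 and h2 (the numbers below are example values, not from the notes):

```python
import math

# When the component is longer than the sine bar, a height gauge reads
# the heights h1 and h2 over the two rollers: sin(theta) = (h2 - h1) / L.
def angle_from_heights(h1_mm, h2_mm, bar_length_mm=200.0):
    return math.degrees(math.asin((h2_mm - h1_mm) / bar_length_mm))

# Hypothetical readings: h1 = 20 mm, h2 = 120 mm on a 200 mm sine bar
print(angle_from_heights(20.0, 120.0))  # ~30 degrees
```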
ANGLE GAUGES:
In this method, an autocollimator is used in conjunction with angle gauges. The angle of the given component is compared with a combination of angle gauges. Angle gauges are wedge-shaped blocks that serve as standards for angle measurement; they reduce setup time and minimize error. A standard set consists of 13 pieces in three series: a degree series of 1°, 3°, 9°, 27° and 41°; a minute series of 1′, 3′, 9′ and 27′; and a second series of 3″, 6″, 18″ and 30″. A large number of angles can be built up by adding or subtracting these gauges from one another.
Note:
When two slip gauges are given, only one dimension can be built, whereas two angle gauges can build two different angles (one by addition and one by subtraction).
Angle gauges are also used as reference gauges or master pieces for calibrating angular measuring equipment.
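The add/subtract combinations above can be illustrated with a small brute-force search over the degree-series gauges (the search routine and the 37° target are illustrative examples, not part of the notes):

```python
from itertools import product

# Degree-series angle gauges: one piece each of 1, 3, 9, 27 and 41 degrees.
DEGREE_GAUGES = [1, 3, 9, 27, 41]

def build_angle(target_deg):
    """Search for a way to build target_deg by adding (+1), subtracting (-1)
    or omitting (0) each degree-series gauge."""
    for signs in product((-1, 0, 1), repeat=len(DEGREE_GAUGES)):
        if sum(s * g for s, g in zip(signs, DEGREE_GAUGES)) == target_deg:
            return [(s, g) for s, g in zip(signs, DEGREE_GAUGES) if s != 0]
    return None

# e.g. 37 degrees can be built as 41 - 3 - 1 (or as 1 + 9 + 27)
print(build_angle(37))
```

The same idea extends to the minute and second series, which is why 13 pieces cover such a large range of angles.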
MEASUREMENT OF INCLINES
SPIRIT LEVEL:
In general, a spirit level is not used for measuring angles themselves, but only for measuring small angular deviations or angular errors.
The body of the spirit level is made of a very low-density material so that its weight is as small as possible and does not disturb the surface whose deviation is being measured.
Spirit is used as the liquid because of its low density and viscosity and its negligible wettability with the glass surface.
Note:
The length of one scale division must be greater than the diameter of the air bubble, so that only one division lies within the bubble at a time.
The vial is curved, and the air bubble always settles at the topmost point of the curvature; hence, when the spirit level is placed on a perfectly flat surface, the bubble rests at the zero position.
CLINOMETER:
Individually, neither a protractor nor a spirit level can be used to measure the angle of a component.
By combining a protractor with a spirit level, however, the angle of the component can be measured; the combined instrument is called a clinometer.
It is a simple piece of angular measuring equipment used for measuring the approximate angle of a component.
It can measure angles to an accuracy of about 0.5°, and any unknown projections present on the component will again introduce errors into the measured angle.
Clinometer principle :
A clinometer is a special case of application of spirit level for measuring, in the vertical
plane, the incline of a surface in relation to the basic horizontal plane, over an extended
range. The main functional element of a clinometer is the sensitive vial mounted on a
rotatable disc, which carries a graduated ring with its horizontal axis supported in the housing
of the instrument. The bubble of the vial is in its centre position, when the clinometer is
placed on a horizontal surface and the scale of the rotatable disc is at zero position. If the
clinometer is placed on an inclined surface, the bubble deviates from the centre. It can be
brought to the centre by rotating the disc. The rotation of the disc can be read on the scale. It
represents the deviation of the surface over which the clinometer is placed from the
horizontal plane. Figure shows a diagram of a clinometer.
A number of clinometers of various designs are available commercially. They differ in their sensitivity and measuring accuracy. The sensitivity and measuring accuracy of modern clinometers compare with those of any other high-precision measuring instrument.
For shop uses, clinometers with 10′ graduations are available.
Clinometer Applications
Two categories of measurement are possible with a clinometer. Care must be taken to keep the axis of the rotatable disc parallel to the hinge line of the incline.
(i) Measurement of an inclined surface with respect to a horizontal plane. This is done by placing the instrument on the surface to be measured and rotating the graduated disc until the bubble shows zero inclination. The scale value of the disc position is then equal to the angle of incline.
(ii) Measurement of the relative position of two mutually inclined surfaces. This is done by
placing the clinometer on each of the surfaces in turn and taking readings with respect to the horizontal. The difference between the two readings gives the angular value of the relative incline.
ANGLE COMPARATORS
ANGLE DEKKOR:
It is not a direct angle-measuring instrument but an angle comparator: it compares the angle of the given component with a standard set of master pieces, which may be angle gauges.
Corresponding to the approximate angle θ, the angle-gauge combination is built and placed below the optical head, and the optical head is switched on. Because of the incident and reflected rays of light, the two scales appear displaced from each other, as shown in the figure.
Using the knobs, the scales are adjusted so that each crosses the centre of the other.
The knobs are then fixed, the angle gauges are removed from below the optical head, and the actual component whose angle is to be measured is placed there. Because of the difference between the actual angle of the component and the approximate angle, the reflected ray takes a different direction.
The auto-dekkor is used in combination with angle gauges: a reading is first taken with the angle gauges in place, the reflected image of the angle gauge being obtained in the field of view of the eyepiece, as shown in the figure.
The angle gauges are then replaced by the workpiece to be measured and the reading is taken again. If the angle of the angle-gauge combination differs from that of the workpiece, two different readings are seen in the eyepiece, and the error is read in minutes of arc on the scale, as shown in the figure. The auto-dekkor is not a precision instrument compared with the autocollimator, but it can be used for general angular measurement.
AUTOCOLLIMATOR
Autocollimators are optical instruments that measure angular displacements with high
sensitivity. They are used to align optical components and measure optical and mechanical
deflections.
Autocollimation Principle :
The two main principles used in an autocollimator are
(a) the projection and refraction of a parallel beam of light by a lens, and
(b) the change in direction of a beam reflected from a plane reflecting surface as the angle of incidence changes.
To understand this, let us imagine a converging lens with a point source of light O at its principal focus, as shown in Figure a. When a beam of light strikes a flat reflecting surface, a
part of the beam is absorbed and the other part is reflected back. If the angle of incidence is
zero, i.e. incident rays fall perpendicular to the reflecting surface, the reflected rays retrace
the original path. When the reflecting plane is tilted at a certain angle, the total angle through
which the light is deflected is twice the angle through which the mirror is tilted. Alternatively, if the incident rays are not at right angles to the reflecting surface, they can be brought to the focal plane of the light source by tilting the reflecting plane through an angle of half the angle of reflection, as shown in Figure b.
Now, from the diagram, OO’ = 2Θ × f = x, where f is the focal length of the lens.
Thus, by measuring the linear distance x, the inclination of the reflecting surface Θ can be
determined. The position of the final image does not depend upon the distance of the reflector from the lens. If, however, the reflector is moved too far away, the reflected rays will completely miss the lens and no image will be formed.
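The relation x = 2Θ × f can be sketched numerically as follows (the 500 mm focal length and 0.01 mm displacement are assumed example values, not from the notes):

```python
import math

# Autocollimator relation: image displacement x = 2 * theta * f,
# with theta in radians and f the focal length of the collimating lens.
def tilt_from_displacement(x_mm, focal_length_mm):
    """Return the reflector tilt, in seconds of arc, for a measured shift x."""
    theta_rad = x_mm / (2.0 * focal_length_mm)
    return math.degrees(theta_rad) * 3600.0

# A 0.01 mm image shift with a 500 mm lens corresponds to about 2 arc-seconds.
print(tilt_from_displacement(0.01, 500.0))
```

This also shows why a longer focal length gives greater sensitivity: for the same tilt, x grows in proportion to f.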
Working Of Autocollimator:
In actual practice, the work surface whose inclination is to be obtained forms the reflecting
surface and the displacement x is measured by a precision microscope that is calibrated
directly to the values of inclination Θ.
The optical system of an autocollimator is shown in the figure. The target wires are illuminated by the electric bulb and act as the source of light, since it is not convenient to view the reflected image of a point and then measure the displacement x precisely. The image of the
illuminated wire after being reflected from the surface being measured is formed in the same
plane as the wire itself. The eyepiece system containing the micrometer microscope
mechanism has a pair of setting lines that may be used to measure the displacement of the
image by setting to the original cross lines and then moving over to those of the image.
Generally, calibration is supplied with the instrument. Thus, the angle of inclination of the
reflecting surface per division of the micrometer scale can be directly read.
Autocollimators are quite accurate, can read down to 0.1 second of arc, and may be used at distances up to 30 metres.
TYPES OF AUTOCOLLIMATORS
1) Visual Autocollimators
Visual autocollimators measure the angle of optically flat (1/4 wave or better), reflective
surfaces in arc seconds by viewing a graduated reticle through an eyepiece. The longer the
focal length of the visual autocollimator, the greater the angular resolution and the smaller the
field of view
2) Digital Autocollimators
Digital autocollimators are PC-based instruments designed to operate in the lab as well as in a machine-shop environment.
They use an electronic photodetector to detect the reflected beam, and no external controller is required.
Advantages: (1) high precision; (2) real-time measurements; (3) user-friendly interface; (4) data reports can be created and transferred to other programs.
The focal length determines the basic sensitivity and angular measuring range of the
instrument. A longer focal length gives a greater measuring sensitivity and measurement
accuracy (due to larger linear displacement for a given reflector tilt). But as the focal length
increases, the measuring range decreases proportionally. A longer focal length also increases the mechanical length of the tube.
A geometrical beam splitter results in smaller image angles but greater image brightness.
These are used mainly with small targets and due to their internal layout cannot be used for
measurement of corner cubes. A physical beam splitter is recommended in most cases due to
the larger measuring range.
When the distance between the autocollimator and target mirror remains fixed, extremely
close readings can be taken and repeatability is excellent. For variable focal length, an
objective tube with focus adjustment is used.
Application Of Autocollimator :
Autocollimators are used by the optical industry and mechanical engineers in a variety of
applications. Their specific functions include precision alignment, the detection of angular
movement, the verification of angle standards, and angular monitoring over long periods.
Testing Application
Measurement Applications
In addition to testing applications, autocollimators can be used to measure the
Advantages of Autocollimators :