Measurements

Measurement is the act, or the result, of a quantitative comparison between a predetermined standard and an unknown magnitude.
Range
• It represents the highest possible value that can be measured by an
instrument.
Scale sensitivity
• It is defined as the ratio of a change in scale reading to the corresponding change in the quantity being measured. It denotes the smallest change in the measured variable to which an instrument responds.
True or actual value
• It is the actual magnitude of the signal input to a measuring system, which can only be approached and never exactly evaluated.

Accuracy
• It is defined as the closeness with which the reading approaches an accepted standard value or true
value.

Precision
• It is the degree of reproducibility among several independent measurements of the same true value
under specified conditions. It is usually expressed in terms of deviation in measurement.

Repeatability
• It is defined as the closeness of agreement among a number of consecutive measurements of the output for the same value of input, under the same operating conditions. It may be specified in terms of units for a given period of time.
Reliability
• It is the ability of a system to perform and maintain its function in routine circumstances. It also denotes the consistency of a set of measurements or of a measuring instrument, and is often used to describe a test.

Systematic Errors
• A constant, uniform deviation in the operation of an instrument is known as a systematic error. Instrumental errors, environmental errors and observational errors are systematic errors.

Random Errors
• Some errors remain even after systematic and instrument errors have been reduced or at least accounted for. The causes of such errors are unknown and hence the errors are called random errors.

Calibration
• Calibration is the process of determining and adjusting an instrument's accuracy to make sure it is within the manufacturer's specifications.
GENERAL CONCEPT

Introduction to Metrology
• The word metrology is derived from two Greek words: metro, which means measurement, and logy, which means science. Metrology is the science of precision measurement. To the engineer, it is the science of measurement of lengths and angles and all related quantities, such as width, depth, diameter and straightness, with high accuracy.
• Metrology demands pure knowledge of certain basic mathematical and
physical principles. The development of the industry largely depends on the
engineering metrology. Metrology is concerned with the establishment,
reproduction and conservation and transfer of units of measurements and
their standards. Irrespective of the branch of engineering, all engineers
should know about various instruments and techniques.
Introduction to Measurement
• Measurement is defined as the process of numerical evaluation of a
dimension or the process of comparison with standard measuring
instruments. The elements of measuring system include the
instrumentation, calibration standards, environmental influence,
human operator limitations and features of the work-piece. The basic
aim of measurement in industries is to check whether a component
has been manufactured to the requirement of a specification or not.
Types of Metrology
• Legal Metrology - 'Legal metrology' is that part of metrology which
treats units of measurements, methods of measurements and the
measuring instruments, in relation to the technical and legal
requirements. The activities of the service of 'Legal Metrology' are:
(i) Control of measuring instruments
(ii) Testing of prototypes/models of measuring instruments;
(iii) Examination of a measuring instrument to verify its
conformity to the statutory requirements etc.
Dynamic Metrology
'Dynamic metrology' is the technique of measuring small variations of
a continuous nature. The technique has proved very valuable, and a
record of continuous measurement, over a surface, for instance, has
obvious advantages over individual measurements of an isolated
character.
Deterministic metrology
Deterministic metrology is a new philosophy in which part
measurement is replaced by process measurement. The new
techniques such as 3D error compensation by CNC (Computer
Numerical Control) systems and expert systems are applied, leading to
fully adaptive control. This technology is used for very high precision
manufacturing machinery and control systems to achieve micro
technology and nanotechnology accuracies.
OBJECTIVES OF METROLOGY
Although the basic objective of a measurement is to provide the required accuracy at minimum cost, metrology has further objectives in a modern engineering plant, which are:
1. Complete evaluation of newly developed products.
2. Determination of the process capabilities and ensure that these are
better than the relevant component tolerances.
3. Determination of the measuring instrument capabilities and ensure
that they are quite sufficient for their respective measurements.
4. Minimizing the cost of inspection by effective and efficient use of
available facilities.
5. Reducing the cost of rejects and rework through the application of Statistical Quality Control Techniques.
6. To standardize the measuring methods.
7. To maintain the accuracies of measurement.
8. To prepare designs for all gauges and special inspection fixtures.
METHODS OF MEASUREMENTS
These are the methods of comparison used in measurement process.
In precision measurement various methods of measurement are
adopted depending upon the accuracy required and the amount of
permissible error. The methods of measurement can be classified as:
1. Direct method
2. Indirect method
3. Absolute or fundamental method
4. Comparative method
5. Transposition method
6. Coincidence method
7. Deflection method
8. Complementary method
9. Contact method
10. Contactless method
1. Direct method of measurement:
This is a simple method of measurement, in which the value of the
quantity to be measured is obtained directly without any calculations.
For example, measurements by using scales, Vernier calipers, micrometers, bevel protractors etc. This method is most widely used in production. However, it is not very accurate, because it depends on the limitations of human judgement.
2. Indirect method of measurement:
• In indirect method the value of quantity to be measured is obtained
by measuring other quantities which are functionally related to the
required value. E.g. Angle measurement by sine bar, measurement of
screw pitch diameter by three wire method etc.

3. Absolute or Fundamental method:


• It is based on the measurement of the base quantities used to define
the quantity. For example, measuring a quantity directly in
accordance with the definition of that quantity, or measuring a
quantity indirectly by direct measurement of the quantities linked
with the definition of the quantity to be measured.
4. Comparative method:
• In this method the value of the quantity to be measured is compared with
known value of the same quantity or other quantity practically related to
it. So, in this method only the deviations from a master gauge are
determined, e.g., dial indicators, or other comparators.

5. Transposition method:
• It is a method of measurement by direct comparison, in which the value of the quantity measured is first balanced by an initial known value A of the same quantity; then the value of the quantity measured is put in place of this known value and is balanced again by another known value B. If the position of the element indicating equilibrium is the same in both cases, the value of the quantity to be measured is √(AB). For example, the determination of a mass by means of a balance and known weights, using the Gauss double weighing method.
6. Coincidence method:
• It is a differential method of measurement, in which a very small difference between the value of the quantity to be measured and the reference is determined by the observation of the coincidence of certain lines or signals. For example, measurement by vernier calliper and micrometer.
7. Deflection method:
• In this method the value of the quantity to be measured is directly
indicated by a deflection of a pointer on a calibrated scale.
8. Complementary method:
• In this method the value of the quantity to be measured is combined with
a known value of the same quantity. The combination is so adjusted that
the sum of these two values is equal to a predetermined comparison value.
For example, determination of the volume of a solid by liquid
displacement.
9. Method of measurement by substitution:
• It is a method of direct comparison in which the value of a quantity
to be measured is replaced by a known value of the same quantity, so
selected that the effects produced in the indicating device by these
two values are the same.
10. Method of null measurement:
• It is a method of differential measurement, in which the difference between the value of the quantity to be measured and the known value of the same quantity with which it is compared is brought to zero.
GENERALIZED MEASUREMENT SYSTEM

• A measuring system exists to provide information about the physical


value of some variable being measured. In simple cases, the system
can consist of only a single unit that gives an output reading or signal
according to the magnitude of the unknown variable applied to it.
However, in more complex measurement situations, a measuring
system consists of several separate elements as shown in Figure below
Standards
• The term standard is used to denote universally accepted specifications for devices, components or processes which ensure conformity and interchangeability throughout a particular industry. A standard provides a reference for assigning a numerical value to a measured quantity. Each basic measurable quantity has an ultimate standard associated with it. Working standards are those used in conjunction with the various measurement-making instruments.
The following is the generalization of echelons of standards in the
national measurement system.
1. Calibration standards 2. Metrology standards
3. National standards

1. Calibration standards: Working standards of industrial or governmental laboratories.
2. Metrology standards: Reference standards of industrial or governmental laboratories.
3. National standards:
These include the prototype and natural-phenomenon standards of SI (Système International), the worldwide system of weights and measures standards. The application of precise measurement has increased so much that a single national laboratory cannot directly perform all the calibrations and standardization required by a large country with a high level of technical development. This has led to the establishment of a considerable number of standardizing laboratories in industry and in various other areas. A standard provides a reference or datum for assigning a numerical value to a measured quantity.
Classification of Standards
• To maintain accuracy and interchangeability, it is necessary that standards be traceable to a single source, usually the National Standards of the country, which are further linked to International Standards. The accuracy of National Standards is transferred to working standards through a chain of intermediate standards, in the manner given below.
• National Standards
• National Reference Standards
• Working Standards
• Plant Laboratory Reference Standards
• Plant Laboratory Working Standards
• Shop Floor Standards

Evidently, there is a degradation of accuracy in passing from the defining standards to the shop floor standards. The accuracy of a particular standard depends on a combination of the number of times it has been compared with a standard in a higher echelon, the frequency of such comparisons, the care with which this was done, and the stability of the particular standard itself.
Accuracy of Measurements
The purpose of measurement is to determine the true dimensions of a
part. But no measurement can be made absolutely accurate. There is
always some error. The amount of error depends upon the following
factors:
• The accuracy and design of the measuring instrument
• The skill of the operator
• The method adopted for measurement
• Temperature variations
• Elastic deformation of the part or instrument etc.
• Thus, the true dimension of the part cannot be determined, but can only be approximated. The agreement of the measured value with the true value of the measured quantity is called accuracy. If the measurement of a dimension of a part approximates very closely the true value of that dimension, it is said to be accurate. Thus the term accuracy denotes the closeness of the measured value to the true value. The difference between the measured value and the true value is the error of measurement. The lesser the error, the greater the accuracy.
Precision
• The terms precision and accuracy are used in connection with the
performance of the instrument. Precision is the repeatability of the
measuring process. It refers to the group of measurements for the
same characteristics taken under identical conditions. It indicates to
what extent the identically performed measurements agree with each
other. If the instrument is not precise it will give different (widely
varying) results for the same dimension when measured again and
again. The set of observations will scatter about the mean. The scatter
of these measurements is designated as σ, the standard deviation. It
is used as an index of precision. The smaller the scatter, and hence the lower the value of σ, the more precise the instrument.
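As a minimal sketch of these two indices, the snippet below treats the standard deviation of repeated readings as the index of precision and the offset of their mean from the true value as the measure of (in)accuracy; the readings and the true value are made-up illustrative numbers.

```python
import statistics

# Hypothetical repeated readings of the same 25.000 mm dimension.
true_value = 25.000                                   # mm, assumed reference
readings = [25.012, 25.011, 25.013, 25.010, 25.012]   # mm, made-up repeats

mean = statistics.mean(readings)
sigma = statistics.stdev(readings)   # sample standard deviation = precision index

print(f"mean reading  = {mean:.4f} mm")
print(f"precision (s) = {sigma:.4f} mm  (lower = more precise)")
print(f"error of mean = {mean - true_value:+.4f} mm (closer to 0 = more accurate)")
```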
Accuracy
• Accuracy is the degree to which the measured value of the quality
characteristic agrees with the true value. The difference between the
true value and the measured value is known as error of
measurement. It is practically difficult to measure exactly the true
value and therefore a set of observations is made whose mean value
is taken as the true value of the quality measured.
Distinction between Precision and Accuracy
• Accuracy is very often confused with precision, though the two are quite different.
The distinction between the precision and accuracy will become clear
by the following example. Several measurements are made on a
component by different types of instruments (A, B and C respectively)
and the results are plotted. In any set of measurements, the individual
measurements are scattered about the mean, and the precision
signifies how well the various measurements performed by same
instrument on the same quality characteristic agree with each other.
• The difference between the mean of a set of readings on the same quality characteristic and the true value is called the error. The less the error, the more accurate the instrument. The figure shows that instrument A is precise, since the results of a number of measurements are close to the average value; however, there is a large difference (error) between the true value and the average value, hence it is not accurate. The readings taken by the second instrument are widely scattered about the average value, hence it is not precise; but it is accurate, as there is only a small difference between the average value and the true value.
Factors affecting the accuracy of the Measuring System
The basic components of an accuracy evaluation are the five elements
of a measuring system such as:
• Factors affecting the calibration standards.
• Factors affecting the work piece.
• Factors affecting the inherent characteristics of the instrument.
• Factors affecting the person who carries out the measurements.
• Factors affecting the environment.
1. Factors affecting the standard: It may be affected by:
- Coefficient of thermal expansion
- Calibration interval
- Stability with time
- Elastic properties
- Geometric compatibility
2. Factors affecting the work piece: These are:
- Cleanliness
- Surface finish, waviness, scratches, surface defects etc.
- Hidden geometry
- Elastic properties
- Adequate datum on the work piece
- Arrangement of supporting the work piece
- Thermal equalization etc.
3. Factors affecting the inherent characteristics of the instrument:
- Adequate amplification for the accuracy objective
- Scale error
- Effect of friction, backlash, hysteresis, zero drift error
- Deformation in handling or use, when heavy work pieces are measured
- Calibration errors
- Mechanical parts (slides, guide ways or moving elements)
- Repeatability and readability
- Contact geometry for both work piece and standard.
4. Factors affecting the person:
- Training, skill
- Sense of precision appreciation
- Ability to select measuring instruments and standards
- Sensible appreciation of measuring cost
- Attitude towards personal accuracy achievements
- Planning measurement techniques for minimum cost, consistent with precision requirements etc.
5. Factors affecting the environment:
- Temperature, humidity etc.
- Clean surroundings and minimum vibration enhance precision
- Adequate illumination
- Temperature equalization between standard, work piece and instrument
- Thermal expansion effects due to heat radiation from lights, heating elements, sunlight and people
- Manual handling may also introduce thermal expansion.
• Higher accuracy can be achieved only if all the sources of error due to the above five elements in the measuring system are analyzed and steps taken to eliminate them. These five basic metrology elements can be remembered by the acronym SWIPE, where:
S – STANDARD
W – WORKPIECE
I – INSTRUMENT
P – PERSON
E – ENVIRONMENT
SENSITIVITY
Sensitivity may be defined as the rate of displacement of the indicating
device of an instrument, with respect to the measured quantity. In
other words, sensitivity of an instrument is the ratio of the scale
spacing to the scale division value. For example, if on a dial indicator,
the scale spacing is 1.0 mm and the scale division value is 0.01 mm,
then sensitivity is 100.
It is also called the amplification factor or gearing ratio. If we consider sensitivity over the full range of instrument reading with respect to the measured quantities, as shown in the figure, then the sensitivity at any value of y is dx/dy, where dx and dy are increments of x and y taken over the full instrument scale; the sensitivity is the slope of the curve at any value of y.
The sensitivity may be constant or variable along the scale: in the first case we get linear transmission, and in the second, non-linear transmission. Sensitivity refers to the ability of a measuring device to detect small differences in the quantity being measured. Highly sensitive instruments may drift due to thermal or other effects, and their indications may be less repeatable or less precise than those of an instrument of lower sensitivity.
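A minimal sketch of the dial-indicator example above, computing sensitivity as the ratio of scale spacing to scale division value (both figures from the text):

```python
# Dial indicator example: sensitivity = scale spacing / scale division value.
scale_spacing = 1.0          # mm between adjacent graduation lines
scale_division_value = 0.01  # mm of measured quantity per division

sensitivity = scale_spacing / scale_division_value
print(f"sensitivity (amplification factor) = {sensitivity:.0f}")  # -> 100
```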
Readability
Readability refers to the ease with which the readings of a measuring instrument can be read. It is the susceptibility of a measuring device to have its indications converted into a meaningful number. Fine and widely spaced graduation lines ordinarily improve readability. If the graduation lines are very finely spaced, the scale will be more readable using a microscope, but with the naked eye the readability will be poor. To make micrometers more readable, they are provided with a vernier scale; readability can also be improved by using magnifying devices.
Calibration
The calibration of any measuring instrument is necessary to measure the
quantity in terms of standard unit. It is the process of framing the scale of the
instrument by applying some standardized signals. Calibration is a pre-
measurement process, generally carried out by manufacturers. It is carried
out by making adjustments such that the read out device produces zero
output for zero measured input. Similarly, it should display an output
equivalent to the known measured input near the full scale input value. The
accuracy of the instrument depends upon the calibration. Constant use of
instruments affects their accuracy. If the accuracy is to be maintained, the
instruments must be checked and recalibrated if necessary. The schedule of
such calibration depends upon the severity of use, environmental conditions,
accuracy of measurement required etc. As far as possible, calibration should be performed under environmental conditions that are very close to the conditions under which the actual measurements are carried out. If the output of
a measuring system is linear and repeatable, it can be easily calibrated.
Repeatability
It is the ability of the measuring instrument to repeat the same results for measurements of the same quantity, when the measurements are carried out by the same observer, with the same instrument, under the same conditions, without any change in location, without change in the method of measurement, and within short intervals of time. It may be expressed quantitatively in terms of the dispersion of the results.
Reproducibility
Reproducibility is the consistency of the pattern of variation in measurement, i.e. the closeness of the agreement between the results of measurements of the same quantity, when the individual measurements are carried out:
- by different observers
- by different methods
- using different instruments
- under different conditions, locations, times etc.
STATIC AND DYNAMIC RESPONSE
The static characteristics of measuring instruments are concerned only
with the steady-state reading that the instrument settles down to, such
as accuracy of the reading.
The dynamic characteristics of a measuring instrument describe its
behavior between the time a measured quantity changes value and the
time when the instrument output attains a steady value in response. As
with static characteristics, any values for dynamic characteristics
quoted in instrument data sheets only apply when the instrument is
used under specified environmental conditions. Outside these
calibration conditions, some variation in the dynamic parameters can
be expected.
In any linear, time-invariant measuring system, the following general relation can be written between input and output for time t > 0:

an(d^n qo/dt^n) + ... + a1(dqo/dt) + a0 qo = bm(d^m qi/dt^m) + ... + b1(dqi/dt) + b0 qi ... (2)

where qi is the measured quantity, qo is the output reading, and a0 ... an, b0 ... bm are constants. If we limit consideration to step changes in the measured quantity only, Equation (2) reduces to:

an(d^n qo/dt^n) + ... + a1(dqo/dt) + a0 qo = b0 qi
Zero-Order Instrument
If all the coefficients a1 ... an other than a0 in Equation (2) are assumed zero, then:

a0 qo = b0 qi   or   qo = (b0/a0) qi = K qi ... (3)

where K is a constant known as the instrument sensitivity, as defined earlier. Any instrument that behaves according to Equation (3) is said to be of the zero-order type. Following a step change in the measured quantity at time t, the instrument output moves immediately to a new value at the same time instant t, as shown in the figure. A potentiometer, which measures motion, is a good example of such an instrument: the output voltage changes instantaneously as the slider is displaced along the potentiometer track.
First-Order Instrument
• If all the coefficients a2 ... an except for a0 and a1 are assumed zero in Equation (2), then:

a1(dqo/dt) + a0 qo = b0 qi ... (4)

Any instrument that behaves according to Equation (4) is known as a first-order instrument. If d/dt is replaced by the D operator in Equation (4), we get:

(a1 D + a0) qo = b0 qi ... (5)

• Defining K = b0/a0 as the static sensitivity and τ = a1/a0 as the time constant of the system, Equation (5) becomes:

qo/qi = K/(1 + τD) ... (6)
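Equation (6) implies the familiar exponential step response qo(t) = K·qi·(1 − e^(−t/τ)). Below is a minimal sketch of that response; the values chosen for K, τ and the step size are illustrative assumptions, not from the text.

```python
import math

# First-order step response: qo(t) = K * qi * (1 - exp(-t/tau)).
K = 1.0      # static sensitivity, K = b0/a0 (assumed value)
tau = 2.0    # time constant in seconds, tau = a1/a0 (assumed value)
qi = 5.0     # magnitude of the step change in the measured quantity

for t in [0.0, 1.0, 2.0, 4.0, 8.0]:
    qo = K * qi * (1.0 - math.exp(-t / tau))
    print(f"t = {t:4.1f} s   qo = {qo:.3f}")
# After one time constant the output reaches ~63% of its final value K*qi.
```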
Second-Order Instrument
If all the coefficients a3 ... an other than a0, a1 and a2 in Equation (2) are assumed zero, then we get:

a2(d^2 qo/dt^2) + a1(dqo/dt) + a0 qo = b0 qi

Applying the D operator and defining K = b0/a0 (static sensitivity), ω = √(a0/a2) (undamped natural frequency) and ξ = a1/(2√(a0·a2)) (damping ratio), this can be written as:

qo/qi = K/(D^2/ω^2 + 2ξD/ω + 1) ... (9)

This is the standard equation for a second-order system, and any instrument whose response can be described by it is known as a second-order instrument. If Equation (9) is solved analytically, the shape of the step response obtained depends on the value of the damping ratio parameter ξ. The output responses of a second-order instrument for various values of ξ following a step change in the value of the measured quantity at time t are shown in the figure. Commercial second-order instruments, of which the accelerometer is a common example, are generally designed to have a damping ratio (ξ) somewhere in the range of 0.6–0.8.
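For the underdamped case (0 < ξ < 1) the analytic step response is qo(t) = K·qi·[1 − e^(−ξωt)·sin(ωd·t + φ)/√(1 − ξ²)], with ωd = ω√(1 − ξ²) and φ = arccos ξ. The sketch below evaluates it for a damping ratio in the 0.6–0.8 range quoted above; all parameter values are illustrative assumptions.

```python
import math

K = 1.0     # static sensitivity (assumed)
wn = 10.0   # undamped natural frequency, rad/s (assumed)
xi = 0.7    # damping ratio, within the 0.6-0.8 accelerometer range
qi = 1.0    # step magnitude (assumed)

wd = wn * math.sqrt(1.0 - xi**2)   # damped natural frequency
phi = math.acos(xi)                # phase offset

for t in [0.0, 0.1, 0.2, 0.4, 0.8, 1.6]:
    qo = K * qi * (1.0 - math.exp(-xi * wn * t)
                   * math.sin(wd * t + phi) / math.sqrt(1.0 - xi**2))
    print(f"t = {t:4.2f} s   qo = {qo:.4f}")
# The output overshoots slightly and settles to K*qi, as in the figure.
```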
ERRORS IN MEASUREMENTS
It is never possible to measure the true value of a dimension; there is always some error. The error in measurement is the difference between the measured value and the true value of the measured dimension:

Error in measurement = Measured value − True value

The error in measurement may be expressed or evaluated either as an absolute error or as a relative error.
Absolute Error
True absolute error:
It is the algebraic difference between the result of measurement and the conventional true value of the quantity measured.
Apparent absolute error:
If a series of measurements is made, the algebraic difference between one of the results of measurement and the arithmetical mean is known as the apparent absolute error.
Relative Error:
It is the quotient of the absolute error and the value of comparison used for the calculation of that absolute error. This value of comparison may be the true value, the conventional true value or the arithmetic mean of a series of measurements. The accuracy of measurement, and hence the error, depends upon many factors, such as:
- Calibration standard
- Work piece
- Instrument
- Person
- Environment etc.
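A minimal sketch of the absolute and relative error definitions above, using made-up numbers:

```python
# Absolute error = measured - true; relative error = absolute error / comparison value.
true_value = 50.000   # mm, conventional true value (made-up)
measured = 50.008     # mm, a single measurement result (made-up)

absolute_error = measured - true_value
relative_error = absolute_error / true_value   # comparison value = true value here

print(f"absolute error = {absolute_error:+.3f} mm")
print(f"relative error = {relative_error:+.4%}")
```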
Types of Errors
1. Systematic Error
These errors include calibration errors, errors due to variation in atmospheric conditions, variation in contact pressure etc. If properly analyzed, these errors can be determined and reduced or even eliminated; hence they are also called controllable errors. All systematic errors except personal error can be controlled in magnitude and sense. These errors result from an irregular procedure that is consistent in action; they are repetitive in nature and of constant and similar form.
2. Random Error
These errors are caused by variation in the position of the setting standard and the work piece, by displacement of the lever joints of instruments, and by backlash and friction. The specific cause, magnitude and sense of these errors cannot be determined from a knowledge of the measuring system or the conditions of measurement. These errors are non-consistent, hence the name random errors.
3. Environmental Error
These errors are caused by the effect of the surrounding temperature, pressure and humidity on the measuring instrument. External factors like nuclear radiation, vibrations and magnetic fields also lead to errors. Temperature plays an important role where high precision is required, e.g. while using slip gauges: due to handling, the slip gauges may acquire human body temperature, whereas the work is at 20°C. A 300 mm length will then be in error by about 5 microns, which is a considerable error. To avoid errors of this kind, all metrology laboratories and standard rooms worldwide are maintained at 20°C.
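The slip gauge example can be checked with the linear expansion relation ΔL = α·L·ΔT. In the sketch below, the expansion coefficient of gauge steel and the temperature rise are assumed values; the text's figure of about 5 µm on 300 mm corresponds to a rise of roughly 1.5°C above the 20°C reference.

```python
# Thermal expansion error: delta_L = alpha * L * delta_T.
alpha = 11.5e-6   # /deg C, typical for gauge-block steel (assumed value)
length = 300.0    # mm, from the example in the text
delta_t = 1.5     # deg C rise above 20 deg C from handling (assumed value)

error_mm = alpha * length * delta_t
print(f"expansion error ~ {error_mm * 1000:.1f} um")  # -> ~5 um, as in the text
```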
Calibration
It is essential to calibrate an instrument in order to maintain its accuracy. When the measuring and sensing systems are different, it is difficult to calibrate the system as a whole; in that case we have to take into account the error-producing properties of each component. Calibration is usually carried out by making adjustments such that, when the instrument has zero measured input, it reads zero, and when it measures some dimension, it reads it to its closest accurate value. It is very important that calibration of any measuring system is performed under environmental conditions that are as close as possible to those under which the actual measurements will be taken.
Calibration is the process of checking the dimension and tolerances of a
gauge, or the accuracy of a measurement instrument by comparing it
to the instrument/gauge that has been certified as a standard of known
accuracy. Calibration of an instrument is done over a period of time,
which is decided depending upon the usage of the instrument or on the
materials of the parts from which it is made.
• The dimensions and the tolerances of the instrument/gauge are checked so that we can decide whether the instrument can be used again after calibration, or whether it has worn or deteriorated beyond the limit value; if so, it is scrapped. If the gauge or the instrument is frequently used, it will require more maintenance and more frequent calibration. Calibration of an instrument is done prior to its use, and afterwards to verify whether it is within the tolerance limits or not. Certification is given by making a comparison between the instrument/gauge and a reference standard whose calibration is traceable to an accepted National Standard.
PART – A
1. Differentiate between sensitivity and range with suitable example.
2. Define system error and correction.
3. Define: Measurand
4. Define: Deterministic Metrology.
5. Define over damped and under damped system.
6. Give any four methods of measurement
7. Give classification of measuring instruments.
8. Define True size
9. Define Actual size
10. What is Hysteresis
11. What is Range of measurement?
12. Define Span
14. What is Resolution?
PART – B
1. Draw the block diagram of generalized measurement system and explain different stages with examples.
2. Distinguish between Repeatability and reproducibility
3. Distinguish between Systematic and random errors
4. Distinguish between Static and dynamic response.
5. Describe the different types of errors in measurements and the causes.
6. List the various measurement methods and explain
7. Briefly discuss on the applications of measuring instruments
8. Briefly discuss on calibration of temperature measuring devices with suitable examples
9. Explain the various systematic and random errors in measurements?
10. What is the need of calibration? Explain the classification of various measuring methods.
11. Describe loading errors and environmental errors.
12. What are the elements of a measuring system? How do they affect accuracy and precision? How are errors due to these elements eliminated?
TECHNICAL TERMS
Comparators
Comparators are one form of linear measurement device, quick and convenient for checking a large number of identical dimensions.
Least count
The least value that can be measured by using any measuring instrument is known as the least count. The least count of a mechanical comparator is 0.01 mm.
Caliper
A caliper is an instrument used for measuring the distance between or over surfaces, or for comparing dimensions of work pieces with such standards as plug gauges, graduated rules etc.
Interferometer
They are optical instruments used for measuring flatness and determining the length of the slip
gauges by direct reference to the wavelength of light.
Sine bar
Sine bars are always used along with slip gauges as a device for the measurement of angles very
precisely.
Auto-collimator
Auto-collimator is an optical instrument used for the measurement of small angular differences,
changes or deflection, plane surface inspection etc.
LINEAR MEASURING INSTRUMENTS
Linear measurement applies to measurement of lengths, diameter,
heights and thickness including external and internal measurements.
Line measuring instruments have a series of accurately spaced lines marked on them, e.g. a scale. The dimensions to be measured are aligned
with the graduations of the scale. Linear measuring instruments are
designed either for line measurements or end measurements. In end
measuring instruments, the measurement is taken between two end
surfaces as in micrometers, slip gauges etc. The instruments used for
linear measurements can be classified as:
1. Direct measuring instruments
2. Indirect measuring instruments
The Direct measuring instruments are of two types:
1. Graduated
2. Non Graduated
The graduated instruments include rules, Vernier calipers, Vernier
height gauges, Vernier depth gauges, micrometers, dial indicators etc.
The non graduated instruments include calipers, trammels, telescopic
gauges, surface gauges, straight edges, wire gauges, screw pitch gauges,
radius gauges, thickness gauges, slip gauges etc. They can also be
classified as
1. Non precision instruments such as steel rule, calipers etc.,
2. Precision measuring instruments, such as Vernier instruments,
micrometers, dial gauges etc.
SCALES
The most common tool for crude measurements is the scale (also
known as rules, or rulers). Although plastic, wood and other materials
are used for common scales, precision scales use tempered steel alloys,
with graduations scribed onto the surface.
These are limited by the human eye. Basically they are used to compare
two dimensions.

The metric scales use decimal divisions, and the imperial scales use
fractional divisions. Some scales only use the fine scale divisions at one
end of the scale. It is advised that the end of the scale not be used for
measurement. This is because as they become worn with use, the end
of the scale will no longer be at a 'zero' position. Instead, the internal divisions of the scale should be used. Parallax error can also be a factor when making measurements with a scale.
CALIPERS
A caliper is an instrument used for measuring the distance between or over surfaces, or for comparing dimensions of work pieces with such standards as plug gauges, graduated rules etc. Calipers may be difficult to use; they require the operator to follow a few basic rules: do not force them, as they bend easily and this will invalidate the measurements made. If measurements are made using calipers for comparison, one operator should make all of the measurements (this keeps the 'feel' factor a minimal error source). These instruments are very useful when dealing with hard-to-reach locations that normal measuring instruments cannot reach. Obviously, the added step in the measurement will significantly decrease the accuracy.
VERNIER CALIPERS
The Vernier instruments generally used in workshop and engineering
metrology have comparatively low accuracy. The line of measurement
of such instruments does not coincide with the line of scale. The
accuracy therefore depends upon the straightness of the beam and the
squareness of the sliding jaw with respect to the beam. To ensure the
squareness, the sliding jaw must be clamped before taking the reading.
The zero error must also be taken into consideration. Instruments are
now available with a measuring range up to one meter with a scale
value of 0.1 or 0.2 mm.
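A minimal sketch of how a vernier reading is composed from the main scale reading, the coinciding vernier division and the least count. The 0.02 mm least count used here is an assumed value for a common workshop caliper, not the 0.1 or 0.2 mm scale values quoted above for metre-range instruments.

```python
# Vernier caliper reading = main scale reading + (coinciding division * least count).
main_scale = 24.0          # mm, main scale graduation just before the vernier zero
vernier_coincidence = 7    # vernier division that lines up with a main scale line
least_count = 0.02         # mm, assumed for a 50-division vernier

reading = main_scale + vernier_coincidence * least_count
print(f"reading = {reading:.2f} mm")   # -> 24.14 mm
```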
Errors in Calipers
The degree of accuracy obtained in measurement depends greatly upon the condition of the jaws of the calipers, and special attention must be paid to them before proceeding with a measurement. The accuracy, natural wear and warping of Vernier caliper jaws should be tested frequently by closing the jaws together tightly and setting them to the 0-0 point of the main and Vernier scales.
MICROMETERS
There are two types in it.
(i) Outside micrometer — To measure external dimensions.
(ii) Inside micrometer — To measure internal dimensions. An outside micrometer is shown. It consists of two scales, the main scale and the thimble scale. While the pitch of the barrel screw is 0.5 mm, the thimble has graduations of 0.01 mm. The least count of this micrometer is therefore 0.01 mm.
The micrometer requires the use of an accurate screw thread as a means of obtaining a measurement. The screw is attached to a spindle and is turned by movement of a thimble or ratchet at the end. The barrel, which is attached to the frame, acts as a nut to engage the screw threads, which are accurately made with a pitch of 0.5 mm. Each revolution of the thimble therefore advances the screw 0.5 mm. On the barrel a datum line is graduated with two sets of division marks.
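A minimal sketch of composing an outside micrometer reading from the 0.5 mm barrel graduations and the 0.01 mm thimble divisions described above; the sample readings are made up.

```python
# Micrometer reading = barrel reading + (thimble division * least count).
main_scale = 5.5        # mm, last graduation visible on the barrel (made-up)
thimble_division = 28   # thimble line aligned with the datum line (made-up)
least_count = 0.01      # mm per thimble division, from the text

reading = main_scale + thimble_division * least_count
print(f"reading = {reading:.2f} mm")   # -> 5.78 mm
```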
SLIP GAUGES
These may be used as reference standards for transferring the
dimension of the unit of length from the primary standard to gauge
blocks of lower accuracy and for the verification and graduation of
measuring apparatus. These are rectangular blocks of high carbon steel, hardened, ground and lapped, having a cross-sectional area of 30 mm × 10 mm. Their opposite faces are flat, parallel and accurately the stated distance apart.
The opposite faces are of such a high degree of surface finish, that
when the blocks are pressed together with a slight twist by hand, they
will wring together. They will remain firmly attached to each other. They
are supplied in sets of from 112 pieces down to 32 pieces. Owing to these properties, slip gauges can be built up by wringing into combinations giving sizes varying in steps of 0.01 mm, with an overall accuracy of the order of 0.00025 mm. Slip gauges are commonly found in three basic forms: rectangular, square with a center hole, and square without a center hole.
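A minimal sketch of building up a dimension by wringing: gauges are chosen to eliminate the last decimal place first. The four sizes used here are hand-picked plausible members of a standard set (e.g. an M-112 set), an assumption for illustration.

```python
# Build 41.125 mm from individual slip gauges, last decimal place first.
target = 41.125   # mm, required dimension

stack = []
remainder = target
for gauge in (1.005, 1.12, 9.0, 30.0):   # chosen by hand for this target
    stack.append(gauge)
    remainder = round(remainder - gauge, 3)

print(f"stack = {stack}, leftover = {remainder}")  # leftover should be 0.0
```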
Classification of Slip Gauges
Slip gauges are classified into various types according to their use as follows:
1) Grade 2
2) Grade 1
3) Grade 0
4) Grade 00
5) Calibration grade.
1) Grade 2: It is a workshop grade of slip gauges, used for setting tools and cutters and for checking dimensions roughly.
2) Grade 1: Grade 1 is used for precise work in tool rooms.
3) Grade 0: It is used as an inspection grade of slip gauges, mainly by the inspection department.
4) Grade 00: Grade 00 is mainly used for high precision work, such as error detection in instruments.
5) Calibration grade:
The actual size of the slip gauge is calibrated and stated on a chart supplied by the manufacturer.
Manufacture of Slip Gauges
The following additional operations are carried out to obtain the
necessary qualities in slip gauges during manufacture.
i. First the slip gauges are brought to approximate size by preliminary operations.
ii. The blocks are hardened and made wear resistant by a special heat treatment process.
iii. To stabilize the blocks for their whole life, a seasoning process is done.
iv. The blocks are brought close to the required dimension by a final grinding process.
v. To get the exact size of the slip gauges, a lapping operation is done.
vi. Comparison is made with grand master sets.
Slip Gauges accessories
The applications of slip gauges can be increased by providing accessories to the slip gauges. The various accessories are:
1. Measuring jaw
2. Scriber and centre point
3. Holder and base
1. Measuring jaw:
It is available in two designs specially made for internal and external
features.
2. Scriber and centre point: It is mainly used for marking purposes.
3. Holder and base: A holder is a holding device used to hold a combination of slip gauges. The base is designed for mounting the holder rigidly on its top surface.
INTERFEROMETERS
They are optical instruments used for measuring flatness and
determining the length of the slip gauges by direct reference to the
wavelength of light. It overcomes the drawbacks of optical flats used in
ordinary daylight. In these instruments the lay of the optical flat can be
controlled and fringes can be oriented as per the requirement. An
arrangement is made to view the fringes directly from the top and
avoid any distortion due to incorrect viewing.
Optical Flat and Calibration
1. Optical flats are flat lenses, made from quartz, having a very accurate surface to transmit light.
transmit light.
2. They are used in interferometers, for testing plane surfaces.
3. The diameter of an optical flat varies from 50 to 250 mm, and the thickness varies from 12 to 25 mm.
4. Optical flats are made in a range of sizes and shapes.
5. The flats are available with a coated surface.
6. The coating is a thin film, usually titanium oxide, applied on the surface to reduce
the light lost by reflection.
7. The coating is so thin that it does not affect the position of the fringe bands. The supporting surface on which the optical flat measurements are made must provide a clean, rigid platform. Optical flats are cylindrical in form with flat working surfaces, and are of two types:
i) Type A
ii) Type B
i) Type A:
It has only one surface flat and is used for testing flatness of precision
measuring surfaces of flats, slip gauges and measuring tables. The
tolerance on flatness should be 0.05 µm for type A.

ii) Type B:
It has both surfaces flat and parallel to each other. They are used for testing the measuring surfaces of micrometers, measuring anvils and similar length-measuring devices for flatness and parallelism. For these instruments, thickness and grade are important. The tolerances on flatness, parallelism and thickness should be 0.05 µm.
Interference Bands by Optical Flat
Optical flats are blocks of glass finished to within 0.05 microns for flatness. When an optical flat is placed on a flat surface which is not perfectly flat, the optical flat will not coincide with it exactly, but will make a small angle θ with the surface, as shown in the figure.
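A minimal sketch of the fringe arithmetic: each successive interference band corresponds to a change of half a wavelength in the air gap. The helium light wavelength is an assumed monochromatic source; the fringe count is made up.

```python
# Height variation across the surface ~ (number of fringes) * (wavelength / 2).
wavelength_um = 0.5876   # um, helium light source (assumed)
fringes = 4              # interference bands counted across the surface (made-up)

flatness_error_um = fringes * wavelength_um / 2.0
print(f"height variation ~ {flatness_error_um:.3f} um")  # -> ~1.175 um
```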
LIMIT GAUGES
A limit gauge is not a measuring gauge; limit gauges are used as inspection gauges. They are used in inspection by the method of attributes, which gives information about whether the product is within the prescribed limits or not. Using limit gauge reports, control charts (P charts and C charts) are drawn to control the variance of the products. This procedure is mostly performed by the quality control department of each and every industry. Limit gauges are mainly used for checking cylindrical holes of identical components in large numbers in mass production.
Purpose of using limit gauges
Components are manufactured as per the specified tolerance limits,
upper limit and lower limit. The dimension of each component should
be within this upper and lower limit. If the dimensions are outside
these limits, the components will be rejected.

If we use ordinary measuring instruments to check these dimensions, the process will consume more time. Moreover, we are often not interested in knowing the amount of error in the dimensions; it is enough to know whether the size of the component is within the prescribed limits or not. For this purpose, we can make use of gauges known as limit gauges.
The common types are as follows:
1) Plug gauges.
2) Ring gauges.
3) Snap gauges
PLUG GAUGES
The ends are hardened and accurately finished by grinding. One end is
the GO end and the other end is NOGO end. Usually, the GO end will be
equal to the lower limit size of the hole and the NOGO end will be
equal to the upper limit size of the hole. If the size of the hole is within
the limits, the GO end should go inside the hole and NOGO end should
not go. If the GO end does not go, the hole is undersize, and if the NOGO end goes, the hole is oversize. Hence, the component is rejected in both cases.
1. Double ended plug gauges
In this type, the GO end and NOGO end are arranged on both the ends
of the plug. This type has the advantage of easy handling.

2. Progressive type of plug gauges


In this type both the GO end and the NOGO end are arranged on the same side of the plug. We can use the plug gauge ends progressively one after the other while checking the hole, which saves time. Generally, the GO end is made longer than the NOGO end in plug gauges.
TAPER PLUG GAUGE
Taper plug gauges are used to check tapered holes. It has two check
lines. One is a GO line and another is a NOGO line. During the checking
of work, NOGO line remains outside the hole and GO line remains
inside the hole. They are various types taper plug gauges are available
as shown in fig. Such as
1) Taper plug gauge — plain
2) Taper plug gauge — tanged.
3) Taper ring gauge plain
4) Taper ring gauge — tanged.
RING GAUGES
Ring gauges are mainly used for checking the diameter of shafts, having a central hole. The hole is accurately finished by grinding and lapping after the hardening process. The periphery of the ring is knurled to give a better grip while handling the gauge. We have to make two ring gauges separately to check the shaft: a GO ring gauge and a NOGO ring gauge. The hole of the GO ring gauge is made to the upper limit size of the shaft, and that of the NOGO to the lower limit. While checking the shaft, the GO ring gauge will pass over the shaft and the NOGO will not pass. To identify the NOGO ring gauge easily, a red mark or a small groove is cut on its periphery.
SNAP GAUGE
Snap gauges are used for checking external dimensions. They are also called gap gauges.
The different types of snap gauges are:
1. Double Ended Snap Gauge
This gauge has two ends in the form of anvils. Here also, the GO anvil is made to the lower limit and the NOGO anvil to the upper limit of the shaft. It is also known as a solid snap gauge.
2. Progressive Snap Gauge
This type of snap gauge is also called a caliper gauge. It is mainly used for checking large diameters up to 100 mm. Both the GO and NOGO anvils are at the same end, with the GO anvil at the front and the NOGO anvil at the rear, so the diameter of the shaft is checked progressively by these two ends. This type of gauge is made of a horseshoe-shaped frame with an I-section to reduce the weight of the gauge.
3. Adjustable Snap Gauge
Adjustable snap gauges are used for checking large-size shafts. They are made with a horseshoe-shaped frame of I-section and have one fixed anvil and two small adjustable anvils. The distance between the anvils is adjusted by means of setscrews. This adjustment can be made with the help of slip gauges for specified limits of size.
4. Combined Limit Gauges
A spherical projection is provided, with the GO and NOGO dimensions marked on a single gauge. While using the GO gauge, the handle is parallel to the axis of the hole; it is normal to the axis for the NOGO gauge.
5. Position Gauge
It is designed for checking the position of features in relation to another
surface. Other types of gauges are also available such as contour
gauges, receiver gauges, profile gauges etc.
TAYLOR'S PRINCIPLE
It states that the GO gauge should check all related dimensions simultaneously, while the NOGO gauge should check only one dimension at a time.

• Maximum metal condition
It refers to the condition of a hole or shaft when maximum material is left on, i.e. the high limit of the shaft and the low limit of the hole.

• Minimum metal condition
It refers to the condition of a hole or shaft when minimum material is left on, i.e. the low limit of the shaft and the high limit of the hole.
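A minimal sketch of nominal GO/NOGO sizing following the maximum and minimum metal conditions above. It deliberately ignores the wear allowance and gauge maker's tolerance that a real gauge design (such as Part B, question 10) would add.

```python
# Nominal GO/NOGO sizes from Taylor's principle, wear and maker's tolerance ignored.
def plug_gauge(hole_low, hole_high):
    # GO checks the maximum metal condition (low limit of the hole);
    # NOGO checks the minimum metal condition (high limit of the hole).
    return {"GO": hole_low, "NOGO": hole_high}

def snap_gauge(shaft_low, shaft_high):
    # GO checks the maximum metal condition (high limit of the shaft);
    # NOGO checks the minimum metal condition (low limit of the shaft).
    return {"GO": shaft_high, "NOGO": shaft_low}

print(plug_gauge(24.98, 25.02))   # hole 25 +/- 0.02 mm (example values)
print(snap_gauge(74.98, 75.02))   # shaft 75 +/- 0.02 mm, as in Part B Q10
```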
Applications of Limit Gauges
1. Thread gauges
2. Form gauges
3. Screw pitch gauges
4. Radius and fillet gauges
5. Feeler gauges
6. Plate gauge and Wire gauge
COMPARATORS
Comparators are one form of linear measurement device, quick and convenient for checking a large number of identical dimensions.
Comparators normally will not show the actual dimensions of the work piece.
They show only the deviation in size, i.e. during the measurement a comparator gives the deviation of the dimension from a set dimension. A comparator cannot be used as an absolute measuring device; it can only compare two dimensions. Comparators are designed in several types to meet various conditions, and comparators of every type incorporate some kind of magnifying device. The magnifying device magnifies how much the dimension deviates, plus or minus, from the standard size.
The comparators are classified according to the principles used for obtaining
magnification. The common types are:
1) Mechanical comparators
2) Electrical comparators
3) Optical comparators
4) Pneumatic comparators
SINE BAR
Sine bars are always used along with slip gauges as a device for the
measurement of angles very precisely. They are used to
1) Measure angles very accurately.
2) Locate the work piece to a given angle with very high precision.
Generally, sine bars are made from high carbon, high chromium, corrosion resistant steel. These materials are highly hardened, ground and stabilized. In a sine bar, two cylinders of equal diameter are attached at the ends, with their axes mutually parallel and at an equal distance from the upper surface of the sine bar. Mostly the distance between the axes of the two cylinders is 100 mm, 200 mm or 300 mm. The working surfaces of the rollers are finished to a 0.2 µm Ra value. Cylindrical holes are provided to reduce the weight of the sine bar.
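A minimal sketch of the sine bar relation sin θ = h/L, where L is the centre distance between the rollers and h is the height of the slip gauge stack under one roller; the 200 mm size is one of the standard centre distances quoted above.

```python
import math

# Slip gauge stack height needed to set a sine bar to a required angle.
L = 200.0        # mm, roller centre distance (a standard size)
theta_deg = 30   # required angle (example value)

h = L * math.sin(math.radians(theta_deg))
print(f"slip gauge stack height h = {h:.3f} mm")  # -> 100.000 mm
```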
• QUESTION (part A)
1. List the various linear measurements?
2. What are the various types of linear measuring instruments?
3. List out any four angular measuring instrument used in metrology
4. What is comparator?
5. Classify the comparator according to the principles used for obtaining magnification.
6. How are all mechanical comparators affected?
7. State the best example of a mechanical comparator.
8. Define least count and mention the least count of a mechanical comparator.
9. How is the mechanical comparator used? State with any one example.
10. State any four advantages of reed type mechanical comparator.
Part-B
1. What types of measuring systems are used for linear distance?
2. Explain the working principle of mechanical comparator with a neat sketch.
3. Explain the working principle of Electrical comparator with a neat sketch
4. Explain the working principle of pneumatic comparator with a neat sketch
5. Explain the precautionary measures one shall follow at various stages of using slip gauges. Explain the process of 'wringing' in slip gauges. Explain why sine bars are not suitable for measuring angles above 45 degrees.
6. Describe the method of checking the angle of a taper plug gauge using rollers, micrometer and slip gauges,
7. State and explain 'Taylor's principle of gauge design'.
8. Explain the working principle of autocollimator and briefly explain its application
9. Describe with the help of a near sketch, a vernier bevel protractor.
10. Shafts of 75 ± 0.02 mm diameter are to be checked with the help of GO and NOT GO snap gauges. Design the gauge, sketch it, and show its GO size and NOT GO size dimensions. Assume normal wear allowance and gauge maker's tolerance.
