Design of Experiments for MQL
1. INTRODUCTION
Experimental design is a critically important tool in engineering for continuous improvement of the performance
of manufacturing processes. It is essentially a strategy of planning, designing, conducting and analyzing
experiments so that valid and reliable conclusions can be drawn in the most effective manner. Statistical
experiments are generally carried out to explore, estimate or confirm (Jiju Antony, 2003). Experimental design
has proved to be very effective for improving the process performance and process capability. Experimental
procedures require a structured approach to achieve the most reliable results with minimal wastage of time and
money. Experimental design, based on sound statistical principles, can be used to give an overall view of a
manufacturing process using a limited number of experiments. The information gained can be used to optimize a
process and define which parameters are the most influential and therefore need the closest control in order to
maintain the repeatability of the process. A mathematical model of the process can also be developed to help in
predicting the results that are expected when parameters are changed.
The basic concepts of the statistical design of experiments and data analysis were developed in the early part of
the 20th century as a cost-effective research design tool to help in improving yields in farming. Since then,
many types of designed experiments and analysis techniques have been developed to meet the diverse needs of
researchers and engineers. The first statistician to consider a formal mathematical methodology for the design of
experiments was Sir Ronald A. Fisher (Ranjit K. Roy, 2010).
Exploration consists of gathering and understanding data to learn more about the process or product
characteristics. Estimation refers to determining the effects of process variables or factors on the output
performance characteristic. This information is used to estimate the settings of factors to achieve maximum
output. Confirmation implies verifying the predicted results obtained from the experiment. The application of
experimental design is useful in all the above phases. A well designed experiment can ensure improved process
outputs, reduced variability and closer conformance to nominal or target requirements, reduced development
time and reduction in overall cost.
The traditional approach of experimental design is empirical in nature. In this approach one factor is varied at a
time keeping all other variables in the experiment fixed. This approach depends upon guesswork, luck,
experience and intuition for its success. Moreover, this type of experimentation requires large resources to
obtain a limited amount of information about the process. One Variable-At-a-Time experiments often are
unreliable, inefficient, time consuming and may yield false optimum condition for the process. Statistical
thinking and statistical methods play an important role in planning, conducting, analyzing and interpreting data
from engineering experiments. When several variables influence a certain characteristic of a product, the best
strategy is the statistical design of experiments, from which valid, reliable and sound conclusions can be drawn
effectively, efficiently and economically (Jiju Antony, 2003).
2. BASIC PRINCIPLES OF EXPERIMENTAL DESIGN
2.1 RANDOMIZATION
The first principle of an experimental design is randomization, which is a random process of assigning
treatments to the experimental units. The random process implies that every possible allotment of treatments has
the same probability. An experimental unit is the smallest division of the experimental material and a treatment
means an experimental condition whose effect is to be measured and compared. The purpose of randomization
is to remove bias and other sources of extraneous variation which are not controllable. Another advantage of
randomization is that it forms the basis of any valid statistical test. Hence the treatments must be assigned at
random to the experimental units. Randomization is usually done by drawing numbered cards from a well-
shuffled pack of cards or by drawing numbered balls from a well-shaken container or by using tables of random
numbers.
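In modern practice the same random allocation is often generated in software rather than with cards or tables. The short Python sketch below is only an illustration; the treatment labels, the number of experimental units and the seed are arbitrary assumptions, not values from this work.

# Minimal sketch: randomizing the assignment of treatments to experimental
# units with a pseudo-random number generator instead of cards or tables.
import numpy as np

rng = np.random.default_rng(seed=42)      # fixed seed only for reproducibility
treatments = ["A", "B", "C", "D"]         # hypothetical treatments
units = np.arange(12)                     # 12 experimental units, 3 per treatment

# Repeat each treatment equally and shuffle the allocation so that every
# possible allotment is equally likely.
allocation = np.repeat(treatments, len(units) // len(treatments))
rng.shuffle(allocation)

for unit, treat in zip(units, allocation):
    print(f"unit {unit:2d} -> treatment {treat}")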
2.2 REPLICATION
The next principle of an experimental design is replication, which is a repetition of the basic experiment; in
other words, it is a complete run of all the treatments to be tested in the experiment. Some variation is
introduced in every experiment, and such variation can be reduced by replication, i.e., by performing the
experiment more than once. An individual repetition is called a replicate, and the number of replicates depends
upon the nature of the experimental material. Replication of experiments provides the following benefits: (i) it
secures a more accurate estimate of the experimental error, so that the calculated values tend to be closer to the
true factor effects; (ii) in an un-replicated experiment a single erroneous sample value can distort the whole
analysis, which is not the case in replicated experiments; (iii) it gives a more precise estimate of the mean effect
of a treatment; (iv) considerable confidence is added if an experiment is replicated; and (v) it increases precision,
which is a measure of the variability of the experimental error.
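The gain in precision from replication can be illustrated with a small simulation. The sketch below uses synthetic data with an assumed true mean and error spread; it simply shows that the standard error of a treatment mean shrinks roughly as 1/sqrt(r) as the number of replicates r grows.

# Illustrative simulation (not data from this thesis): the spread of a
# treatment mean decreases roughly as 1/sqrt(r) with r replicates.
import numpy as np

rng = np.random.default_rng(0)
true_effect, sigma = 10.0, 2.0            # assumed true mean and error spread

for r in (2, 4, 8, 16):
    # Simulate many replicated experiments and look at the spread of the mean.
    means = rng.normal(true_effect, sigma, size=(5000, r)).mean(axis=1)
    print(f"replicates={r:2d}  empirical SE={means.std(ddof=1):.3f}  "
          f"theory sigma/sqrt(r)={sigma / np.sqrt(r):.3f}")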
2.3 LOCAL CONTROL
It has been observed that all extraneous sources of variation are not removed by randomization and replication.
This necessitates a refinement in the experimental technique. In other words, a design is to be selected in such a
manner that all extraneous sources of variation are brought under control. For this purpose, one has to make use
of local control, a term referring to the amount of balancing, blocking and grouping of the experimental units.
Balancing means that the treatments should be assigned to the experimental units in such a way that the result is
a balanced arrangement of the treatments. Blocking means that similar experimental units should be collected
together to form a relatively homogeneous group. Blocking reduces known but irrelevant sources of variation
between units and thus allows greater precision in the estimation of the source of variation under study. The
main purpose of the principle of local control is to increase the efficiency of an experimental design by
decreasing the experimental error. Control in experimental design is used to find out the effectiveness of other
treatments through comparison.
2.4 ORTHOGONALITY
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out.
Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently
distributed if the data are normal. Because of this independence, each orthogonal contrast provides information
that is different from the others. If there are T treatments and T - 1 orthogonal contrasts, all the information that can be
captured from the experiment is obtainable from the set of contrasts.
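A short numerical check of this idea, using an assumed set of T = 3 treatments and the usual contrast coefficients, is sketched below.

# Minimal check of orthogonal contrasts for T = 3 treatments (T - 1 = 2 contrasts).
# The contrast coefficients below are the usual "A vs B" and "A+B vs C" comparisons.
import numpy as np

c1 = np.array([1, -1, 0])     # compares treatment A with treatment B
c2 = np.array([1, 1, -2])     # compares the average of A and B with C

print("coefficients sum to zero:", c1.sum() == 0 and c2.sum() == 0)
print("contrasts are orthogonal:", np.dot(c1, c2) == 0)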
2.5 COMPARISON
In many fields of study it is hard to reproduce measured results exactly. Comparisons between treatments are
much more reproducible and are usually preferable. Often one compares against a standard or traditional
treatment that acts as baseline.
3. Selection of the process variables or design parameters (control factors), noise factors and the
interactions among the process variables of interest. (Noise factors are those which cannot be
controlled during actual production conditions, but may have strong influence on the response
variability). The purpose of an experiment is to reduce the effect of these undesirable noise factors by
determining the best factor level combinations of the control factors or design parameters.
6. Experimental planning.
7. Experimental execution.
Experimental design enables industrial engineers to study the effects of several variables affecting the response
or output of a certain process. Experimental design methods have wide potential application in the engineering
design and development stages. It is the strategy of the management in today's competitive world to develop
products and processes insensitive to various sources of variations. The potential benefits of experimental design
are as under:
2. Possibility to study the behavior of a process over a wide range of operating conditions.
4. Understanding the process under study and thereby improving its performance.
7. Making products insensitive to environmental variations such as relative humidity, vibration, shock and
so on.
8. Studying the relationship between a set of independent process variables (i.e., process parameters) and
the output (i.e. response).
10. The experimental design gives investigators or researchers adequate control to analyze and establish
the cause-effect relationships.
11. A researcher can be assured that the outcomes obtained are essentially true representations of the actual
events.
12. An extra advantage is that experimental design enables the generalization of results.
5. TAGUCHI’S APPROACH
Taguchi’s experimental methods are now widely used in many industries and in research to efficiently optimize
the manufacturing process. It is an iterative approach which allows a statistically sound experiment to be
optimized at minimal cost while investigating a minimum number of possible combinations of parameters or
factors (Park, 1996). Taguchi’s parameter design is an important tool for robust design. By applying this
technique one can significantly reduce the time required for experimental investigation as it is effective in
investigating the effects of multiple factors on performance as well as in studying the influence of individual
factors to determine which factors have more influence and which have less (Lochner, 1990; Ross, 1989).
This method employs orthogonal arrays, which reduce the number of trials while giving all factor effects equal
importance, and which determine the effect of process parameters upon performance characteristics. Orthogonal
arrays allow the simultaneous effect of several process parameters to be studied efficiently. Taguchi's technique
effectively separates the many trivial design parameters from the few vital ones, making the design robust; hence
maximum information can be derived from a minimum number of trials. Thus Taguchi contributed discipline and
structure to the design of experiments, and this standardized design methodology can be easily applied by
investigators.
The results of such a design are comparable to those of a full factorial experiment. Techniques such as fractional
factorial experiments are used to simplify the experiment; they investigate only a fraction of all the possible
combinations. This approach saves considerable time and money but requires rigorous mathematical treatment,
both in the design of the experiment and in the analysis of the results, and each experimenter may design a
different set of fractional factorial experiments. Taguchi simplified and standardized the fractional factorial
designs in such a manner that two different persons conducting the same tests thousands of miles apart will
always use similar designs and tend to obtain similar results. Therefore, factorial and fractional factorial designs
of experiments are widely and effectively used (Ranjit K. Roy, 2010). Taguchi's technique addresses the
following limitations of full factorial experiments:
1. The experiments become unwieldy in cost and time when the number of variables is large.
2. Two designs for the same experiment may yield different results.
3. The designs normally do not permit determination of the contribution of each factor.
4. The interpretation of experiments with a large number of factors may be quite difficult.
In a full factorial design with k factors at n levels, n^k trials are required, but in Taguchi's approach the
influence of k factors can be found using k+1 trials. As a result, a drastic reduction in the number of
experimental trials is possible. Considering the above factors, Taguchi's concept of 'robust design' is
extensively used in this investigation for designing the experiments.
Taguchi suggests signal-to-noise (S/N) ratio as the objective function for matrix experiments. The S/N ratio is
used to measure the quality characteristics as well as the significant machining parameters through analysis of
variance (ANOVA). Taguchi classifies objective functions into three categories: the smaller-the-better type, the
larger-the-better type and the nominal-the-best type. The optimum level for a factor is the level that results in the
highest value of the S/N ratio in the experimental region (Gaitonde et al., 2008).
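For illustration, the commonly quoted forms of these three S/N ratios are sketched below in Python; the replicate values are made-up numbers, not experimental data from this work.

# Sketch of the three standard Taguchi S/N ratios (in dB).
import numpy as np

def sn_smaller_the_better(y):
    return -10 * np.log10(np.mean(np.square(y)))

def sn_larger_the_better(y):
    return -10 * np.log10(np.mean(1.0 / np.square(y)))

def sn_nominal_the_best(y):
    # One common form: ratio of the squared mean to the variance.
    return 10 * np.log10(np.mean(y) ** 2 / np.var(y, ddof=1))

y = np.array([1.12, 1.08, 1.15])   # hypothetical surface roughness replicates (um)
print("smaller-the-better:", round(sn_smaller_the_better(y), 2), "dB")
print("larger-the-better :", round(sn_larger_the_better(y), 2), "dB")
print("nominal-the-best  :", round(sn_nominal_the_best(y), 2), "dB")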
Taguchi’s approach for creating robust design is through a three-step method consisting of system design,
parameter design, and tolerance design.
The focus of the system design phase is on determining the suitable working levels of design factors. It includes
designing and testing a system based on the engineer's judgment of selected materials, parts and nominal
product/process parameters based on current technology. Most often it involves innovation and knowledge from
the applicable fields of science and technology.

Parameter design helps to determine the factor levels that produce the best performance of the product/process
under study. The optimum condition is selected so that the influence of the uncontrolled factors (noise factors)
causes minimum variation of system performance.

Tolerance design is a step used to fine-tune the results of parameter design by tightening the tolerances of
factors with significant influence on the product. Such steps normally lead to identifying the need for better
materials, buying newer equipment, spending more money on inspection, etc.
2. Determine the design parameters affecting the process. Parameters are variables within the process that
affect the performance measure.
3. Identify the objective function to be optimized or, more specifically, a target value or a performance
measure of the process.
5. Create orthogonal arrays for the parameter design indicating the number of experiments and conditions
for each experiment. The selection of orthogonal arrays is based on the number of parameters and the
levels of variation for each parameter.
Taguchi constructed a special set of orthogonal arrays to lay out his experiments. By combining the orthogonal
Latin squares in a unique manner, Taguchi prepared a new set of standard orthogonal arrays to be used for a
number of experimental situations.
This array, designated by the symbol L8, is used to design experiments involving up to seven 2-level factors.
The array has 8 rows and 7 columns. Each row represents a trial condition, with the factor levels indicated by the
numerals appearing in that row.
Each column contains four level 1 and four level 2 conditions for the factor assigned to the column. The
orthogonal array facilitates the experiment design process. To design an experiment one has to select the most
suitable orthogonal array, assign the factors to the appropriate columns and finally describe the combinations of
the individual experiments called the trial conditions. The table identifies the eight trials needed to complete the
experiment and the level of each factor for each trial run. The experiment descriptions are determined by
reading the numerals 1 and 2 appearing in the rows of the trial runs. A full factorial experiment, on the other hand, would
require 2^7 = 128 runs but would not provide appreciably more information.
The array forces all experimenters to design almost identical experiments. Experimenters may select different
designations for the columns but the eight trial runs will include all combinations independent of column
definition. Thus the orthogonal array assures consistency of design by different experimenters.
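The sketch below reproduces the standard L8(2^7) array in Python and verifies the balance property described above; it is an illustration only and does not assign the factors of this investigation to the columns.

# The standard L8 orthogonal array and a check of its balance property:
# each column contains each level four times, and every pair of columns
# contains each level combination the same number of times.
import numpy as np
from itertools import combinations

L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])

# Balance within each column.
for j in range(L8.shape[1]):
    counts = np.bincount(L8[:, j])[1:]
    assert tuple(counts) == (4, 4)

# Pairwise balance (orthogonality) between columns.
for a, b in combinations(range(L8.shape[1]), 2):
    pairs = {(u, v): np.sum((L8[:, a] == u) & (L8[:, b] == v))
             for u in (1, 2) for v in (1, 2)}
    assert set(pairs.values()) == {2}

print("L8 is balanced and pairwise orthogonal; trial conditions:")
print(L8)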
In the Taguchi method, the results of the experiments are analyzed to achieve one or more of the following three
objectives: to establish the best or optimum condition for the product or process, to estimate the contribution of
individual factors, and to estimate the response under the optimum condition.
The optimum condition is identified by studying the main effects of each of the factors. The main effects
indicate the general trend of the influence of the factors. Knowing the characteristic, i.e., whether a higher or
lower value produces the preferred result, the levels of the factors which are expected to produce the best results
can be predicted.
The knowledge of the contribution of individual factors is a key in deciding the nature of the control to be
established on a production process. The analysis of variance (ANOVA) is the statistical treatment most
commonly applied to the results of the experiment to determine the percent contribution of each factor. Study of
the ANOVA table for a given analysis helps to determine which of the factors need control and which do not.
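As an illustration of how such a percent contribution is obtained, the sketch below computes the sum of squares of a single two-level factor assigned to one column of an L8 experiment; the response values are hypothetical and serve only to show the calculation.

# Percent contribution of a 2-level factor:
# SS_factor = sum over levels of n_level * (level mean - grand mean)^2,
# percent contribution = 100 * SS_factor / SS_total.
import numpy as np

column = np.array([1, 1, 1, 1, 2, 2, 2, 2])              # factor levels in the 8 trials
y = np.array([2.1, 2.3, 2.0, 2.2, 1.6, 1.5, 1.8, 1.7])   # hypothetical responses

grand_mean = y.mean()
ss_total = np.sum((y - grand_mean) ** 2)

ss_factor = sum(
    np.sum(column == level) * (y[column == level].mean() - grand_mean) ** 2
    for level in (1, 2)
)

print(f"SS_factor = {ss_factor:.4f}")
print(f"percent contribution = {100 * ss_factor / ss_total:.1f}%")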
Once the optimum condition is determined, it is usually a good practice to run a confirmation experiment. It is
however possible to estimate performance at the optimum condition from the results of experiments conducted
at a non-optimum condition.
Qualitek-4 software is used to easily accomplish experiment design and analysis tasks. Qualitek automatically
designs experiments based on user-indicated factors and levels. The program selects the array and assigns the
factors to the appropriate column. For complex experiments a manual design option is also included in the
software. The program performs three basic steps in the analysis: main effects, analysis of variance and optimum
studies. Analysis can be performed using standard values or signal-to-noise ratios of the results for smaller, bigger,
nominal or dynamic characteristics. Results can be displayed using pie-charts, bar charts or trial-data-range
graphs.
5.4 AREAS OF APPLICATION OF TAGUCHI’S APPROACH
5.4.1 ANALYSIS
In the design of engineering products and processes, analytical simulation plays an important role by
transforming a concept into the final product design. Taguchi's approach can be utilized to arrive at the best
parameters for the optimum design configuration with the least number of analytical investigations. Taguchi's
method treats factors at discrete levels, and this approach frequently reduces computer time significantly.
5.4.2 TESTING
Testing with prototypes is an efficient way to see how the concepts work when they are put into design. Since
experimental hardware is costly, the need to accomplish the objectives with the least number of tests is a top
priority. Taguchi’s approach of laying out the experimental conditions significantly reduces the number of
tests and the overall testing time.
5.4.3 MANUFACTURING
Manufacturing processes typically have a large number of factors that influence the final outcome. Identification
of their individual contributions and their intricate interrelationships is essential in the development of such
processes. The Taguchi concepts used in industry have helped to realize significant cost savings in recent times.
6. RESPONSE SURFACE METHODOLOGY
Response Surface Methodology (RSM) is an important and widely used tool of design of experiments (DOE),
wherein the relationship between the response(s) of a process and its input decision variables is mapped to
achieve the objective of maximizing or minimizing the response properties (Raymond & Douglas 2002).
It is a collection of statistical and mathematical techniques useful for developing, improving, and optimizing
processes (Myers and Montgomery, 2002). The most extensive applications of RSM are in the particular
situations where several input variables potentially influence some performance measure or quality
characteristic of the process. This performance measure or quality characteristic is called the response. The input
variables are sometimes called independent variables, and they are subject to the control of the scientist or
engineer. By careful design of experiments, the response (output variable) can be optimized; response surface
methods have been shown to be of use even in structural dynamics problems. They may be employed with low
effort and have the potential to be applied to both linear and nonlinear problems.

Fig 6.1 Conceptual plot of the types of metamodels and problems for which they are suited
The empirical nature of response surface models makes them well suited to nonlinear simulations. In addition to
features derived from the frequency do main
(which may be difficult to derive in nonlinear settings), response features may be derived from measured or
simulated responses in the time domain. In fact, the number and types of response features used are limited only
by the ingenuity of the experimenter.
Most practitioners of RSM now generate their experiment designs and analyze their data using a statistical
software program running on a personal computer. Many of these software programs can generate many classes
of RSM designs and in some cases such software programs offer several varieties of each class. However, the
central composite design (CCD) is the most popular of the many classes of RSM designs due to the following
three properties:
A CCD can be run sequentially. It can be naturally partitioned into two subsets of points; the first subset
estimates linear and two-factor interaction effects while the second subset estimates curvature effects. The
second subset need not be run when analysis of the data from the first subset points indicates the absence of
significant curvature effects. CCDs are very efficient, and provide much information on experiment variable
effects and overall experimental error in a minimum number of required runs. CCDs are very flexible. The
availability of several varieties of CCDs enables their use under different experimental regions of interest and
operability.
An augmented factorial design is commonly used in product optimization. Three main varieties of CCD are
available in most statistical software programs: face-centered, rotatable and inscribed. A complete CCD
experiment design allows estimation of a full quadratic model for each response. A schematic layout of a CCD
for k = 3 independent variables is shown in Fig. 6.2
The design consists of 2^k (in this case, 8) factorial points (filled circles in Fig. 6.2), 6 axial points (hollow
circles in Fig. 6.2) at a distance '±a' from the origin, and 6 center points (hatched circle in Fig. 6.2). In order to
appraise different designs that may be proposed for fitting a second order response surface, some criteria are
needed as to what constitutes a good design. One useful property is of course, that the computation should not
be too difficult. In a more intensive consideration of desirable properties, Box and Hunter proposed the criterion
of rotatability (Cochran and Cox, 1957). The value of ‘a’ usually is chosen to make the design rotatable.
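A small sketch of such a design in coded units is given below; it builds the 8 factorial, 6 axial and 6 center points for k = 3, uses the rotatable value a = (2^k)^(1/4), and fits a full quadratic model by least squares. The response values and coefficients are synthetic, illustrative assumptions and not data from this investigation.

# Rotatable central composite design for k = 3 factors in coded units,
# followed by a least-squares fit of a full quadratic model.
import numpy as np
from itertools import product

k = 3
alpha = (2 ** k) ** 0.25                     # rotatability: a = (2^k)^(1/4) ~ 1.682
factorial = np.array(list(product([-1, 1], repeat=k)))        # 2^k = 8 corner points
axial = np.vstack([v * alpha * np.eye(k)[i] for i in range(k) for v in (-1, 1)])
center = np.zeros((6, k))                                      # 6 centre points
X = np.vstack([factorial, axial, center])                      # 20 runs in total

def quadratic_terms(x):
    # Intercept, linear, two-factor interaction and pure quadratic terms.
    terms = [1.0, *x]
    terms += [x[i] * x[j] for i in range(k) for j in range(i + 1, k)]
    terms += [xi ** 2 for xi in x]
    return terms

A = np.array([quadratic_terms(x) for x in X])

rng = np.random.default_rng(1)
y = 1.5 - 0.4 * X[:, 0] + 0.2 * X[:, 1] ** 2 + rng.normal(0, 0.05, len(X))  # fake data

coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print("design size:", X.shape, " fitted coefficients:", np.round(coeffs, 3))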
Alauddin et al. (1996) applied response surface methodology to optimize the surface finish during end milling of
Inconel 718 under dry condition. They developed a second-order mathematical model for surface roughness
utilizing the central composite rotatable design in terms of speed and feed. It was found that the developed
model could be used to enhance the efficiency of the end milling process.
Sahin and Riza, (2005) developed a model in terms of cutting speed, feed rate and depth of cut using response
surface methodology for turning of AISI 1040 mild steel. Machining tests were carried out with TiN-coated
carbide cutting tools under various cutting conditions. The predicted surface roughness of the samples was
found to lie close to the experimentally observed values within 95% confidence intervals.
Palanikumar, (2007) investigated machining of GFRP composites and developed a model for surface roughness.
The results obtained in his investigation revealed that central composite rotatable designs of second order have
been the most efficient tool in RSM to establish a mathematical model using the smallest possible number of
experiments. Accordingly, an attempt was made in this research work to optimize the fluid application
parameters using response surface methodology under central composite rotatable designs of second order.
Turnad et al., (2009) conducted experiments to determine the performance of uncoated WC-Co inserts in
predicting surface roughness during end milling of titanium alloys (Ti-6Al-4V) under dry conditions. Central
composite design of response surface methodology was employed to create an efficient analytical model for
surface roughness in terms of cutting speed, axial depth of cut, and feed per tooth. The adequacy of the
predictive model was verified using analysis of variance. They found that central composite design of Response
Surface Methodology is a successful technique for predicting surface roughness.
To improve the effectiveness of machining operations and to reduce the manufacturing costs and time, it is very
important to develop modeling techniques which can be better adapted to the requirements of the metal cutting
industry (Luttervelt et al., 1998). ANN models are able to solve the difficulties encountered in the machining
process through their massive parallelization to solve complex nonlinear problems. Artificial neural networks are
computational networks that have the ability to emulate neurons in biological nerve centers (Graupe, 2006).
An ANN consists of simple processing units in a parallel sequence as shown in Fig. 6.3. The connections between
these units specify the network function and performance. For performing some tasks or functions, an ANN
can be trained by adjusting the connection strengths (weights) between units (Beal et al., 2010), as illustrated in
Fig. 6.3. The output is compared with the target in order to reach the target output, and the network needs
input/output pairs for training and adjusting the connection values.
Recently, Artificial Intelligence (AI) based models have become very popular and widely used. AI is also
considered a successful approach to modeling the machining process for predicting performance measures
through the development of an expert system, an interactive intelligent program with expert-like performance in
solving a particular type of problem using a knowledge base, an inference engine and a user interface.
ANN shows capability in solving many problems in many applications like pattern recognition, classification,
prediction, optimization, and control systems. A model based
on ANN is able to learn, adapt to changes and mimic the human thought process with little human interaction
(Azlan et al., 2010). ANNs are powerful tools that are easy-to-use in complex problems where not all of the
parameters are straightforwardly engaged (Benardos and Vosniakos, 2002).
Several learning methods have been developed for ANN. Many of these learning methods are closely connected
with a certain network topology, with the main categorization method distinguished by supervised vs.
unsupervised learning. Among various existing learning methods in this field, the back propagation was adopted
by many researchers for two reasons: (1) it is the most commonly used algorithm and is relatively easy to apply
and (2) it has been proven to be successful in practical applications (Das et al., 1996; Huang and Chiou, 1996).
Back propagation is a systematic method of training multilayer artificial networks. It is built on a sound
mathematical foundation and has very good application potential. The back propagation algorithm is an extension of
the least mean square algorithm that can be used to train multi-layer networks. Back propagation uses the chain-
rule to compute the derivatives of squared error with respect to connection weights.
Problems that are not linearly separable can be solved with multi-layer feed forward neural networks (Fig. 7.1),
possessing one or more hidden layers in which the neurons have nonlinear transfer characteristics. The
information propagation is only in the forward direction and there are no feedback loops. Even though the network
does not have feedback connections, errors are back propagated during training. The name back propagation derives from the
fact that computations are passed forward from the input layer to the output layer, following which calculated
errors are propagated back in the other direction to change the weights to obtain a better performance.
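A minimal numerical sketch of this forward/backward flow is given below; the single hidden layer, the synthetic "roughness" data, the learning rate and the network size are arbitrary illustrative assumptions and do not represent the networks used in the studies cited below.

# Minimal feed-forward network with one hidden layer trained by back
# propagation (gradient descent on squared error). Inputs mimic coded
# cutting parameters and the target is a synthetic response value.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 3 coded inputs -> 1 response.
X = rng.uniform(-1, 1, size=(200, 3))
y = (0.8 - 0.3 * X[:, 0] + 0.2 * X[:, 1] * X[:, 2]).reshape(-1, 1)

n_hidden = 6
W1 = rng.normal(0, 0.5, (3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    # Forward pass: information flows only from input to output.
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y

    # Backward pass: the chain rule gives the derivatives of the squared
    # error with respect to every connection weight.
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    delta_h = (err @ W2.T) * (1 - h ** 2)        # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = X.T @ delta_h / len(X)
    grad_b1 = delta_h.mean(axis=0)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final mean squared error:", float(np.mean(err ** 2)))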
Tsai et al., (1999) developed an in-process surface roughness prediction system in end milling of aluminum
6061 T6 alloy. The input parameters were spindle speed, feed rate, depth of cut and vibration average per
revolution and the output parameter was surface roughness. An accelerometer and a proximity sensor were
employed during cutting to collect the vibration and rotation data respectively. A Back Propagation neural
network was used in the investigation with four inputs and one output neuron. The system accuracy was 96 -
99% under various cutting conditions. The ANN model was proven to be accurate in predicting the surface
roughness in end milling. Benardos and Vosniakos (2002) presented an ANN model for the prediction of surface
roughness during CNC face milling of a series 2 aluminum alloy, which is normally used in aerospace
applications. The input parameters were depth of cut, feed rate, cutting speed, cutting tool wear, cutting fluids
and three components of cutting forces. A feed-forward back propagation neural network was trained with the
Levenberg-Marquardt algorithm. The Taguchi method was used for the design of the experiments. It
was found that the primary factors affecting surface roughness were feed rate per tooth, cutting force, depth of
cut, cutting fluids and cutting tool wear. The mean square error during the investigation was 1.86%. It was
found that ANNs can be used reliably, successfully and very accurately for the modeling of the surface
roughness formation mechanism and the prediction of its value in face milling.
Erzurumlu and Oktem (2007) developed two models for surface roughness prediction in end milling of 7075-
T6. One model was based on response surface methodology (RSM) and the second one by using a feed forward
backpropagation neural network. The input parameters for the two models were cutting speed, feed rate, axial
and radial depth of cut and machining tolerance. The output response was surface roughness. It was found that
the ANN model was more accurate than the RSM model.
Hossain et al., (2008) developed an ANN model to predict surface roughness during high speed end milling of
nickel-based Inconel 718 with a single-layer PVD TiAlN insert. A back propagation neural network with two
hidden layers having 15 neurons each was used. The input parameters were cutting speed, feed, and axial depth
of cut. The output parameter of the model was surface roughness. The model had very good predicting ability.
Liu et al., (2006) developed a feed-forward back-propagation neural network model to predict surface roughness
for high speed milling of Titanium alloy. Different network structures were used in order to select the best one
with the minimum error. Cutting speed, feed rate, axial depth of cut, and radial depth of cut were considered as
input parameters while surface roughness was the output response. A 4-9-9-1 network was found to be the best
architecture and the predictions of the model showed good agreement with the experimental results.
In the current research work, an Artificial Neural Network (ANN) was selected to model the surface roughness in
terms of fluid application parameters such as pressure at the fluid injector, frequency of pulsing and rate of
application of cutting fluid.
2. When an element of the neural network fails, it can continue without any problem owing to its parallel
nature.
There are no reports on surface milling of hardened AISI4340 steel with minimal fluid application in the form of
a high-velocity narrow pulsed jet of a proprietary cutting fluid. This technique was implemented successfully by
Varadarajan et al. (2002a) during turning of hardened AISI4340 steel using coated carbide tools. It was decided
to investigate the applicability of the same scheme for surface milling of AISI4340 steel with a hardness of 45
HRC. AISI4340 steel is a through-hardenable steel and is widely used in the die making, automobile and
fabrication industries.
Ranges of operating parameters were selected at moderate cutting conditions based on industrial practices and
earlier research work in this field (Philip et al., 2000; Vikram Kumar et al., 2008; Varadarajan et al., 2002b,
Thakur et al., 2009) so that the new scheme would be readily acceptable to industry, especially on existing
machine tools, without major changes in the existing setup for its implementation on the shop floor. From the
literature, it was found that low cutting speeds facilitated a considerable reduction in cutting force for MQL as
compared with dry cutting and flood cooling conditions (Liao et al., 2007). Similarly, other researchers
concluded that MQL might be regarded as an economical and environmentally compatible lubrication
technique for low speed, feed rate and depth of cut conditions (Rahman et al., 2002; Nikhil et al., 2007).
Philip et al. (2000) found that, apart from conventional operating parameters such as cutting velocity, feed
and depth of cut, cutting fluid parameters, namely its composition and mode of delivery, also influence the
cutting performance. Similar conclusions are reported by other researchers in this field (Vikram Kumar et al.,
2008; Varadarajan et al., 2002b). The present research work concentrates on the effect of fluid application
parameters such as pressure at the fluid injector, frequency of pulsing, rate of fluid application, mode of fluid
application and composition of the cutting fluid on cutting performance during surface milling of hardened
AISI4340 steel of 45 HRC. It aims to optimize performance based on the cutting and fluid application
parameters and to compare the new scheme with hard dry milling and milling with conventional flood cooling
in terms of surface finish, flank wear and cutting force.