
Sensors and Materials, Vol. 36, No. 11 (2024) 4815–4833
MYU Tokyo

S & M 3833

Tool Wear Classification Based on Support Vector Machine
and Deep Learning Models

Yung-Hsiang Hung, Mei-Ling Huang, Wen-Pai Wang,* and Hsiao-Dan Hsieh

Department of Industrial Engineering and Management, National Chin-Yi University of Technology,
No. 57, Sec. 2, Zhongshan Rd., Taiping Dist., Taichung 41170, Taiwan, R.O.C.

*Corresponding author: e-mail: wangwp@ncut.edu.tw

(Received June 25, 2024; accepted October 21, 2024)

https://doi.org/10.18494/SAM5205
ISSN 0914-4935 © MYU K.K.

Keywords: tool wear, machine vision, image classification, support vector machine, convolutional neural network

Tool status is crucial for maintaining workpiece quality during machine processing. Tool
wear, an inevitable occurrence, can degrade the workpiece surface and even cause damage if it
becomes severe. In extreme cases, it can also shorten the machine tool service life. Therefore,
accurately assessing tool wear to avoid unnecessary production costs is essential. We present a
wear classification model using machine vision to analyze tool images. The model categorizes
wear images on the basis of predefined wear levels to assess tool life. The research involves
capturing images of the tool from three angles using a digital microscope, followed by image
preprocessing. Wear measurement is performed using three methods: gray-scale value, gray-
level co-occurrence matrix, and area detection. The K-means clustering technique is then
applied to group the wear data from these images, and the final wear classification is determined
by analyzing the results of the three methods. Additionally, we compare the recognition
accuracies of two models: support vector machine (SVM) and convolutional neural network
(CNN). The experimental results indicate that, within the same tool image sample space, the
CNN model achieves an accuracy of more than 93% in all three directions, whereas the accuracy
of the SVM model, affected by the number of samples, has a maximum of only 89.8%.

1. Introduction

Precision machining technology is essential in modern manufacturing, especially given the
rising demand for equipment and components in the aerospace, automotive, and precision machinery
sectors. The advancement of processing technology is therefore crucial. Precision machining is
extensively employed in micro- and ultra-precision equipment owing to its exceptional ability to
process various materials and complex three-dimensional surfaces. However, the challenges
posed by ultra-high-speed intermittent cutting and the use of micro-tools mean that tool wear—
referring to the gradual deterioration of cutting tools in processes such as milling, turning, and
drilling—can significantly affect workpiece quality. Tool wear impacts tool life, workpiece
dimensional accuracy, surface finish, and the efficiency of automated production operations. It
can lead to defects in parts, increased production costs, and unexpected machine downtimes.
Previous research has consistently shown that early and effective detection of tool wear
can prevent machine damage, avoid unplanned shutdowns, and minimize workpiece scrapping.
This leads to significant improvements in manufacturing quality and cost reduction.(1–3) Zhou
and Xue highlighted that traditional tool wear monitoring methods rely on periodic inspections
and manual measurements based on the operator’s experience, which can be both time-
consuming and inaccurate.(4) As a result, effective tool wear monitoring has become a crucial
focus in ultraprecision machining. There is a need for tool condition monitoring systems that can
continuously and accurately manage tool wear to minimize downtime and enhance product
quality. In response to this need, Singh et al. proposed an automated detection system for high-
performance machining environments, utilizing various sensors, modeling techniques, and data
analysis methods to assess cutting tool wear.(5)
In recent years, the rapid advancements in IoT and big data technologies have made the
application of machine learning (ML) to production quality and machine preventive maintenance
(PM) increasingly significant in modern manufacturing. By analyzing large volumes of
production data, ML can substantially enhance production efficiency and product quality while
reducing maintenance costs.(6–7) Concurrently, the growth of AI has led researchers to apply
data-driven methods for developing pattern recognition algorithms to predict tool wear.(8) These
methods typically involve extracting feature variables from sensor data and processing signals to
predict tool wear. Various classification algorithms are used for this purpose, including artificial
neural networks (ANNs), fuzzy logic, pattern recognition, support vector machines (SVMs),
decision trees, K-nearest neighbor classifiers (KNNs), adaptive neuro-fuzzy inference systems
(ANFISs), Bayesian networks (BNs), principal component analysis (PCA), and convolutional
neural networks (CNNs). The effectiveness of these methods depends on how well the extracted
features correlate with the tool wear status.(9–12) The accurate prediction of tool wear can
significantly improve production efficiency and quality by facilitating timely tool replacement,
thus preventing production line interruptions and losses. For instance, Bagga et al. investigated
the use of ANNs to predict cutting force and vibration related to tool wear during the turning
process.(13) Their study, based on Taguchi L9 experiments using alloy inserts and EN-8 medium
carbon steel, demonstrated that ANNs can effectively predict tool wear, thus offering a valuable
tool for monitoring and managing tool condition.
Deep learning, an advanced data analysis technique evolved from traditional neural networks,
has recently been applied to tool condition monitoring. In machining processes, time-series
images combined with deep learning methods are utilized to classify and predict tool wear
effectively.(14) Serin et al. conducted research on applying deep learning for tool wear analysis
and damage reduction, with the goal of accurately predicting tool wear and preventing adverse
conditions for cutting tools and machinery.(11) Deep learning technologies can model complex,
previously unexamined functional relationships and offer exceptional adaptability and self-
learning capabilities, which help mitigate the effects of external interferences during machining.
However, current deep learning approaches to tool wear monitoring face challenges, including
difficulties in interpreting extracted features and an inability to fully account for the significance of
different feature maps, which can limit prediction accuracy.(15–17)

In this study, we employ a machine vision system to analyze tool wear from three
perspectives: corner wear, flank wear, and crater wear (see Fig. 1). After capturing and
preprocessing the images, the wear is assessed using three distinct methods. The processed
images are analyzed to extract feature values indicative of tool wear. The K-means clustering
technique is then applied to categorize the degree of wear of each sample. The results from these
three methods are compared to determine the final wear classification and evaluate the
effectiveness of each approach. We also aim to compare the wear image classification accuracies
of SVMs and CNNs to determine their ability to accurately classify tool wear across different
samples. The accurate real-time detection of tool status could prevent losses associated with
delayed tool replacement, thereby significantly enhancing the overall factory processing
efficiency. The specific objectives of this study are as follows.
1. Calculate wear areas using processed images to establish wear level grading data.
2. Construct a model to achieve optimal wear level recognition.

2. Literature Review

2.1 Definition of tool wear

In machining processes, tool wear refers to the gradual failure of cutting tools as a result of
frequent operation. During cutting, the tool comes into contact with the workpiece and chips,
causing intense friction. This continuous wear is induced by high temperatures and friction,
with the extent of wear varying depending on shape, depth, cutting fluid, and cutting speed.
Consequently, the sharpness and efficiency of the tool are affected. This phenomenon is
unavoidable and can be considered normal dulling, but it also leads to issues such as increased
workpiece surface roughness, higher cutting forces, rising cutting temperatures, reduced
machining precision, and shorter tool life. Generally, tool wear manifests in various forms (e.g.,
flank wear, crater wear, and chipping). These wear patterns primarily depend on the tool
characteristics, workpiece material, cutting conditions, and machining techniques. Under
normal machining conditions, flank wear is the most common and significant wear. Flank wear
width (VB) is the most crucial parameter for assessing tool life.(18)
Tool life can be divided into three stages, as shown in Fig. 2. These stages provide an
effective means to predict and manage the lifespan of cutting tools, thereby avoiding unnecessary
production costs and time loss.(17)

Fig. 1. (Color online) Schematic diagram of tool wear.

Fig. 2. (Color online) Tool life curve.

1. Break-in Region: This stage spans from point A to point B in Fig. 2 and is known as the rapid
initial wear period. During this phase, the tool wear rate is relatively high because of the
initial interaction between the tool surface and the workpiece. The wear is primarily due to
the abrasion of microscopic surface irregularities and eventually enters a relatively stable
state.
2. Steady-state Wear Region: This stage extends from point B to point C and is characterized by
a relatively stable and uniform wear rate. This is the main working period of the tool and the
longest stage of its usage. During this phase, the wear rate is low and stable, allowing the tool
to maintain good machining quality.
3. Failure Region: This stage runs from point C to point D, where the tool enters an accelerated
wear period. The wear rate increases significantly because the tool has worn to its limit,
causing wear to intensify. During this phase, the tool performance rapidly declines, ultimately
leading to failure, as indicated by point D. At this point, the tool can no longer be used and
must be replaced to ensure machining quality and efficiency.

2.2 Tool wear detection

In the early stages of technological development, the majority of research on tool wear relied
on empirical rules and experimental data to determine the wear state of tools. Dornfeld and
Kannatey-Asibu classified tool wear detection techniques into two categories: direct techniques
and indirect techniques.(19) Indirect techniques use sensor signals to output parameters related to
wear status during cutting operations and predict tool wear using these parameters. The
drawback of this approach is that the measured parameters are susceptible to environmental
effects and do not directly quantify tool wear. Bao and Tansel conducted research on tool wear in
vertical milling cutter grooves using mild steel and aluminum materials.(20) They estimated tool
wear by monitoring the cutting force in the workpiece feed direction and dynamically adjusted
cutting parameters to control the cutting force, thereby extending the tool life. Rmili et al. used a
three-axis accelerometer to measure vibration characteristics during turning and proposed an
average power analysis technique to extract indicative parameters from the vibration response,
which describes the tool condition throughout its lifespan.(21)
In contrast, direct techniques offer advantages in accuracy and reliability.(22) Direct
techniques use optical, radioactive, and resistive proximity sensors, or vision systems to directly
measure the actual geometric changes in tools. Machine vision detection technology overcomes
the difficulty of contact between the tool and the workpiece during cutting due to the presence of
coolant, making the measurement of tool geometry and dimensional changes more accurate.
Compared with indirect techniques, machine vision does not require complex measurement
systems, is more flexible, and is cost-effective. In micro-milling processes, tool wear conditions
are critical to product geometry and surface integrity. Zhu and Yu proposed a novel tool wear
surface area monitoring method based on complete tool wear images.(23) Unlike the traditional
tool wear width standard, this method better reflects the tool condition. They introduced a
region-growing algorithm based on morphological component analysis (MCA) to solve the tool
wear problem by decomposing the original micro-milling-tool image into target tool,
background, and noise images to extract effective tool wear areas. Results showed that the wear
surface area better reflects the tool usage status than do traditional tool wear width standards.
However, direct techniques using machine vision for tool wear detection also have significant
limitations. Different optical sensors, such as lasers, CCD and CMOS cameras, and thermal
infrared cameras, exhibit considerable variability in image quality.(24)

2.3 Cutting tool wear prediction

As the demands of Industry 4.0 continue to grow, precision machining plays a crucial role in
modern manufacturing. Tool wear is a major factor affecting product quality, production time,
and manufacturing costs. Therefore, assessing and accurately predicting tool wear before
significant damage occurs to the workpiece are essential for ensuring high-quality workpieces
and reducing production costs.(25) In recent years, many researchers have applied data acquisition
and signal processing methods to evaluate tool wear.(26) Kara utilized Taguchi quality
engineering in machining experiments to minimize tool wear and surface roughness.(27) Dastres
and Soori proposed a network-based decision support system using data warehouse management
to address tool replacement decisions.(28)
Pimenov et al. combined traditional sensor systems with neuro-AI methods to monitor
turning tool conditions, replacing decision-making based on human experience.(29) Wang et al.
integrated IoT sensors and SVM methods to predict tool wear, thereby enhancing the reliability
of manufacturing systems.(30) Xu et al. introduced an incremental cost-sensitive SVM (ICSSVM)
learning model to predict tool breakage in milling operations.(31) Their results showed that even
with imbalanced datasets, the prediction accuracy was higher than that of traditional batch cost-
sensitive SVM models.
CNN is a multilayer feedforward ANN initially developed to handle two-dimensional image
classification problems. Image classification distinguishes different target categories on the basis
of image features. An image classification system mainly comprises image information
acquisition, information processing, feature extraction, and classification. Generally, image
classification involves feature learning to describe the entire image, followed by the use of a
classifier to determine the category of the target. This technology has been widely used in facial
recognition, vehicle identification, pathological image recognition, and tool monitoring.(32) Chen
et al. proposed a CNN based on an attention mechanism and bidirectional LSTM networks for
monitoring tool wear conditions.(33)

3. Methods

Automated optical inspection (AOI) is a quality inspection method for automated
manufacturing vision systems and is characterized as a noncontact testing method. In recent
years, AOI has been extensively applied in advanced manufacturing processes to detect potential
defects. Image quality inspection is a method used in manufacturing and production processes to
assess and ensure the quality of products or workpieces. Through an image inspection system,
various defects, such as cracks, dents, and foreign objects, can be detected on products or
workpieces. Additionally, the system can accurately measure their dimensions and geometric
features to ensure compliance with specifications. AOI machines can detect error features based
on standard samples. The most challenging part of detection lies in the effects of variations in
brightness and colors. In particular, the image features of the same component type can differ
under varying brightness and color conditions. Consequently, image quality inspection identifies
different types of products or workpieces and classifies them on the basis of preset standards
while recording the data obtained during the inspection process for subsequent analysis and
traceability. This process typically includes image capture, image processing, feature extraction,
inspection analysis, and result presentation. By integrating these functions and methods, image
quality inspection helps improve production efficiency and product yield, ensuring product
quality. Image preprocessing involves a series of steps performed on images before recognition
to enhance image quality and improve the accuracy and efficiency of subsequent recognition
tasks. Image preprocessing includes binarization, image enhancement, image segmentation, and
morphological processing.(34) The powerful algorithms of deep CNNs have demonstrated
outstanding performance in the field of computer vision. Numerous research groups have
proposed training models that combine AOI systems with CNNs for defect detection.(35)
In this study, we begin with the acquisition of tool wear images. In Step 1, high-magnification
CCD equipment is used to capture the original tool wear images. In Step 2, these images
undergo preprocessing, including grayscale conversion, filtering, negative processing, edge
detection, and binarization. Step 3 involves image feature extraction through methods such as
grayscale threshold segmentation, gray level co-occurrence matrix calculation, and area
detection to obtain key features of the images. Subsequently, wear clustering based on the
extracted features is performed to classify the wear parts in the images. We employ CNNs and
SVMs as the two classification methods for model training and testing. Finally, performance
evaluation is conducted to determine the accuracy and effectiveness of the models, and an
intelligent tool wear automatic classification prediction model is confirmed. The research
process is illustrated in Fig. 3.
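To make the wear-clustering step concrete, the following MATLAB sketch groups per-image feature vectors into three wear levels with K-means. It is a minimal illustration under stated assumptions: the feature matrix X is a random placeholder standing in for the grayscale, GLCM, and area-detection features, and mapping clusters to wear labels by their mean wear area is one plausible convention, not the authors' documented procedure.

% Minimal K-means wear-clustering sketch; X is placeholder data standing in
% for the per-image grayscale, GLCM, and area-detection feature vectors.
X = rand(45, 16);                          % 45 tool images x 16 features (assumed)
rng(1);                                    % reproducible cluster assignments
[idx, C] = kmeans(X, 3, 'Replicates', 5);  % three wear levels, 5 restarts
% Order clusters by their mean value in the last feature column (assumed to
% be the wear area) so that cluster indices map to sharp/semidull/dull.
[~, order] = sort(C(:, end));
wearLabel = categorical(idx, order, {'sharp', 'semidull', 'dull'});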

Fig. 3. (Color online) Research process flowchart for the tool wear classification model.

• Image Acquisition: The actual tool edge wear conditions are captured using a digital
microscope by fixing the tool in a clamp. The type of tool being measured is a disposable
triangular milling insert. Images are captured in three specific directions: 1. corner, 2. minor
flank, and 3. major flank. A schematic diagram of the acquisition directions is shown in Fig. 4.
• Wear Measurement Calculation: The captured images, sized at 100 × 100 pixels, are
segmented into 10 × 10 pixel blocks. Using MATLAB, the grayscale values of the tool images
are calculated. From these, representative feature values, including the maximum and average
values, are computed for each of the 100 blocks, as shown in Fig. 5; a code sketch of this step
follows the GLCM description below.
• Gray Level Co-occurrence Matrix (GLCM): GLCM is a feature extraction method that
operates on gray-scale images.(36) These texture features are calculated by probability, which can
be defined as:

$\Pr = C_{ij}(d, \theta)$, (1)

where Cij is the co-occurrence probability between gray levels i and j.


$C_{ij} = \dfrac{P_{ij}}{\sum_{i,j=1}^{G} P_{ij}}$ (2)

Here, $P_{ij}$ represents the number of occurrences of gray levels i and j within the given d, θ, and G values.

Fig. 4. (Color online) Tool wear of the (1) corner, (2) minor flank, and (3) major flank.

Fig. 5. (Color online) (a) Tool edge wear grayscale matrix. (b) Matrix of degree of tool edge wear.
We extracted 14 texture features from the GLCM, namely, 1. angular second moment, 2.
contrast, 3. correlation, 4. sum of squares, 5. inverse difference moment, 6. sum average, 7. sum
variance, 8. sum entropy, 9. entropy, 10. difference variance, 11. difference entropy, 12–13.
information measures of correlation I and II, and 14. maximal correlation coefficient.
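To illustrate the two feature-extraction steps above (the 10 × 10 block grayscale statistics and the GLCM texture features), here is a minimal MATLAB sketch. The file name is hypothetical, and only the four GLCM properties built into graycoprops are shown; the remaining Haralick-type features in the list would have to be computed from the normalized matrix directly, as the entropy example at the end indicates.

% Feature-extraction sketch (hypothetical file name).
img = imread('tool_corner_01.png');            % captured tool image
if size(img, 3) == 3
    img = rgb2gray(img);                       % GLCM operates on gray levels
end
img = imresize(img, [100 100]);                % 100 x 100 pixels, as in the paper

% Block-wise grayscale features: 10 x 10 pixel blocks, max and mean per block.
blockMax  = zeros(10);
blockMean = zeros(10);
for r = 1:10
    for c = 1:10
        blk = img(10*(r-1)+1:10*r, 10*(c-1)+1:10*c);
        blockMax(r, c)  = max(blk(:));
        blockMean(r, c) = mean(blk(:));
    end
end

% GLCM texture features with distance d = 1 and angle theta = 0 degrees.
glcm  = graycomatrix(img, 'Offset', [0 1], 'NumLevels', 8, 'Symmetric', true);
stats = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
% 'Energy' is the angular second moment and 'Homogeneity' the inverse
% difference moment; other features come from the normalized GLCM of Eq. (2):
p = glcm / sum(glcm(:));                       % co-occurrence probabilities
glcmEntropy = -sum(p(p > 0) .* log(p(p > 0))); % GLCM entropy feature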

3.1 Support vector machine

SVM is an ML method based on statistical learning theory. It is theoretically robust, highly
adaptable, generalizes well, and has short training times. SVM is primarily used for data
classification and finds applications in face detection, image classification, and handwritten
recognition. The basic principle of SVM is to find a hyperplane in the feature space that
maximizes the margin to separate two classes of data. The following are SVM multiclass
classification methods: (1) One-Against-The-Rest where one class of samples is separated from
the rest; (2) One-Against-One where every two classes are paired for classification [For N
classes, N(N − 1)/2 SVM classifiers are needed, and the class with the most votes is chosen
during testing]; (3) SVM Decision Tree where binary decision trees are combined with SVM to
form a multiclass classifier (The drawback is that classification errors can affect subsequent
nodes); (4) Multiclass Objective Functions where the objective function is modified to
accommodate multiclass needs, but this approach has high computational complexity and is less
commonly used. For addressing multiclass problems, we adopt the directed acyclic graph SVMs
(DAG-SVMs) proposed by Agarwal et al.(37)
Traditional ML methods for image recognition require feature extraction in conjunction with
classifiers, enabling algorithms to make predictions using unseen data. Histogram of oriented
gradients (HOG) is a feature extraction technique that derives features by accumulating the
intensity of gradients in various orientations within image blocks. HOG calculates a histogram
of gradient directions for each block and uses it as a feature representation of the block. It divides
the image into small regions called “cells.” Since HOG features operate on local units of the
image, they maintain robustness to both geometric and photometric transformations.(38) The
main steps of computing HOG features from an image are as follows.
1. Compute Gradients: Calculate the gradient of each pixel intensity in both the horizontal (Gx)
and vertical (Gy) directions.
2. Compute Gradient Magnitude and Orientation: Use the Gx and Gy values of each pixel to
determine the gradient magnitude and orientation, where the magnitude indicates the edge
strength and the orientation indicates the edge direction:

Gradient magnitude: $\sqrt{G_x^2 + G_y^2}$; gradient angle: $\tan^{-1}(G_y / G_x)$. (3)
SVM is an ML technique aimed at correctly distinguishing different classes of data points by
identifying the optimal hyperplane. The process of combining HOG and SVM involves first
using HOG to extract features from images, and then inputting these features into the SVM
model for image classification.
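A minimal MATLAB sketch of this HOG-plus-SVM pipeline follows. The folder layout is an assumption, and fitcecoc (error-correcting output codes) is used here as the multiclass wrapper rather than the DAG-SVM variant adopted in the paper.

% HOG + multiclass SVM sketch (folder names are assumptions).
imds = imageDatastore('wear_images', 'IncludeSubfolders', true, ...
                      'LabelSource', 'foldernames');   % dull / semidull / sharp
feats = [];
for k = 1:numel(imds.Files)
    img = imread(imds.Files{k});
    if size(img, 3) == 3
        img = rgb2gray(img);
    end
    img = imresize(img, [100 100]);
    feats(k, :) = extractHOGFeatures(img, 'CellSize', [10 10]); %#ok<SAGROW>
end
labels = imds.Labels;

% 70/30 hold-out split, then train and test the multiclass SVM.
cv  = cvpartition(labels, 'HoldOut', 0.3);
mdl = fitcecoc(feats(training(cv), :), labels(training(cv)));
pred = predict(mdl, feats(test(cv), :));
accuracy = mean(pred == labels(test(cv)))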

3.2 Convolutional neural networks

CNNs are a type of deep learning model particularly well suited for image recognition and
processing. The advantages of CNNs can be summarized as follows.
• High Image Recognition Capability: CNNs exhibit strong performance in recognizing
various image patterns and transformations accurately.
• Effectiveness in Image Classification: In recent years, CNNs have dominated visual
recognition competitions, with winners frequently employing CNN architectures.
• Efficient Information Retention: CNNs retain substantial information without excessively
increasing the number of parameters, resulting in improved computational speed.
A typical CNN architecture is shown in Fig. 6. The structure and steps of each neural layer
are described below.

Fig. 6. (Color online) Tool wear CNN framework.
1. Convolutional Layers: These layers utilize multiple filters that slide over the input image to
extract local features. Each filter captures different features such as edges and corners.
2. Pooling Layers: Pooling layers reduce the dimensionality of feature maps by down sampling,
which decreases the number of parameters and computation load while preserving essential
features. Common pooling methods include max pooling and average pooling.
3. Fully Connected Layers: After several convolutional and pooling layers, the feature maps are
flattened and fed into fully connected layers for classification. These layers function similarly
to traditional neural network layers, integrating and classifying the input features.
4. Activation Functions: Activation functions, such as the rectified linear unit (ReLU), introduce
nonlinearity into the model, enhancing its representational power. The ReLU function is
defined as Eq. (4).
5. Loss Function and Optimization: The loss function (e.g., cross-entropy loss) measures the
difference between the predicted and true values. Network weights are updated through
backpropagation and optimization algorithms, such as gradient descent, to minimize this
loss.
By leveraging these components, CNNs can efficiently process and classify images; this makes
them a powerful tool in the field of computer vision.

$\mathrm{ReLU}(x) = \begin{cases} x & \text{if } x > 0 \\ 0 & \text{otherwise} \end{cases}$ (4)

The CNN model architecture utilized in this study comprises the following components:
batch normalization, convolutional layers, pooling layers, activation functions, loss function,
and fully connected layers.
• Batch Normalization: Applied to the training data, this technique accelerates the training
process and enhances model performance.
• Convolutional Layers: These layers perform feature extraction to obtain feature maps from
the input images.
• Pooling Layers: These layers are employed for down sampling to significantly reduce
computational load. We use max pooling because of its superior performance in practical
applications.
• Activation Functions: The ReLU function is used to enhance the network structure, prevent
gradient vanishing, and mitigate overfitting.
• Fully Connected Layers: The purpose of these layers is to classify the feature information
derived from the convolutional and pooling layers.
• Loss Function: The Softmax function is used to obtain probability values for multiclass
classification. The final model prediction categorizes tool wear into three classes representing
dull, semidull, and sharp conditions.
This comprehensive architecture leverages each component’s strengths to develop a robust
model for classifying tool wear levels based on image features.
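One way to assemble a network with exactly these components in MATLAB is sketched below; the filter counts and layer sizes are illustrative assumptions, since the paper does not report them.

% Illustrative CNN for the three wear classes (layer sizes are assumptions).
layers = [
    imageInputLayer([100 100 1])                  % grayscale tool image
    convolution2dLayer(3, 16, 'Padding', 'same')  % feature extraction
    batchNormalizationLayer                       % stabilizes and speeds training
    reluLayer                                     % nonlinearity, Eq. (4)
    maxPooling2dLayer(2, 'Stride', 2)             % downsampling
    convolution2dLayer(3, 32, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(3)                        % dull, semidull, sharp
    softmaxLayer                                  % class probability outputs
    classificationLayer];                         % cross-entropy loss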

4. Results

4.1 Experimental equipment

In this study, a high-magnification digital microscope was used throughout the experimental
process. Images were captured for 14 worn tool samples and one set of new tool samples,
focusing on the three cutting edges subjected to wear during the cutting process. After image
acquisition, image preprocessing was performed. The detailed specifications of the experimental
hardware are shown in Table 1.

4.2 Experimental environment and interface development

The system processor used in this study is an Intel® Core™ i5-8250U (1.6 GHz), paired with
an NVIDIA GeForce 2070 dedicated graphics card. The program interface for the tool wear
classification model was developed using MATLAB R2020 to design the graphical user
interface (GUI), as shown in Fig. 7.

Table 1
Specifications of the digital microscope.
Image sensor Micron 1.3 M High-resolution CMOS sensor
Microscope lens Confocal 80X–200X
Light source White SMD LED × 8 pcs
Hardware interface USB 2.0
Software Measurement & Capture
Cable length 1.5 m
Dimensions 31 mm (diameter) × 120 mm (height)
Weight 125 g

Fig. 7. (Color online) Interface for tool wear classification.

4.3 Experiments

4.3.1 Image preprocessing

According to the methodology described in Sect. 3, we developed a MATLAB-based tool
image edge detection process. Step one involves loading the original tool image file and
performing edge detection using the Sobel algorithm. The preprocessing results are shown in
Fig. 8.
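A minimal sketch of this preprocessing chain is given below; the paper names only the steps (grayscale conversion, filtering, negative processing, edge detection, binarization), so the file name, filter size, and thresholding choices here are assumptions.

% Preprocessing sketch; file name and parameter values are assumptions.
img = imread('tool_original.png');
if size(img, 3) == 3
    img = rgb2gray(img);             % grayscale conversion
end
img = medfilt2(img, [3 3]);          % filtering (3 x 3 median)
neg = imcomplement(img);             % negative processing
bw  = edge(neg, 'sobel');            % Sobel edge detection
bin = imbinarize(img);               % binarization (Otsu threshold)
imshowpair(bw, bin, 'montage')       % inspect results, cf. Fig. 8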
In this study, three different detection methods were employed to calculate the tool wear in
three directions. The final wear grade was determined by analyzing the obtained data.
Consequently, selecting different directions of samples will display the corresponding wear
classification and the number of samples. When there is a need to change the input samples, this
can be directly modified through the program interface to expedite the classification process.
The practical operation results are shown in Fig. 9.
The initial experiment samples consisted of images capturing the tool tip, secondary relief
face, and primary relief face of 15 tools. Images from three angles were obtained for each tool,
resulting in a total of 45 tool wear images. These images were classified into three categories
based on the severity of wear, ranging from severe to minor. The categories are dull, semi-dull,
and sharp, as illustrated in Fig. 10.
In addition, data augmentation techniques were employed to expand the existing tool
image samples, enhancing the dataset's diversity and improving the model's generalization capability.
In this study, the dataset was expanded using methods such as rotating 30 degrees to the right,
rotating 30 degrees to the left, horizontal flipping, vertical flipping, and contrast enhancement.
After data augmentation, the tool tip dataset contained 1488 images, the secondary relief face
dataset contained 1352 images, and the primary relief face dataset contained 1395 images. The
actual effect of data augmentation is shown in Fig. 11.

Fig. 8. (Color online) Tool wear image processing results.

Fig. 9. (Color online) CNN and SVM experimental models.

Fig. 10. (Color online) Three levels of tool wear.

Fig. 11. (Color online) Image data augmentation effect.
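The five augmentation operations can be reproduced with standard image-processing calls, as in the sketch below; the folder names and the contrast-stretch settings are assumptions.

% Five augmentations per source image (folder names are assumptions).
srcDir = 'tool_tip';  dstDir = 'tool_tip_aug';
files  = dir(fullfile(srcDir, '*.png'));
for k = 1:numel(files)
    img  = imread(fullfile(srcDir, files(k).name));
    augs = {imrotate(img, -30, 'bilinear', 'crop'), ... % rotate 30 deg right
            imrotate(img,  30, 'bilinear', 'crop'), ... % rotate 30 deg left
            flip(img, 2), ...                           % horizontal flip
            flip(img, 1), ...                           % vertical flip
            imadjust(img, stretchlim(img))};            % contrast enhancement
    for a = 1:numel(augs)
        imwrite(augs{a}, fullfile(dstDir, sprintf('%03d_aug%d.png', k, a)));
    end
end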

4.3.2 Parameter settings

In this study, an executable file (EXE) was compiled in the MATLAB environment. When
using the system’s CNN model, the training parameters can be adjusted in accordance with
different samples and dataset sizes, as shown in Fig. 12. The parameters that need to be set
include the maximum number of epochs, mini batch size, initial learning rate, and validation
data frequency.

Fig. 12. (Color online) Experimental parameter settings.
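In MATLAB, these four parameters map directly onto trainingOptions; the numeric values below are placeholders, since the paper leaves the actual settings to the GUI.

% Training-parameter sketch; the numeric values are placeholders.
options = trainingOptions('sgdm', ...
    'MaxEpochs',           20, ...     % maximum number of epochs
    'MiniBatchSize',       32, ...     % mini batch size
    'InitialLearnRate',    1e-3, ...   % initial learning rate
    'ValidationFrequency', 30, ...     % validation data frequency (iterations;
    'Plots', 'training-progress');     %  takes effect when validation data are supplied)
% net = trainNetwork(trainSet, layers, options);  % layers as sketched in Sect. 3.2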

4.3.2.1 SVM classification

To ensure an equal number of samples for each wear level, we balanced the sample sizes by
reducing the number of samples in the other categories to match that in the smallest category. As
shown in Fig. 13, the Sharp category, with 186 samples, was the smallest. Therefore, the total
sample size was set to 558 (186 × 3) tool wear images. Of these, 70% were randomly selected for
training and 30% for testing. The results of this execution are shown in Fig. 14, including the
confusion matrix, accuracy, and execution time. Figure 14 also shows the prediction results for ten
randomly selected test samples; misclassified results are highlighted in red.

Fig. 13. (Color online) SVM classification model.
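This balancing-and-splitting step can be written compactly with splitEachLabel, as sketched below under the assumption that the Corner images sit in class-named folders; with 186 images in the smallest (Sharp) class, the balanced set has the 558 images reported above.

% Balance classes to the smallest category, then split 70/30 (folders assumed).
imds = imageDatastore('corner_wear', 'IncludeSubfolders', true, ...
                      'LabelSource', 'foldernames');      % dull / semidull / sharp
minCount = min(countEachLabel(imds).Count);                % 186 in the Sharp class
imdsBal  = splitEachLabel(imds, minCount, 'randomized');   % 186 x 3 = 558 images
[trainSet, testSet] = splitEachLabel(imdsBal, 0.7, 'randomized'); % 70% / 30%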
The classification accuracies of the SVM model for wear samples at three tool angles (Corner,
Minor Flank, and Major Flank) are shown in Fig. 15. It can be observed that the average
classification accuracies for the Minor Flank and Major Flank samples are similar, whereas that
for the Corner samples is significantly lower. Upon analysis, this decline in accuracy is attributed
to the issue of sample size distribution. After balancing the sample sizes to ensure an equal
number of samples for each wear level category, the total number of Corner samples was reduced
to 558 images; the total number of Minor Flank samples was 744 images; and the total number of
Major Flank samples was 837 images. This imbalance led to the SVM model underperforming
in classifying the Corner samples compared with the Minor and Major Flank samples.

Fig. 14. (Color online) SVM classification model.

Fig. 15. (Color online) SVM classification model test accuracy.

4.3.2.2 CNN classification

The comparison of classification accuracies of the CNN model for wear samples at the
Corner, Minor Flank, and Major Flank tool angles is shown in Fig. 15. The results indicate that
despite the sample size differences across the three directions, the CNN model achieved an
average accuracy of more than 93% for all three directions. This is in contrast to the SVM
model, where the classification accuracy varied significantly under similar conditions,
demonstrating a notable difference in classification accuracy between the two models given the
same sample sizes. The confusion matrices for the classification results of the CNN and SVM
models are presented in Fig. 16, while the classification quality is detailed in Table 2.
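The confusion matrices in Fig. 16 and the precision and recall values in Table 2 can be reproduced from the true and predicted label vectors, as in the sketch below; the two short label vectors are hypothetical stand-ins for real model outputs.

% Evaluation sketch; trueLbl and predLbl are hypothetical stand-ins.
trueLbl = categorical(["dull"; "dull"; "semidull"; "sharp"; "sharp"]);
predLbl = categorical(["dull"; "dull"; "semidull"; "sharp"; "semidull"]);
cm = confusionmat(trueLbl, predLbl);     % rows: true class, columns: predicted
precision = (diag(cm)' ./ sum(cm, 1))    % per-class precision
recall    = (diag(cm)  ./ sum(cm, 2))'   % per-class recall
accuracy  = sum(diag(cm)) / sum(cm(:))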

Fig. 16. (Color online) Confusion matrices of CNN and SVM models.

Table 2
CNN and SVM model classification qualities.
Classification quality index        SVM model    CNN model
Precision   Class = Dull            1.00         1.00
            Class = Semi-dull       0            1.00
            Class = Sharp           0            1.00
Recall      Class = Dull            1.00         1.00
            Class = Semi-dull       0            1.00
            Class = Sharp           0            1.00

5. Conclusions

In this study, machine vision technology was utilized to capture images of wear on the tool
tip, minor flank, and major flank. Following image preprocessing, wear measurements were
obtained using three techniques: grayscale value, gray-level co-occurrence matrix, and area
detection. The measurements were then clustered using the K-means algorithm to determine
final wear grades. Tool wear images were subsequently classified using SVM and CNN models.
A comparison of the classification accuracies of the SVM and CNN models revealed that,
despite differences in sample sizes across the three directions, the CNN model consistently
achieved an average accuracy exceeding 93% in all directions. This performance markedly
surpassed that of the SVM model, which exhibited significant variability in accuracy under
similar conditions, highlighting the CNN model’s superior robustness and reliability in tool wear
classification.
Compared with previous research on tool wear, this study focused specifically on tool
wear classification using machine vision, which yielded clear, actionable insights. It produced
concrete, quantifiable results demonstrating the superior accuracy of CNN over SVM,
making CNN highly relevant for practical applications. By directly comparing SVM and CNN,
we provide a solid basis for selecting the best AI model for tool wear classification. The emphasis
on measurable performance ensures direct applicability to industry needs, and the detailed, step-
by-step guide for implementing the classification model further enhances its practical value.
Future research could explore the integration of advanced deep learning techniques, such as
transfer learning or hybrid models, to further enhance the accuracy and efficiency of tool wear
classification. Additionally, incorporating real-time data from
industrial environments could lead to the development of adaptive models that continuously
learn and improve from new data. Investigating the use of 3D imaging and more sophisticated
image processing algorithms could also provide a more comprehensive analysis of tool wear by
capturing subtle wear patterns that 2D images might miss. Finally, expanding the scope to
include predictive maintenance systems, where the model not only classifies wear but also
predicts future tool performance and lifespan, could significantly benefit industrial applications.

References
1 A. Malakizadi, T. Hajali, F. Schulz, S. Cedergren, J. Ålgårdh, R. M'Saoubi, E. Hryha, and P. Krajnik: Int. J. Mach. Tools Manuf. 171 (2021) 103814. https://doi.org/10.1016/j.ijmachtools.2021.103814
2 L. Hao, L. Bian, N. Gebraeel, and J. Shi: IEEE Trans. Autom. Sci. Eng. 14 (2017) 1211. https://doi.org/10.1109/TASE.2015.2513208
3 J. Karandikar, T. McLeay, S. Turner, and T. Schmitz: Int. J. Adv. Manuf. Technol. 77 (2015) 1613. https://doi.org/10.1007/s00170-014-6560-6
4 Y. Zhou and W. Xue: Int. J. Adv. Manuf. Technol. 96 (2018) 2509. https://doi.org/10.1007/s00170-018-1768-5
5 R. Singh, A. Gehlot, S. V. Akram, L. R. Gupta, M. K. Jena, C. Prakash, S. Singh, and R. Kumar: Sustainability 13 (2021) 7327. https://doi.org/10.3390/su13137327
6 M. H. Abidi, M. K. Mohammed, and H. Alkhalefah: Sustainability 14 (2022) 3387. https://doi.org/10.3390/su14063387
7 C. Chen, H. Fu, Y. Zheng, F. Tao, and Y. Liu: J. Manuf. Syst. 71 (2022) 581. https://doi.org/10.1016/j.jmsy.2023.10.010
8 S. Shurrab, A. Almshnanah, and R. Duwairi: 12th Int. Conf. Inf. Commun. Syst. (IEEE 2021) 24–26. https://doi.org/10.1109/ICICS52457.2021.9464580
9 Y. Lei, B. Yang, X. Jiang, F. Jia, and A. K. Nandi: Mech. Syst. Signal Process. 138 (2020) 106587. https://doi.org/10.1016/j.ymssp.2019.106587
10 R. Munaro, A. Attanasio, and A. D. Prete: J. Manuf. Mater. Process. 7 (2023) 129. https://doi.org/10.3390/jmmp7040129
11 G. Serin, B. Sener, A. Ozbayoglu, and H. O. Unver: Int. J. Adv. Manuf. Technol. 109 (2020) 723. https://doi.org/10.1007/s00170-020-05449-w
12 J. Wang, J. Xie, R. Zhao, L. Zhang, and L. Duan: Rob. Comput. Integr. Manuf. 45 (2017) 47. https://doi.org/10.1016/j.rcim.2016.05.010
13 P. J. Bagga, M. A. Makhesana, H. D. Patel, and K. M. Patel: Mater. Today Proc. 44 (2021) 1549. https://doi.org/10.1016/j.matpr.2020.11.770
14 M. A. Giovanna, G. Terrazas, and S. Ratchev: Int. J. Adv. Manuf. Technol. 104 (2019) 3647. https://doi.org/10.1007/s00170-019-04090-6
15 W. Cai, W. Zhang, X. Hu, and Y. Liu: J. Intell. Manuf. 31 (2020) 1497. https://doi.org/10.1007/s10845-019-01526-4
16 Z. C. Lipton: Commun. ACM 61 (2018) 36. https://doi.org/10.1145/3233231
17 J. P. Davim and V. P. Astakhov: Machining: Fundamentals and Recent Advances, J. P. Davim, Ed. (Springer, 2008) pp. 29–57.
18 R. G. Silva, R. L. Reuben, K. J. Baker, and S. J. Wilcox: Mech. Syst. Signal Process. 12 (1998) 319. https://doi.org/10.1006/mssp.1997.0123
19 D. A. Dornfeld and E. Kannatey-Asibu: Int. J. Mech. Sci. 22 (1980) 285. https://doi.org/10.1016/0020-7403(80)90029-6
20 W. Y. Bao and I. N. Tansel: Int. J. Mach. Tools Manuf. 40 (2000) 2193. https://doi.org/10.1016/S0890-6955(00)00056-0
21 W. Rmili, A. Ouahabi, R. Serra, and R. Leroy: Measurement 77 (2016) 117. https://doi.org/10.1016/j.measurement.2015.09.010
22 K. Vacharanukul and S. Mekid: Measurement 38 (2005) 204. https://doi.org/10.1016/j.measurement.2005.07.009
23 K. Zhu and X. Yu: Mech. Syst. Signal Process. 93 (2017) 80. https://doi.org/10.1016/j.ymssp.2017.02.004
24 U. Župerl, K. Stepien, G. Munđar, and M. Kovačič: Processes 10 (2022) 671. https://doi.org/10.3390/pr10040671
25 T. Mohanraj, J. Yerchuru, H. Krishnan, R. S. Nithin Aravind, and R. Yameni: Measurement 173 (2021) 108671. https://doi.org/10.1016/j.measurement.2020.108671
26 Z. Li, R. Liu, and D. Wu: J. Manuf. Process. 48 (2019) 66. https://doi.org/10.1016/j.jmapro.2019.10.020
27 F. Kara: Mater. Test. 59 (2017) 903. https://doi.org/10.3139/120.111085
28 R. Dastres and M. Soori: Int. J. Eng. Res. 19 (2022) 1. https://hal.science/hal-03367778
29 D. Y. Pimenov, A. Bustillo, S. Wojciechowski, V. S. Sharma, M. K. Gupta, and M. Kuntoğlu: J. Intell. Manuf. 34 (2022) 2079. https://doi.org/10.1007/s10845-022-01923-2
30 J. Wang, J. Xie, R. Zhao, L. Zhang, and L. Duan: Rob. Comput. Integr. Manuf. 45 (2017) 47. https://doi.org/10.1016/j.rcim.2016.05.010
31 G. Xu, H. Zhou, and J. Chen: Eng. Appl. Artif. Intell. 74 (2018) 90. https://doi.org/10.1016/j.engappai.2018.05.007
32 F. Aghazadeh, A. Tahan, and M. Thomas: Int. J. Adv. Manuf. Technol. 98 (2018) 3217. https://doi.org/10.1007/s00170-018-2420-0
33 Q. Chen, Q. Xie, Q. Yuan, H. Huang, and Y. Li: Symmetry 11 (2019) 1233. https://doi.org/10.3390/sym11101233
34 L. Li, S. Chen, M. Deng, and Z. Gao: Grain Oil Sci. Technol. 5 (2022) 44. https://doi.org/10.1016/j.gaost.2021.12.001
35 Y. H. Tsai, N. Y. Lyu, S. Y. Jung, K. H. Chang, J. Y. Chang, and C. T. Sun: Proc. IEEE Int. Conf. Adv. Intell. Mechatronics (China, 2019) 103. https://doi.org/10.1109/AIM.2019.8868602
36 N. Iqbal, R. Mumtaz, U. Shafi, and S. M. H. Zaidi: PeerJ Comput. Sci. 7 (2021) e536. https://doi.org/10.7717/peerj-cs.536
37 N. Agarwal, V. N. Balasubramanian, and C. V. Jawahar: Pattern Recognit. Lett. 112 (2018) 184. https://doi.org/10.1016/j.patrec.2018.06.034
38 N. Laopracha, K. Sunat, and S. Chiewchanwattana: IEEE Access 7 (2019) 20894. https://doi.org/10.1109/ACCESS.2019.2893320

About the Authors

Yung-Hsiang Hung received his Ph.D. degree in industrial engineering and
management from National Chiao-Tung University, R.O.C., in 2002. He is a
professor in the Department of Industrial Engineering and Management at
National Chin-Yi University of Technology, Taiwan, R.O.C. His main research
interests are in statistical process control, service quality management, process
capability analysis, semiconductor manufacturing management, and process
parameter optimization design. (hys502@ncut.edu.tw)

Mei-Ling Huang received her M.S. and Ph.D. degrees in industrial
engineering from the University of Wisconsin–Madison and National Chiao-
Tung University, respectively. She is now affiliated with the Department of
Industrial Engineering & Management at National Chin-Yi University of
Technology. Her research interests include quality management, quality
engineering, data mining, and medical diagnosis. (huangml@ncut.edu.tw)

Wen-Pai Wang received his M.S. and Ph.D. degrees in industrial engineering
from Kansas State University and National Chiao-Tung University,
respectively. He is currently a professor in the Department of Industrial
Engineering and Management at National Chin-Yi University of Technology.
His research interests are in fuzzy set theory, product development and
operations/manufacturing management, and decision theory.
(wangwp@ncut.edu.tw)

Hsiao-Dan Hsieh received her M.S. degree from the Department of Industrial
Engineering and Management, National Chin-Yi University of Technology,
Taichung City, Taiwan, in 2019. Her research interests are in deep learning and
image recognition. (Huliouo@gmail.com)
