
ISTANBUL TECHNICAL UNIVERSITY  GRADUATE SCHOOL OF SCIENCE

ENGINEERING AND TECHNOLOGY

OPTIMIZATION OF PLASTIC INJECTION PROCESS PARAMETERS USING


THE TAGUCHI METHOD, MACHINE LEARNING AND GENETIC
ALGORITHM

M.Sc. THESIS

Burak SEL

Department of Mechanical Engineering

Materials and Manufacturing Programme

DECEMBER 2021
ISTANBUL TECHNICAL UNIVERSITY  GRADUATE SCHOOL

OPTIMIZATION OF PLASTIC INJECTION PROCESS PARAMETERS USING


THE TAGUCHI METHOD, MACHINE LEARNING AND GENETIC
ALGORITHM

M.Sc. THESIS

Burak SEL
503171305

Department of Mechanical Engineering

Materials and Manufacturing Programme

Thesis Advisor: Prof. Dr. Ersan ÜSTÜNDAĞ

DECEMBER 2021
ISTANBUL TECHNICAL UNIVERSITY  GRADUATE SCHOOL

PARAMETER OPTIMIZATION IN PLASTIC INJECTION USING THE
TAGUCHI, MACHINE LEARNING AND GENETIC ALGORITHM
METHODS

M.Sc. THESIS

Burak SEL
503171305

Department of Mechanical Engineering

Materials and Manufacturing Programme

Thesis Advisor: Prof. Dr. Ersan ÜSTÜNDAĞ

DECEMBER 2021
Burak SEL, an M.Sc. student of the ITU Graduate School of Science Engineering and
Technology, student ID 503171305, successfully defended the thesis entitled
“OPTIMIZATION OF PLASTIC INJECTION PROCESS PARAMETERS USING THE
TAGUCHI METHOD, MACHINE LEARNING AND GENETIC ALGORITHM”,
which he prepared after fulfilling the requirements specified in the associated
legislation, before the jury whose signatures appear below.

Thesis Advisor : Prof. Dr. Ersan ÜSTÜNDAĞ ..............................


Istanbul Technical University

Jury Members : Prof. Dr. Altan ÇAKIR .............................


Istanbul Technical University

Prof. Dr. Mehmet DEMİRKOL ..............................


Işık University

Date of Submission : 16 December 2021


Date of Defense : 15 December 2021

To my dear family…

FOREWORD

I would like to thank my advisor, Prof. Ersan Üstündağ, who generously shared his
knowledge and experience with me and who supported and guided me under all
circumstances during my graduate education.
I would like to thank my family, who have stood by me through every difficulty I have
faced throughout my life, have made every sacrifice for me and have given me their
full support.
Finally, I would like to thank the Ecoplas Automotive family for their support in many
areas, especially my experimental studies, while I was preparing this thesis.

December 2021 Burak SEL

TABLE OF CONTENTS

Page

FOREWORD............................................................................................................. ix
TABLE OF CONTENTS ......................................................................................... xi
ABBREVIATIONS ................................................................................................. xiii
SYMBOLS ................................................................................................................ xv
LIST OF TABLES ................................................................................................. xvii
LIST OF FIGURES ................................................................................................ xix
SUMMARY ............................................................................................................. xxi
ÖZET .................................................................................................................... xxiii
1 INTRODUCTION.......................................................................................... 1
1.1 Literature Review ............................................................................................. 1
2 METHOD ....................................................................................................... 9
2.1 Taguchi Method ............................................................................................... 9
2.1.1 Taguchi Experimental Design Steps.......................................................... 10
2.2 The Genetic Algorithm .................................................................................. 14
2.2.1 Steps of Genetic Algorithms...................................................................... 15
2.2.2 Operators and Terminologies .................................................................... 16
2.2.3 Encoding .................................................................................................... 19
2.2.4 Reproduction (Breeding) ........................................................................... 20
2.2.5 Search Termination (Convergence Criteria).............................................. 26
2.2.6 Parameter Selection in Genetic Algorithms .............................................. 27
2.2.7 Comparison of Genetic Algorithms with Other Optimization Techniques .... 28
2.2.8 Advantages and Limitations of Genetic Algorithms ................................. 29
2.3 Machine Learning .......................................................................................... 30
2.3.1 Supervised Learning .................................................................................. 31
2.3.2 Unsupervised Learning .............................................................................. 31
2.3.3 Semi-supervised Learning ......................................................................... 32
2.3.4 Reinforcement Learning ............................................................................ 32
2.3.5 Artificial Neural Networks ........................................................................ 32
3 EXPERIMENTAL RESULTS AND ANALYSIS ..................................... 36
3.1 Plastic Raw Material ...................................................................................... 36
3.2 Plastic Part...................................................................................................... 37
3.3 Plastic Injection Machine ............................................................................... 38
3.4 Measurement Equipment ............................................................................... 38
3.5 Experimental Method ..................................................................................... 39
3.5.1 Taguchi Experimental Design ................................................................... 39
3.5.2 Full Factorial Design ................................................................................. 49
3.6 Machine Learning Model ............................................................................... 53

3.6.1 Introduction of the Dataset ........................................................................ 53
3.6.2 Splitting the Dataset ................................................................................... 54
3.6.3 Normalization ............................................................................................ 54
3.6.4 Building the Machine Learning Model...................................................... 55
3.6.5 Measures of Error in Prediction ................................................................. 58
3.7 Optimization of Process Parameters Using ML and GA ............................... 59
4 DISCUSSION AND FUTURE WORK ...................................................... 65
REFERENCES ......................................................................................................... 67

ABBREVIATIONS

ABS : Acrylonitrile Butadiene Styrene


ANOVA : Analysis of Variance
ANN : Artificial Neural Network
DOE : Design of Experiment
GA : Genetic Algorithm
MAE : Mean Absolute Error
ML : Machine Learning
MSE : Mean Square Error
OA : Orthogonal Array
RMSE : Root Mean Square Error
S/N : Signal to Noise

SYMBOLS

L : Orthogonal Array
d : The total trial number
a : The number of levels of factors
k : Number of factors
yi : Responses for the given factor level combination
°C : Temperature (in degrees Celsius)
sec : Second
bar : Pressure
mm : Millimeter
mm/sec : Millimeter/Second

LIST OF TABLES

Page

Table 2.1 Types of problems and their respective signal-to-noise (S/N) ratio functions [39]. ..... 13
Table 3.1 Orthogonal array L27 (39) of Taguchi control factors (i.e., process
parameters). ........................................................................................................ 41
Table 3.2 Taguchi process parameter trial results. .................................................... 42
Table 3.3 S/N ratios for Taguchi control factors. ...................................................... 44
Table 3.4 Response table for S/N ratios and the ranking of the factors (“smaller is
better”)................................................................................................................ 46
Table 3.5 Analysis of variance for warpage value (mm), using adjusted SS for tests.
............................................................................................................................ 47
Table 3.6 1st trial set created with full factorial design. ............................................ 49
Table 3.7 2nd trial set created with full factorial design. ........................................... 51
Table 3.8 A portion of the dataset employed in the early machine learning
analysis. .............................................................................................................. 53
Table 3.9 A portion of the dataset used in the later machine learning analysis. ....... 54
Table 3.10 The details of the artificial neural network employed in the ML model
development. ...................................................................................................... 55
Table 3.11 Comparison of measured and predicted values of warpage. The
predictions are from the testing data of the ML model. ..................................... 56
Table 3.12 The ranges of the input parameters employed in the GA optimization. . 59
Table 3.13 An example of a uniform crossover type. ............................................... 61
Table 3.14 Results of 30 independent runs of GA. ................................................... 62

LIST OF FIGURES

Page

Figure 2.1 An orthogonal array L8 (27) [38]. ............................................................. 12


Figure 2.2 Flowchart of a genetic algorithm [41]. .................................................... 17
Figure 2.3 Encoding-decoding relation. .................................................................... 19
Figure 2.4 Encoding types in a genetic algorithm..................................................... 19
Figure 2.5 The generic structure of an artificial neural network, sometimes also
called a deep neural network. ............................................................................. 33
Figure 2.6 Visualization of the gradient descent technique. ..................................... 34
Figure 2.7 Demonstration of the effect of the value of the learning rate on the search
process. ............................................................................................................... 35
Figure 2.8 Effect of the learning rate value on the evolution of the cost function. .. 35
Figure 3.1 Grille lath plastic part used in this study. ................................................ 37
Figure 3.2 Grille lath plastic part placed on the height gauge fixture to measure its
warpage. ............................................................................................................. 37
Figure 3.3 A typical Welltec 500SE II plastic injection machine. ............................ 38
Figure 3.4 Height gauge fixture. ............................................................................... 38
Figure 3.5 Double column digital height gauge. ....................................................... 39
Figure 3.6 Taguchi orthogonal array design as shown in Minitab............................ 41
Figure 3.7 Measuring points to determine the warpage value at sectional view. Here,
the z direction is vertical. ................................................................................... 42
Figure 3.8 Main effects for S/N ratios. ...................................................................... 46
Figure 3.9 Evolution of loss as a function of epochs during the training and testing
of the ML model................................................................................................. 56
Figure 3.10 Evolution of warpage value after each generation in GA...................... 62
Figure 3.11 Covariance matrix, including the inter-parameter correlation numbers
for the parameters in Table 3.14. ....................................................................... 64

OPTIMIZATION OF PLASTIC INJECTION PROCESS PARAMETERS
USING THE TAGUCHI METHOD, MACHINE LEARNING AND GENETIC
ALGORITHM

SUMMARY

Plastics are formed by many different shaping methods according to their type and
intended purpose. One of these methods is injection molding. Plastic injection is a
production method in which molten polymer is injected into a mold cavity, where it
cools before being removed. Dimensional and visual defects (sink marks, flow marks,
warpage, short shots, burn marks, weld lines, air bubbles, jetting, etc.) occur in parts
produced by the plastic injection method for various reasons.

In this thesis, a study was conducted on the warpage problem, one of the critical
quality problems of injection-molded parts, which results in both structural and visual
defects. Warpage can be defined as deviations and dimensional distortions relative to
the original part design.
The main cause of warpage is the development of uneven internal stresses during the
filling, packing and cooling phases of the injection process. However, direct
measurement or accurate modeling of these stresses is rather difficult. For this reason,
the present study employed an indirect approach to determine the optimum values of
the plastic injection process input parameters that minimize warpage. First, using the
Taguchi method, the most influential process parameters were identified, in
decreasing order of influence: packing pressure, cavity steel temperature, cooling
time, packing time and back pressure.
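For a warpage response, the Taguchi analysis uses the “smaller is better” signal-to-noise ratio. A minimal Python sketch of that formula is given below; the warpage values in the example are hypothetical, not measurements from this thesis:

```python
import math

def sn_smaller_is_better(responses):
    """Taguchi S/N ratio for a 'smaller is better' characteristic
    such as warpage:  S/N = -10 * log10( (1/n) * sum(y_i^2) ).
    Larger S/N means less warpage, so levels are ranked by S/N."""
    n = len(responses)
    return -10.0 * math.log10(sum(y * y for y in responses) / n)

# Hypothetical warpage measurements (mm) for one trial row
print(sn_smaller_is_better([0.52, 0.55, 0.50]))
```

Averaging these S/N ratios per factor level yields a response table like Table 3.4, from which the factor ranking is read off.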
Next, guided by the Taguchi experimental design, a full factorial design set containing
a more comprehensive dataset was created, with the number of parameters and their
value ranges narrowed down. A second set of trials was run according to this design,
and the warpage values of the part were measured. A machine learning (ML) model
was then trained on all of the data. This model served as a simulation of the process
and was employed by the genetic algorithm (GA) in the search for the optimum
process parameters that yield minimum warpage.
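As a rough illustration of how a trained ML model can serve as a GA’s fitness function, the sketch below substitutes a simple hand-made linear predictor for the thesis’s neural network; the parameter names, bounds and coefficients are illustrative assumptions, not the values of Table 3.12:

```python
import random

random.seed(42)

# Hypothetical stand-ins for the optimized parameters and their ranges:
# packing pressure (bar), cooling time (sec), packing time (sec), back pressure (bar)
BOUNDS = [(30.0, 60.0), (10.0, 25.0), (3.0, 8.0), (5.0, 15.0)]

def predict_warpage(params):
    """Stand-in for the trained ML model; in the thesis, the ANN's
    prediction plays this role as the GA fitness function."""
    p_pack, t_cool, t_pack, p_back = params
    return 0.9 - 0.004 * p_pack - 0.01 * t_cool - 0.02 * t_pack + 0.005 * p_back

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    # uniform crossover: each gene is taken from either parent
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(ind, rate=0.1):
    # occasionally resample a gene uniformly within its bounds
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

def optimize(pop_size=30, generations=50):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=predict_warpage)           # minimize predicted warpage
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=predict_warpage)

best = optimize()
print(best, predict_warpage(best))
```

Replacing `predict_warpage` with the trained model’s prediction function recovers the scheme used here: the GA never runs the physical process, only the surrogate.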
As a result of these studies, the most influential parameters on warpage were
determined to be packing pressure, cooling time, packing time and back pressure. On
the other hand, parameters such as melt temperature and injection pressure were
observed to have little effect on the process. An interesting observation was that, in
contrast with the literature, an increase in the applied pressure had a negative effect
on the warpage of the part.

PROCESS OPTIMIZATION IN PLASTIC INJECTION USING THE TAGUCHI,
MACHINE LEARNING AND GENETIC ALGORITHM METHODS

ÖZET

Plastics have been used very widely in many areas of our lives from past to present.
Among the main reasons for this widespread use are their broad range of properties,
such as insulation, light weight, resistance to impact and corrosion, and formability,
as well as their ease of processing. In addition to the serious cost advantages these
properties provide, the user- and environment-friendly nature of plastics has enabled
their use in packaging, building and construction, automotive, aerospace, home
appliances, electrical and electronics, textile, agriculture, healthcare and many other
sectors.
Plastics, also called polymers, are materials obtained by breaking the bonds of
monomer structures formed by carbon (C), hydrogen (H), oxygen (O), nitrogen (N)
and other organic and inorganic elements and converting them into long, chained
structures. According to their response to heat, they are divided into two groups:
thermosets and thermoplastics. Once thermosets have been heated and shaped, they
cannot be reheated and returned to their original form or converted into another shape.
Thermoplastics, which are solid at room temperature, begin to melt at high
temperatures and become molten. When heated, their viscosity drops, the atomic
chains separate from one another and the material begins to flow. Likewise, when
cooled, the chains solidify again and the material returns to a solid state. Because of
these properties, thermoplastics can be shaped repeatedly. Examples of
thermoplastics, the most widely used class of plastic materials, include acrylonitrile
butadiene styrene (ABS), polyethylene (PE), polyvinyl chloride (PVC), polymethyl
methacrylate (PMMA) and polycarbonate (PC).
Plastics are formed by many different shaping methods according to their type and
intended use. One of these methods is injection molding. Plastic injection is a
production method in which plastic raw material melted by heat is injected into a
mold, takes the shape of the mold cavity and, after being cooled in the mold for a
certain time, is removed from the mold as a plastic product. During the injection stage,
the granular plastic material in the feed hopper falls into the screw channels inside
the barrel. As the screw channels gradually narrow, pressure begins to build on the
raw material filling them. The raw material, melted by the resistance heaters on the
barrel and by this pressure, passes through the check-ring (torpedo) assembly at the
tip of the screw and begins to accumulate in front of the screw. As the accumulated
melt is compressed, the screw moves backward. When enough material has
accumulated in front of the screw to fill the mold, the injection cylinder pushes the
screw forward, forcing the melt through the nozzle and the runner into the mold.
Pressure is applied to the screw for a further period so that the part reaches the desired
dimensions and is free of visual defects (sink marks, warpage, shrinkage, etc.). At this
stage, the plastic melt injected into the mold completely fills the mold cavity and
solidifies in the mold after a certain time. This time varies according to the properties
of the raw material, the part dimensions and the part weight. In the next step, the
applied pressure is removed and the plastic part is allowed to cool in the mold. After
cooling, the mold is opened so that its two halves separate, and the finished plastic
product is ejected from the mold with the help of cores or ejector pins.
Dimensional and visual defects (sink marks, flow marks, warpage, short shots, burn
marks, weld lines, air bubbles, jetting, etc.) occur in parts produced by the plastic
injection method for various reasons. These defects can have many causes; they may
originate, for example, from the part design, the mold, the raw material, the machine
or the process. Among these, the injection stage is where the greatest number of
variables is present and where observing and identifying the effect of these variables
on part quality is most difficult.
In this thesis, parameter optimization was performed in the plastic injection process
in order to minimize warpage, one of the critical quality problems of plastic parts,
which leads to both structural and visual defects. Warpage can be defined as deviation
of the dimensions of the produced part from those in the original part design, or as
dimensional distortion. If a molded part does not meet its dimensional tolerances, it
cannot be used as a final product; this can lead to serious costs and time losses,
potentially extending to the redesign of the mold and the part. Part design, wall
thickness, gate design, gate location, process parameters and ambient conditions are
among the factors affecting part warpage. In essence, the main cause of part warpage
is the uneven stresses that arise during the filling, packing and cooling stages of the
injection process. Flow-induced residual stresses and stretched polymer molecules
become trapped in the part due to unbalanced filling and packing, which are related
to the pressure and density distribution. After the part is released, these residual
stresses cause shape distortions, usually at the edge regions of the part, depending
also on various ambient conditions. The smaller these residual stresses are, the smaller
their effect on the shape change of the part will be.
For this reason, in this thesis various experimental and theoretical methods were used
to determine the optimum values of the parameters in the plastic injection process so
as to minimize part warpage. First, using the Taguchi method, the aim was to
minimize variation (deviation from the target value) in the product and the process
by optimally selecting the values of the controllable factors against the uncontrollable
factors that create this variation. From the data obtained in these trials, the most
important parameters for the plastic part used were found to be, in order, packing
pressure, cavity steel temperature, cooling time, packing time and back pressure. In
addition, the parameter ranges in which the parameters have a greater effect on
warpage were also determined.
Building on the Taguchi experimental design, two separate full factorial experimental
design sets containing a more comprehensive dataset were created, with the number
of parameters and their value ranges narrowed down. The first trial set was created
according to the Taguchi experiment set, and the second trial set according to the
results of the first trial. A machine learning model was developed using the resulting
dataset. This model served as a simulation of the process. The equation obtained from
the machine learning model was used as the fitness function in the genetic algorithm
method, and the genetic algorithm was used to search for the optimum process
parameters that yield the minimum warpage value.
As a result of these studies, the most influential parameters on warpage were
determined to be, in order, packing pressure, cooling time, packing time and back
pressure. On the other hand, it was observed that parameters such as melt temperature
and injection pressure had very little effect on the process. In addition, contrary to the
literature, increasing the pressure applied to the plastic part used in this study affected
the warpage of the part negatively.

1 INTRODUCTION

Plastic injection molding (PIM) is a common manufacturing technique. It involves the injection
of molten polymer into a mold, followed by cooling. With this method, highly complex plastic
parts can be produced to rather precise dimensions in a wide variety of sizes and categories.
Plastic injection molding has many advantages, such as low cost, rapid serial production, and
products with high-quality surfaces and mechanical properties. It is a constantly developing
process, driven by technological advances and customer demand.

On the other hand, PIM is a rather complex process that is difficult to model. It can also produce
defects such as short shots, burrs, sink marks, weld lines, warpage and dimensional changes.
Many researchers have worked on optimizing PIM and its input parameters to reduce or
eliminate these defects. The present study focuses on warpage and its elimination.

1.1 Literature Review

This section presents a brief overview of the most recent and relevant literature on reducing
defects in PIM, including warpage.

Ming-Chih Huang and Ching-Chih Tai (2001) [1] used Taguchi’s experimental method to
analyze variation in warpage using five control parameters — melt temperature, mold
temperature, filling time, packing time and packing pressure — in addition to the gate
dimensions, to see whether these affect warpage variance. They initially reduced the number
of trials and then determined the effective factors using analysis of variance (ANOVA). Their
study indicated that the most influential parameter was packing pressure, with a 15.6%
influence, whereas the gate dimensions had a much smaller effect.

Tuncay Erzurumlu and Babur Ozcelik (2006) [2] employed the Taguchi optimization technique
to minimize warpage and sink marks on plastic parts. They used a Taguchi orthogonal array,
the signal-to-noise (S/N) ratio, and ANOVA. Their predictions were checked with a
confirmation test to evaluate the effectiveness of the Taguchi optimization technique.

S. H. Tang et al. (2007) [3] also investigated warpage defects on plastic parts, likewise using
the Taguchi method. Four parameters — melt temperature, filling time, packing pressure and
packing time — were chosen for testing in a mold manufactured for that study, and it was shown
that the most important parameter was melt temperature. Also, as in the previous study, the
filling time had a minor effect on warpage.

Hasan Oktem et al. (2007) [4] studied the correlation between shrinkage and warpage in the
context of the warpage problem, again using the Taguchi optimization technique. Packing
pressure and packing time were found to be the significant parameters, with the help of the
signal-to-noise (S/N) ratio and analysis of variance (ANOVA).

Babur Ozcelik and Ibrahim Sonat (2009) [5] also focused on the warpage problem and used
the Taguchi experimental method. They examined the effects of packing pressure, packing
time, mold temperature and melt temperature. The material used in their study was a PC/ABS
blend. They showed that the most important parameter was packing pressure.

Zhao Longzhi et al. (2010) [6] used the Moldflow PIM simulation software and the orthogonal
experiment technique together to optimize the process parameters. They determined the best
gate location for reducing sink marks in decorative parts of an automobile. The results showed
that the melt temperature was the most influential process parameter among injection time,
holding pressure, mold temperature and cooling time for the sink marks problem in PIM.

Z. Shayfull et al. (2011) [7] evaluated the warpage of an ultra-thin shell part by using Moldflow
to establish the optimum parameters in PIM. They analyzed two situations with four parameters
and five parameters, respectively. In the first situation, mold temperature had the most
significant effect on warpage, but in the other situation, melt temperature was the most
significant parameter. This study also demonstrated that the temperature differences between
the cavity and core sides had no significant effects on warpage.

Radhwan Hussin et al. (2012) [8] were also interested in warpage defects. They used a Taguchi
orthogonal array and Moldflow Insight software (MPI) to reduce process variation and
enhance process effectiveness and capability. In that paper, the ambient temperature and the
runner size were observed to be important in addition to the six other process parameters
identified by Ming-Chih Huang [1]. The runner size (with a 14.6% effect) was found to be the
second most important parameter after melt temperature.

Zhi Shan et al. (2014) [9] studied the relationship between weld lines and process parameters
and employed the Taguchi orthogonal method to optimize the process parameters. The weld
line problem was converted into numerical data, based on the VBScript of the Moldflow
processing module, in order to quantitatively evaluate the severity of the weld line. In the end,
the optimal process parameters were evaluated with Moldflow software.

G. S. Dangayach and Lalit Guglani (2015) [10] described improvements in the effectiveness
of an injection-molded polycarbonate energy meter base, focusing mainly on reducing the
overall cycle time. Moldflow software and the design of experiments (DOE) method were used
to predict processing parameters and evaluate the impact of different parameters on cycle
time, sink marks and overall quality indexes.

Ko-Ta Chiang and Fu-Ping Chang (2007) [11] applied a systematic methodology to determine
the effect of processing parameters on shrinkage and warpage, and to find optimum values of
the process parameters, based on response surface methodology (RSM) coupled with
sequential approximation optimization (SAO).

D. Mathivanan and N. S. Parthasarathy (2009) [12] attempted to minimize sink mark levels by
optimizing processing variables through a second-order response surface regression model
(RSRM).

Kuo-Ming Tsai and Hao-Jhih Luo (2015) [13] used RSM and an artificial neural network
(ANN) to obtain an accurate model that predicts the form of a plastic lens. Initially, the Taguchi
method was used to eliminate insignificant parameters. As a result, it was seen that the ANN
model had better fitness and accuracy than RSM.

B. H. M. Sadeghi (2000) [14] developed a backpropagation artificial neural network (BPANN)
model for estimating the quality and robustness of injection-molded plastic parts. The model
was trained on input/output data taken from CAE software simulations rather than from
experiments.

S. J. Liao et al. (2004) [15] studied the warpage and shrinkage of injected thin-wall parts using
both the Taguchi optimization method and a BPANN. It was demonstrated that the BPANN is
more successful than the Taguchi method in determining optimum process conditions.

Fei Yin et al. (2011a) [16] investigated warpage prediction and optimization of an automobile
glove compartment cap using a BPANN trained on input/output data acquired from finite
element (FE) simulations. Their warpage predictions were within an error of about 2%.

Fei Yin et al. (2011b) [17] proposed a hybrid backpropagation / genetic algorithm (BP/GA)
optimization method based on Moldflow, the Taguchi orthogonal method, a BPANN and a
GA. The proposed method can be used to minimize defects of plastic parts, as well as cycle
time and part cost, during PIM.

Rongji Wang et al. (2013) [18] developed a BPANN model using the Matlab Neural Network
Toolbox to understand the relationship between shrinkage and processing parameters in PIM.
The paper showed that the ANN model was more effective than Moldflow in forecasting
shrinkage. Also, in the BPANN model, the prediction accuracy increased as the number of
hidden-layer nodes increased.

Hasan Kurtaran et al. (2005) [19] attempted to minimize warpage of a bus ceiling lamp base by
using an ANN model and GA. To find the optimum processing parameters, a predictive model
based on a feed-forward ANN utilizing finite element analysis results was developed and
interfaced with a genetic algorithm.

Wen-Chin Chen and Shou-Wen Hsu (2007) [20] applied two neural network approaches, a
BPANN and a radial basis function network (RBFN), to light-emitting diode (LED) inspection.
The results showed that the recognition accuracy of the BPANN was higher than that of the
RBFN, and that the applied approach was faster than the traditional inspection system.

S. L. Mok et al. (2001) [21] implemented a hybrid ANN/GA model using the Visual Basic
programming language to determine the initial process parameters in PIM. The input values
were obtained from the part design, mold design, process parameters generated by a GA and
user input. As a result, it was shown that the time required for the determination of initial
process parameters could be reduced.

B. Ozcelik and T. Erzurumlu (2006) [22] attempted to minimize the warpage of thin shell plastic
parts by using a consistent optimization model that involved ANN and GA. Using the most
important parameters defined by finite element analysis based on ANOVA, they created a

predictive model to reduce the computational cost of the optimization process and then this
model was interfaced with GA to find the optimum process parameters.

Yonggang Peng et al. (2012) [23] used nonlinear model predictive control (NMPC) based on
diagonal recurrent neural networks (DRNN) to control the barrel melt temperature because of
its nonlinear and long time-delay characteristics. They then constructed a predictive model
employing the DRNN, with GA used as the rolling optimization tool.

R. Sedighi et al. (2017) [24] focused on the weld line problem in PIM. To obtain optimum gate
location they used finite element analysis, radial basis function network (RBFN) and GA.
Initially, the weld line that occurred on the part was described. Then, the optimum gate location
was efficiently determined by a GA using optimized gate location results supplying minimum
weld line length.

Bernardete Ribeiro (2005) [25] created a multiclass support vector machine (SVM) classifier
and compared it with RBFNs to predict faults in automotive parts. The SVM structures the
classifier so as to reduce the bounds on the training and generalization errors. The SVM was
preferred because its training was considerably faster than that of the RBFN.

Jie-Ren Shie (2008) [26] analyzed the contour distortions of polypropylene (PP) parts by
combining an RBFN and the sequential quadratic programming (SQP) technique. The suggested
method was more efficient than DOE and also reduced the total time needed to find an optimum
solution.

Maosheng Tian et al. (2017) [27] determined the optimal process parameters with respect to
product quality and energy productivity by using combined NSGA-II and multiple regression
models. This paper showed that the proposed method increased process consistency and
reduced energy consumption and product weight in PIM.

Huizhuo Shi et al. (2010) [28] combined an ANN model and the expected improvement (EI)
function method to reduce the warpage of injection-molded parts. As the EI function selected
new sample points to improve the accuracy of the ANN model, optimum solutions could be
approached. However, the multimodal, sharp-peaked nature of the EI function and some
network parameters were disadvantages for EI in terms of speed and stability.

Mohammad Saleh Meiabadi et al. (2013) [29] sought to obtain the optimum process
parameters for accurate part weight, reduced process cycle time and reduced injection pressure. The

proper selection of process parameters, together with the accuracy of the Moldflow simulations
and neural network predictions, ensured the effectiveness of the model. It was also indicated
that some parameters, such as the gate location and coolant point, could not be considered
because of the constraints of the machine tools and mold steel.

Patel Manjunath and Prasad Krishna (2012) [30] developed an ANN model to optimize
dimensional shrinkage variation, using forward mapping to predict shrinkage from process
parameters and reverse mapping to determine the optimum process parameters. The input and
output data for training the network were obtained from earlier research. As a result, the trained
network for optimizing the process parameters for minimum shrinkage decreased the mean
square error in either case.

Wen-Chin Chen et al. (2009) [31] and (2008) [32] employed several methods, including the
Taguchi method, ANOVA, BPANNs, GA, and the Davidon-Fletcher-Powell (DFP) method, to find optimal process
parameters. Firstly, the initial process parameters obtained from Moldflow simulations were
used in the Taguchi method to reduce the number of experimental runs. Secondly, these process
parameters were used to train and test the BPANNs. Optimum process parameters were then
obtained by combining BPANNs with DFP and GAs in the study at ref. [31], but the study at
ref. [32] excluded GAs. In both studies, control factors to find the product weight were injection
time, injection velocity, velocity pressure switch position and packing pressure. It was seen that
the integrated parameter optimization system both reduced shortcomings inherent in a typical
Taguchi method and yielded important quality and cost advantages.

W. C. Chen et al. (2012) [33] then developed a two-stage optimization method to determine
optimum process parameters by using a Taguchi orthogonal array, S/N ratio, BPANN and
particle swarm optimization (PSO). In the first stage, S/N ratio predictor was used together with
simulated annealing (SA) to determine the initial optimal process values and then the S/N
quality predictor was created by applying the BPANN after the first process parameter was
determined with the Taguchi method and S/N ratio. The purpose of the first stage was to
minimize the PIM process variance. In the second stage, the best quality settings were
identified by combining the BPANN and PSO. As a result, this study achieved the
quality specifications and lower cost than those in refs. [31] and [32].

Wen-Chin Chen et al. (2014) [34] next used an integrated optimization system in obtaining
optimum process parameters in the PIM process. The system used multi-input multi-output

(MIMO) variables. In this study, the Taguchi method, ANOVA and the S/N ratio were used to
determine the initial process parameters. Both the S/N ratio predictor and the quality predictor
were constructed using the BPANN. Initial optimum process parameters were obtained with a
combination of the S/N ratio predictor and GA. Then, to obtain the final process parameters,
the quality predictor was integrated with PSO, as in ref. [33].

It is also worth mentioning the work by this group (Wen-Chin Chen et al. 2009) [35] where
they developed a soft computing approach integrating the Taguchi method, BPANN, GA and
engineering optimization concepts for the optimal process parameters affecting the efficiency,
quality and cost of production. The engineering optimization concept integrated with soft
computing had an advantage in avoiding the possible shortcoming of the former optimization
approach. Firstly, Taguchi was used to provide training and test data for BPANN and to obtain
an initial population for use in GA. Secondly, the engineering optimization concept was
implemented to formulate the fitness function of the GA, while the BPANN was used to predict
the quality for given process parameters. Finally, the GA was used to obtain the optimum process parameters. In general,
this study demonstrated that the proposed method could assist engineers in reducing product
quality costs.

Finally, the same group (Wen-Chin Chen et al. 2016) [36] offered a multi-objective optimization method
to find optimal process parameters using the Taguchi method, RSM, and a hybrid GA-PSO.
Firstly, the experimental data obtained by the Taguchi method was used in RSM in order to
analyze and generate two quality predictors and two S/N ratio predictors. The quality predictor
was interfaced with GA to find an optimal combination of process parameters. Finally, four
predictors were combined with GA-PSO to reach the final optimum process.

Kuo-Ming Tsai and Hao-Jhih Luo (2014) [37] created an inverse model by combining an ANN
model with GA to find optical lens form accuracy in an injection molding process. Initially,
Taguchi parameter design was used to distinguish the most important factors affecting lens
form accuracy. Experimental data based on the results of the Taguchi design were then used as
training and test sets in the ANN model. The trained ANN model was then used in the fitness
function of the GA, and the process parameters optimizing the fitness function were obtained
through the evolutionary process of the GA.

In the present thesis, unlike previous studies, a hybrid model was created to minimize plastic
part warpage. First, the parameters that have a strong effect on warpage, and their levels, were
determined with the Taguchi experimental design method. Based on these results, more data
were collected by creating an experimental set with a full factorial design. These data were then
used to build the machine learning model serving as a process simulation. Finally, this
simulation model was combined with the genetic algorithm to search for the optimum process
parameters.

2 METHOD

2.1 Taguchi Method

Taguchi devoted great thought to the quality control of products in the manufacturing industry.
His philosophy brought many different engineering perspectives to solving quality
problems and thus resulted in a new quality culture. Fundamentally, Taguchi's philosophy is
constructed on several simple concepts. These concepts are:

“1. Quality should be designed into the product and not inspected into it.

2. Quality is best achieved by minimizing the deviation from a target. The product
should be so designed that it is immune to uncontrollable environmental factors.

3. The cost of quality should be measured as a function of deviation from the standard,
and the losses should be measured system-wide.

These three principles help to improve manufacturing processes, evaluate the factors affecting
the quality of the product and determine product parameters” [38].

The Taguchi method is essentially a standardized design of experiments (DOE) technique. The DOE,
also referred to as factorial design in the literature, is a statistical approach for describing and
investigating all possible situations in an experiment containing multiple factors. The full factorial
design is extremely costly and time consuming due to its requirement of a large number of
trials. As the number of factors affecting the system increases, the number of experiments grows
rapidly and costs increase even more. Besides, the interpretation of the experimental
results may be troublesome, and different designs used for the same experiment may not
give the same outcomes. Dr. Genichi Taguchi developed a creative technique known as the
Taguchi method to make the DOE more practical for many industries. In addition, the Taguchi
method removed the constraints of factorial experiments, and researchers obtained similar
results from the same experiments by using different designs.

The Taguchi method aims to minimize the variability in the product and process by optimally
selecting the values of the controllable factors against the uncontrollable factors that create
variability (difference from the target value) in the product and the process. This method, which
is based on experimental design, was obtained by adding concepts such as robust design and

orthogonal arrays to the fractional factorial experiment design method. The concept of
orthogonality is to determine the effects of factors on product performance independently from
other factors by making an equal number of observations for each level of each factor. The
robust design aims to eliminate variability in the product or process without interfering with
factors that cause variability. In other words, it makes the product or process “insensitive to
variability”.

The next section explains, in stages, how the Taguchi method is applied and according to which
criteria the results are evaluated.

2.1.1 Taguchi Experimental Design Steps

1. Determination of the Factors (Parameters)


2. Determination of the Factor-Levels
3. Selection of Orthogonal Array
4. Placement of Factors and Their Interactions
5. Performing the Tests

After determining the purpose related to solving the current problem, the factors or interactions
to be evaluated are determined using various methods. While choosing factors related to the
problem in this step, it is important to draw on prior knowledge and experience and to consult
the opinions of experts in the field. The better the factor (parameter) selection, the faster the
targeted optimum value can be reached.

After determining the factors affecting the performance characteristics, the number of levels of
these factors is determined. The levels of the factors can be two, three or more. The factor levels
determine the degrees of freedom, and the total degrees of freedom determine the size of the
experiment. In other words, the number of experiments to be carried out depends on the selected
factors and their levels. The degrees of freedom of a factor are obtained by subtracting 1 from
its number of levels. Also, alongside the individual effects of factors, interactions between
factors can be determined; these are called interaction effects.
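As an illustration of this counting rule (a sketch written for this discussion, not part of the thesis experiments), the minimum number of runs implied by a chosen set of factors can be computed in a few lines of Python; the function name is an arbitrary choice:

```python
# Illustrative sketch: the degrees of freedom (DOF) of a factor equal its
# number of levels minus 1, and the experiment must provide at least
# 1 + sum(DOF) runs (the 1 accounts for the overall mean).

def minimum_runs(factor_levels):
    """factor_levels: list with the number of levels of each factor."""
    dof_total = sum(levels - 1 for levels in factor_levels)
    return 1 + dof_total

# Seven two-level factors -> 1 + 7 = 8 runs, which the L8 array provides.
print(minimum_runs([2] * 7))  # 8
```

For four three-level factors the same rule gives 1 + 4 x 2 = 9 runs, matching the familiar L9 array.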

In the next step, the orthogonal array is selected. The number of selected factors and factor
levels affects the selection of the orthogonal array. As mentioned above, Dr. Taguchi made use
of a specific set of orthogonal arrays (OAs). By ingeniously combining orthogonal Latin
squares, he constituted a new set of standard OAs to be used for many experimental conditions.
An example is denoted by the symbol L8 or L8(2^7). The general symbol of an
orthogonal array is explained as follows:

Ld or Ld(a)^k

Here:

L: orthogonal array

d: the total number of trials

a: the number of levels of each factor

k: the number of factors

An OA with two-level factors is shown in Figure 2.1 as an example. This array, with eight
rows and seven columns, is used to design experiments containing up to seven two-level
factors. Each row specifies a trial condition, with the levels shown by the numbers in the
row. The vertical columns correspond to the factors examined in the study.

Balancing of all orthogonal array columns is achieved in two ways. Firstly, each column
contains an equal number of occurrences of each factor level. Secondly, any two columns
contain every possible combination of levels an equal number of times. For instance, each
column in an L8 array comprises an equal number of the two levels, and any pair of columns
contains the four possible combinations (1,1), (1,2), (2,1) and (2,2) an equal number of times.
When two columns of an array satisfy this condition, they are said to be orthogonal. In this
regard, all seven columns of an L8 are orthogonal to each other.
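This balance property can be checked programmatically. The following Python sketch (an illustration written for this discussion, not code from the thesis) encodes the standard L8(2^7) array and verifies that every pair of columns contains each level combination equally often:

```python
from itertools import combinations, product
from collections import Counter

# The standard L8(2^7) array with levels coded 1/2 (same layout as Figure 2.1).
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

def is_orthogonal(array):
    """Every pair of columns must contain each level combination equally often."""
    cols = list(zip(*array))
    for a, b in combinations(range(len(cols)), 2):
        counts = Counter(zip(cols[a], cols[b]))
        # All four combinations (1,1), (1,2), (2,1), (2,2) must occur,
        # and each must occur the same number of times (twice in an L8).
        if set(counts) != set(product((1, 2), repeat=2)):
            return False
        if len(set(counts.values())) != 1:
            return False
    return True

print(is_orthogonal(L8))  # True
```

Changing any single entry of the array destroys the balance, and the check above then fails for at least one pair of columns.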

An OA experimental design allows the reduction of variation caused by controllable factors
whose levels can be determined and checked during the experiment and in the final design of
the product or process. Uncontrollable factors such as noise, dust, moisture, ambient
temperature, etc. can be tackled in two ways. The first is to repeat the experimental study under
different noise conditions. In the second, the noise factors can be assigned to a second
orthogonal array (called an outer array), which is used in conjunction with an inner array.

Because OAs are used to specify both the original experimental conditions and the noise
factors, Taguchi terms the former and latter designs the inner and outer arrays, respectively.

Figure 2.1 An orthogonal array L8 (27) [38].

After selecting the orthogonal array, the factors and/or interactions are placed in the appropriate
columns using the linear graph and triangular tables developed by Taguchi.

At the stage of performing the tests, analysis of the data is made according to the appropriate
performance statistic. Choosing the correct performance statistic is important for a sound
interpretation of the trial results. Taguchi strongly recommends the use of the signal-to-noise
(S/N) ratio for analyzing repeated runs. The signal-to-noise ratio is a quality
index with which experimenters and designers can evaluate how changing a
particular design parameter affects the performance of the process or product.

When a trial is made, there are numerous outer and inner factors, not addressed in the trial,
that affect the result. These uncontrollable factors are termed noise factors, and their
influence on the measured quality characteristic is also called "noise". The signal-to-noise
ratio (S/N ratio) measures, in a controlled way, how robust the quality characteristic being
studied is to those influencing factors (noise factors) that are not under control.
Taguchi effectively implemented this concept to obtain the optimal condition from the trials.

The goal of any trial is to obtain the highest possible S/N ratio for the outcome. A high
S/N ratio denotes that the signal is much stronger than the random influences of the noise
factors. The product design or process operation with the consistently highest S/N ratio
provides the optimal quality with minimum variability.

The transformation of a set of observations into the S/N ratio is carried out in two stages. First,
the mean square deviation (MSD) of the set is evaluated, and then the S/N ratio is computed
from the MSD through an equation. The MSD is a statistical value that shows the
deviation from the target value. Because the MSD can be related to both the average and the
standard deviation of the data, it is a good measure of the performance of the dataset.

There are three different S/N functions depending on the purpose of the experiment. These are
the smaller-the-better, the larger-the-better, and the nominal-the-best. The S/N functions
suitable for these three problem types are presented in Table 2.1 [39].

Table 2.1 Types of problems and their respective signal-to-noise ratio functions [39].
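The three standard definitions can be implemented directly, as in the illustrative Python sketch below (written for this discussion, not code from the thesis; y is a list of measured responses, and the nominal-the-best statistic is given in one common mean-over-variance form):

```python
import math

def sn_smaller_the_better(y):
    """S/N = -10 log10( (1/n) * sum(y_i^2) ); used when smaller responses are better."""
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_larger_the_better(y):
    """S/N = -10 log10( (1/n) * sum(1/y_i^2) ); used when larger responses are better."""
    return -10 * math.log10(sum(1 / (v * v) for v in y) / len(y))

def sn_nominal_the_best(y):
    """S/N = 10 log10( mean^2 / variance ); used when a target value is best."""
    mean = sum(y) / len(y)
    var = sum((v - mean) ** 2 for v in y) / (len(y) - 1)
    return 10 * math.log10(mean * mean / var)

# Two identical responses of 10 give mean(1/y^2) = 0.01, so S/N = 20 dB.
print(round(sn_larger_the_better([10.0, 10.0]), 6))  # 20.0
```

In all three cases a larger S/N ratio indicates a more desirable, less noise-sensitive condition, which is why the same "highest S/N wins" rule applies to every problem type.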

The analysis of variance method (ANOVA) is used in the analysis and interpretation of the
results. The ANOVA is the statistical evaluation usually performed on the results of the
experiment to specify the relative percent influence of an individual factor and to separate the
important factors from the unimportant ones. The study of the ANOVA table for a given
analysis helps find out which of the factors need control and which do not. The method does
not directly analyze the data. Instead, it specifies the variability of the data. The analysis
determines the variance of controllable and noise factors. By evaluating the source and size of
variance, optimum process parameters can be better estimated.

In brief, the results of the experiments are evaluated to achieve the following objectives in the
Taguchi method:
1. To specify the tendency of the effect of factors and interactions under study.
2. To describe the important factors and their proportional effects on the variability of
results.

3. To determine the optimal situation for a product and process together with a prediction
of the contribution of individual factors and an estimate of the expected response under
the optimal situations.

As supplementary information, the performance at any full factorial combination can also be
estimated from the outcomes of the experiments performed. It should be emphasized, however,
that this approach may not always yield the optimal condition, since the OA covers only a small
fraction of all the possibilities. It simply helps in getting as close as possible to that condition
while requiring the minimum number of experiments (or trials).

2.2 The Genetic Algorithm

With the increasing use of data from complex and difficult processes, the search for better
solution methods grows day by day. At this point, instead of hard computing, the traditional
approach based on the principles of accuracy, certainty and inflexibility, the soft computing
approach, based on the ideas of approximation, uncertainty and flexibility, has gained even
more importance. In addition, because hard computing methods require more computation
time and cost than soft computing methods, soft computing has become a better alternative for
solving real-world problems. Genetic algorithms, which are based on evolutionary
computation, are among the soft computing methods.

The genetic algorithm, invented by John Holland, is a search algorithm based on foundations
of natural selection (survival of fittest) and natural genetics. These algorithms obtain and
manipulate a population of solutions and perform their search for better solutions based on a
“survival of the fittest” technique. In every generation, a new set of artificial models is
generated using pieces of the fittest of the old; occasionally a new part is tried for good measure
[40]. The genetic algorithm is a powerful and widely applicable algorithm that can follow
different search paths for each problem, using historical information
effectively. It has been demonstrated theoretically and empirically that genetic algorithms
perform powerful search in a complex state space. It should also be noted that GAs are not
limited by restrictive solutions in search space [40].

GAs handle linear and nonlinear problems by exploring all regions of the state space and
exploiting promising areas through the mutation, crossover, and selection operations applied to
individuals in the population. Before the GA method is applied to a problem, some basic criteria
related to the GA need to be determined. These basic criteria are the chromosome
representation, the selection function, the genetic operators that constitute the reproduction
function, the determination of the initial population, the termination criteria, and the evaluation
function [40].

The first step in employing a GA in an optimization problem is to represent all solutions in the
form of randomly generated bit strings of the same size (chromosomes). Here, each bit in a bit
string (the chromosome) refers to a gene. The gene pool should be as wide as possible so that
any solution in the search space can be created. By encoding the parameters, the
problem-specific information is translated into a form that can be used by the genetic algorithm.
All of these chromosomes make up the population. This first population must present a wide
variety of genetic material. The chromosomes evolve through several iterations or generations,
and in the meantime new generations (offspring) are created by applying crossover and
mutation methods. Crossover occurs by splitting two chromosomes and then combining one
half of each with the corresponding half of the other, while mutation occurs by flipping a single
bit of a chromosome. The best chromosomes, evaluated using certain fitness criteria, are kept,
while the others are removed from the population. These stages are repeated until one
chromosome has the best fitness value and is thus taken as the optimal solution for the
problem [40].
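The two operators just described can be sketched in Python as follows (an illustration written for this discussion, not code from the thesis; the bit strings and the mutation rate are arbitrary examples):

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def one_point_crossover(p1, p2):
    """Split two parent bit strings at one random point and swap the tails."""
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def bit_flip_mutation(chrom, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return ''.join(b if random.random() > rate else '10'[int(b)] for b in chrom)

c1, c2 = one_point_crossover('11111111', '00000000')
print(c1, c2)  # e.g. '11110000' and '00001111' for a crossover point of 4
```

Note that crossover preserves the total genetic material of the parents (here, eight 1-bits in total across the two offspring), while mutation is the only operator that can introduce a bit value absent from both parents.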

In this section, the basics of the genetic algorithm are presented, as well as the purposes it can
be used for. Its basic terminology and application stages are then briefly mentioned, and it is
shown how the genetic algorithm is applied step by step.

2.2.1 Steps of Genetic Algorithms

These steps are listed below and a flowchart showing the process of a GA is given in Figure 2.2.

1. All the potential solutions are coded into a chromosome and an initial population with
n chromosomes is created by a chosen random solution set.

2. All chromosomes in the population are evaluated with a pre-determined fitness function
f(x).
3. A new population is created by following the steps below:
a. [Selection] Two parent chromosomes are chosen from a population according to
their fitness. (A chromosome with a better fitness value will have a higher
chance of selection.)
b. [Crossover] Crossover operators take selected two individuals and produce two
offspring (children). (By recombining the information from two good “parent”
solutions, we aim to acquire an even better “offspring” solution.)
c. [Mutation] The mutation operator changes a chromosome to produce a new
solution.
d. [Accepting] New offspring are placed in the new population.
4. The newly created population replaces the old population.
5. If the termination condition is satisfied according to the fitness function, the process
stops and the best solution in the current population is returned.
6. Otherwise, fitness evaluation begins again and the process repeats from step 2
[41].
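The steps above can be condensed into a toy Python implementation (an illustration written for this discussion, not the optimization code used in this thesis); it maximizes the number of 1-bits in a chromosome, the classic "one-max" problem:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def evolve(fitness, n_bits=12, pop_size=20, generations=40, mut_rate=0.05):
    """Toy GA: binary tournament selection, one-point crossover, bit-flip mutation."""
    # Step 1: random initial population of bit-string chromosomes.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # step 3a: binary tournament selection
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randint(1, n_bits - 1)                        # step 3b: crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (random.random() < mut_rate) for g in child]  # step 3c: mutation
            new_pop.append(child)                                      # step 3d: accept
        pop = new_pop                                                  # step 4: replacement
    return max(pop, key=fitness)

best = evolve(sum)  # one-max: fitness is simply the number of 1-bits
print(sum(best))
```

With these (arbitrary) settings the population converges close to the all-ones optimum within a few dozen generations, illustrating how selection pressure plus crossover and mutation steadily improve fitness.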

2.2.2 Operators and Terminologies

In this section, basic terminologies and operators used to obtain an optimum solution for
possible termination conditions of a genetic algorithm are stated.

2.2.2.1 Individuals and genes

An individual (chromosome) represents a single solution in the search space. Individuals are
presented in two different forms: the phenotype (solution space) and the genotype (coding
space). The phenotype specifies the appearance of an individual.

A chromosome is expressed as a string of certain length where all the genetic information of an
individual is preserved. Each chromosome is made up of many genes. Genes are the building
blocks of the genetic algorithm and are the smallest information units in a chromosome
[42, 43].

Figure 2.2 Flowchart of a genetic algorithm [41].

2.2.2.2 Populations

The community that individuals form by coming together is called a population. The generation
of the initial population and the selection of the population size are important for GAs. The
population size varies according to the complexity of each problem. Ideally, the first population
should have as large a gene pool as possible so that the whole search space can be explored [41].
The more of the search space the algorithm has a chance to examine, the higher the probability
of approaching the optimum value. Otherwise, the population's lack of diversity will cause the
algorithm to explore only a part of the search space and miss the global optimum. It should also
be noted that as the population size increases, more cost, memory and time will be needed
for the calculations.

2.2.2.3 Fitness

The fitness of an individual in a GA corresponds to the value of an objective function for its
phenotype. To calculate the fitness, the chromosome is first decoded and then the objective
function is evaluated. The objective function values can be scalar or vectorial and are not
necessarily the same as the fitness values: fitness values are derived from the objective function
using a scaling or ranking function and can be held as vectors. The fitness shows how good the
solution is and how close the chromosome is to the optimal one [41].

2.2.2.4 Search space

In optimization problems, the best solution is sought within a wide solution set. The set of
all possible solutions is called the search space. Each point in the search space represents a
possible solution [41].

2.2.2.5 Search strategies

Search has several main purposes. One of these aims is to find the global optimum point.
Although this is not guaranteed for the types of models GAs work with, there is always the
possibility of a better solution being revealed in the next iteration.

Another purpose of the search is to achieve faster convergence when the objective function is
expensive to evaluate. In this case, however, the probability of local convergence, and thus of
settling on a suboptimal solution, increases.

Another purpose of searching is to increase diversity without moving away from a good
solution. Where the solution space contains several different optima with similar fitness, some
combinations of factor values in the model may yield more robust results than others, depending
on the choice made [41].

2.2.3 Encoding

In GAs, an encoding function is used to transform the object variables into a string code, and a
decoding function performs the reverse transformation. The relation between the two
representations is shown in Figure 2.3 [43].

Figure 2.3 Encoding-decoding relation.

Encoding is a process used to represent individual genes. This process can be implemented
using bits, numbers, trees, arrays, lists or any other objects. It should be noted here that the
encoding will depend mainly on the problem. For instance, genes can be encoded directly as
integer and real numbers [41]. Some encoding types explained in this section are shown in
Figure 2.4.

According to Goldberg [42], the fitness function for a particular encoding scheme is formed
based on two factors: value and layout. The encoding methods are briefly described
below.

Figure 2.4 Encoding types in a genetic algorithm.

Binary encoding: Each chromosome is represented by a binary (bit) string. In some
problems each bit in the string represents some characteristic of the solution, while in other
problems the whole string represents a number [41].
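As an example of the latter case, a binary chromosome is commonly decoded into a real-valued parameter on a given interval. The Python sketch below is illustrative (not code from the thesis), and the temperature range shown is an arbitrary example rather than a value used in this work:

```python
def decode(bits, lo, hi):
    """Map a bit string to a real value in [lo, hi] via its integer value."""
    value = int(bits, 2)  # interpret the chromosome as an unsigned integer
    return lo + value * (hi - lo) / (2 ** len(bits) - 1)

# A 10-bit gene representing, say, a melt temperature between 200 and 260 degrees C
# (hypothetical range for illustration):
print(decode('0000000000', 200.0, 260.0))  # 200.0
print(decode('1111111111', 200.0, 260.0))  # 260.0
```

The resolution of such an encoding is (hi - lo) / (2^n - 1), so longer bit strings give finer-grained parameter values.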

Octal encoding: Chromosomes consist of strings of octal numbers (0-7).

Hexadecimal encoding: Chromosomes are made of strings of hexadecimal numbers (0-9, A-F).

Permutation encoding: Each chromosome is represented as a string of numbers denoting
positions in a sequence. Permutation encoding is only useful for problems with a particular
ordering. The crossover operators used with this encoding are partially mapped crossover,
cycle crossover, and order crossover.

Value encoding: Each chromosome is expressed as a string of values, which can be integers,
real numbers, characters, etc. The values may be objects belonging to some complicated
structure, real numbers, or anything else related to the problem. For problems involving
complicated values such as real numbers, the values can be encoded directly, whereas encoding
such problems with binary strings would be very difficult; value encoding is therefore very
suitable for some special problems. On the other hand, apart from integer values, which can use
the same crossover operator as binary encoding, it may be necessary to develop problem-specific
crossover and mutation operators for this encoding [41]. It should also be noted that value
encoding can be used to find the weights of artificial neural networks. Here, the values in the
chromosome correspond to the weights of the inputs [43].

Tree encoding: This encoding is essentially used for evolving program expressions in genetic
programming.

2.2.4 Reproduction (Breeding)

The breeding process is the most important stage in GAs, because in this cycle the parents are
selected and the offspring (children) are created. Finally, the new individuals replace the old
ones in the population.

2.2.4.1 Selection

In the selection process, pairs of fit individuals in the population are chosen for crossover. The
aim of selection is to favor fitter individuals in the population, in the hope that their offspring
will have higher fitness. The chromosomes selected from the population for reproduction are
chosen randomly according to their fitness function: the higher the fitness, the greater the
chance an individual has of being selected [41].

Selection methods are divided into proportionate selection and ordinal-based selection. While
proportionate selection chooses individuals based on their fitness values in the population,
ordinal-based selection chooses individuals based on their rank within the population. The
various selection methods are briefly explained as follows.

Roulette wheel selection: The principle of the roulette selection technique is that selection is
performed according to each individual's fitness value in the population. The fitness value of
each individual is divided by the total fitness of the population, and an area of the roulette wheel
is reserved for each individual in direct proportion to its fitness value. The wheel is spun as
many times as there are individuals in the population, and the individual under the wheel marker
at each spin is taken into the parent pool for the next generation. The individual with the greatest
fitness value is not guaranteed to be selected; it only has a higher chance of being selected.
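A minimal Python sketch of this technique (illustrative only, not code from the thesis; the two-individual population and its fitness values are toy examples) is:

```python
import random

random.seed(3)  # fixed seed so the illustration is reproducible

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    spin = random.uniform(0, total)     # where the wheel marker lands
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit               # each slice is proportional to fitness
        if spin <= cumulative:
            return individual
    return population[-1]               # guard against floating-point round-off

counts = {'A': 0, 'B': 0}
for _ in range(10000):
    counts[roulette_select(['A', 'B'], [3.0, 1.0])] += 1
print(counts)  # 'A' is chosen roughly three times as often as 'B'
```

With fitness values 3.0 and 1.0, individual 'A' occupies three quarters of the wheel, so over many spins it is selected about 75% of the time, while 'B' still retains a nonzero chance.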

Random selection: This technique arbitrarily chooses parents within the population.

Rank selection: Very different fitness values create problems for the selection chances of
individuals in roulette wheel selection. In other words, if an individual with a very high fitness
value is present in the population, the probability of selecting the other individuals will be very
low. In rank selection, each individual is given a fitness value from 1 to N (from
worst to best). This selection slows down convergence but also prevents converging too
quickly; it protects diversity and provides a more successful search. In effect, potential parents
are selected and a tournament decides which parent will be chosen for the next
generation [41]. Ranking can be a convenient option for problems where the objective function
cannot be fully defined. In such problems, it does not make sense to base the entire selection
on the value of the objective function, because in some cases this may not be possible
[44].

Tournament selection: In this method selection pressure and population diversity can be
adjusted to fine-tune the GA search performance. Tournament selection is similar to ranking
selection in terms of selection pressure, but its calculation is more efficient and more suitable
for parallel applications [45]. Unlike roulette wheel selection, tournament selection provides
selection pressure by organizing a tournament among a number of individuals. The winner is
determined by the fitness value: the individual with the highest fitness value wins the
tournament. The tournament is repeated until the mating pool is filled for generating new
offspring. Generally, the tournament is organized between two individuals, but it can also be
held among an arbitrary number of individuals. The tournaments are organized independently
of each other, and as the tournament size increases, the selection pressure increases [44].
The mating pool, consisting of the tournament winners, has a higher average fitness than the
population. This fitness difference creates the selection pressure that drives the genetic
algorithm to improve the fitness of succeeding generations [41].
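
A minimal sketch of tournament selection (illustrative only, not code from this study); the tournament size `k` plays the role of the group size discussed above:

```python
import random

def tournament_select(population, fitnesses, k=2):
    """Hold a tournament among k randomly chosen individuals;
    the one with the highest fitness wins.
    A larger k raises the selection pressure."""
    contestants = random.sample(range(len(population)), k)
    best = max(contestants, key=lambda i: fitnesses[i])
    return population[best]
```

With `k` equal to the population size, the fittest individual always wins; with `k = 2` the outcome depends on which pair is drawn.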

Boltzmann selection: This selection method controls selection pressure by using similar
techniques to the ''simulated annealing method''. The simulated annealing algorithm is a local
search algorithm and it is designed to find the maximum and minimum values of functions with
many variables. In Boltzmann selection, the selection rate is controlled by constantly changing
temperature according to a preset schedule. The temperature is gradually lowered starting from
a high value. During this time, the selection pressure gradually increases. Thus, this allows the
GA to narrow the search space while staying close to the best region and maintaining an
appropriate degree of diversity [41].

Elitism: Although the probability of the best string being chosen and entering the mating pool
is very high, crossover or mutation can still destroy it. Elitism eliminates this possibility of an
undesirable loss of information: the best chromosome, or a few of the best chromosomes, are
copied unchanged into the new population, while the rest of the population is produced by the
other operators. Because elitism preserves the best solution found so far, it can significantly
improve the performance of the GA [41].
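
Elitism can be sketched as follows; this is an illustrative fragment (the helper name `apply_elitism` is hypothetical), assuming higher fitness is better, as in the selection methods above:

```python
def apply_elitism(population, fitnesses, offspring, n_elite=1):
    """Copy the n_elite fittest individuals unchanged into the next
    generation and fill the remaining slots with offspring."""
    order = sorted(range(len(population)),
                   key=lambda i: fitnesses[i], reverse=True)
    elites = [population[i] for i in order[:n_elite]]
    return elites + offspring[:len(population) - n_elite]
```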

2.2.4.2 Crossover

Crossover is the process of taking two individuals from the population and producing offspring
from them. After the reproduction process, the population is improved with better individuals.
Reproduction clones good strings but does not generate new ones. The crossover operator is
applied to the mating pool to produce better individuals [41]. Crossover takes place in three
steps:

1. The reproduction operator chooses a pair of individuals for mating.
2. A crossover position is determined randomly along the length of the strings.
3. Finally, the segments beyond the crossover position are exchanged between the two individuals.

In the next section, some common crossover techniques are described.

Single-point crossover: In this crossover method, both chromosomes are cut at a randomly
selected point and the parts after the cut are exchanged. The quality of the children depends on
the location of the crossover point [41].

Two-point crossover: In a two-point crossover, two crossover points are selected, and the
contents between these points are exchanged between the two mated parents to produce offspring
in the next generation. In the single-point crossover, the head and tail parts of the chromosome
do not pass to the offspring. If there is useful information on both the head and tail parts and
we want to transfer them to the offspring, the two-point crossover will help instead of the single-
point crossover [41].
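
The single-point and two-point operators described above can be sketched as follows (illustrative Python, not code from this study; parents are assumed to be equal-length strings or lists):

```python
import random

def one_point_crossover(p1, p2):
    """Cut both parents at one random point and swap the tails."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def two_point_crossover(p1, p2):
    """Swap only the segment between two random cut points, so head and
    tail genes of each parent can still reach its own offspring."""
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])
```

Note that both operators only rearrange genes: the combined gene content of the two children equals that of the two parents.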

Multi-point crossover (N-point crossover): The number of crossover points may be even or
odd. With an even number of crossover points, the points are selected randomly around a circle
and the segments between them are exchanged; with an odd number of crossover points, an
additional crossover point is always assumed at the beginning of the string [41].

Uniform crossover: In the uniform crossover, each gene of a child is generated by copying
the corresponding gene from one or the other parent, selected according to a randomly created
binary crossover mask of the same length as the chromosomes. When there is a 1 in the
crossover mask, the gene is copied from the first parent, and when there is a 0 in the mask, the
gene is copied from the second parent. The first child is created in this way and the second child
by the opposite process, so the offspring comprise a mixture of genes from each parent. A
different crossover mask is produced for each pair of parents [41].

Genes that are close together on the chromosome have a greater chance of passing together to
offspring in an N-point crossover. This may create an undesirable correlation between genes
that are near each other; in other words, the efficiency of an N-point crossover depends on the
position of the genes in the chromosome. A uniform crossover is a good option to avoid such
gene-position problems [41].
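
A minimal sketch of the mask-based uniform crossover described above (illustrative only, not code from this study; parents are assumed to be equal-length gene lists):

```python
import random

def uniform_crossover(p1, p2):
    """Generate a random binary mask the same length as the parents.
    Where the mask is 1 the gene comes from p1, where it is 0 from p2;
    the second child is built the opposite way."""
    mask = [random.randint(0, 1) for _ in p1]
    child1 = [a if m else b for a, b, m in zip(p1, p2, mask)]
    child2 = [b if m else a for a, b, m in zip(p1, p2, mask)]
    return child1, child2
```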

Three parent crossover: Each bit of the first and second of three randomly selected parents
is compared. If the two bits are the same, that bit is taken for the offspring; if they differ, the
bit is taken from the third parent [41].

Shuffle crossover: In shuffle crossover, a single crossover point is selected first. Then, after
the variables in both parents are shuffled randomly, they are exchanged mutually. Positional
bias is also prevented with this method, since the variables are reassigned randomly each time
a crossover is carried out [41].

Apart from these crossover methods, some crossover methods in the literature are as follows:

 Precedence preservative crossover (PPX)
 Ordered crossover
 Partially matched crossover (PMX)

2.2.4.3 Mutation

Mutation introduces new genetic structures into the population by randomly modifying some
of its building blocks. It thereby helps the search escape the trap of local minima and maintains
diversity in the population. However, it should be noted that mutation can also reduce the
variety in the population and cause the algorithm to converge toward some local optimum [41].
The types of mutation for different kinds of representation are stated below.

Flipping: Flipping converts “0” to “1” and “1” to “0” according to a mutation chromosome.
First, a parent is selected and a random mutation chromosome of the same length is created.
Wherever the mutation chromosome holds a “1”, the corresponding parent bit is inverted
(“0” becomes “1” and “1” becomes “0”) in the offspring; wherever it holds a “0”, the parent
bit is copied to the offspring unchanged [41].

Interchanging: Two different points are randomly determined on the string and the bits at
these points are exchanged to produce offspring [41].

Reversing: After a random location is selected, the bits next to that location are reversed and
then offspring is generated [41].
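
The flipping and interchanging operators can be sketched as follows (illustrative Python, not code from this study; bits are assumed to be stored as 0/1 integers and `pm` is the per-bit mutation probability):

```python
import random

def flip_mutation(parent, pm=0.1):
    """Flipping: build a random mutation chromosome (mask); wherever it
    holds a 1, the corresponding parent bit is inverted."""
    mask = [1 if random.random() < pm else 0 for _ in parent]
    return [b ^ m for b, m in zip(parent, mask)]

def interchange_mutation(parent):
    """Interchanging: swap the bits at two randomly chosen positions."""
    i, j = random.sample(range(len(parent)), 2)
    child = parent[:]
    child[i], child[j] = child[j], child[i]
    return child
```

Flipping can change the number of ones in the string, while interchanging only rearranges the existing bits.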

2.2.4.4 Replacement

In replacement, the last stage of the breeding cycle, not all of the parents and offspring can be
kept in the population: of the four individuals involved (two parents and two offspring), two
must be discarded. Two methods are used to decide which individuals remain in the population
and which are excluded: generational updates and steady-state updates.

In generational updates, in order to create the new generation from a population of size N, N
offspring are created and they completely replace their parents. In this method, an individual
can only create offspring with another individual of the same generation. The generational
genetic algorithm thus generates new offspring from the members of the old population and
places them in a new population, which becomes the old population once the entire new
population has been created.

In contrast to generational updates, in steady-state updates the worst or oldest individual in the
population is eliminated whenever new individuals are created and included in the population.
The steady-state update technique uses ordinal-based methods, such as the tournament method,
in both selection and replacement [41].

Random replacement: In this method, the children replace two randomly selected individuals
in the population. The parents are also candidates for selection, just like the children. This can
be useful when searching with small populations, since weak individuals can remain in the
population [41].

Weak parent replacement: In this method, the weak parent is replaced by the strong
offspring. Only the best two of the parents or children (a total of 4 individuals) remain in the
population. This process increases the overall fitness of the population when used with a
selection technique that selects both fit and weak parents for crossing [41].

Both parents replacement: Here, the children replace their parents, and each individual
participates in breeding only once, so the population and genetic material change constantly. If
used in conjunction with techniques that highlight fit parents, it can lead to problems such as
the loss of fit individuals [41].

2.2.5 Search Termination (Convergence Criteria)

The commonly used stopping conditions are stated the following:

 Maximum Generations: The genetic algorithm is stopped when a predetermined number
of generations has been produced.
 Elapsed Time: The genetic process is terminated when the predefined time is reached.
If the maximum number of generations is reached before the determined time has
passed, the process will end.
 No Change in Fitness: If the best value of the population does not change in the specified
number of generations, the genetic process ends. If the maximum number of generations
is reached before the determined number of generations with no changes is reached, the
process will end.

The termination or convergence criterion finally stops the search. A few termination
techniques are described below:

2.2.5.1 Best individual

The best individual convergence criterion stops the search when the minimum fitness value in
the population falls below the convergence value [41].

2.2.5.2 Worst individual

The worst individual criterion stops the search when the least fit individuals in the population
have a fitness value below the convergence threshold. While this ensures that the entire
population is above a certain minimum standard, there may still be no significant difference
between the best individual and the worst individual [41].

2.2.5.3 Sum of fitness

In this criterion, the search is considered to have converged if the sum of fitness in the
population is equal to or lower than the convergence value in the population record. This
ensures that almost all individuals are within a certain fitness range. Because a few new
individuals with low fitness values are likely to diminish the overall fitness, it is better to apply
this convergence criterion together with replacement of the weakest individuals [41].

2.2.5.4 Median fitness

In this approach, at least half of the individuals will be equal or better than the determined
convergence criteria for termination [41].
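
The stopping conditions above can be combined into a single check; the sketch below is illustrative only (the function name `should_stop` is hypothetical) and assumes a minimization problem, where the best fitness of each generation is appended to `best_history`:

```python
def should_stop(best_history, max_gens=200, stall_gens=25, target=None):
    """Combined termination test (minimization): stop on reaching the
    maximum number of generations, on no improvement of the best fitness
    over stall_gens generations, or on reaching a target fitness."""
    generation = len(best_history)
    if generation >= max_gens:                       # maximum generations
        return True
    if target is not None and best_history and best_history[-1] <= target:
        return True                                  # best-individual criterion
    if (generation > stall_gens                      # no change in fitness
            and min(best_history[-stall_gens:]) >= min(best_history[:-stall_gens])):
        return True
    return False
```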

2.2.6 Parameter Selection in Genetic Algorithms

An important step in the application of the genetic algorithm is to determine various parameters.
Since the parameters affect each other non-linearly, they cannot be optimized at the same time.
In the studies on parameter selection and adaptation in the literature, there is no consensus on
the best parameters that can be used in all problems [45]. These parameters, called control
parameters, are mentioned below:

Population Size: If this value is chosen too small, the probability of the algorithm getting
stuck in a local optimum increases. A large value improves the coverage of the solution space,
but increases the time and resources required for the solution.

Crossover Probability: The crossover probability (Pc) is a parameter that defines how often
the crossover will be done. The crossover probability determines the number of chromosomes
that will enter the mating pool. If this rate is too high, the probability of good building blocks
accumulating in a single chromosome is significantly reduced. If too low a rate is chosen, a
sufficient number of offspring cannot be produced [46].

Mutation Probability: The mutation probability (Pm) is the parameter that determines how
often genes are mutated. Some studies use a large mutation probability (Pm > 0.05) for large
populations (μ > 200), while for smaller populations (μ < 20) it was demonstrated that
performance improves when a smaller mutation rate (Pm < 0.002) is used. For a medium-sized
population, Pm = 0.001 is often selected [44].

Generation Range: The new chromosome ratio in each generation is called the generation
range. A high generation range means that a large number of chromosomes have changed [45].

Selection Strategies: In the generational strategy, the chromosomes in the current population
are completely replaced by offspring. This strategy is used together with the elitist strategy to
transfer good chromosomes to the next generation without getting lost. In a steady state strategy,
the worst chromosomes are renewed when new chromosomes join the population.

2.2.7 Comparison of Genetic Algorithms with Other Optimization Techniques

Genetic algorithms (GAs) basically imitate genetic and natural selection by a computer
program. The parameters of the problem are encoded inherently like a DNA – a linear data
structure, a vector or a string. When genetic algorithms are used to optimize, a cost function or
a fitness function is required. By means of the fitness function, the best solution candidates can
be selected from the population and at the same time candidates that are not so good can be
discarded.

An advantage of GAs over other optimization methods is that the fitness function can be
almost anything that can be evaluated by a computer, even something like a human judgment
that cannot be stated as a crisp program. Therefore, there is no specific mathematical limitation
on the features of the fitness function; it may be discrete, multimodal, etc. [41].

It is shown below how the genetic algorithm is different from conventional optimization
methods:

1. GAs work with encoded parameter sets rather than the parameters themselves.

2. GAs search over a whole population of points (strings), whereas almost all conventional
optimization techniques search from a single point. This feature plays an important role
in the robustness of GAs and offers a serious advantage over other conventional
methods in reaching the global optimum by avoiding local stationary points.

3. GAs use a fitness function for evaluation in contrast to derivatives or other auxiliary
knowledge. Because of that, the GAs can be faster in finding optimum values than
conventional techniques, especially when derivatives provide deceptive information.

4. GAs use probabilistic transition rules, in contrast to the deterministic rules applied in
conventional optimization methods [42].

2.2.8 Advantages and Limitations of Genetic Algorithms

Just as each method has its own advantages and disadvantages, so do GAs. The sensible thing
here is to evaluate the advantages and disadvantages of frequently used methods together and
to choose the method that is the most suitable and advantageous for the solution of the current
problem. Some advantages and disadvantages of GAs are listed below.

The advantages of genetic algorithms:

1. GAs can be parallelized (as the multiple offspring in a population behave like
independent individuals, a GA can cover the search space in many directions
simultaneously).
2. Can be used when the search space is wide and poorly understood.
3. Global optimum is discovered easily.
4. Convenient for multi-objective problems.
5. The fitness function is only used for evaluations.
6. Can be easily applied to different types of problems.
7. Performs well against noisy environments.
8. Resistant to getting stuck in local optima.
9. Performs very well for large-scale optimization problems.

The disadvantages of genetic algorithms are:

1. The challenge of describing the fitness function.
2. The difficulty of selecting the various parameters, such as the population size,
crossover rate, mutation rate and selection method, and ensuring their robustness.
3. Can be hard to include problem-specific information.
4. It needs to be used with a local search technique because it is not good at identifying
local optima.
5. A large number of objective (fitness) function evaluations is needed [41].

2.3 Machine Learning

Machine Learning (ML) is a branch of artificial intelligence which attempts to perform certain
tasks without direct programming. Specifically, using mathematical and statistical methods it
identifies hidden relationships within experimental data and creates a model to
represent/simulate a given process [47]. The algorithm of a ML model consists of three main
sections:

Decision Process: In general, ML is used for prediction or classification. Based on labelled
or unlabelled data points, the algorithm makes a prediction using the model it has developed.

Error Function: The error function is used to evaluate the accuracy of the prediction. If data
with known outcomes exist, an error function can be used for this step.

Model Optimization Process: To improve the fit between data and model predictions, the
weights used in calculations are adjusted. These weights are independently updated in the
subsequent steps. This optimization process continues until a pre-determined accuracy is
reached [48].

There are many different ML methods and algorithms at present; however, they can all be
classified within the following four categories: Supervised Learning, Unsupervised Learning,
Semi-supervised Learning and Reinforcement Learning [49].

2.3.1 Supervised Learning

Supervised learning is among the most successful ML methods. Here, the user provides the
input and the desired output, and the algorithm is then required to estimate the desired output
for a given input. Supervised learning is capable of predicting the output for a previously
unseen input. While it is often difficult to create proper datasets, supervised learning is easier
to understand and its performance is more easily measured than that of the other ML methods [50].

The most important supervised learning algorithms are:

- k-Nearest Neighbors
- Linear Regression
- Logistic Regression
- Support Vector Machines (SVMs)
- Decision Trees and Random Forests
- Artificial Neural Networks

2.3.2 Unsupervised Learning

In contrast to supervised learning, unsupervised learning requires input data only and no output
is specified. Its task is to tease out hidden relationships among the input data. There are many
successful applications of the unsupervised learning method including [50]:

- Clustering (K-Means, Hierarchical Cluster Analysis (HCA), DBSCAN)
- Visualization and dimensionality reduction (Principal Component Analysis – PCA,
Kernel PCA, Locally Linear Embedding – LLE, etc.)
- Anomaly detection and novelty detection (One-class SVM, Isolation Forest)

2.3.3 Semi-supervised Learning

Since it is time consuming and expensive to label data, semi-supervised learning employs
partially labelled data. Most algorithms under this class can be considered as a combination of
supervised and unsupervised learning algorithms [49].

2.3.4 Reinforcement Learning

This algorithm is quite different from the previous ones. Here, an “agent” is capable of
monitoring the process and can choose between different options. Based on its choice, it can be
rewarded or penalized. The agent’s goal is then to develop a strategy that maximizes its rewards.
Once developed, this strategy guides the agent in its future choices while the strategy itself is
continually updated to maximize rewards [49].

2.3.5 Artificial Neural Networks

This section offers more detail on artificial neural networks since this was the ML technique
employed in the present study.

Artificial neural networks (ANNs), a subset of machine learning, are inspired by the way
biological neurons in the human brain send signals to each other. Artificial neural networks
consist of three main types of layers: an input layer, one or more hidden layers, and an output
layer. The artificial neural network structure can be seen in Figure 2.5. Each node, or artificial
neuron, connects to the others and has an associated weight and threshold. If the output of a
node exceeds the specified threshold, that node is activated and its data is sent to the next layer
of the network. Neural networks rely on training data to improve themselves; once these
learning algorithms are fine-tuned for accuracy, they become powerful tools in computer
science and artificial intelligence, allowing high-speed classification and analysis of data.

2.3.5.1 Neural network working principle

Each individual node is essentially a regression model composed of input data, weights, a bias
(or threshold), and an output. The formula is as follows:

∑ᵢ₌₁ᵐ wᵢ xᵢ + bias = w1x1 + w2x2 + w3x3 + … + wmxm + bias (2.1)
where,
w: weight
x: input data

Once an input layer has been identified, weights are assigned that help determine the importance
of any variable. Data with larger weights contribute more to the output compared to other
inputs. All inputs are then multiplied by their respective weights and summed. The sum is
passed through an activation function that determines the output; if the output exceeds a
certain threshold, it activates the node, transmitting the data to the next layer in the network.
Here, the output of one node becomes the input of the next node. Binary Step Function, Logistic
Function, Hyperbolic Tangent Function and Rectified Linear Units (ReLU) are some of the
frequently used activation functions. ReLU was used as the activation function in this study.
The ReLU function's output ranges between 0 and positive infinity: if the input value x is
positive, it returns x; for all other inputs, it returns 0. The process of passing data from one
layer to the next makes this neural network a feed-forward network.
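
The computation of a single neuron, the weighted sum of Equation (2.1) followed by the ReLU activation, can be sketched as follows (illustrative only, not code from this study; the function names are hypothetical):

```python
def relu(x):
    """ReLU activation: returns x for positive inputs, 0 otherwise."""
    return x if x > 0 else 0.0

def neuron_output(inputs, weights, bias):
    """One artificial neuron: the weighted sum of the inputs plus the
    bias (Equation 2.1), passed through the ReLU activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return relu(z)
```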

Figure 2.5 The generic structure of an artificial neural network, sometimes also called a deep
neural network.

Most deep neural networks are feed-forward, meaning they only flow in one direction from
input to output. However, the model can also be trained by back propagation; that is, it moves
in the opposite direction from the output to the input. Backpropagation is a supervised learning
algorithm for ANN training. When designing a neural network, the weights are initially set to
random (or otherwise arbitrary) values. With these initial weights, a cost is obtained at the end
of the feed-forward process. The obtained cost value may be large, and one wants to reduce it.
The backpropagation algorithm looks for the minimum value of the cost function in the weight
space using a technique called the delta rule or gradient descent. The cost is reduced by
iteratively updating the weights and bias values. In the machine learning
concept, the gradient is the derivative of a function with more than one input variable.
Mathematically, it can be thought of as the slope of a function. Back propagation allows the
calculation and correlation of the error associated with each neuron. Thus, it allows the
parameters of the model(s) to be adjusted and fitted appropriately [51].

Figure 2.6 Visualization of the gradient descent technique.

For gradient descent to reach the local minimum of the cost function, the learning rate must be
set to an appropriate value. The learning rate determines the size of the step taken toward the
local minimum and is an important hyperparameter: if too large a rate is selected, the local
minimum can be overshot, while if the learning rate is too small, the training time of the model
increases (Figure 2.7). It therefore needs to be set to an appropriate value [52, 53].
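
A minimal 1-D sketch of gradient descent and the role of the learning rate (illustrative only; the quadratic test function below is an assumption, not taken from this study):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimal 1-D gradient descent: repeatedly step against the
    gradient, with lr as the learning rate discussed above."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimizing f(x) = (x - 3)^2, whose gradient is 2 * (x - 3):
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0, lr=0.1)
```

With lr = 0.1 the iterate settles at the minimum x = 3, while a too-large rate such as lr = 1.05 makes every step overshoot the minimum, so the search diverges.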

If the gradient descent algorithm works appropriately, the cost function should diminish after
each iteration. Whether the gradient descent algorithm works properly or not can be interpreted
by plotting the cost function (Figure 2.8).

Figure 2.7 Demonstration of the effect of the value of the learning rate on the search process.

Figure 2.8 Effect of the learning rate value on the evolution of the cost function.

3 EXPERIMENTAL RESULTS AND ANALYSIS

This chapter presents the experiments conducted, the analysis of the results and their
interpretation. After the plastic part was selected, the parameters and parameter ranges that
affect the warpage the most were determined. First, a Taguchi experimental design was created.
Then, with the selected parameters, a full factorial experimental design was established and
experiments were carried out. The collected data were used to build a machine learning model.
The variable weights obtained from the machine learning model were used in the fitness
function required for the genetic algorithm. After reaching the pre-determined number of
repetitions or the relevant value determined in the objective function, the values obtained were
inserted into the machine learning model and the adequacy of the parameters was tested. A loop
was created between machine learning and the genetic algorithm until the optimum warpage
value was reached.

3.1 Plastic Raw Material

ABS (acrylonitrile butadiene styrene) raw material was used in this study [54]. ABS is a light
and hard polymer that is widely used in plastic injection. The ratios of the substances it contains
can vary: acrylonitrile between 15% and 35%, butadiene between 5% and 30%, and styrene
between 40% and 60%. While acrylonitrile provides hardness as well as heat and chemical
resistance, butadiene makes the product tougher and more impact-resistant even at low
temperatures. Styrene gives
gloss and good surface quality to the plastic. Because ABS is hygroscopic, it absorbs moisture
and must be dried before processing. ABS resin moisture should be less than 0.1 percent before
injection molding to avoid surface defects. Therefore, this raw material was dried at 80-90°C
for 2-4 hours before the injection process as specified in the TDS (technical data sheet)
document.

3.2 Plastic Part

The plastic part studied in this project was the grille lath used on the front of cars. The
dimensions of the part were 785.3 x 35.1 x 19.5 mm and its wall thickness was 2.8 mm. A
slider gate and a single hot runner were also used during injection molding. While designing the
mold, the shrinkage rate used was 0.40% as specified by the TDS of the material. Images of the
plastic part are shown below (Figure 3.1 and Figure 3.2).

Figure 3.1 Grille lath plastic part used in this study.

Figure 3.2 Grille lath plastic part placed on the height gauge fixture to measure its warpage.

3.3 Plastic Injection Machine

A Welltec brand SEII series servo motorized plastic injection molding machine was used in the
production of the plastic parts. The machine's clamping force was 500 tons. An image of the
machine is shown below (Figure 3.3).

Figure 3.3 A typical Welltec 500SE II plastic injection machine.

3.4 Measurement Equipment

A fixture was made so that the part could stand freely on it and be measured without being
constrained in any way (Figure 3.4).

Figure 3.4 Height gauge fixture.

An Asimeto brand double column digital height gauge was used for the height measurements
of the parts. The smallest scale on the gauge was 0.01 mm and the measurement accuracy was
±0.02 mm. It provides precise movement with its micro-adjusted gear movement system. An
image of the height gauge and probe is shown below (Figure 3.5).

Figure 3.5 Double column digital height gauge.

3.5 Experimental Method

3.5.1 Taguchi Experimental Design

The Taguchi method is essentially a standardized fractional factorial design. It was obtained
by adding concepts such as robust design and orthogonal arrays to the fractional factorial
experimental design method. The concept of orthogonality is to determine the effect of each
factor on product performance independently of the other factors by making an equal number
of observations at each level of each factor. The robust design aims to eliminate variability in
the product or process without interfering with the factors that cause the variability.

The factorial design is extremely costly and time consuming due to its requirement of numerous
experiments. As the number of factors affecting the system increases, depending on the rapid
increase in the number of experiments, costs increase even more and applications become more
difficult. Besides, the interpretation of the experimental results may be troublesome, and
different designs used for the same experiment may not give the same outcomes. For this
reason, the Taguchi experimental design has been used to improve on factorial design in many
respects.

While creating the Taguchi experimental design, the parameters to be used in the process
experiment were determined first. In determining these parameters, both the TDS document
of the ABS raw material and the experience of experts in this field were consulted. In addition,
based on the information obtained from the literature review, the 9 parameters that can affect
the warpage of the plastic part the most were determined. These selected
parameters (factors) are: packing pressure, injection pressure, back pressure, injection velocity,
melt temperature, core side temperature, cavity side temperature, cooling time and packing
time.

In the second step, these selected factors were divided equally into 3 levels. The levels are
shown below.

Core Temperature (°C) : 20 – 45 – 70

Cavity Temperature (°C) : 20 – 45 – 70

Melt Temperature (°C) : 240 – 260 – 280

Injection Velocity (mm/sec) : 40 – 55 – 70

Packing Pressure (bar) : 60 – 100 – 140

Injection Pressure (bar) : 120 – 140 – 160

Back Pressure (bar) : 10 – 20 – 30

Cooling Time (sec) : 10 – 20 – 30

Packing Time (sec) : 6 – 12 – 18

In step 3, an orthogonal table consisting of 27 trials was created via the Minitab 16 program
(Table 3.1). A Taguchi design table taken from Minitab is shown in Figure 3.6.

Figure 3.6 Taguchi orthogonal array design as shown in Minitab.

Table 3.1 Orthogonal array L27 (3⁹) of Taguchi control factors (i.e., process parameters).

No  Core Temp. (°C)  Cavity Temp. (°C)  Melt Temp. (°C)  Injection Velocity (mm/sec)  Packing Pressure (bar)  Injection Pressure (bar)  Back Pressure (bar)  Cooling Time (sec)  Packing Time (sec)

1 20 20 240 40 60 120 10 10 6
2 20 20 240 40 100 140 20 20 12
3 20 20 240 40 140 160 30 30 18
4 20 45 260 55 60 120 10 20 12
5 20 45 260 55 100 140 20 30 18
6 20 45 260 55 140 160 30 10 6
7 20 70 280 70 60 120 10 30 18
8 20 70 280 70 100 140 20 10 6
9 20 70 280 70 140 160 30 20 12
10 45 20 260 70 60 120 30 10 12
11 45 20 260 70 100 140 10 20 18
12 45 20 260 70 140 120 20 30 6
13 45 45 280 40 60 140 30 20 18
14 45 45 280 40 100 160 10 30 6
15 45 45 280 40 140 120 20 10 12
16 45 70 240 55 60 140 30 30 6
17 45 70 240 55 100 160 10 10 12
18 45 70 240 55 140 120 20 20 18
19 70 20 280 55 60 160 20 10 18
20 70 20 280 55 100 120 30 20 6
21 70 20 280 55 140 140 10 30 12
22 70 45 240 70 60 160 20 20 6
23 70 45 240 70 100 120 30 30 12
24 70 45 240 70 140 140 10 10 18

25 70 70 260 40 60 160 20 30 12
26 70 70 260 40 100 120 30 10 18
27 70 70 260 40 140 140 10 20 6

To ensure that all transient effects on warpage had dissipated, the grid lath plastic parts were kept at room temperature, free of contact, for about 4-5 days. Afterwards, the warpage was determined using the coordinates of three points (A, B, C – see Figure 3.7) in the z direction, while the part was placed on the height gauge fixture (Figure 3.4) and measured with the digital height gauge (Figure 3.5). The geometry of the part and the processing conditions lead to negligible changes in the x and y coordinates of the measurement points. Furthermore, point C was found to be a far better indicator of warpage. As a result, only the z coordinates of points A and C were employed in Equation (3.1), which yielded the warpage values listed in Table 3.2 as well as in all the other tables that exhibit warpage.

Figure 3.7 Measuring points to determine the warpage value at sectional view. Here, the z
direction is vertical.

Warpage = |18.75 − (A_z − C_z)|  (in mm)   (3.1)

In this equation, the number “18.75” is the “ideal” difference (i.e., when there is no warpage)
between the z coordinates of points A and C as shown in the design drawing of the plastic
part.
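Equation (3.1) translates into a one-line function. The check below uses trial 1 of Table 3.2 (a sketch; the function name is chosen here for illustration):

```python
def warpage(a_z: float, c_z: float, ideal_diff: float = 18.75) -> float:
    """Warpage per Equation (3.1): |18.75 - (Az - Cz)|, all values in mm."""
    return abs(ideal_diff - (a_z - c_z))

# Trial 1 from Table 3.2: Az = 43.39 mm, Cz = 26.80 mm
print(round(warpage(43.39, 26.80), 2))  # 2.16
```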

Table 3.2 Taguchi process parameter trial results.


Trial No   Az (mm)   Cz (mm)   Az − Cz (mm) (= d)   Warpage Value (mm) = |18.75 − d|
1 43,39 26,8 16,59 2,16

2 43,39 27,65 15,74 3,01
3 43,38 31,41 11,97 6,78
4 43,35 29,34 14,01 4,74
5 43,36 30,05 13,31 5,44
6 43,36 34,25 9,11 9,64
7 43,33 25,17 18,16 0,59
8 43,21 40,72 2,49 16,26
9 43,21 44,23 -1,02 19,77
10 43,33 27,85 15,48 3,27
11 43,36 24,95 18,41 0,34
12 43,4 30,51 12,89 5,86
13 43,38 28,07 15,31 3,44
14 43,32 29,53 13,79 4,96
15 43,31 40,55 2,76 15,99
16 43,26 26,71 16,55 2,2
17 43,36 33,47 9,89 8,86
18 43,31 36,77 6,54 12,21
19 43,31 25,48 17,83 0,92
20 43,44 29,12 14,32 4,43
21 43,42 31,4 12,02 6,73
22 43,32 26,05 17,27 1,48
23 43,33 30,12 13,21 5,54
24 43,38 30,89 12,49 6,26
25 43,27 26,42 16,85 1,9
26 43,25 34,03 9,22 9,53
27 43,3 41,08 2,22 16,53

Measurement results were then assigned as dependent variables in the Minitab program. Next,
using the S/N quality index, the effects of the parameters on warpage were determined. In other
words, it was determined which factor and level value affects the warpage to what extent. There
are three different S/N ratios depending on the purpose of an experiment. In this study, the-
smaller-the-better S/N ratio was used, since a minimum value for warpage was the objective
(Equation (3.2)):

S/N = −10 log10[(1/n) Σ_{i=1}^{n} y_i²]   (3.2)

where,

n: the number of experiments

y_i: the observed data

The S/N ratios and the experiment set obtained from the Minitab 16 program are listed in Table 3.3. In addition, the main effect plots for the S/N ratios are shown in Figure 3.8. It is seen that the trial which yielded the minimum warpage (0.34 mm) was the 11th trial.
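Equation (3.2) can be implemented directly, as sketched below. (The S/N values in Table 3.3 were computed by Minitab from the recorded trials; this function is a generic implementation of the smaller-the-better formula for n observations.)

```python
import math

def sn_smaller_is_better(observations):
    """Smaller-the-better S/N ratio, Equation (3.2): -10*log10((1/n)*sum(y_i^2))."""
    n = len(observations)
    return -10 * math.log10(sum(y * y for y in observations) / n)

# For a single observation the formula reduces to -20*log10(y):
print(round(sn_smaller_is_better([10.0]), 2))  # -20.0
```

Note that a larger warpage always gives a smaller (more negative) S/N ratio, which is why "smaller is better" targets the maximum S/N.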

Table 3.3 S/N ratios for Taguchi control factors.

No   Core Temp. (°C)   Cavity Temp. (°C)   Melt Temp. (°C)   Injection Vel. (mm/sec)   Packing Pressure (bar)   Injection Pressure (bar)   Back Pressure (bar)   Cooling Time (sec)   Packing Time (sec)   Warpage Value (mm)   S/N Ratio

1 20 20 240 40 60 120 10 10 6 2,16 -12,4

2 20 20 240 40 100 140 20 20 12 3,01 -15,1

3 20 20 240 40 140 160 30 30 18 6,78 -19,9

4 20 45 260 55 60 120 10 20 12 4,74 -12,4

5 20 45 260 55 100 140 20 30 18 5,44 -17,4

6 20 45 260 55 140 160 30 10 6 9,64 -21,8

7 20 70 280 70 60 120 10 30 18 0,59 -11,4

8 20 70 280 70 100 140 20 10 6 16,26 -25,7

9 20 70 280 70 140 160 30 20 12 19,77 -27,1

10 45 20 260 70 60 140 30 10 12 3,27 -15,3

11 45 20 260 70 100 160 10 20 18 0,34 -10,5

12 45 20 260 70 140 120 20 30 6 5,86 -19,7

13 45 45 280 40 60 140 30 20 18 3,44 -14,3

14 45 45 280 40 100 160 10 30 6 4,96 -17,7

15 45 45 280 40 140 120 20 10 12 15,99 -25,5

16 45 70 240 55 60 140 30 30 6 2,2 -12,9

17 45 70 240 55 100 160 10 10 12 8,86 -20,6

18 45 70 240 55 140 120 20 20 18 12,21 -23,2

19 70 20 280 55 60 160 20 10 18 0,92 -11,7

20 70 20 280 55 100 120 30 20 6 4,43 -18,4

21 70 20 280 55 140 140 10 30 12 6,73 -19,9

22 70 45 240 70 60 160 20 20 6 1,48 -11,9

23 70 45 240 70 100 120 30 30 12 5,54 -19,1

24 70 45 240 70 140 140 10 10 18 6,26 -19,8

25 70 70 260 40 60 160 20 30 12 1,9 -12,1

26 70 70 260 40 100 120 30 10 18 9,53 -21,1

27 70 70 260 40 140 140 10 20 6 16,53 -25,9

[Figure: main effects plot for S/N ratios (data means) with nine panels, one per factor (core temperature, cavity temperature, melt temperature, injection velocity, packing pressure, injection pressure, back pressure, cooling time, packing time), each showing the mean S/N ratio at the three factor levels. Signal-to-noise: smaller is better.]

Figure 3.8 Main effects for S/N ratios.

When Figure 3.8 is examined, it is seen that factors such as injection velocity, injection pressure and melt temperature had very minor effects on the warpage problem, while factors such as packing pressure, back pressure, cooling time and packing time had significant effects.

To rank the influence of a given factor (or parameter) on warpage, the following calculation was performed: First, the S/N ratios of a given factor were estimated for each level. Next, the difference (called "delta") between the maximum and minimum of these values was obtained. Finally, the "delta" values were ranked from largest to smallest. The results are shown in Table 3.4.

Table 3.4 Response table for S/N ratios and the ranking of the factors (“smaller is better”).

Level   Core T (°C)   Cavity T (°C)   Melt T (°C)   Injection Velocity (mm/sec)   Packing Pressure (bar)   Injection Pressure (bar)   Back Pressure (bar)   Cooling Time (sec)   Packing Time (sec)
1 -18.12 -15.87 -17.20 -18.09 -12.47 -18.14 -16.81 -19.18 -18.40

2 -17.88 -17.72 -17.40 -17.45 -18.52 -18.41 -17.80 -17.70 -18.50
3 -16.55 -19.88 -18.66 -17.94 -22.47 -17.53 -18.87 -16.60 -16.57
Delta 1.55 4.01 1.46 0.64 10.00 0.98 2.06 2.59 1.93
Rank 6 2 7 9 1 8 4 3 5
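The delta-and-rank procedure above can be sketched in Python using the level-mean S/N ratios from Table 3.4 (only four of the nine factors are reproduced here for brevity):

```python
# Mean S/N ratio at each of the three levels, taken from Table 3.4.
level_means = {
    "Packing Pressure": [-12.47, -18.52, -22.47],
    "Cavity Temp.":     [-15.87, -17.72, -19.88],
    "Cooling Time":     [-19.18, -17.70, -16.60],
    "Back Pressure":    [-16.81, -17.80, -18.87],
}

# Delta = (max level mean) - (min level mean); a larger delta means a more
# influential factor.
deltas = {f: max(m) - min(m) for f, m in level_means.items()}
ranking = sorted(deltas, key=deltas.get, reverse=True)
print(ranking[0], round(deltas[ranking[0]], 2))  # Packing Pressure 10.0
```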

In addition, an analysis of variance (ANOVA) was performed on the data; the results are shown in Table 3.5. ANOVA is a statistical evaluation, usually based on the results of an experiment, used to determine the relative percent effect of a single factor and to separate the important factors from the unimportant ones. The P value in the ANOVA results is used to determine the presence of statistical significance and the strength of evidence for a difference, if any. The smaller the P value, the greater the evidence for a statistically significant difference. If the P value ranges from 0.05 to 0.1, the difference is of limited statistical significance. If the P value ranges from 0.01 to 0.05, there is a statistically significant difference. If the P value is between 0.001 and 0.01, there is a highly significant difference. When the results were evaluated, 4 of the factors employed yielded statistically significant P values (placed in the shaded rows of Table 3.5).

Table 3.5 Analysis of variance for warpage value (mm), using adjusted SS for tests.
Source DF Seq SS Adj SS Adj MS F P

Core Temp. (°C) 2 45.9 45.9 22.9 3.6 0.07


Cavity Temp. (°C) 2 133.9 133.9 66.9 10.4 0.01
Melt Temp. (°C) 2 3.8 3.8 3.8 0.3 0.8
Injection Velocity (mm/sec) 2 9.9 9.9 4.9 0.8 0.5
Packing Pressure (bar) 2 427.1 427.1 213.5 33.2 0.00
Injection Pressure (bar) 2 7.8 7.8 3.9 0.6 0.6
Back Pressure (bar) 2 17.1 17.1 8.5 1.3 0.3
Cooling Time (sec) 2 53.8 53.8 26.9 4.2 0.06
Packing Time (sec) 2 31.3 31.3 15.6 2.4 0.2
Error 26 51.5 51.5 6.4
Total 26 782.1
S = 2.54; R-Sq = 93.4%; R-Sq(adj) = 78.6%

A brief description of the various parameters used in ANOVA follows [55]:

- DF (Total Degrees of Freedom): DF is the amount of information in your data. This


information is used in the analysis to estimate the values of unknown population
parameters. Total DF is determined by the number of observations in the sample.

- Adj SS (Adjusted Sums of Squares): Measure of variation for different components of


the model. The order of the predictors in the model does not affect the calculation of the
adjusted sum of squares.

- Adj MS (Adjusted Mean Squares): This is a variance around the fitted values. Usually
the P values and the adjusted R2 statistic are used instead of the adjusted mean squares
in the interpretation of the results.

- Seq SS (Sequential Sums of Squares): Sequential sums of squares depend on the order
the factors are entered into the model. It is the unique portion of SS Regression
explained by a factor, given any previously entered factors.

- F Value: The test statistic used to determine whether the term is related to the response.
A sufficiently large F value indicates that the term or model is significant.

- P Value: A probability that measures the evidence against the null hypothesis. Lower
probabilities provide stronger evidence against the null hypothesis.
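As a sketch of how the F statistic in Table 3.5 is formed: F is the ratio of a factor's adjusted mean square to that of the error term. (The values below are the rounded entries of Table 3.5, so the results differ slightly from the table's F column, which Minitab computes from unrounded sums of squares.)

```python
# F = (Adj MS of the factor) / (Adj MS of the error), using Table 3.5 values.
adj_ms_error = 6.4
adj_ms = {"Packing Pressure": 213.5, "Cavity Temp.": 66.9}

f_values = {factor: ms / adj_ms_error for factor, ms in adj_ms.items()}
for factor, f in f_values.items():
    print(factor, round(f, 1))

# The P value is then the upper-tail probability of F with (2, error DF)
# degrees of freedom, e.g. scipy.stats.f.sf(f, 2, df_error) -- not computed here.
```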

After the S/N ratios and ANOVA tests were evaluated, the parameters that had negligible effects on warpage and did not reach statistical significance were excluded from the experimental set in the next step. These parameters were fixed at their optimum values according to the Taguchi experiments. These values were as follows:

Injection Velocity: 55 mm/sec

Melt Temperature: 260 °C

Injection Pressure: 120 bar

In all of the following experiments, the following six process parameters (factors) were used as
input variables that influence the warpage value: 1. Packing pressure, 2. cavity temperature, 3.
cooling time, 4. back pressure, 5. packing time, and 6. core temperature.

Note that while the back pressure and packing time did not appear to be statistically significant (according to ANOVA), they were still designated as important parameters due to their relatively low P values. Furthermore, these two parameters were among the 5 most effective parameters according to the S/N ratio results (Table 3.4).

3.5.2 Full Factorial Design

At this stage, a more comprehensive set of experiments was conducted using the 2^k full factorial design. Thanks to the studies carried out in the previous step, both the number of factors was reduced and the factor ranges were narrowed.

The Minitab program was again employed to create the trial set. The trial set and measurement results are shown in Table 3.6.

Table 3.6 1st trial set created with full factorial design.
No   Packing Pressure (bar)   Back Pressure (bar)   Cooling Time (sec)   Packing Time (sec)   Cavity Temp. (°C)   Core Temp. (°C)   Az (mm)   Cz (mm)   Az − Cz (mm) (= d)   Warpage Value (mm) = |18.75 − d|
1 30 10 30 30 8 8 43,37 31,59 11,78 6,97
2 60 4 30 15 20 8 43,49 29,11 14,38 4,37
3 60 4 30 15 8 8 43,4 32,7 10,7 8,05
4 60 4 30 30 20 8 43,39 28,61 14,78 3,97
5 30 4 30 30 8 8 43,36 30,36 13 5,75
6 30 4 60 30 20 20 43,32 28,22 15,1 3,65
7 30 10 30 15 8 20 43,39 33,18 10,21 8,54
8 30 10 60 30 8 20 43,39 32,64 10,75 8
9 60 10 30 15 20 8 43,41 28,57 14,84 3,91
10 30 10 60 15 20 8 43,37 29,2 14,17 4,58
11 60 4 60 30 20 8 43,41 31,56 11,85 6,9
12 60 10 60 15 8 8 43,41 30,91 12,5 6,25
13 30 4 60 30 20 8 43,35 27,42 15,93 2,82
14 60 4 60 30 20 20 43,42 28,41 15,01 3,74
15 30 4 60 15 20 20 43,41 28,11 15,3 3,45
16 60 10 30 15 8 8 43,36 32,88 10,48 8,27
17 30 10 30 30 20 8 43,35 28,4 14,95 3,8
18 60 10 60 15 20 8 43,35 30,13 13,22 5,53

19 60 4 60 15 20 8 43,39 30,55 12,84 5,91
20 60 10 30 30 8 20 43,36 29,29 14,07 4,68
21 30 4 60 15 8 8 43,39 32,76 10,63 8,12
22 30 4 30 15 20 20 43,82 31,57 12,25 6,5
23 60 10 60 30 8 8 43,38 30,51 12,87 5,88
24 30 10 60 30 20 8 43,97 28,64 15,33 3,42
25 60 10 30 30 8 8 43,38 31,9 11,48 7,27
26 30 10 60 15 20 20 43,29 28,59 14,7 4,05
27 30 10 30 15 20 20 43,38 30,73 12,65 6,1
28 30 4 60 30 8 8 43,99 32,83 11,16 7,59
29 30 4 30 15 8 20 43,38 31,36 12,02 6,73
30 30 4 30 15 20 8 43,38 30,91 12,47 6,28
31 60 4 60 15 8 20 43,34 30,4 12,94 5,81
32 60 4 60 30 8 8 43,36 31,02 12,34 6,41
33 30 4 30 30 8 20 43,35 30,2 13,15 5,6
34 60 10 30 30 20 20 43,4 31,14 12,26 6,49
35 30 10 30 15 20 8 43,39 28,43 14,96 3,79
36 60 4 60 15 8 8 43,38 30,16 13,22 5,53
37 60 10 60 15 20 20 43,38 33,37 10,01 8,74
38 30 10 60 15 8 8 43,34 29,97 13,37 5,38
39 30 10 30 15 8 8 43,44 32,8 10,64 8,11
40 30 4 30 30 20 8 43,39 29,45 13,94 4,81
41 60 4 60 30 8 20 43,34 30,21 13,13 5,62
42 30 10 30 30 20 20 43,38 32,18 11,2 7,55
43 60 10 30 15 20 20 43,39 31,55 11,84 6,91
44 60 4 60 15 20 20 43,41 31,14 12,27 6,48
45 60 10 60 30 8 20 43,33 30,53 12,8 5,95
46 30 10 60 30 8 8 43,42 32,78 10,64 8,11
47 60 4 30 15 8 20 43,36 31,85 11,51 7,24
48 30 4 60 15 20 8 43,35 27,75 15,6 3,15
49 60 4 30 30 8 20 43,34 29,32 14,02 4,73
50 60 4 30 15 20 20 43,36 28,85 14,51 4,24
51 60 4 30 30 20 20 43,41 29,3 14,11 4,64
52 60 10 60 30 20 20 43,4 29,15 14,25 4,5
53 30 4 60 30 8 20 43,34 30,15 13,19 5,56
54 60 10 60 15 8 20 43,33 30,13 13,2 5,55
55 30 10 60 15 8 20 43,37 32,62 10,75 8
56 30 4 30 30 20 20 43,44 32,36 11,08 7,67
57 60 10 30 30 20 8 43,44 28,05 15,39 3,36
58 60 10 30 15 8 20 43,38 34,61 8,77 9,98
59 30 10 30 30 8 20 43,38 31,64 11,74 7,01
60 60 4 30 30 8 8 43,39 31,65 11,74 7,01
61 60 10 60 30 20 8 43,45 28,63 14,82 3,93
62 30 4 30 15 8 8 43,35 28,87 14,48 4,27

63 30 4 60 15 8 20 43,39 33,95 9,44 9,31
64 30 10 60 30 20 20 43,34 29,36 13,98 4,77
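The 64-trial set in Table 3.6 is simply every combination of two levels for the six remaining factors. Such a 2^6 design can be generated with Python's itertools (a sketch; the thesis generated the set with Minitab):

```python
from itertools import product

# Two levels per factor, as used for the 1st full factorial set (Table 3.6).
levels = {
    "Packing Pressure (bar)": [30, 60],
    "Back Pressure (bar)": [4, 10],
    "Cooling Time (sec)": [30, 60],
    "Packing Time (sec)": [15, 30],
    "Cavity Temp. (C)": [8, 20],
    "Core Temp. (C)": [8, 20],
}

# 2^6 = 64 unique factor-level combinations.
trials = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(trials))  # 64
```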

As plastic injection is a rather complicated process, and to better model the relations between the parameters, an additional dataset was created by examining the previous experimental sets. The parameter values that gave better results in the previous experimental sets were reviewed. Based on these values, parameter ranges were sought that would both increase the data diversity and converge closer to the optimum warpage value. For example, while the packing pressure range was 30-60 bar in the previous experimental set, 30-45 bar was selected in the new experimental set to ensure data diversity. Similarly, while the packing time was previously in the 15-30 sec range, the new range was chosen as 30-40 sec. The input parameters chosen for this second set and their outcomes in terms of warpage are shown in Table 3.7.

Table 3.7 2nd trial set created with full factorial design.
No   Packing Pressure (bar)   Back Pressure (bar)   Cooling Time (sec)   Packing Time (sec)   Cavity Temp. (°C)   Core Temp. (°C)   Az (mm)   Cz (mm)   Az − Cz (mm) (= d)   Warpage Value (mm) = |18.75 − d|

1 30 4 60 30 60 20 43,17 21,86 21,31 2,56


2 30 8 60 40 60 20 43,17 21,82 21,35 2,6
3 45 4 80 30 60 20 43,18 22,68 20,5 1,75
4 30 4 60 30 80 8 43,2 23,86 19,34 0,59
5 30 4 60 30 60 8 43,23 21,23 22 3,25
6 45 8 80 40 80 8 43,31 23,51 19,8 1,05
7 45 8 80 30 60 20 43,14 22,84 20,3 1,55
8 30 4 80 30 60 20 43,15 22,86 20,29 1,54
9 30 8 60 30 80 8 43,17 23,69 19,48 0,73
10 30 8 60 30 60 20 43,11 22,13 20,98 2,23
11 30 4 80 40 60 8 43,26 22,59 20,67 1,92
12 30 8 60 30 60 8 43,2 22,75 20,45 1,7
13 45 8 60 30 60 20 43,14 22,87 20,27 1,52
14 45 4 60 40 80 8 43,15 22,9 20,25 1,5
15 30 8 60 40 80 20 43,19 24,12 19,07 0,32
16 30 4 80 40 80 20 43,18 23,79 19,39 0,64
17 30 4 80 30 80 8 43,15 22,68 20,47 1,72
18 45 8 60 30 80 8 43,31 23,64 19,67 0,92

19 45 8 60 40 60 8 43,22 24,08 19,14 0,39
20 45 4 80 30 80 20 43,12 22,42 20,7 1,95
21 45 4 60 40 80 20 43,28 23,39 19,89 1,14
22 45 4 60 40 60 8 43,28 24,04 19,24 0,49
23 30 8 80 40 60 20 43,21 22,58 20,63 1,88
24 45 4 80 40 80 20 43,17 23,05 20,12 1,37
25 45 8 60 40 60 20 43,18 23,04 20,14 1,39
26 30 4 80 40 80 8 43,22 23,45 19,77 1,02
27 45 8 80 40 60 8 43,4 24,34 19,06 0,31
28 30 4 80 40 60 20 43,18 22,44 20,74 1,99
29 45 8 80 30 80 20 43,19 23,01 20,18 1,43
30 30 4 60 40 60 20 42,97 22,54 20,43 1,68
31 30 8 80 30 80 20 43,26 23,04 20,22 1,47
32 30 8 80 30 60 8 43,3 24,45 18,85 0,1
33 45 4 80 40 60 8 43,29 23,91 19,38 0,63
34 30 8 80 30 60 20 43,15 21,92 21,23 2,48
35 30 4 60 40 80 8 43,17 23,15 20,02 1,27
36 30 8 80 40 60 8 43,27 24,15 19,12 0,37
37 45 4 80 40 80 8 43,28 23,19 20,09 1,34
38 45 8 80 30 80 8 43,22 22,94 20,28 1,53
39 45 8 80 40 80 20 43,18 22,8 20,38 1,63
40 45 8 60 30 60 8 43,25 24,23 19,02 0,27
41 45 8 60 30 80 20 43,24 23,61 19,63 0,88
42 45 4 60 30 80 20 43,22 23,16 20,06 1,31
43 45 4 80 40 60 20 43,08 22,94 20,14 1,39
44 30 8 80 30 80 8 43,21 23,13 20,08 1,33
45 30 4 80 30 80 20 43,19 22,45 20,74 1,99
46 30 4 60 40 80 20 43,19 22,89 20,3 1,55
47 45 8 60 40 80 8 43,18 23,3 19,88 1,13
48 45 4 60 30 80 8 43,19 23,62 19,57 0,82
49 30 4 60 30 80 20 43,2 22,62 20,58 1,83
50 45 4 80 30 80 8 43,34 23,05 20,29 1,54
51 45 4 60 30 60 20 43,32 23,45 19,87 1,12
52 30 8 60 30 80 20 43,17 22,46 20,71 1,96
53 30 8 60 40 60 8 43,28 24,32 18,96 0,21
54 45 4 60 40 60 20 43,23 23,32 19,91 1,16
55 30 8 80 40 80 20 43,19 23,24 19,95 1,2
56 45 8 80 30 60 8 43,23 24,33 18,9 0,15
57 45 8 80 40 60 20 43,25 23,17 20,08 1,33
58 45 4 60 30 60 8 43,24 24,06 19,18 0,43
59 45 8 60 40 80 20 43,21 23,05 20,16 1,41
60 30 4 80 30 60 8 43,24 23,81 19,43 0,68
61 30 8 80 40 80 8 43,22 23,11 20,11 1,36
62 30 4 60 40 60 8 43,24 23,22 20,02 1,27

63 30 8 60 40 80 8 43,21 22,91 20,3 1,55
64 45 4 80 30 60 8 43,41 23,18 20,23 1,48

Table 3.6 and Table 3.7 were created using the full factorial experimental design. In these tables, different value ranges were determined for the relevant parameters and experiments were conducted accordingly. In this way, both the variety and the amount of data were increased. These data were then used to build a machine learning model.

3.6 Machine Learning Model

3.6.1 Introduction of the Dataset

The full dataset employed at the beginning of the machine learning (ML) model development
consisted of 152 different data rows that included 9 features and 1 target variable (a portion is
shown in Table 3.8). The independent and dependent variables used are as follows:

- Independent Variables: Packing Pressure, Back Pressure, Cavity Temperature, Core


Temperature, Cooling Time, Packing Time

- Dependent Variable: Warpage

Table 3.8 A portion of the dataset employed in the early machine learning analysis.

Since the melt temperature, injection velocity and injection pressure parameters had an insignificant effect on warpage, the values of these parameters were kept constant in the last two process trials. These features were also removed from the ML analysis for the same reason. The appearance of the dataset after removing these features is shown in Table 3.9.

Table 3.9 A portion of the dataset used in the later machine learning analysis.

3.6.2 Splitting the Dataset

Of the 152 rows of data, 121 rows (80%) were used to train the algorithm, and 31 rows (20%)
to test model accuracy. During the splitting of the dataset a “random_state” value of 1 was
employed. This allowed the proper setting of the seed for the random generator so that results
are reproducible.

3.6.3 Normalization

The values of the features were normalized between 0 and 1 in order to prevent the dominance
of those features with numerically larger values, and therefore, any bias in the ML model.
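The split and normalization described above can be sketched in plain Python (the function name is chosen here for illustration; the thesis does not state which library was used, and the sketch assumes the common convention of fitting the scaling bounds on the training set only):

```python
import random

def split_and_normalize(rows, test_ratio=0.2, seed=1):
    """80/20 train-test split with a fixed seed, then min-max scaling to [0, 1].

    A fixed seed (cf. random_state = 1 in the text) makes the shuffle, and
    therefore the split, reproducible.
    """
    rows = list(rows)
    random.Random(seed).shuffle(rows)            # reproducible shuffle
    n_test = round(len(rows) * test_ratio)
    test, train = rows[:n_test], rows[n_test:]

    n_features = len(train[0])
    lo = [min(r[j] for r in train) for j in range(n_features)]
    hi = [max(r[j] for r in train) for j in range(n_features)]

    def scale(r):
        # Min-max normalization; constant columns map to 0.0.
        return [(r[j] - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 0.0
                for j in range(n_features)]

    return [scale(r) for r in train], [scale(r) for r in test]
```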

3.6.4 Building the Machine Learning Model

While creating the model, a multilayer artificial neural network (ANN) was used. The ANN had an input layer with 6 neurons (the important process parameters), 3 hidden layers, and an output layer with 1 neuron (the warpage). The number of neurons in the hidden layers is shown in Table 3.10.

Table 3.10 The details of the artificial neural network employed in the ML model
development.

Here, the hyperparameter "kernel_initializer" [56 - 57], which sets the initial weights in the neural network, was selected as "normal". While trying to develop the best ML model, other initialization methods were also tried; for example, the "xavier" (Glorot) and "he_normal" initializations were employed instead of "normal". With "xavier" initialization, "sigmoid" and "tanh" were used as activation functions; with "he_normal" initialization, the "relu" activation function was used. The best results were obtained with the "kernel_initializer='normal'" and "activation='relu'" selections.

While compiling the ML model, "adam", one of the most widely used optimization methods, was preferred, and "mean squared error" was selected as the loss function. The learning rate was kept at its default value of 0.001.

During several trials using multiples of 2 for the batch size, with 121 data rows in the training set, the best results were obtained with the values 16 and 32. Therefore, the batch size was set to 16.

The loss curve obtained from training for 250 epochs with these hyperparameters is shown in Figure 3.9. It is seen that the model converges to its minimum loss value in approximately 150 epochs.

Figure 3.9 Evolution of loss as a function of epochs during the training and testing of the ML
model.

The regression estimates obtained from the 31 data in the validation (testing) set of the ML
model are as follows:

Table 3.11 Comparison of measured and predicted values of warpage. The predictions are
from the testing data of the ML model.

No   Original (mm)   Predicted (mm)   Difference (mm)   % Error
0 2.64 2.66 0.02 0.8
1 1.62 1.19 -0.43 -27
2 2.74 3.21 0.47 17
3 -0.27 -0.67 -0.4 148
4 4.37 3.71 -0.66 -15
5 -1.95 -1.62 0.33 -17
6 -2.23 -1.38 0.85 -38
7 12.21 12.41 0.2 2
8 3.41 2.01 -1.4 -41

9 3.22 2.51 -0.71 -22
10 -1.41 -1.34 0.07 -5
11 -1.12 -1.1 0.02 -2
12 3.39 3.66 0.27 8
13 -0.32 -1.58 -1.26 394
14 19.77 19.15 -0.62 -3
15 2.17 2.99 0.82 38
16 4.43 3.57 -0.86 -19
17 -1.52 -1.12 0.4 -26
18 -2.6 -1.29 1.31 -50
19 -1.54 -1.55 -0.01 1
20 -0.59 -1.23 -0.64 108
21 -1.34 -1.16 0.18 -13
22 -3.25 -0.89 2.36 -73
23 -1.7 -0.91 0.79 -46
24 -1.05 -1.18 -0.13 12
25 1.56 2.59 1.03 66
26 1.23 0.76 -0.47 -38
27 1.25 1.58 0.33 26
28 2.16 3.27 1.11 51
29 -1.27 -1.14 0.13 -10
30 2.22 3.13 0.91 41

Considering that a rather small dataset was used to train the model, the results shown in Table 3.11 are satisfactory. It is also important to recall that the measurement accuracy of the gauge was ±0.02 mm. With these caveats in mind, only 5 (out of 31) data points had differences of 1 mm or more, and 16 of them, more than half, showed differences of less than 0.5 mm. While it is clear the ML model can be further improved for accuracy (which will undoubtedly require additional experiments), the current version is adequate to proceed.

3.6.5 Measures of Error in Prediction

An additional evaluation was performed to better quantify the accuracy of the ML model. The
following error metrics were employed on the validation set [58]:

Mean Squared Error (MSE): MSE assesses the average squared difference between the
observed and predicted values. When a model has no error, the MSE equals zero. As model
error increases, its value increases.

Mean Absolute Error (MAE): MAE measures the average magnitude of the errors in a set of
predictions, without considering their direction.

Root Mean Squared Error (RMSE): RMSE is the square root of the average of squared
differences between prediction and actual observation. It is the square root of MSE.

The corresponding equations of each error metric are below (Equations (3.3) – (3.6)):

MAE = (1/N) Σ_{i=1}^{N} |y_i − ŷ_i|   (3.3)

MSE = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)²   (3.4)

RMSE = √MSE   (3.5)

R² = 1 − Σ(y_i − ŷ_i)² / Σ(y_i − ȳ)²   (3.6)

where, ŷ_i: predicted value of y_i

ȳ: average value of y.
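Equations (3.3) – (3.6) translate directly into Python, as sketched below (y_true are the measured warpage values and y_pred the model's predictions):

```python
import math

def error_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R^2 as defined in Equations (3.3)-(3.6)."""
    n = len(y_true)
    residuals = [yt - yp for yt, yp in zip(y_true, y_pred)]
    mae = sum(abs(r) for r in residuals) / n
    mse = sum(r * r for r in residuals) / n
    rmse = math.sqrt(mse)              # Equation (3.5): RMSE is the root of MSE
    y_mean = sum(y_true) / n
    ss_res = sum(r * r for r in residuals)
    ss_tot = sum((yt - y_mean) ** 2 for yt in y_true)
    r2 = 1 - ss_res / ss_tot
    return mae, mse, rmse, r2

mae, mse, rmse, r2 = error_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
print(round(rmse, 3))  # 0.577
```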

The results for each error metric were:

MSE = 0.644

MAE = 0.632

RMSE = 0.803

R2 = 0.97

R2adj = 0.96

The MAE, MSE and RMSE metrics can range from 0 to ∞ and are indifferent to the direction of the error. The closer the metric value is to 0, the better the model's performance. The RMSE metric is relatively high because the errors are squared before they are averaged, so RMSE gives a relatively higher weight to large errors.

In the current analysis, mainly the R² metric was employed to quantify the accuracy of the ML model. This metric ranges between 0 and 1, and the closer the R² value is to 1, the better the model. The hyperparameters described in Section 3.6.4 were varied, and those that yielded the highest R² value were chosen for the final analysis, which yielded R² = 0.97. As an additional check, especially since there is more than one independent variable in the model, the adjusted R² value (R²adj) was also calculated and was found to be 0.96. Both of these metrics suggest a reasonably accurate ML model.
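Assuming the standard adjusted R² formula (the text does not state it explicitly), the reported value can be reproduced from R² = 0.97 with n = 31 validation points and p = 6 predictors:

```python
def adjusted_r2(r2, n, p):
    """Standard adjusted R^2: penalizes R^2 for the number of predictors p."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# n = 31 validation points, p = 6 input parameters, R^2 = 0.97:
print(round(adjusted_r2(0.97, 31, 6), 2))  # 0.96
```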

3.7 Optimization of Process Parameters Using ML and GA

After a satisfactory ML model was developed to simulate the injection molding process
(specifically, a mathematical model that accurately predicts warpage based on input from the 6
most influential process parameters), the next step in the analysis was to employ this ML model
within the GA optimization algorithm so that the optimum values of those input parameters (to
yield minimum warpage) could be deduced.

This optimization study was conducted using the Python library DEAP [59]. The genetic algorithm consists of generations of individuals, each with a unique set of attributes called genes. In this application, the genes were the input parameters. The library was provided with ranges of the input parameters so that individuals could be randomly generated within these ranges (Table 3.12).

Table 3.12 The ranges of the input parameters employed in the GA optimization.
INPUT PARAMETERS   Minimum Value   Maximum Value
Core Temperature (°C) 8 70
Cavity Temperature (°C) 8 80

Packing Pressure (bar) 30 140
Back Pressure (bar) 4 30
Cooling Time (sec) 10 80
Packing Time (sec) 6 40

For this range-based approach, the polynomial bounded mutation function was used with a crowding degree of 0.5. After the initial generation was established with a population of 300, the next generation of individuals, called the offspring, was selected with the tournament algorithm. Tournament selection is similar to ranking selection in terms of selection pressure, but its calculation is more efficient and more suitable for parallel applications. Tournament selection provides selection pressure by organizing a tournament between N_u individuals; the individual with the highest fitness value wins the tournament. The tournament is repeated until the mating pool is filled to generate new offspring. Generally, the tournament is organized between two individuals, but it can also be applied among an arbitrary number of individuals. Likewise, in the present model, each tournament took place between 2 individuals. The tournaments are organized independently of each other, but as the tournament size increases, the selection pressure increases. The mating pool consisting of the tournament winners has a higher average fitness than the population. This fitness difference provides selection pressure, which drives the genetic algorithm to improve the fitness of the succeeding generations.
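As a pure-Python sketch (not DEAP's own selTournament implementation), size-2 tournament selection for a minimization problem such as warpage can look like:

```python
import random

def tournament_select(population, fitness, pool_size, tour_size=2, rng=None):
    """Fill a mating pool by repeated tournaments between `tour_size` individuals.

    Warpage is minimized here, so the individual with the LOWEST fitness wins.
    """
    rng = rng or random.Random()
    pool = []
    while len(pool) < pool_size:
        contestants = rng.sample(population, tour_size)
        pool.append(min(contestants, key=fitness))
    return pool

# Toy example: individuals are plain numbers and fitness is the number itself.
pool = tournament_select(list(range(100)), lambda x: x, pool_size=50,
                         rng=random.Random(1))
# Winners are biased toward low fitness, so the pool mean beats the population mean.
print(sum(pool) / len(pool) < 49.5)  # True
```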

The generation was then iterated over to mate, cross over and mutate individuals with probabilities of 50% and 0.1%, respectively. The crossover ratio lies in the range [0, 1]. If the probability of crossover for the chromosomes in a generation is 100%, then all offspring are produced by crossover. If the crossover probability is 0%, the entire new generation (except those resulting from mutation) is replicated directly from the older population. The mutation probability determines how many chromosomes are mutated in one generation. The purpose of mutation is to prevent the GA from converging to local optima, but if it takes place too frequently, the GA turns into a random search. In general, for medium-sized datasets (more than 20 and fewer than 200 data points), the mutation probability is taken as 0.001. The present GA model used the same mutation rate.

Mating was done uniformly, meaning each gene had a fixed chance of crossing over, which was selected as 10%. In uniform crossover, each gene in the children is generated by copying the corresponding gene from one or the other parent, selected according to a randomly created binary crossover mask of the same length as the chromosomes. Uniform crossover was the most suitable crossover type for our problem, since it allows gene exchange at any position in the chromosome array. Gene exchange occurs where the crossover mask is 1; the probability of a mask bit being 1 was also set to 10% for the present problem. An example of uniform crossover is shown in Table 3.13.

Table 3.13 An example of a uniform crossover type.
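The mask-based swap described above can be sketched as follows (illustrative only; DEAP itself provides tools.cxUniform for this operator):

```python
import random

def uniform_crossover(parent1, parent2, mask_prob=0.1, rng=None):
    """Uniform crossover: genes swap between parents wherever the random
    binary crossover mask bit is 1; each mask bit is 1 with probability
    `mask_prob` (10% in the present model)."""
    rng = rng or random.Random()
    child1, child2 = list(parent1), list(parent2)
    for i in range(len(child1)):
        if rng.random() < mask_prob:      # crossover mask bit = 1
            child1[i], child2[i] = child2[i], child1[i]
    return child1, child2

p1 = ["A1", "A2", "A3", "A4", "A5", "A6"]
p2 = ["B1", "B2", "B3", "B4", "B5", "B6"]
c1, c2 = uniform_crossover(p1, p2, mask_prob=0.5, rng=random.Random(3))
print(c1, c2)
```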

Then, the resulting population's fitness values were calculated with the fitness function derived
from the ML model. The fitness function was designed to call the neural network over each
individual and get a prediction of warpage. Since warpage can also be negative, its absolute
value was taken and returned as the fitness. The algorithm was also instructed to minimize the
fitness, in essence, searching for input parameters that result in minimum warpage. This process
was repeated for 20 generations of individuals, resulting in 6000 unique sets of input
parameters.

Figure 3.10 Evolution of warpage value after each generation in GA.

The evolution of the warpage value as a function of generation number is shown in Figure 3.10.
The set of the input parameters that yielded this minimum warpage value is one example of
“optimum” input parameters, the main goal of the present study. However, every new run of
the GA yielded a new set. This is to be expected since GA (like other optimization algorithms)
cannot always find the “global minimum”. Furthermore, the complexity of the injection
molding process and the likely correlations between input parameters (i.e., these parameters are
probably not fully independent of each other) makes it possible that different combinations of
input parameters can yield minimum warpage values. This was indeed observed when the GA
was run 30 times. The results are listed in Table 3.14. Every row in this table represents a set
of “optimum” input parameters. A critical future study will involve validation experiments that
will employ a selection of these rows to determine the warpage value each will yield. It is also worth mentioning that a similar study that employed the same approach likewise observed different sets of "optimum" parameters [37].

Table 3.14 Results of 30 independent runs of GA.


Trial No   Core T (°C)   Cavity T (°C)   Packing Pressure (bar)   Back Pressure (bar)   Cooling Time (sec)   Packing Time (sec)   Warpage Value (mm)

1 36 38 17 13 45 38 0.00
2 49 30 23 20 59 9 0.00
3 11 59 26 27 76 27 0.00
4 55 43 41 11 32 19 0.00
5 37 73 32 21 20 31 0.00
6 41 51 44 11 31 40 0.00
7 28 32 14 18 52 15 0.00
8 16 27 36 6 77 27 0.00
9 21 26 33 8 75 20 -0.01
10 26 55 46 27 67 10 0.00
11 27 25 17 12 68 29 0.00
12 67 39 40 7 50 12 0.00
13 49 36 24 5 35 16 0.00
14 37 76 37 25 36 9 0.01
15 21 75 17 30 32 25 0.00
16 59 52 15 20 11 38 0.00
17 28 57 53 11 42 16 0.00
18 43 65 38 22 35 37 0.00
19 31 45 28 29 45 16 0.00
20 12 57 21 20 46 18 0.00
21 52 47 84 11 58 39 0.00
22 51 70 100 14 73 30 0.00
23 63 46 64 29 78 10 0.00
24 33 80 85 15 68 21 0.00
25 57 48 91 11 50 39 0.00
26 44 57 53 29 71 30 0.00
27 23 69 73 10 59 21 0.00
28 33 41 43 13 60 29 0.00
29 67 40 29 18 62 30 0.00
30 8 66 33 17 51 21 0.00

In the meantime, the covariance matrix of all the parameters listed in Table 3.14 was
constructed and is shown in Figure 3.11. Covariance is a statistical measure of the joint
variability of two variables. If the value of one variable increases together with the
other, there is a positive covariance between the variables. On the other
hand, if one variable increases while the other decreases, the covariance is negative.
The following equation shows how the covariance of two series of observed values is calculated
[60]. The results shown in Figure 3.11 were calculated using the Python libraries NumPy and
Pandas [61].

Cov(x, y) = Σᵢ₌₁ⁿ (xᵢ − x̄)(yᵢ − ȳ) / (n − 1)                    (3.7)
where,

n : total number of samples
xᵢ : single observed value of variable x
x̄ : mean of all values of variable x
yᵢ : single observed value of variable y
ȳ : mean of all values of variable y
n − 1 : sample count minus one (Bessel's correction)
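The calculation above maps directly onto the pandas `DataFrame.cov` method, which applies the same Bessel-corrected formula (ddof = 1) to every pair of columns. The sketch below uses hypothetical, randomly generated data in place of Table 3.14; the column names are illustrative stand-ins.

```python
import numpy as np
import pandas as pd

# Hypothetical data standing in for the 30 GA runs of Table 3.14.
rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.integers(5, 100, size=(30, 7)),
    columns=["packing_pressure", "back_pressure", "cooling_time",
             "packing_time", "core_T", "cavity_T", "warpage"],
)

# Sample covariance matrix; pandas uses Bessel's correction (ddof=1).
cov = df.cov()

# Manual check of Eq. (3.7) for one pair of columns.
x, y = df["packing_pressure"], df["warpage"]
manual = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)
assert np.isclose(cov.loc["packing_pressure", "warpage"], manual)
```

The manual computation reproduces the corresponding entry of `df.cov()` exactly, confirming that Eq. (3.7) is what pandas evaluates under the hood.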

Figure 3.11 Covariance matrix, including the inter-parameter correlation numbers for the
parameters in Table 3.14.

In the covariance matrix, a cell is gray when there is no correlation between two variables,
that is, when the relationship is 0 or close to 0. The darkest red indicates a perfect positive
correlation, while the darkest blue indicates a perfect negative correlation. A review of the
covariance data shown in Figure 3.11 reveals a consistent relationship between
the input parameters and the output variable, warpage. For instance, packing pressure,
the most effective parameter in this process, shows a positive correlation with
warpage. Likewise, a positive correlation is observed for the back pressure parameter.
The cooling time and packing time parameters, which are among the other effective
parameters, show a negative correlation with warpage. The correlations
between any two input parameters, on the other hand, are seen to lie in the middle range (both
positive and negative).

4 DISCUSSION AND FUTURE WORK

In this thesis, a study was conducted on warpage, one of the critical quality
problems of injection molded parts, which results in both structural and visual defects. Warpage
can be defined as deviations and dimensional distortions relative to the original part design.

The main cause of warpage is the development of uneven internal stresses during the filling,
packing and cooling phases of the injection process. However, direct measurement or
accurate modeling of these stresses is rather difficult. For this reason, the present study
employed an indirect approach to determine the optimum values of the input parameters of the
plastic injection process that minimize warpage. First, using the Taguchi method, the most
influential process parameters were determined (listed from most to least influential): packing
pressure, cavity steel temperature, cooling time, packing time and back pressure.
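The Taguchi ranking above rests on signal-to-noise (S/N) ratios; for a quality characteristic like warpage, the standard "smaller-is-better" form, S/N = −10 log₁₀(mean(y²)), applies. The sketch below assumes that form and uses hypothetical measurement values; it is not the thesis's actual Minitab computation.

```python
import numpy as np

def sn_smaller_is_better(y):
    # Taguchi "smaller-is-better" signal-to-noise ratio:
    # S/N = -10 * log10( mean(y_i^2) ), in decibels.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical warpage measurements (mm) for one parameter combination.
print(sn_smaller_is_better([0.12, 0.15, 0.11]))
```

Because the logarithm is negated, smaller warpage values produce a larger S/N ratio, so the parameter level with the highest mean S/N is preferred at each step of the Taguchi analysis.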

Next, guided by the Taguchi screening results, a full factorial experiment set was created
using a reduced number of parameters with limited value ranges, yielding a more
comprehensive dataset. A second round of trials was run according to this experiment set, and
the warpage values of the part were measured.

A machine learning (ML) model was then created using all of the data, which consisted of 152
unique sets of the important process parameters and their corresponding warpage values. This
model served as a simulation of the process and was employed by the genetic algorithm (GA)
in its search for the optimum process parameters that yield minimum warpage.

In this study, unlike previous work, more injection process parameters were considered and
their effects examined, permitting a more detailed study of the effects of injection
parameters on the process. In particular, the cavity and core steel temperatures were evaluated
as separate parameters, so that the effect of different steel temperatures on product warpage
could also be observed.

Studies in the literature have shown that increasing the packing pressure yields smaller warpage.
In the present study, however, increasing the packing pressure increased the warpage. A
possible reason for this observation is the part design employed here.

In terms of future work, the most crucial next step will involve validation experiments. The
submission deadline imposed by university regulations made it impossible to conduct these
experiments and include their results in the current thesis. The experiments will use the
"optimum" input parameters predicted by the genetic algorithm in additional injection
molding runs so that experimental warpage data can be compared to the predictions. Hopefully,
the results will validate the methodology employed in this study.

REFERENCES

1 Huang, M. C., and Tai, C. C. (2001). The effective factors in the warpage problem of an
injection-molded part with a thin shell feature. Journal of Materials Processing
Technology, 110(1), 1-9.

2 Erzurumlu, T., and Ozcelik, B. (2006). Minimization of warpage and sink index in
injection-molded thermoplastic parts using Taguchi optimization method. Materials and
Design, 27(10), 853-861.

3 Tang, S. H., Tan, Y. J., Sapuan, S. M., Sulaiman, S., Ismail, N., and Samin, R. (2007).
The use of Taguchi method in the design of plastic injection mould for reducing
warpage. Journal of Materials Processing Technology, 182(1-3), 418-426.

4 Oktem, H., Erzurumlu, T., and Uzman, I. (2007). Application of Taguchi optimization
technique in determining plastic injection molding process parameters for a thin-shell
part. Materials and Design, 28(4), 1271-1278.

5 Ozcelik, B., and Sonat, I. (2009). Warpage and structural analysis of thin shell plastic in the
plastic injection molding. Materials and Design, 30(2), 367-375.

6 Longzhi, Z., Binghui, C., Jianyun, L., and Shangbing, Z. (2010). Optimization of plastics
injection molding processing parameters based on the minimization of sink marks. In 2010
International Conference on Mechanic Automation and Control Engineering (pp. 593-595).
IEEE.

7 Shayfull, Z., Fathullah, M., Sharif, S., Nasir, S. M., and Shuaib, N. A. (2011). Warpage
Analysis on Ultra-Thin Shell by Using Taguchi Method and Analysis of Variance(ANOVA)
for Three-Plate Mold. International Review of Mechanical Engineering, 5(6), 1116-1124.

8 Hussin, R., Saad, R. M., Hussin, R., and Dawi, M. S. I. M. (2012). An optimization of
plastic injection molding parameters using Taguchi optimization method. Asian Transactions
on Engineering, 2(5), 75-80.

9 Shan, Z., Qin, S. H., and Wei, L. Q. (2014). Quantify the weld line and its optimization
based on the Taguchi and the moldflow secondary development. In Applied Mechanics and
Materials (Vol. 444, pp. 1021-1025). Trans Tech Publications Ltd.

10 Dangayach, G. S., and Guglani, L. (2015). Application of Moldflow and Taguchi
technique in improving the productivity of injection moulded energy meter base. International
Journal of Process Management and Benchmarking, 5(3), 375-385.

11 Chiang, K. T., and Chang, F. P. (2007). Analysis of shrinkage and warpage in an injection-
molded part with a thin shell feature using the response surface methodology. The International
Journal of Advanced Manufacturing Technology, 35(5-6), 468-479.

12 Mathivanan, D., and Parthasarathy, N. S. (2009). Prediction of sink depths using
nonlinear modeling of injection molding variables. The International Journal of Advanced
Manufacturing Technology, 43(7-8), 654-663.

13 Tsai, K. M., and Luo, H. J. (2015). Comparison of injection molding process windows for
plastic lens established by artificial neural network and response surface methodology. The
International Journal of Advanced Manufacturing Technology, 77(9-12), 1599-1611.

14 Sadeghi, B. H. M. (2000). A BP-neural network predictor model for plastic injection
molding process. Journal of Materials Processing Technology, 103(3), 411-416.

15 Liao, S. J., Hsieh, W. H., Wang, J. T., and Su, Y. C. (2004). Shrinkage and warpage
prediction of injection‐molded thin‐wall parts using artificial neural networks. Polymer
Engineering and Science, 44(11), 2029-2040.

16 Yin, F., Mao, H., Hua, L., Guo, W., and Shu, M. (2011). Back propagation neural network
modeling for warpage prediction and optimization of plastic products during injection
molding. Materials and Design, 32(4), 1844-1850.

17 Yin, F., Mao, H., and Hua, L. (2011). A hybrid of back propagation neural network and
genetic algorithm for optimization of injection molding process parameters. Materials and
Design, 32(6), 3457-3464.

18 Wang, R., Zeng, J., Feng, X., and Xia, Y. (2013). Evaluation of effect of plastic injection
molding process parameters on shrinkage based on neural network simulation. Journal of
Macromolecular Science, Part B, 52(1), 206-221.

19 Kurtaran, H., Ozcelik, B., and Erzurumlu, T. (2005). Warpage optimization of a bus
ceiling lamp base using neural network model and genetic algorithm. Journal of Materials
Processing Technology, 169(2), 314-319.

20 Chen, W. C., and Hsu, S. W. (2007). A neural-network approach for an automatic LED
inspection system. Expert Systems with Applications, 33(2), 531-537.

21 Mok, S. L., Kwong, C. K., and Lau, W. S. (2001). A hybrid neural network and genetic
algorithm approach to the determination of initial process parameters for injection
moulding. The International Journal of Advanced Manufacturing Technology, 18(6), 404-409.

22 Ozcelik, B., and Erzurumlu, T. (2006). Comparison of the warpage optimization in the
plastic injection molding using ANOVA, neural network model and genetic algorithm. Journal
of Materials Processing Technology, 171(3), 437-445.

23 Peng, Y., Wei, W., and Wang, J. (2012). Model predictive synchronous control of barrel
temperature for injection molding machine based on diagonal recurrent neural
networks. Materials and Manufacturing Processes, 28(1), 24-30.

24 Sedighi, R., Meiabadi, M. S., and Sedighi, M. (2017). Optimisation of gate location based
on weld line in plastic injection moulding using computer-aided engineering, artificial neural
network, and genetic algorithm. International Journal of Automotive and Mechanical
Engineering, 14(3).

25 Ribeiro, B. (2005). Support vector machines for quality monitoring in a plastic injection
molding process. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications
and Reviews), 35(3), 401-410.

26 Shie, J. R. (2008). Optimization of injection molding process for contour distortions of
polypropylene composite components by a radial basis neural network. The International
Journal of Advanced Manufacturing Technology, 36(11-12), 1091-1103.

27 Tian, M., Gong, X., Yin, L., Li, H., Ming, W., Zhang, Z., and Chen, J. (2017). Multi-
objective optimization of injection molding process parameters in two stages for multiple
quality characteristics and energy efficiency using Taguchi method and NSGA-II. The
International Journal of Advanced Manufacturing Technology, 89(1-4), 241-254.

28 Shi, H., Gao, Y., and Wang, X. (2010). Optimization of injection molding process
parameters using integrated artificial neural network model and expected improvement function
method. The International Journal of Advanced Manufacturing Technology, 48(9-12), 955-
962.

29 Meiabadi, M. S., Vafaeesefat, A., and Sharifi, F. (2013). Optimization of Plastic Injection
Molding Process by Combination of Artificial Neural Network and Genetic Algorithm. Journal
of Optimization in Industrial Engineering, 13, 49-54.

30 Manjunath, P. G., and Krishna, P. (2012). Prediction and optimization of dimensional
shrinkage variations in injection molded parts using forward and reverse mapping of artificial
neural networks. In Advanced Materials Research (Vol. 463, pp. 674-678). Trans Tech
Publications Ltd.

31 Chen, W. C., Wang, M. W., Chen, C. T., and Fu, G. L. (2009). An integrated parameter
optimization system for MISO plastic injection molding. The International Journal of
Advanced Manufacturing Technology, 44(5-6), 501-511.

32 Chen, W. C., Wang, M. W., Fu, G. L., and Chen, C. T. (2008). Optimization of plastic
injection molding process via Taguchi’s parameter design method, BPNN, and DFP. In 2008
International Conference on Machine Learning and Cybernetics (Vol. 6, pp. 3315-3321).
IEEE.

33 Chen, W. C., Fu, G. L., and Kurniawan, D. (2012). A two-stage optimization system for
the plastic injection molding with multiple performance characteristics. In Advanced Materials
Research (Vol. 468, pp. 386-390). Trans Tech Publications Ltd.

34 Chen, W. C., Liou, P. H., and Chou, S. C. (2014). An integrated parameter optimization
system for MIMO plastic injection molding using soft computing. The International Journal of
Advanced Manufacturing Technology, 73(9-12), 1465-1474.

35 Chen, W. C., Fu, G. L., Tai, P. H., and Deng, W. J. (2009). Process parameter
optimization for MIMO plastic injection molding via soft computing. Expert Systems with
Applications, 36(2), 1114-1122.

36 Chen, W. C., Nguyen, M. H., Chiu, W. H., Chen, T. N., and Tai, P. H. (2016).
Optimization of the plastic injection molding process using the Taguchi method, RSM, and
hybrid GA-PSO. The International Journal of Advanced Manufacturing Technology, 83(9-12),
1873-1886.

37 Tsai, K. M., and Luo, H. J. (2017). An inverse model for injection molding of optical lens
using artificial neural network coupled with genetic algorithm. Journal of Intelligent
Manufacturing, 28(2), 473-487.

38 Roy, R. K. (2010). A Primer on the Taguchi Method, 2nd ed., Society of Manufacturing
Engineers, Michigan.

39 Davis, R., and John, P. (2018). Application of Taguchi-Based Design of Experiments for
Industrial Chemical Process. IntechOpen. DOI: 10.5772/intechopen.69501.

40 Zhou, H. (2013). Computer Modelling for Injection Molding Simulation, Optimization and
Control, John Wiley & Sons, Inc., New Jersey.

41 Sivanandam, S. N. and Deepa, S. N. (2008). Introduction to Genetic Algorithms, 1st ed.,
Springer-Verlag, Berlin.

42 Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization & Machine Learning,
1st ed., Addison-Wesley Publishing Company, Inc., Canada.

43 Kumar, A. (2013). Encoding Schemes in Genetic Algorithm. International Journal of
Advanced Research in IT and Engineering, 2(3), 2-5.

44 Bäck, T., Fogel, D. B., and Michalewicz, Z. (1996). Evolutionary Computation 1 Basic
Algorithms and Operators, 1st ed., Taylor & Francis, New York.

45 Mitchell, M. (1999). An Introduction to Genetic Algorithms, Fifth Printing, MIT Press,
London.

46 Haupt, R. L. and Haupt S. E. (2004). Practical Genetic Algorithms, Second Edition, John
Wiley & Sons Inc., New Jersey.

47 Alpaydin, E. (2010). Introduction to Machine Learning, 2nd ed., The MIT Press,
Cambridge, Massachusetts.

48 https://www.ibm.com/cloud/learn/machine-learning. Accessed on 10 November 2021.

49 Geron, A. (2019). Hands-on Machine Learning with Scikit-Learn, Keras and TensorFlow,
2nd ed., O'Reilly Media, Sebastopol, CA.

50 Müller, A. C. and Guido, S. (2017). Introduction to Machine Learning with Python,
O'Reilly Media, Sebastopol, CA.

51 https://builtin.com/data-science/gradient-descent. Accessed on 11 November 2021.

52 https://ichi.pro/tr/optimizasyon-algoritmalarini-anlamak-53453022124445. Accessed on 11
November 2021.

53 https://towardsdatascience.com/https-medium-com-dashingaditya-rakhecha-
understanding-learning-rate-dd5da26bb6de. Accessed on 12 November 2021.

54 https://www.businesswire.com/news/home/20161014005080/en/Trinseo-to-Begin-
MAGNUM-ABS-Production-in-China-in-2017. Accessed on 12 November 2021.

55 https://support.minitab.com/en-us/minitab/18/help-and-how-to/modeling-
statistics/anova/how-to/one-way-anova/interpret-the-results/all-statistics-and-graphs/analysis-
of-variance/. Accessed on 10 December 2021.

56 https://keras.io/. Accessed on 10 December 2021.

57 https://www.tensorflow.org/api_docs. Accessed on 10 December 2021.

58 https://www.datatechnotes.com/2019/02/regression-model-accuracy-mae-mse-rmse.html.
Accessed on 15 November 2021.

59 Fortin, F.-A., De Rainville, F.-M., Gardner, M.-A., Parizeau, M. and Gagné, C. (2012)
“DEAP: Evolutionary Algorithms Made Easy”, Journal of Machine Learning Research, 13,
2171-2175.

60 https://www.alpharithms.com/covariance-362516/. Accessed on 17 November 2021.

61 https://stackoverflow.com/questions/29432629/plot-correlation-matrix-using-pandas.
Accessed on 12 December 2021.

