
Hybrid neural network for gas analysis measuring system

1999, IMTC/99. Proceedings of the 16th IEEE Instrumentation and Measurement Technology Conference (Cat. No.99CH36309)


Hybrid Neural Network for Gas Analysis Measuring System

Stanisław Osowski
Warsaw University of Technology, Department of Electrical Engineering, 00-661 Warsaw, POLAND
sto@iem.pw.edu.pl

Kazimierz Brudzewski
Warsaw University of Technology, Department of Chemistry, 00-661 Warsaw, POLAND

Abstract

The paper presents the application of a hybrid neural network to the solution of the calibration problem of a solid-state sensor array used for gas analysis. The applied neural network is composed of two parts: a self-organizing Kohonen layer and a multilayer perceptron (MLP). The role of the Kohonen layer is to perform feature extraction on the data, while the MLP network fulfills the role of estimator of the concentrations of the gas components. The obtained results have shown that an array of partially selective sensors, cooperating with the hybrid neural network, can be used to determine the individual analyte concentrations in a mixture of gases with good accuracy. The hybrid network is a reasonably small net and thanks to this it learns faster and reaches good generalization ability at a reasonably small size of the training data set. The system has two interesting features: lower calibration cost and good accuracy.

1. Introduction

The paper is concerned with the recognition of combustible or toxic gas pollutants using an array of semiconductor oxide sensors and the processing of the obtained signals by a neural network. Four gases have been considered as pollutants: carbon oxide, methane, propane/butane and methanol vapour. A small array of five semiconductor oxide sensing elements with various compositions is used as the sensor set. This array is exposed to various mixtures of air with these four pollutants, and the signals obtained from the sensors are processed by an artificial neural network.

The paper deals with the calibration of a system for gas analysis composed of a solid-state sensor array and a hybrid neural network structure. The hybrid network is composed of two subnetworks: the self-organizing Kohonen layer and the multilayer perceptron (MLP). The structure and efficient learning algorithms are briefly presented and discussed, and the results of experiments concerning the estimation of 4 gas pollutants are presented and compared to other techniques of data processing. A real improvement of both the sensor array accuracy and the reduction of the calibration data size is demonstrated as the main feature of the neural structure considered.

2. Problem statement

The task of the signal processing for a gas sensor system is to classify the recognized gases and to calculate their concentrations [1,2,3,4]. Therefore, the processing of the sensor signals is a twofold problem:
• classification (recognition) of the types of gases
• determination of their concentrations.
On the basis of many experiments we have found that the classification of the gas type and the calculation of its concentration are better when the recognition system is a hybrid network composed of a self-organizing layer (Kohonen net) and a multilayer perceptron neural network (MLP). Such processing of the signals assures the reliability and the acceleration of the whole pattern recognition process.
This procedure is not straightforward, since qualitative and quantitative aspects are merged together, with a degree of complexity generally dependent on the number of gases composing the chemical pattern and on the degree of nonlinearity of the sensor characteristics. The pattern recognition problem is structured in three distinct steps: feature extraction, classification and estimation. In the hybrid network the qualitative aspects are in the charge of the Kohonen net, which provides the classification of the sensor outputs according to their membership in the classes of input signals. The quantitative estimation of the gas concentrations is then performed by the perceptron network. Cascading these two actions results in a more accurate estimation. Supervised and unsupervised learning applied together in the hybrid network seem to represent a substantial improvement in the gas sensing problem.

In the first step of designing the whole measuring system we have applied principal component analysis (PCA) to minimize the number of sensors used for the measurement. Although PCA is limited by its linear nature, this technique has proved to perform well enough in a large variety of applications using different sensors and different kinds of chemical patterns. PCA is a linear technique used for extracting the features related to each pattern. It consists in projecting the data set onto the basis formed by the eigenvectors (principal components) of the covariance matrix of the data set. The percentage of information contained in each component is directly related to the corresponding eigenvalue, so the data set can be reduced to the most relevant components only. This method of presentation can operate as a synthesis of the data: it removes the redundant information and thanks to this reduces the dimensionality of the problem. For example, in our experiments an initial nine-sensor array was simplified to only five sensors after a PCA analysis detected redundancy among four of the sensors; a minimal sketch of such an analysis is given below.
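The following sketch illustrates this kind of PCA screening. The calibration matrix X here is random stand-in data rather than the paper's measurements, and the 99% variance threshold is an illustrative choice, not the authors':

```python
# PCA screening of a sensor array: project the calibration data onto the
# eigenvectors of its covariance matrix and count how many components
# carry almost all of the variance (hypothetical data and threshold).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 9))                  # stand-in for 50 samples x 9 sensors

Xc = X - X.mean(axis=0)                  # center each sensor channel
C = np.cov(Xc, rowvar=False)             # covariance matrix of the data set
eigvals = np.linalg.eigh(C)[0][::-1]     # eigenvalues, sorted descending

# The share of information in each component is proportional to its
# eigenvalue; components beyond the knee of this curve are redundant.
ratio = eigvals / eigvals.sum()
n_keep = int(np.searchsorted(np.cumsum(ratio), 0.99) + 1)
print("explained variance:", np.round(ratio, 3))
print(f"{n_keep} components retain 99% of the variance")
```

On real sensor data a sharp drop in the eigenvalue spectrum indicates how many truly independent sensors the array contains.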
3. Hybrid neural network as a signal processor

To improve the performance of the complex classification and estimation system we have applied a neural network of the hybrid structure presented in Fig. 1. It is a generalization of the Hecht-Nielsen counterpropagation network [10]. The first part is the self-organizing Kohonen layer and the second one is the feedforward network called the multilayer perceptron (MLP). Both are trained separately, each using a learning strategy appropriate to its type: the Kohonen layer is trained using a self-organizing strategy based on competition, while the MLP is trained in supervised mode based on a gradient approach and backpropagation.

[Fig. 1: the structure of the hybrid neural network; the image is not recoverable from the extraction]

3.1 Self-organizing Kohonen layer

A Kohonen layer is composed of a single layer of neurons working in a self-organizing competitive mode [6]. Each neuron is fed the N components of the input vector through the weighting coefficients, forming the N-dimensional weight vector w. In the process of learning the neurons self-organize in such a way that their weights move in the direction of the data vectors. The self-organization algorithm is formed by the sequence of the following operations [6]:
• present the input vector x to the network
• find the area in the network where a specific neuron responds most strongly to the presented vector x; the winner unit Nw is the one whose weight vector is nearest, in the sense of the assumed distance measure, to the input vector
• update the weights of the winner and of the selected neurons of this area in the direction towards the vector x.
Repeating this sequence many times (up to several hundred thousand presentations, depending on the input data, the network size and the adaptation factors) brings the network to an organized state, in which each neuron represents one separate cluster of data.

In the generalized Kohonen algorithm we update the weights of the neurons found in the neighbourhood around the winning neuron Nw according to the following rule

$w_i \leftarrow w_i + \eta \, G(i, x) \, [x - w_i]$   (1)

where $w_i$ is the vector of weights of the ith neuron found in the neighbourhood of the winner, $G(i,x)$ is the neighbourhood function and $\eta$ is the adaptation coefficient (learning constant), decreasing with time. Usually the decrease is linear, starting from some initial value and ending near zero. The neighbourhood of the winner also shrinks with time and is adjusted in a special way. A powerful algorithm is the so-called neural gas [8], in which the neighbourhood function is defined in terms of the distance between the input vector x and the weight vector of the neuron. In this approach we arrange the neurons according to these distances, i.e.,

$d_{m(0)} < d_{m(1)} < \ldots < d_{m(n-1)}$   (2)

where $d_{m(i)} = \| x - w_{m(i)} \|$ means the distance between the input vector x and the weight vector of the m(i)th neuron, for i = 0, 1, ..., n-1. The value of the neighbourhood function is then defined as follows

$G(i, x) = \exp\left( - m(i) / \lambda \right)$   (3)

where m(i) means the position of the ith neuron as a result of this sorting and $\lambda$ is a parameter decreasing with time. The learning coefficient $\eta$ in this approach is also decreasing in time, usually exponentially or linearly, starting from $\eta_{max}$ and ending at $\eta_{min}$ at the end of learning. The neural gas algorithm is regarded as the most effective method of training the Kohonen network.

To get a good organization of the neurons it is important to keep all neurons active in the process of learning. An efficient way to achieve this is to apply the conscience mechanism [5,6], in which a too active neuron (one winning too often) is punished. In practice this punishment was implemented in the competition process by modifying the measure of the distance between the input vector x and the weight vector of the neuron: the real distance is multiplied by the number of winnings of the neuron, i.e.,

$d(x, w_i) \leftarrow d(x, w_i) \cdot K_i$   (4)

where $K_i$ is the number of winnings of the ith neuron. Thanks to this modification the passive neurons, whose initial weights were placed in an unfavourable region, have a chance to win and to modify their weights. As a result the whole net is generally better organized and the total quantization error is smaller.

The main role of the Kohonen layer is the extraction of the features of the data and the classification of each input vector to the appropriate cluster, represented by the winning neuron of the layer. In our approach, at the final estimation of the gas concentration we take into account not only the answer of the winner but also its first few neighbours, where the activity of each neuron is determined by the rule

$y_i = \exp\left[ -\alpha \left( d(x, w_i) - d(x, w_w) \right) \right]$   (5)

with $\alpha$ a coefficient and d the distance between two vectors. For the winner $y_w = 1$ and for all other neurons $y_i < 1$. A minimal sketch of this training procedure is given below.
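The sketch below implements neural-gas training with the conscience rule and the activity function of Eq. (5). The data, the decay schedules of η and λ, the number of epochs and the value of α are illustrative assumptions (the paper does not list its settings), and for simplicity the conscience rule (4) is folded into the distance ranking itself:

```python
# Neural-gas training of a Kohonen layer, Eqs. (1)-(5), on stand-in data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 5))     # stand-in calibration vectors, N = 5 sensors
M = 64                       # number of self-organizing neurons (as in the paper)
W = rng.random((M, 5))       # weight vectors w_i
wins = np.ones(M)            # K_i, winning counts for the conscience rule

eta0, eta_min = 0.5, 0.01
lam0, lam_min = M / 2.0, 0.1
epochs = 100                 # illustrative; the paper quotes far more updates

for t in range(epochs):
    frac = t / (epochs - 1)
    eta = eta0 * (eta_min / eta0) ** frac     # exponentially decaying eta
    lam = lam0 * (lam_min / lam0) ** frac     # shrinking neighbourhood width
    for x in X[rng.permutation(len(X))]:
        d = np.linalg.norm(x - W, axis=1)     # distances to all neurons
        order = np.argsort(d * wins)          # Eq. (4): punish frequent winners
        rank = np.empty(M)
        rank[order] = np.arange(M)            # m(i), rank of each neuron
        G = np.exp(-rank / lam)               # Eq. (3): neighbourhood function
        W += eta * G[:, None] * (x - W)       # Eq. (1): move weights toward x
        wins[order[0]] += 1

def activities(x, W, alpha=5.0):
    """Eq. (5): the winner gets y = 1, all other neurons y < 1."""
    d = np.linalg.norm(x - W, axis=1)
    return np.exp(-alpha * (d - d.min()))

print(activities(X[0], W)[:8])    # the 64 activities later feed the MLP
```

The vector returned by activities() is exactly what the second subnetwork receives as its input in the recall mode described next.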
This is an important difference with respect to the similar network structure presented in [3]. In their solution only the winner's signal of the SOM takes part in the further signal processing performed by the MLP. In our approach this is enhanced by the residual activities of the other self-organizing neurons: all neurons send their signals to the MLP subnetwork. In this way more information is fed to the second subnetwork, and thanks to this the same accuracy of calibration can be achieved with a smaller number of weights. In the recall mode, after presentation of the input vector x, the activities of the neurons in the Kohonen layer are calculated and their output signals are determined on this basis according to relation (5). These signals form the new input vector for the second part of the hybrid neural network, the feedforward one.

3.2 The feedforward multilayer network

The multilayer feedforward network, often also called the multilayer perceptron [5,10], is the second part of the hybrid neural network of Fig. 1. The output signals of the Kohonen layer form the inputs to this network. All neurons in the hidden layer (from 1 to K) have the sigmoidal activation function

$f(u_i) = \frac{1}{1 + \exp(-u_i)}$

while the output neurons are linear. The net signal $u_i$ is the weighted sum of the signals incoming to the ith neuron, with the weight $w_{ij}$ associated with the link connecting the nodes j and i (from j to i). Since the weights $w_{ij}$ are internal parameters associated with each neuron, changing these weights alters the activity of the neuron and, in turn, of the whole MLP network. It was found that the MLP network is capable of approximating any multidimensional data with arbitrary accuracy [6]. To achieve this we have to adapt the weights in such a way that the error measure over all training input-output pairs is minimized. For p training pairs (y, d), where y is the input vector, d the destination vector and z the vector of actual output signals, the error function E is usually defined in the squared form as follows

$E = \frac{1}{2} \sum_{k=1}^{p} \sum_{j=1}^{m} \left( z_j^{(k)} - d_j^{(k)} \right)^2$   (6)

where the index k refers to the kth training pattern. In the gradient method of learning we adapt the weights from cycle to cycle according to the gradient information

$w^{(k+1)} = w^{(k)} + \eta \, p^{(k)}$   (7)

where $p^{(k)}$ is the direction vector of minimization, determined using the gradient information. In the practical implementation of the learning algorithm we have applied the quasi-Newton variable metric method [7,11], in which

$p^{(k)} = -\left[ H^{(k)} \right]^{-1} g^{(k)}$   (8)

with $H^{(k)}$ the approximated Hessian matrix and $g^{(k)}$ the gradient vector of the kth cycle. The gradient is generated using the very powerful backpropagation algorithm [7], while the Hessian is calculated in each cycle using the BFGS variable metric approximation [7,11]. In such an organization of the training process the kth learning cycle is composed of three stages:
• determination of the direction vector p(k) using the information of the Hessian and the gradient
• determination of the optimal learning rate η by minimization along the direction vector p(k)
• adaptation of the weight vector w according to rule (7).
After training, the weights of the network are frozen and ready for the retrieval mode, in which the vector y put to the input of the MLP generates the vector z composed of the activities of the neurons of the output layer. A minimal sketch of such training is given below.
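The sketch below trains such an MLP (sigmoidal hidden layer, linear outputs) by minimizing the squared error (6) with a BFGS quasi-Newton routine. SciPy's optimizer stands in for the paper's variable metric implementation: it builds the same kind of Hessian approximation as in Eq. (8), but here the gradient comes from finite differences rather than backpropagation, and the data are random stand-ins:

```python
# Quasi-Newton (BFGS) training of a 64-6-4 MLP on Kohonen-layer activities.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
Y = rng.random((50, 64))     # Kohonen activities for 50 training mixtures
D = rng.random((50, 4))      # target concentrations, normalized to (0, 1)

K, m = 6, 4                  # 6 hidden sigmoidal neurons, 4 linear outputs
shapes = [(64, K), (K,), (K, m), (m,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def forward(w, Y):
    W1, b1, W2, b2 = unpack(w)
    h = 1.0 / (1.0 + np.exp(-(Y @ W1 + b1)))   # sigmoidal hidden layer
    return h @ W2 + b2                          # linear output neurons

def error(w):                                   # Eq. (6)
    return 0.5 * np.sum((forward(w, Y) - D) ** 2)

w0 = rng.normal(scale=0.1, size=sum(sizes))
res = minimize(error, w0, method="BFGS", options={"maxiter": 50})
print("final squared error:", res.fun)
```

After minimization the weight vector res.x is frozen; the retrieval mode is a single call to forward(res.x, Y_new).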
In defining the hybrid network an important point is the choice of the number of neurons in each layer. The size of the input layer is dictated by the number N of sensors. The number M of Kohonen neurons should reflect the complexity of the data distribution and is usually much higher than N; in the experiments, at N = 5, we have found M = 64 to be the optimal number. The input dimension of the MLP network is then equal to the number of neurons of the Kohonen layer. The output dimension of the MLP is defined by the number of classes, i.e., the number of components of the gas mixture (m = 4 in the experiments). The number of hidden neurons has been adjusted experimentally to obtain the best accuracy of generalization. Note that too large a number of hidden neurons results in bad generalization ability of the network, while too low a number makes the learning process stop at too high a level of the learning error. The experiments with different MLP structures have shown that in our case the optimal hidden layer is composed of 6 neurons with sigmoidal activation function.

4. Experimental results and discussion

An initial nine-sensor array has been simplified to five after a PCA analysis detected redundancy among four of the sensors. So the sensor array has been reduced to five sensors, namely the TGS-815, TGS-822 and TGS-842 Figaro sensors and the NAP-11A and NAP-11AE Nemoto sensors. The array of these sensors has been tested in an entirely computer-controlled gas line. All measurements have been carried out with wet air (RH = 70% at a temperature of 23 °C) as the carrier gas and with carbon oxide, methane, methanol vapours and propane/butane as pollutants. The gas flux of 0.5 l/min was kept constant. The highest concentration of the pollutants did not exceed 1000 ppm. The training data set, consisting of the measured sensor signals of 50 known gas mixtures, has been used as the input for the network; a set of 20 further cases has been used for testing the trained system. The sensor responses have been normalized to the measured range, so the numerical values of the neuron input and output signals are restricted to the range (0, 1); a minimal sketch of this step is given below.
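A tiny sketch of this normalization, assuming simple per-channel min-max scaling over the calibration data (the paper does not spell out the exact scaling used):

```python
# Rescale each sensor channel to (0, 1) over its measured range.
import numpy as np

R = np.random.default_rng(3).random((70, 5)) * 900 + 100   # stand-in responses
lo, hi = R.min(axis=0), R.max(axis=0)                      # per-sensor range
Rn = (R - lo) / (hi - lo)                                  # each channel in [0, 1]
```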
Fig. 2 and Fig. 3 present the preset and the predicted values of the gas concentrations, respectively. As can be seen, the two figures are very similar.

[Figs. 2 and 3: preset and predicted values of the gas concentrations; plots not recoverable from the extraction]

To assess the quality of the prediction, the differences between the predicted and the actual concentrations for each component have been calculated (in ppm). The maximum error did not exceed the value of 35 ppm. The distribution of the errors is presented in Fig. 4.

[Fig. 4: distribution of the prediction errors; plot not recoverable from the extraction]

Two kinds of statistical errors have been calculated: the root mean square error (RMS) and the mean absolute error (MAE). The RMS error is defined for each component of the mixture of gases as follows

$RMS_j = \sqrt{ \frac{1}{p} \sum_{k=1}^{p} \left( c_j^{(k)} - z_j^{(k)} \right)^2 }$

with $c_j$ the real and $z_j$ the predicted concentration of the jth gas component, and p the number of measurements. As was stated earlier, a four-component mixture has been used in the experiments. The global RMS error for the whole mixture was then defined as follows

$RMS_g = \sqrt{ \sum_{j=1}^{m} RMS_j^2 }$

Table 1 presents the results of testing the system for p = 20 measurement samples of the mixture composed of the 4 gas components listed above. As can be seen, the global error stays at a reasonable level, comparable to the accuracy of preparation of the mixture.

Table 1. The RMS errors of prediction for the hybrid network
GAS              RMS [ppm]
Carbon oxide     13
Methane          17
Methanol         16
Propane/butane   11
RMS_g = 28.9 ppm

On the other hand, the MAE errors express the linear relationship between the errors of different samples. Their definition for each component of the gas mixture has been assumed in the following form

$MAE_j = \frac{1}{p} \sum_{k=1}^{p} \left| c_j^{(k)} - z_j^{(k)} \right|$

Similarly as before we can also define the global MAE error. For the 4 components of the mixture this definition has been assumed in the form of the mean value, i.e.,

$MAE_g = \frac{1}{4} \sum_{j=1}^{4} MAE_j$

Table 2 presents the MAE errors for the same case. As can be seen, these MAE errors are smaller than the RMS errors and stay on a similar level for each component of the mixture.

Table 2. The MAE errors of prediction for the hybrid network
GAS              MAE [ppm]
Carbon oxide     12
Methane          14
Methanol         15
Propane/butane   9
MAE_g = 12.5 ppm

A short numerical check of these error definitions is given below.
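As a quick sanity check, plugging the per-component values of Tables 1 and 2 into the global definitions above reproduces the reported figures:

```python
# Verify the global error definitions against Tables 1 and 2.
import numpy as np

rms = np.array([13.0, 17.0, 16.0, 11.0])   # RMS_j per component [ppm]
mae = np.array([12.0, 14.0, 15.0, 9.0])    # MAE_j per component [ppm]

rms_g = np.sqrt(np.sum(rms ** 2))          # -> 28.9 ppm
mae_g = np.mean(mae)                       # -> 12.5 ppm
print(round(rms_g, 1), round(mae_g, 1))
```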
The results presented here have been obtained at the optimal number of neurons in both subnetworks forming the hybrid network. Reducing the number of self-organizing neurons at an unchanged supervised part of the network resulted in a worsening of the accuracy of estimation: for example, at 16 neurons in the self-organizing layer the global $MAE_g$ error increased from 12.5 ppm to 33.74 ppm. On the other side, by increasing this number we have not been able to improve the accuracy of estimation in the testing mode. Moreover, at too high a number of neurons the accuracy evidently dropped: for example, at 196 neurons the total $MAE_g$ error on the test data was equal to 14.58 ppm (an increase of 2.08 ppm). This is due to the worse generalization ability of the bigger network structure. At the same (unchanged) number of learning data, the ratio of the number of learning samples to the number of weights was reduced almost three times, which means that the number of learning samples was then too small for satisfactory generalization; hence the decrease of the accuracy of prediction.

It is interesting to compare the behavior of our hybrid network with the performance of the MLP applied alone. To make a fair comparison we have used an MLP structure with the same number of weights as our hybrid network. Note that an equal number of weights does not mean the same number of neurons. In the experiments we have explored the structure 5-20-12-4. The network has been trained on the same learning data set and then tested on the data used previously for testing the hybrid network. The obtained RMS errors in the testing mode are given in Table 3. As can be seen, this time the level of errors for each component differs significantly, from 13 to 29 ppm. The best obtained total RMS error is equal to 43.4 ppm, i.e., almost 50% worse than with the hybrid neural structure presented earlier.

Table 3. The RMS errors of prediction for the MLP network alone
GAS              RMS [ppm]
Carbon oxide     14
Methane          29
Methanol         13
Propane/butane   26
RMS_g = 43.4 ppm

The results expressed as the MAE errors of prediction for the same testing data set are shown in Table 4. The global MAE error increased from 12.5 ppm (hybrid network) to 14.3 ppm for the MLP.

Table 4. The MAE errors of prediction for the MLP network alone
GAS              MAE [ppm]
Carbon oxide     10
Methane          19
Methanol         10
Propane/butane   18
MAE_g = 14.3 ppm

5. Conclusions

The paper has shown the application of the hybrid neural network to the calibration of the solid-state sensor array used for gas analysis. The obtained results have shown that an array of partially selective sensors can be used to determine the individual analyte concentrations in a mixture of gases. The hybrid network, performing at the same time the unsupervised self-organizing competitive classification and the supervised quantification, is a reasonably small net; thanks to this it learns faster and reaches good generalization ability at a reasonably small size of the training data set. The system has the following interesting features:
• lower calibration cost
• better accuracy at a reasonably small size of the calibration data set
• easy adaptability to different working conditions
• relative insensitivity to noise, due to the application of clusterization.
Splitting the learning process into two separate phases accelerates the adaptation of the weights and makes the device more flexible and easy to reprogram. On the other hand, the reduction of the size of the net is an advantage if the network is to be integrated, since a smaller network means cheaper production.

References

[1] G. S. Broten, H. C. Wood, A neural network approach to analysing multicomponent mixtures, Meas. Sci. Technol., 4 (1993), pp. 1096-1105.
[2] V. Sommer, P. T. Tobias, D. Kohl, H. Sundgren, I. Lundstrom, Neural networks and abductive networks for chemical sensor signals: a case comparison, Sensors and Actuators B, 28 (1995), pp. 217-222.
[3] C. Di Natale, F. A. M. Davide, A. D'Amico, W. Gopel, U. Weimar, Sensor array calibration with enhanced neural networks, Sensors and Actuators B, 18-19 (1994), pp. 654-657.
[4] K. Brudzewski, Smart chemical sensing system for analysis of multi-component mixtures of gases, MST NEWS Poland, 2 (1996), pp. 1-11.
[5] S. Haykin, Neural Networks: A Comprehensive Foundation, Macmillan, New York, 1994.
[6] T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, 1988.
[7] S. Osowski, Fast second order learning algorithm for feedforward multilayer neural networks and its applications, Neural Networks, vol. 9 (1996), pp. 1583-1596.
[8] T. Martinetz, S. Berkovich, K. Schulten, Neural-gas network for vector quantization and its application to time-series prediction, IEEE Trans. on Neural Networks, vol. 4 (1993), pp. 558-569.
[9] C. Di Natale, A. Macagnano, F. Davide, A. D'Amico, A. Legin, Y. Vlasov, A. Rudnitskaya, B. Selezniev, Multicomponent analysis on polluted waters by means of an electronic tongue, Sensors and Actuators B, 44 (1997), pp. 423-428.
[10] R. Hecht-Nielsen, Neurocomputing, Addison-Wesley, 1991.
[11] P. Gill, W. Murray, M. Wright, Practical Optimization, Academic Press, New York, 1981.







