Software Design Level Vulnerability Classification Model
Shabana Rehman
shabana.infosec@gmail.com
Khurram Mustafa
kmustafa@jmi.ac.in
Abstract
Classification of software security vulnerabilities no doubt facilitates the understanding of security-related information and accelerates vulnerability analysis. The lack of a proper classification not only hinders understanding but also hampers the strategy of developing mitigation mechanisms for clustered vulnerabilities. Software developers and researchers now agree that the requirement and design phases of the software are the phases where security incorporation yields maximum benefits. In this paper we have attempted to design a classifier that can identify and classify design level vulnerabilities. In this classifier, vulnerability classes are first identified on the basis of well established security properties like authentication and authorization. Vulnerability training data are collected from authentic sources such as the Common Weakness Enumeration (CWE) and Common Vulnerabilities and Exposures (CVE). From these databases, only those vulnerabilities were included whose mitigation is possible at the design phase. This vulnerability data is then pre-processed using processes such as text stemming, stop word removal and case transformation. After pre-processing, an SVM (Support Vector Machine) is used to classify the vulnerabilities, and bootstrap validation is used to test and validate the classification process performed by the classifier. After training the classifier, a case study is conducted on NVD (National Vulnerability Database) design level vulnerabilities, and vulnerability analysis is done on the basis of the classification results.
Keywords: Security Vulnerabilities, Design Phase, Classification, Machine Learning, Security Properties
1. INTRODUCTION
Developing secure software remains a significant challenge for today's software developers, as they still face difficulty in understanding the causes of vulnerabilities in existing software. It is vital to be able to identify software security vulnerabilities in the early phases of the SDLC (Software Development Life Cycle), and one early detection approach is to consult prior known vulnerabilities and their corresponding fixes [1]. Identification of candidate security vulnerabilities pays a substantial benefit when they are dealt with in early phases, like the requirement and design phases of the software [2]. Classification of vulnerabilities is fruitful in understanding vulnerabilities better, and it also helps in mitigating groups of vulnerabilities. Identifying and mitigating security vulnerabilities is no doubt a difficult task; therefore a taxonomy is developed that can classify vulnerabilities into classes, which helps the designer mitigate clusters of vulnerabilities. There have been a number of taxonomy development approaches in the past, like [3,4,5,6], but none of them proposed a taxonomy that classifies design level vulnerabilities on the basis of security properties. We have already proposed a taxonomy in [7], as shown in Table 1.0 (a), in which an a priori classification is proposed and vulnerabilities are classified manually. But in this manual classification there is a chance of the Hawthorne effect, and it also largely depends on the expertise of the classifier. Therefore, here we create a classifier that can classify vulnerability data automatically. Machine learning is now a popular tool for such automation tasks. Researchers have explored the
International Journal of Computer Science and Security (IJCSS), Volume (6): Issue (4)
238
use of machine learning techniques to automatically associate documents with categories, by first using a training set to adapt the classifier to the feature set of the particular document set [8]. Machine learning is a relatively new approach that can be used in classifying vulnerabilities. Therefore a classifier is proposed here that classifies vulnerabilities on the basis of previously identified vulnerabilities and can help the designer place a vulnerability in predefined vulnerability classes that are based on the security properties of the software, so that a mitigation mechanism can be applied to the whole class of vulnerabilities. In this classifier, data pre-processing (text stemming, stop word removal, case transformation) is done first, then an SVM (Support Vector Machine) with a regression model is used for the final classification. Several conclusions are drawn after applying the classification. Finally, using this classifier, NVD (National Vulnerability Database) vulnerabilities are classified and analyzed.
First level: Access Control

Second level: Access Control at Process Level
  - Authentication: Missing Authentication procedure; Insufficient Authentication procedure; Wrong Authentication procedure
  - Authorization: Missing Authorization procedure; Insufficient Authorization procedure; Wrong Authorization procedure
  - Audit and Logging: Missing Audit and Logging; Insufficient Logging or Audit of information; Wrong Audit or Logging of information

Second level: Access Control at Communication Level
  - Secured Session Management: Missing Secured Session Management; Insufficient Secured Session Management; Wrong Secured Session Management
  - Secured Information Flow: Missing Encryption of Sensitive Data during Transmission; Insufficient Encryption of Sensitive Data during Transmission; Wrong Encryption of Sensitive Data during Transmission

Second level: Exposures leading to Access Violation
  - Exposures in Error Message: Missing Secured Error Message; Insufficient Secured Error Message; Wrong Secured Error Message
  - Predictable Algorithm / Sequence Numbers / File Names: Missing Randomness in the Random Sequence Ids; Insufficient Randomness in the Random Sequence Ids; Wrong Randomness in the Random Sequence Ids or Wrong Choice of File Name

Second level: User Alertness
  - Missing User Alerting Information; Insufficient User Alerting Information; Wrong User Alerting Information

TABLE 1.0 (a): Design level vulnerability taxonomy proposed in [7] (second level classes with their third level subclasses and fourth level vulnerability types)
While considering the number of classes in the proposed classifier, we consider only access control at process level and access control at communication level; all other types of vulnerabilities are placed in the Others class. Because the exposures leading to access violation class covers a large domain of vulnerabilities and needs a separate study, we exclude it from the classifier after exploring its domain and will consider it in future work.
The rest of the paper is organized as follows: in section 2, related work in vulnerability classification is discussed; in section 3, the development process of the vulnerability classification model is explained in detail; classification of vulnerabilities using the developed classifier is done in section 4; and conclusions and future work are discussed in section 5.
2. RELATED WORK
There are many classification approaches using machine learning techniques. [9] proposed an ontological approach to retrieving vulnerability data and establishing relationships between vulnerabilities; they also reason about the cause and impact of vulnerabilities. In their ontology of vulnerability management (OVM), they have populated all vulnerabilities of the NVD (National Vulnerability Database), with additional inference rules, knowledge representation, and data-mining mechanisms. Another relevant work in the vulnerability classification area is done by [10]: they proposed a CVE categorization framework that transforms the vulnerability dictionary into a classifier that categorizes CVE (Common Vulnerabilities and Exposures) entries with respect to diverse taxonomic features and evaluates general trends in the evolution of vulnerabilities. [11], in their paper entitled "Secure Software Design in Practice", presented SODA (a Security-Oriented Software Development Framework), which was the result of a research project whose main goal had been to create a system of practical techniques and tools for creating secure software, with a special focus on the design phase of the software. Another approach to categorizing vulnerabilities is that of [12], who looked at the possibilities of categorizing vulnerabilities in the CVE using SOM (self-organizing maps). They presented a way to categorize the vulnerabilities in the CVE repository and proposed a solution for standardization of the vulnerability categories using a data-clustering algorithm. [13] proposed SecureSync, an automatic approach to detect and provide suggested resolutions for recurring software vulnerabilities in multiple systems sharing or using similar code or API libraries. There are many other vulnerability classification approaches, like [14,15,16], but all the above mentioned approaches are either too generic in nature or cannot be used to classify vulnerabilities on the basis of the security properties of the software. Therefore, in this research work we propose a classifier that is developed using machine learning techniques and is very specific to the design phase of the software. In the next section, the development of the classifier is explained.
3. DEVELOPMENT OF THE VULNERABILITY CLASSIFICATION MODEL
The vulnerability categorization framework proposed by [10] is similar to this design level classifier. But Chen's framework is a generalized categorization framework developed to classify all the vulnerabilities of CVE, on the basis of the classification categories of BID, X-Force and Secunia. The training data in their framework is also taken from these vulnerability databases only.
In our design level vulnerability classifier, only design level vulnerabilities are classified, and the training data contains only those identified vulnerabilities which can be mitigated at the design level of the software. Moreover, the classes are defined on the basis of security properties of the software, like authentication, authorization etc., which are generally considered while developing the security design patterns of the software. Therefore, after classification, developers and researchers can prioritize the prevailing vulnerability classes before choosing a security design pattern.
3.1 Feature Vector Creation
The vulnerabilities in the CVE are described in natural language form. Therefore the only way to identify a feature vector from the vulnerability description is the frequency of keywords in the description; feature vectors are thus identified by the keywords used in the descriptions of the vulnerabilities. To turn a vulnerability description into a structured representation that can be used by machine learning algorithms, the text is first converted into tokens, followed by case transformation, stop word removal and stemming. There are five steps in the feature creation process, specified as follows:
a). Tokenization
b). Case Transformation
c). Stopword elimination
d). Text stemming of CVE entries
e). Weight Assignment
a). Tokenization
The isolation of word-like units from a text is called tokenization. It is a process in which a text stream is broken down into words, phrases and symbols called tokens [17]. These tokens can then be used as input for information processing. In order to use text in machine learning, the raw text is first transformed into a machine readable form, and the first step towards this is tokenization. As shown in Fig. 3.1 (a), the raw text is first fed to the pre-processor, which converts the text into tokens; further morphological analysers are then used to perform the required linguistic analysis.

In order to feed a vulnerability description into the machine learning process, the textual description of the vulnerability is first converted into tokens. In Table 3.1 (a), a vulnerability description is shown after each pre-processing step.
Vulnerability ID: CVE-2007-0164

Tokenized description:
Camouflage, embeds, password, information, in, the, carrier, file, which, allows, remote, attackers, to, bypass, authentication, requirements, and, decrypt, embedded, steganography, by, replacing, certain, bytes, of, the, JPEG, image, with, alternate, password, information

After case transformation:
camouflage, embeds, password, information, in, the, carrier, file, which, allows, remote, attackers, to, bypass, authentication, requirements, and, decrypt, embedded, steganography, by, replacing, certain, bytes, of, the, jpeg, image, with, alternate, password, information

After stopword removal:
camouflage, embeds, password, information, carrier, file, allows, remote, attackers, bypass, authentication, requirements, decrypt, embedded, steganography, replacing, bytes, jpeg, image, alternate, password, information

TABLE 3.1 (a): Vulnerability description of CVE-2007-0164 after tokenization, case transformation and stopword removal
Examples of Porter stemming suffix rules and their effect:

SSES -> SS        caresses -> caress
IES  -> I         ponies   -> poni
SS   -> SS        caress   -> caress
S    -> (removed) cats     -> cat
The vulnerability description of CVE-2007-0164 after applying the Porter stemming algorithm is shown in Table 3.1 (d).
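Steps a) to d) of the feature creation process can be sketched in Python as follows. The stop list and the single-pass suffix stripper are deliberately simplified illustrations (the paper uses a full stoplist [18] and the complete Porter algorithm), not the authors' actual implementation:

```python
import re

# Illustrative stop list only; a real system uses a full stoplist such as Fox's [18].
STOP_WORDS = {"in", "the", "which", "to", "and", "by", "of", "with", "a", "an"}

def crude_stem(token: str) -> str:
    """Simplified suffix stripping in the spirit of Porter's first rules
    (SSES -> SS, IES -> I, SS kept, trailing S dropped); not the full algorithm."""
    if token.endswith("sses"):
        return token[:-2]
    if token.endswith("ies"):
        return token[:-2]
    if token.endswith("ss"):
        return token
    if token.endswith("s"):
        return token[:-1]
    return token

def preprocess(description: str) -> list[str]:
    tokens = re.findall(r"[A-Za-z0-9-]+", description)   # a) tokenization
    tokens = [t.lower() for t in tokens]                 # b) case transformation
    tokens = [t for t in tokens if t not in STOP_WORDS]  # c) stopword elimination
    return [crude_stem(t) for t in tokens]               # d) stemming

desc = "Camouflage embeds password information in the carrier file"
print(preprocess(desc))
# -> ['camouflage', 'embed', 'password', 'information', 'carrier', 'file']
```

The stemmer reproduces the rule examples above (caresses -> caress, ponies -> poni, caress -> caress, cats -> cat).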
The term frequency (TF) of term t in document d is defined as

    tf(t,d) = f(t,d) / max_k f(k,d)

where f(t,d) is the number of occurrences of the term t in document d, and max_k f(k,d) is the maximum number of occurrences of any term in d. Thus, the most frequent term in document d gets a TF of 1, and other terms get fractions as their term frequency for this document. The disadvantage of using term frequency alone is that all terms are considered equally important. In order to avoid this bias, term frequency-inverse document frequency (tf-idf) weighting is used. The IDF for a term is defined as follows [23]:

    idf(t) = log ( |D| / |{d : t in d}| )

where |D| is the total number of documents and |{d : t in d}| is the number of documents containing the term t. As defined in [22], the tf-idf weighting scheme assigns the term t a weight in document d given by

    tf-idf(t,d) = tf(t,d) x idf(t)

This weight is highest when t occurs many times within a small number of documents; lower when the term occurs fewer times in a document, or occurs in many documents; and lowest when the term occurs in virtually all the documents.
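As a minimal sketch, the TF and IDF definitions above can be computed directly. The natural logarithm is assumed here (the base only rescales the weights), and the corpus of token lists is illustrative:

```python
import math
from collections import Counter

def tf(term, doc_tokens):
    # Frequency of the term normalized by the most frequent term in the document
    counts = Counter(doc_tokens)
    return counts[term] / max(counts.values())

def idf(term, corpus):
    # log(|D| / number of documents containing the term)
    df = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / df) if df else 0.0

def tf_idf(term, doc_tokens, corpus):
    return tf(term, doc_tokens) * idf(term, corpus)

# Hypothetical pre-processed vulnerability descriptions
corpus = [
    ["bypass", "authentication", "remote", "attacker"],
    ["bypass", "authorization", "session"],
    ["session", "fixation", "attacker"],
]
# "authentication" occurs in one of three documents: tf = 1, idf = ln(3)
print(round(tf_idf("authentication", corpus[0], corpus), 4))
```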
TABLE 3.1 (e): Example set of feature vectors with their tf-idf weights
After applying all the above processing, each vulnerability description is represented as a vector of weights calculated using the tf-idf formula; an example set of ten rows is shown in Table 3.1 (e). This vector form is used in the scoring and ranking of vulnerabilities.
3.2 Categorization Using Support Vector Machine
Text categorization is the process of categorizing text documents into one or more predefined categories or classes. Differences in the results of such categorization arise from the feature set chosen as the basis for associating a given document with a given category [24]. There are a number of statistical classification methods that can be applied to text categorization, such as Naive Bayes [25], Bayesian networks [25], decision trees [26, 27], neural networks [28], linear regression [29] and k-NN [30]. The SVM (support vector machine) learning method, introduced by [31], is well founded in terms of computational learning theory. Support vector machines have met with significant success in numerous real-world learning tasks [25]. Compared with alternative machine learning methods, including Naive Bayes and neural networks, SVMs achieve significantly better performance in terms of generalization [32, 33].

SVM classification algorithms, proposed by Vapnik [34] to solve two-class problems, are based on finding a hyperplane that separates the classes of data, as shown in Figure 3.2 (a).
This means that the SVM algorithm can operate even on fairly large feature sets, as the goal is to measure the margin of separation of the data rather than matches on features [24]. The SVM is trained using pre-classified documents.

As explained in [34], for a given set of training data T = {(x_i, y_i)} (i = 1, ..., m), each data point x_i belongs to R^d with d features and has a true label y_i in Y = {l_1, ..., l_k}. In the case of a binary classifier the label set is Y = {l_1 = -1, l_2 = +1}, and data points are classified as positive or negative by finding a separating hyperplane. The separating hyperplane can be expressed as shown in Eq. 3.2 (a):

    w . x + b = 0 -------------------------------------------------- Eq 3.2(a)

where w in R^d is a weight vector normal to the hyperplane, the operator (.) computes the inner product of the vectors w and x, and b is the bias. We want to choose w and b to maximize the margin, i.e. the distance between the parallel hyperplanes that are as far apart as possible while still separating the data. These hyperplanes can be described by the equations

    w . x + b = +1    and    w . x + b = -1

Since SVM is inherently a binary classifier, when more than two classes are involved a regression model is used. The regression model builds a classifier using a regression method which is specified by the inner operator. For each class i, a regression model is trained after setting the label to +1 if the label equals i and to -1 if it does not. The regression models are then combined into a classification model. Here we are using an SVM classification model, therefore the regression model is combined with SVM. In order to determine the prediction for an unlabeled example, all models are applied, and the class belonging to the regression model which predicts the greatest value is chosen.
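The one-vs-rest scheme just described can be sketched as follows. A simple least-squares linear model trained by gradient descent stands in here for the SVM regression learner used in RapidMiner, and the feature vectors and class names are illustrative only:

```python
# One-vs-rest "classification by regression": one regression model per class
# with relabeled targets +1/-1; the prediction is the class whose model
# outputs the greatest value.

def train_linear(X, y, lr=0.1, epochs=200):
    """Fit w, b by stochastic gradient descent on squared error."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = pred - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def one_vs_rest_fit(X, labels, classes):
    models = {}
    for c in classes:
        y = [1.0 if l == c else -1.0 for l in labels]  # relabel for class c
        models[c] = train_linear(X, y)
    return models

def predict(models, x):
    def score(m):
        w, b = m
        return sum(wj * xj for wj, xj in zip(w, x)) + b
    return max(models, key=lambda c: score(models[c]))  # greatest value wins

# Tiny hypothetical tf-idf feature vectors
X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
labels = ["Authentication", "Authentication", "Authorization", "Authorization"]
models = one_vs_rest_fit(X, labels, ["Authentication", "Authorization"])
print(predict(models, [0.95, 0.05]))
```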
3.3 Identification and Preparation of Training Data
There are a number of public and private vulnerability databases that classify vulnerabilities on different bases, like cause, phase of occurrence, product etc.; the list is shown in Table 3.3 (a). But they are all too generic in nature. The CWE (Common Weakness Enumeration) [36] is a vulnerability database and portal in which each vulnerability is specified with its type, mitigation, phase of introduction and the security property it belongs to. Therefore it is the best place from which vulnerability data can be collected on the basis of phase of introduction and security property. Fig 3.3 (a) is a screenshot of the CWE window, showing the information that each entry of CWE contains. Here we are interested only in those vulnerabilities that can be mitigated in the design phase of the software. In CWE the vulnerabilities are divided into a number of classes, and a number of examples are given in the description of each class. In order to collect the training data from CWE, we explore the required security class and then collect the vulnerability examples from each class. Almost all the examples used in CWE are from CVE (Common Vulnerabilities and Exposures). The maximum possible number of examples was collected from the CWE for the training set.
1. Common Vulnerabilities and Exposures: http://cve.mitre.org/
2. Common Weakness Enumeration: http://cwe.mitre.org/
3. Computer Associates Vulnerability Encyclopedia: http://www3.ca.com/securityadvisor/vulninfo/browse.aspx
4. DragonSoft Vulnerability Database: http://vdb.dragonsoft.com
5. ISS X-Force: http://xforce.iss.net/xforce/search.php
6. National Vulnerability Database: http://nvd.nist.gov/
7. Open Source Vulnerability Database: http://www.osvdb.org/
8. Public Cooperative Vulnerability Database: https://cirdb.cerias.purdue.edu/coopvdb/public/
9. Security Focus: http://www.securityfocus.com/vulnerabilities/

TABLE 3.3 (a): Public vulnerability databases and their URLs
FIGURE 3.3 (a): CWE Vulnerability Class Description Window, indicating where an entry specifies that the vulnerability is introduced at the design phase of the SDLC, and the examples from CVE that can be included in the training set of authentication vulnerabilities
After an exhaustive search of the CWE vulnerability classes, the numbers of vulnerability examples that were collected are shown in Table 3.3 (b). While collecting data from CWE, utmost care is taken to include only those vulnerability classes where the time of introduction is specified as the design phase.
1. Authentication: 54
2. Authorization: 50
3. Audit and Logging: 36
4. Secure Information Flow: 31
5. Secure Session Management: 24

TABLE 3.3 (b): Number of training data identified under each class
3.4 Validation Using Bootstrap Method
After training the SVM with the regression model, bootstrap validation is used to validate the classification. Fig 3.4 (a) shows a screenshot of RapidMiner while implementing bootstrap validation.

FIGURE 3.4 (a): Screenshot from the RapidMiner tool, while implementing bootstrap validation

The overall accuracy estimate of the .632 bootstrap method with k samples is [38]:

    Acc(M) = (1/k) * sum_{i=1..k} ( 0.632 x Acc(Mi)test_set + 0.368 x Acc(Mi)train_set )

where Acc(Mi)test_set is the accuracy of the model obtained with bootstrap sample i when it is applied to test set i, and Acc(Mi)train_set is the accuracy of the model obtained with bootstrap sample i when it is applied to the original set of data tuples. The bootstrap method works well with small data sets.
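A minimal sketch of the .632 bootstrap estimate follows. `train_and_eval` is a hypothetical callback standing in for training the classifier on a bootstrap sample and returning its test-set and training-set accuracies:

```python
import random

def bootstrap_632(data, train_and_eval, k=10, seed=42):
    """.632 bootstrap accuracy estimate:
    Acc(M) = (1/k) * sum(0.632 * Acc_test_i + 0.368 * Acc_train_i)."""
    rng = random.Random(seed)
    n = len(data)
    total = 0.0
    for _ in range(k):
        sample_idx = [rng.randrange(n) for _ in range(n)]   # sample with replacement
        train = [data[i] for i in sample_idx]
        oob = [data[i] for i in range(n) if i not in set(sample_idx)]  # out-of-bag test set
        acc_test, acc_train = train_and_eval(train, oob if oob else data)
        total += 0.632 * acc_test + 0.368 * acc_train
    return total / k

# Hypothetical evaluator returning fixed accuracies, just to show the weighting:
# 0.632 * 0.90 + 0.368 * 0.98 = 0.92944
est = bootstrap_632(list(range(100)), lambda tr, te: (0.90, 0.98), k=5)
print(round(est, 4))
```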
The whole process followed in making the classifier is shown in Figure 3.5 (a). The RapidMiner data mining tool is used, which has almost all the available data-mining processes in the form of operators. First of all, the training data is fed to the regression model that is integrated with SVM; then the Apply Model operator is used to apply the created model, and the Performance operator is used to measure the performance of the classifier. As an output, a confusion matrix is obtained that shows the accuracy of the classifier.

The confusion matrix obtained after the application of the classifier is shown in Table 3.5 (a), and a 3D graphical representation of the confusion matrix is shown in Fig 3.5 (b).
Class                      Correctly Classified   Class Precision   Class Recall
Authorization                     177                 96.72%           95.52%
Others                            139                 98.58%           92.20%
Secure Information Flow           131                100.00%          100.00%
Audit and Logging                 128                 92.09%           98.88%
Authentication                    189                 88.73%           84.24%
Session Management                 92                 92.93%          100.00%

Accuracy: 94.52% +/- 1.85% (micro: 94.48%)

TABLE 3.5 (a): Confusion matrix of the classifier (diagonal counts with class precision and class recall)
Almost all the classes have a class precision value above 90%. An accuracy rate of about 90% makes the classifier quite accurate [38]. The overall accuracy rate of the developed classifier is 94.5%.

As shown in Table 3.5 (a), the class precision of the authentication class is only about 88%, because the keywords used in the authentication class are also common to other classes. For example, vulnerability descriptions often contain phrases like "unauthenticated user" or "not allowed authenticated user", which do not actually indicate authentication as the cause, but the classifier gets confused due to the frequent use of these terms in other classes as well, which affects its performance. The overall accuracy of the classifier is nevertheless acceptable, at 94.5%.
4. CLASSIFICATION OF NVD VULNERABILITIES
After training and validation, the classifier is applied to the design level vulnerabilities of the NVD (National Vulnerability Database). For each vulnerability, the model outputs a confidence value for every class (Authorization, Others, Secure Information Flow, Audit and Logging, Session Management, Authentication), and the class whose regression model predicts the greatest confidence is chosen as the prediction.
FIGURE 4.0 (a): Screenshot from RapidMiner while implementing the final model
The percentages of audit and logging, secure information flow and session management are 18%, 15% and 12% respectively, which makes them almost equally important.
Vulnerability Class        Count   Percentage   Percentage excluding Others
Authentication               96      22.48          30.97
Authorization                72      16.86          23.23
Audit and Logging            56      13.11          18.06
Secure Information Flow      47      11.01          15.16
Session Management           39       9.13          12.58
Others                      117      27.40            -
Total                       427     100.00         100.00

TABLE 4.0 (a): Classification results of NVD design level vulnerabilities
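The counts and percentages in the table above can be reproduced with a few lines of arithmetic:

```python
# Class counts from the classification of NVD design level vulnerabilities
counts = {
    "Authentication": 96, "Authorization": 72, "Audit and Logging": 56,
    "Secure Information Flow": 47, "Session Management": 39, "Others": 117,
}
total = sum(counts.values())               # 427
without_others = total - counts["Others"]  # 310 vulnerabilities in the five classes
for cls, n in counts.items():
    pct = 100 * n / total
    pct_excl = 100 * n / without_others if cls != "Others" else 0.0
    print(f"{cls}: {pct:.2f}% ({pct_excl:.2f}% excluding Others)")
```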
These vulnerability classification data can be used together with severity ratings to calculate the risk of vulnerability occurrence at the design phase.
REFERENCES
[1]
[2]
G. Hoglund and G. McGraw. Exploiting Software: How to Break Code. New York: Addison-Wesley, 2004.
[3]
[4]
[5]
[6]
I.V. Krsul, Software Vulnerability Analysis. Ph.D. Thesis. Purdue University. USA, 1998.
[7]
[8]
T. Joachims. Text categorization with support vector machines: learning with many
relevant features. 10th European Conference on Machine Learning. 1998.
[9]
J. A. Wang, and M. Guo. OVM: An Ontology for Vulnerability Management. 7th Annual
Cyber Security and Information Intelligence Research Workshop.Tennessee, USA. 2009.
[10]
[11]
P.H. Meland, and J. Jensen. Secure Software Design in Practice. Third International
Conference on Availability, Reliability and Security. 2008.
[12]
Y. Li, H.S. Venter, and J.H.P Eloff. Categorizing vulnerabilities using data clustering
techniques, Information and Computer Security Architectures (ICSA) Research Group.
2009.
[13]
N.H. Pham, T.T. Nguyen, H.A. Nguyen, X. Wang, A.T. Nguyen, and T.N. Nguyen. Detecting Recurring and Similar Software Vulnerabilities. International Conference on Software Engineering. Cape Town, South Africa. 2010.
[14]
[15]
[16]
Y. Wu, R.A. Gandhi, and H. Siy. Using Semantic Templates to Study Vulnerabilities Recorded in Large Software Repositories. 6th International Workshop on Software Engineering for Secure Systems. Cape Town, South Africa. 2010.
[17]
[18]
C. Fox. Lexical Analysis and Stoplists. In Information Retrieval: Data Structures and Algorithms. New York: Prentice-Hall. 1992.
[19]
[20]
Lemur Project (2008). The Lemur Toolkit: For Language Modeling and Information
Retrieval, 2008. Available: http://www.lemurproject.org.
[21]
[22]
[23]
[24]
A. Basu, C. Walters, and M. Shepherd. Support Vector Machines for Text Categorization. 36th Annual Hawaii International Conference on System Sciences. 2003.
[25]
T. Joachims. A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization. 14th International Conference on Machine Learning. 1997.
[26]
J.R. Quinlan. C4.5: Programs for Machine Learning. San Francisco: Morgan Kaufmann Publishers. 1993.
[27]
S.M. Weiss, C. Apte, F.J. Damerau, D.E. Johnson, F.J. Oles, T. Goetz, and T. Hampp. Maximizing Text-Mining Performance. IEEE Intelligent Systems, 1999.
[28]
[29]
[30]
[31]
[32]
C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2, 1998, pp. 1-47.
[33]
J.T.K. Kwok. Automated Text Categorization Using Support Vector Machine. International
Conference on Neural Information Processing, 1998.
[34]
V. Vapnik. Statistical Learning Theory. New York: John Wiley and Sons. 1998.
[35]
T. Hastie and R. Tibshirani. Classification by Pairwise Coupling. Ann. Statist., 26, 1998, pp. 451-471.
[36]
[37]
[38]
J. Han and M. Kamber. Data Mining: Concepts and Techniques. San Francisco: Morgan Kaufmann Publishers, 2006.