
Module No. 4
Text Classification
Lecture – 2
• Types of learning techniques
• Text Classification
• Sampling
• Naïve Bayes
• Decision trees
• Stochastic gradient descent
• Logistic Regression
• Support vector machine
• Text clustering
Decision Trees
Decision Trees are a type of Supervised Machine Learning (that is, you explain what the input is and what the corresponding output is in the training data) where the data is continuously split according to a certain parameter. The tree can be explained by two entities, namely decision nodes and leaves.
Why use Decision Trees?
•There are various algorithms in Machine learning, so choosing
the best algorithm for the given dataset and problem is the main
point to remember while creating a machine learning model.
Below are the two reasons for using the Decision tree:
•Decision Trees usually mimic human thinking ability while
making a decision, so it is easy to understand.
•The logic behind the decision tree can be easily understood
because it shows a tree-like structure.
•Decision Tree is a Supervised learning technique that can be
used for both classification and Regression problems, but mostly it is
preferred for solving Classification problems. It is a tree-structured
classifier, where internal nodes represent the features of a dataset,
branches represent the decision rules and each leaf node represents
the outcome.
•In a Decision tree, there are two types of nodes: the Decision Node and the Leaf Node. Decision nodes are used to make any decision and have multiple branches, whereas Leaf nodes are the outputs of those decisions and do not contain any further branches.
•The decisions or tests are performed on the basis of the features of the given dataset.
•It is a graphical representation for getting all the possible solutions to
a problem/decision based on given conditions.
•It is called a decision tree because, similar to a tree, it starts with the
root node, which expands on further branches and constructs a tree-like
structure.
•In order to build a tree, we use the CART algorithm, which stands
for Classification and Regression Tree algorithm.
Decision Tree Terminologies
•Root Node: Root node is from where the decision tree starts. It represents
the entire dataset, which further gets divided into two or more
homogeneous sets.
•Leaf Node: Leaf nodes are the final output nodes, and the tree cannot be split further after reaching a leaf node.
•Splitting: Splitting is the process of dividing the decision node/root node
into sub-nodes according to the given conditions.
•Branch/Sub Tree: A tree formed by splitting the tree.
•Pruning: Pruning is the process of removing the unwanted branches from
the tree.
•Parent/Child node: The root node of the tree is called the parent node,
and other nodes are called the child nodes.
•How does the Decision Tree algorithm Work?
•Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
•Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
•Step-3: Divide S into subsets that contain possible values for the best attribute.
•Step-4: Generate the decision tree node, which contains the best attribute.
•Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified further; the final node is then called a leaf node.
Example: Suppose there is a
candidate who has a job offer
and wants to decide whether he
should accept the offer or Not.
So, to solve this problem, the
decision tree starts with the root
node (Salary attribute by ASM).
The root node splits further into
the next decision node (distance
from the office) and one leaf
node based on the corresponding
labels. The next decision node
further gets split into one
decision node (Cab facility) and
one leaf node. Finally, the
decision node splits into two leaf
nodes (Accepted offer and Declined offer).
Attribute Selection Measures
•While implementing a Decision tree, the main issue arises that how to
select the best attribute for the root node and for sub-nodes. So, to solve
such problems there is a technique which is called as Attribute selection
measure or ASM.
•By this measurement, we can easily select the best attribute for the
nodes of the tree.
• There are two popular techniques for ASM, which are:
1. Information Gain
2. Gini Index
1. Information Gain:
•Information gain is the measurement of changes in entropy after the
segmentation of a dataset based on an attribute.
•It calculates how much information a feature provides us about a class.
•According to the value of information gain, we split the node and build
the decision tree.
•A decision tree algorithm always tries to maximize the value of
information gain, and a node/attribute having the highest information gain
is split first. It can be calculated using the below formula:

Information Gain = Entropy(S) - [(Weighted Avg) * Entropy(each feature)]

Entropy: Entropy is a metric that measures the impurity in a given attribute. It specifies the randomness in the data. Entropy can be calculated as:

Entropy(S) = -P(yes) log2 P(yes) - P(no) log2 P(no)

Where,
• S = the set of all samples
• P(yes) = probability of yes
• P(no) = probability of no
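As a small illustration of these formulas, the following sketch computes the entropy of a parent node and the information gain of one candidate split; the class counts are made-up toy numbers, not taken from the lecture:

import math

def entropy(p_yes, p_no):
    # Entropy(S) = -P(yes) log2 P(yes) - P(no) log2 P(no); a zero probability contributes 0
    return -sum(p * math.log2(p) for p in (p_yes, p_no) if p > 0)

# Toy split: a parent node with 14 samples (9 yes / 5 no) divided by an attribute
# into one subset of 8 samples (6 yes / 2 no) and one subset of 6 samples (3 yes / 3 no).
parent_entropy = entropy(9/14, 5/14)
weighted_children = (8/14) * entropy(6/8, 2/8) + (6/14) * entropy(3/6, 3/6)
information_gain = parent_entropy - weighted_children
print(round(information_gain, 3))   # roughly 0.048 for these counts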
2. Gini Index:
•Gini index is a measure of impurity or purity used while creating a decision tree in the CART (Classification and Regression Tree) algorithm.
•An attribute with a low Gini index should be preferred over one with a high Gini index.
•It only creates binary splits, and the CART algorithm uses the Gini index to create binary splits.
•Gini index can be calculated using the below formula:
Gini Index = 1 - Σj (Pj)², where Pj is the probability of class j in the node.
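A corresponding sketch for the Gini index, again with made-up class counts rather than data from the lecture:

def gini(counts):
    # Gini Index = 1 - sum_j (P_j)^2, where P_j is the proportion of class j in the node
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

print(gini([6, 2]))   # 0.375 -> fairly pure node
print(gini([3, 3]))   # 0.5   -> maximally impure binary node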
Advantages of the Decision Tree
•It is simple to understand, as it follows the same process that a human follows while making any decision in real life.
•It can be very useful for solving decision-related problems.
•It helps to think about all the possible outcomes for a problem.
•There is less requirement of data cleaning compared to other
algorithms.
Disadvantages of the Decision Tree
•The decision tree contains lots of layers, which makes it complex.
•It may have an overfitting issue, which can be resolved using
the Random Forest algorithm.
•With more class labels, the computational complexity of the decision tree may increase.
Decision trees ----- continued
•Decision trees are one of the oldest predictive modeling techniques, where, for the given features and target, the algorithm tries to build a logic tree.
• There are multiple algorithms for decision trees. One of the most famous and widely used is CART.
•CART constructs binary trees by choosing, at each node, the feature and threshold that yield the largest information gain.
•Let's write the code to get a CART classifier:
https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#
>>> from sklearn import tree
>>> from sklearn.metrics import classification_report
>>> # X_train/X_test are the vectorized documents; convert the sparse matrices to dense arrays
>>> clf = tree.DecisionTreeClassifier().fit(X_train.toarray(), y_train)
>>> y_tree_predicted = clf.predict(X_test.toarray())
>>> print(y_tree_predicted)
>>> print('\n Here is the classification report:')
>>> print(classification_report(y_test, y_tree_predicted))

•The only difference is in the input format of the training set. We need to convert the sparse matrix format to a NumPy array because the scikit-learn tree module takes only a NumPy array.
•Generally, trees work well when the number of features is small. So, although our results look good here, trees are rarely used in text classification. On the other hand, trees do have some genuinely positive sides.
•There are many implementations of tree-based algorithms, such
as ID3, C4.5, and C5. scikit-learn uses an optimized version of
the CART algorithm.
Stochastic gradient descent

•Stochastic gradient descent (SGD) is a simple yet very efficient approach for fitting linear models.
• It is particularly useful when the number of samples (and the number of features) is very large.
•If you follow the cheat sheet, you will find SGD to be the one-stop solution for many text classification problems.
•Since it also takes care of regularization and provides different losses, it turns out to be a great choice when experimenting with linear models.

•SGD provides functionality to fit linear models for classification and regression using different (convex) loss functions and penalties.
•For example, with loss = 'log' it fits a logistic regression model, while with loss = 'hinge' it fits a linear support vector machine (SVM).
•Stochastic Gradient Descent is a popular algorithm for training a wide
range of models in Machine Learning, including (linear) support vector
machines, logistic regression, and graphical models. When combined
with the backpropagation algorithm, it is the de facto standard algorithm
for training artificial neural networks. Recently, SGD has been applied to
large-scale and sparse machine learning problems often encountered in
text classification and Natural Language Processing.
•The Stochastic Gradient Descent (SGD) classifier uses an optimization algorithm to find the values of the parameters of a function that minimize a cost function.
•The algorithm is very similar to traditional Gradient Descent. However, it only calculates the derivative of the loss for a single random data point rather than for all of the data points (hence the name stochastic). This makes the algorithm much faster than Gradient Descent.
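To make the per-sample update concrete, here is a minimal NumPy sketch; the synthetic data, squared loss, learning rate, and epoch count are illustrative assumptions, not part of the original example:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)
lr = 0.01
for epoch in range(20):
    for i in rng.permutation(len(X)):       # visit samples in random order
        grad = (X[i] @ w - y[i]) * X[i]     # gradient of 0.5 * (x·w - y)^2 w.r.t. w
        w -= lr * grad                      # update using this single sample only
print(w)                                    # should end up close to true_w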
An example of SGD is as follows:
>>> from sklearn.linear_model import SGDClassifier
>>> from sklearn.metrics import classification_report, confusion_matrix
>>> clf = SGDClassifier(alpha=0.0001, max_iter=50).fit(X_train, y_train)
>>> y_pred = clf.predict(X_test)
>>> print('\n Here is the classification report:')
>>> print(classification_report(y_test, y_pred))
>>> print(' \n confusion_matrix \n ')
>>> cm = confusion_matrix(y_test, y_pred)
>>> print(cm)
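As noted above, the choice of loss determines which linear model SGD fits. A minimal illustration (note that recent scikit-learn releases spell the logistic loss "log_loss"; older releases used "log"):

from sklearn.linear_model import SGDClassifier

# loss="log_loss" ("log" in older scikit-learn releases) -> logistic regression trained with SGD
logreg_sgd = SGDClassifier(loss="log_loss", penalty="l2")

# loss="hinge" -> a linear SVM trained with SGD
linear_svm_sgd = SGDClassifier(loss="hinge", penalty="l2")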
The advantages of Stochastic Gradient Descent are:
•Efficiency.
•Ease of implementation (lots of opportunities for code tuning).

The disadvantages of Stochastic Gradient Descent include:
•SGD requires a number of hyperparameters, such as the regularization parameter and the number of iterations.
•SGD is sensitive to feature scaling.
Example
1. Importing the necessary libraries
2. Importing the dataset
3. Separating the features and the target variable
4. Splitting the dataset into training and test sets
5. Fitting the SGD Classifier model to the training set
6. Predicting the test results
7. Evaluating the model
The confusion matrix determines the performance of the predicted model. Other metrics such as precision, recall, and f1-score are given by scikit-learn's classification_report.
8. Plotting the decision boundary of the SGD Classifier
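A minimal sketch following steps 1–7 above, using scikit-learn's built-in iris dataset as illustrative data; the dataset, split ratio, and classifier parameters are assumptions, not the original example's:

from sklearn.datasets import load_iris                      # 1. libraries / 2. dataset
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import classification_report, confusion_matrix

X, y = load_iris(return_X_y=True)                           # 3. features and target
X_train, X_test, y_train, y_test = train_test_split(        # 4. train/test split
    X, y, test_size=0.3, random_state=42)

clf = SGDClassifier(loss="hinge", alpha=0.0001,             # 5. fit the SGD classifier
                    max_iter=1000, random_state=42)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)                                 # 6. predict

print(confusion_matrix(y_test, y_pred))                      # 7. evaluate
print(classification_report(y_test, y_pred))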
Stochastic Gradient Descent in NLP Text Classification

https://www.theclickreader.com/stochastic-gradient-desc
ent-sgd-classifier/
Logistic regression
•Logistic regression is a linear model for classification. It's also
known in the literature as logit regression, maximum-entropy
classification (MaxEnt), or the log-linear classifier.
•In this model, the probabilities describing the possible outcomes of a
single trial are modelled using a logit function.

What is logistic regression in NLP?
In natural language processing, logistic regression is the baseline supervised machine learning algorithm for classification, and also has a very close relationship with neural networks.
Example
After creating a 70/30 train-test split of the dataset, we applied logistic regression, which is a classification algorithm used to solve binary classification problems. The logistic regression classifier takes a weighted combination of the input features and passes it through a sigmoid function. The sigmoid function transforms any real-number input into a number between 0 and 1.
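A minimal sketch of that computation; the weights, features, and bias are illustrative numbers, not taken from the example:

import numpy as np

def sigmoid(z):
    # Maps any real number to a value in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# The weighted combination of the input features, plus a bias, is passed
# through the sigmoid to obtain the probability of the positive class.
weights = np.array([0.8, -1.2, 0.3])
features = np.array([1.0, 0.5, 2.0])
bias = 0.1
z = weights @ features + bias
print(sigmoid(z))   # probability of the positive class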
•We applied the logistic regression classifier on both Bag-of-trigrams and TF-IDF features to compare their accuracy scores.
•Building the models with the default parameters gives us baseline accuracy scores for the two feature sets.
•However, when the number of features is higher than the number of data points, the model tends to be underdetermined.
•To fix this problem, we need to introduce additional constraints, which are controlled by hyperparameters.
•Using GridSearch, we try different combinations of values to find the model with the lowest error metric, which in this case is log loss.
•In logistic regression, 'C' determines the amount of regularization; lower values increase regularization.
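A minimal sketch of such a grid search, assuming TF-IDF features and log loss as the error metric; the pipeline names, the grid of C values, and the train_texts/train_labels variables are assumptions for illustration:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# TF-IDF features followed by logistic regression; GridSearchCV tries several
# values of C (lower C = stronger regularization) and keeps the best by log loss.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])
param_grid = {"clf__C": [0.01, 0.1, 1, 10]}
search = GridSearchCV(pipeline, param_grid, scoring="neg_log_loss", cv=5)

# Assuming train_texts and train_labels hold the training documents and labels:
# search.fit(train_texts, train_labels)
# print(search.best_params_, -search.best_score_)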
Logistic regression Text Classification in NLP

https://medium.com/analytics-vidhya/applying-text-classific
ation-using-logistic-regression-a-comparison-between-bow-
and-tf-idf-1f1ed1b83640
Support vector machines

•Support vector machines (SVM) are currently among the state-of-the-art algorithms in the field of machine learning.
•SVM is a non-probabilistic classifier. SVM constructs a set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data point of any class (the so-called functional margin), since in general, the larger the margin, the lower the generalization error of the classifier.
Let's build one of the most sophisticated supervised learning algorithms
with scikit:
>>> from sklearn.svm import LinearSVC
>>> from sklearn.metrics import classification_report, confusion_matrix
>>> svm_classifier = LinearSVC().fit(X_train, y_train)
>>> y_svm_predicted = svm_classifier.predict(X_test)
>>> print('\n Here is the classification report:')
>>> print(classification_report(y_test, y_svm_predicted))
>>> cm = confusion_matrix(y_test, y_svm_predicted)
>>> print(cm)
The Random forest algorithm

•A random forest is an ensemble classifier that makes its estimates based on the combination of different decision trees.
•Effectively, it fits a number of decision tree classifiers on various subsamples of the dataset. Also, each tree in the forest is built on a random subset of the features. Finally, averaging the predictions of these trees improves the predictive accuracy and controls overfitting.
•Random forest is currently one of the best-performing algorithms for many classification problems.
•Random Forest is a popular machine learning algorithm that belongs to the
supervised learning technique. It can be used for both Classification and
Regression problems in ML.
• It is based on the concept of ensemble learning, which is a process
of combining multiple classifiers to solve a complex problem and to improve
the performance of the model.
•As the name suggests, "Random Forest is a classifier that contains a number
of decision trees on various subsets of the given dataset and takes the average
to improve the predictive accuracy of that dataset."
•Instead of relying on one decision tree, the random forest takes the prediction from each tree and, based on the majority vote of those predictions, predicts the final output.
•A greater number of trees in the forest leads to higher accuracy and helps prevent the problem of overfitting.
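A minimal sketch of a random forest applied to text classification; the tiny corpus, labels, and parameters are illustrative assumptions, not data from the lecture:

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Tiny illustrative corpus and labels (1 = positive review, 0 = negative review)
texts = ["great product, works well", "terrible quality, broke fast",
         "works as expected", "would not buy again"]
labels = [1, 0, 1, 0]

# Bag-of-words features, then an ensemble of decision trees built on
# bootstrapped samples and random feature subsets
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X, labels)

print(forest.predict(vectorizer.transform(["great quality"])))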
https://www.javatpoint.com/machine-learning-random-forest-algorithm

https://rpubs.com/shubhangi_12/Classification_of_Reviews_using_NLP_and_Rando
m_Forest_Algorithm
