Module 4 Lecture - 2
Text Classification
Lecture – 2
•Types of learning techniques
•Text Classification
•Sampling
•Naïve Bayes
•Decision trees
•Stochastic gradient descent
•Logistic Regression
•Support vector machine
•Text clustering
Decision Trees
Decision Trees are a type of Supervised Machine Learning (that is, you provide both the input and the corresponding output in the training data) in which the data is continuously split according to a certain parameter. The tree can be explained by two entities, namely decision nodes and leaves.
Why use Decision Trees?
•There are various algorithms in Machine Learning, so choosing the best algorithm for the given dataset and problem is a key step in creating a machine learning model. Below are two reasons for using the Decision tree:
•Decision Trees usually mimic the way humans think while making a decision, so they are easy to understand.
•The logic behind a decision tree is easy to follow because it shows a tree-like structure.
•Decision Tree is a Supervised learning technique that can be used for both Classification and Regression problems, but it is mostly preferred for solving Classification problems. It is a tree-structured classifier, where internal nodes represent the features of a dataset, branches represent the decision rules, and each leaf node represents the outcome.
•In a Decision tree, there are two types of nodes: the Decision Node and the Leaf Node. Decision nodes are used to make a decision and have multiple branches, whereas Leaf nodes are the outputs of those decisions and do not contain any further branches.
•The decisions or tests are performed on the basis of the features of the given dataset.
•It is a graphical representation for getting all the possible solutions to
a problem/decision based on given conditions.
•It is called a decision tree because, similar to a tree, it starts with the
root node, which expands on further branches and constructs a tree-like
structure.
•In order to build a tree, we use the CART algorithm, which stands
for Classification and Regression Tree algorithm.
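As a concrete illustration, scikit-learn's DecisionTreeClassifier implements an optimized version of CART; a minimal sketch, where the toy data is invented purely for illustration:

from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data (invented): features = [salary_in_lakhs, distance_km],
# label = accept the offer (1) or not (0).
X = [[12, 5], [15, 20], [8, 3], [20, 25], [6, 10], [18, 4]]
y = [1, 0, 1, 0, 0, 1]

# criterion="gini" is CART's default impurity measure;
# criterion="entropy" would use information gain instead.
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
tree.fit(X, y)

# Inspect the learned decision rules as text.
print(export_text(tree, feature_names=["salary", "distance"]))
print(tree.predict([[14, 6]]))  # prediction for a new candidate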
Decision Tree Terminologies
•Root Node: The root node is where the decision tree starts. It represents the entire dataset, which further gets divided into two or more homogeneous sets.
•Leaf Node: Leaf nodes are the final output nodes, and the tree cannot be segregated further once a leaf node is reached.
•Splitting: Splitting is the process of dividing a decision node/root node into sub-nodes according to the given conditions.
•Branch/Sub Tree: A subtree formed by splitting a node of the tree.
•Pruning: Pruning is the process of removing the unwanted branches from
the tree.
•Parent/Child node: The root node of the tree is called the parent node,
and other nodes are called the child nodes.
How does the Decision Tree algorithm work?
•Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
•Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
•Step-3: Divide S into subsets that contain the possible values of the best attribute.
•Step-4: Generate the decision tree node that contains the best attribute.
•Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified further; each such final node is called a leaf node. A sketch of these steps follows the list.
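The five steps above can be written as a compact recursive procedure. A minimal sketch in plain Python, assuming categorical attributes and using information gain as the ASM (the helper names and the tiny dataset are our own):

import math
from collections import Counter

def entropy(labels):
    # Entropy of a list of class labels.
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    # Reduction in entropy from splitting on attribute index attr (Step-2's ASM).
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr], []).append(label)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

def build_tree(rows, labels, attrs):
    # Step-5 stopping cases: a pure node, or no attributes left -> leaf node.
    if len(set(labels)) == 1 or not attrs:
        return Counter(labels).most_common(1)[0][0]
    # Step-2: pick the best attribute using the ASM.
    best = max(attrs, key=lambda a: information_gain(rows, labels, a))
    node = {"attr": best, "children": {}}
    # Steps 3-5: divide on each value of the best attribute and recurse.
    for value in set(row[best] for row in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[best] == value]
        sub_rows, sub_labels = zip(*sub)
        node["children"][value] = build_tree(
            list(sub_rows), list(sub_labels), [a for a in attrs if a != best])
    return node

# Tiny invented dataset: columns are [weather, cab_available].
rows = [["sunny", "yes"], ["rainy", "no"], ["sunny", "no"], ["rainy", "yes"]]
labels = ["go", "stay", "go", "stay"]
print(build_tree(rows, labels, attrs=[0, 1]))
# -> {'attr': 0, 'children': {'sunny': 'go', 'rainy': 'stay'}}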
Example: Suppose there is a candidate who has a job offer and wants to decide whether he should accept the offer or not. To solve this problem, the decision tree starts with the root node (the Salary attribute, selected by ASM). The root node splits further into the next decision node (distance from the office) and one leaf node, based on the corresponding labels. The next decision node further splits into one decision node (Cab facility) and one leaf node. Finally, that decision node splits into two leaf nodes (Accepted offer and Declined offer).
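This example tree can be written directly as nested conditionals; a minimal sketch, where the numeric thresholds are invented purely for illustration:

def decide(salary, distance_km, cab_facility):
    # The job-offer decision tree from the example as nested conditionals.
    # The thresholds (50000, 10 km) are invented for illustration only.
    if salary < 50000:              # root node: Salary (chosen by ASM)
        return "Declined offer"     # leaf node
    if distance_km > 10:            # decision node: distance from the office
        if cab_facility:            # decision node: Cab facility
            return "Accepted offer" # leaf node
        return "Declined offer"     # leaf node
    return "Accepted offer"         # leaf node

print(decide(salary=60000, distance_km=15, cab_facility=True))  # Accepted offer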
Attribute Selection Measures
•While implementing a Decision tree, the main issue is how to select the best attribute for the root node and for the sub-nodes. To solve this problem, there is a technique called the Attribute Selection Measure, or ASM.
•Using this measure, we can easily select the best attribute for the nodes of the tree.
•The popular techniques for ASM are:
Entropy
Information Gain
Gini Index
1. Entropy:
•Entropy is a metric that measures the impurity, or randomness, in the data. For a dataset S with a binary class, it is given by:
Entropy(S) = -P(yes) log2 P(yes) - P(no) log2 P(no)
2. Information Gain:
•Information gain is the measurement of the change in entropy after the segmentation of a dataset based on an attribute.
•It calculates how much information a feature provides us about a class.
•According to the value of information gain, we split the node and build the decision tree.
•A decision tree algorithm always tries to maximize the value of information gain, and the node/attribute having the highest information gain is split first. It can be calculated using the below formula:
Information Gain = Entropy(S) - Σv (|Sv| / |S|) × Entropy(Sv)
where Sv is the subset of S in which the attribute takes the value v, so the sum is the weighted average entropy of the subsets.
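A small numeric check of the two formulas above in plain Python (the class counts are invented for illustration):

import math

# Hypothetical node: 10 samples, 6 "yes" and 4 "no".
p_yes, p_no = 6 / 10, 4 / 10
entropy_S = -(p_yes * math.log2(p_yes) + p_no * math.log2(p_no))  # ~0.971

# Splitting on a feature yields two subsets:
#   left:  4 samples (4 yes, 0 no) -> entropy 0.0 (pure)
#   right: 6 samples (2 yes, 4 no) -> entropy ~0.918
p = 2 / 6
entropy_right = -(p * math.log2(p) + (4 / 6) * math.log2(4 / 6))

# Weighted average of the child entropies, then the gain:
gain = entropy_S - (4 / 10 * 0.0 + 6 / 10 * entropy_right)  # ~0.420
print(round(entropy_S, 3), round(entropy_right, 3), round(gain, 3))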
Stochastic Gradient Descent in NLP Text Classification
•If you follow the scikit-learn algorithm cheat sheet, you will find SGD to be the one-stop solution for classifying large text datasets.
The confusion matrix summarizes the performance of the fitted model. Other metrics such as precision, recall, and F1-score are given by scikit-learn's classification_report function, as in the sketch below.
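A minimal sketch of this evaluation, assuming an SGDClassifier over TF-IDF features (the toy corpus is invented for illustration):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.pipeline import make_pipeline

# Toy labelled corpus (invented for illustration).
texts = ["great movie, loved it", "terrible plot and acting",
         "what a wonderful film", "awful, a complete waste of time",
         "brilliant and moving", "boring and dull"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

# loss="hinge" trains a linear SVM by stochastic gradient descent;
# a logistic-regression-style loss is also available.
model = make_pipeline(TfidfVectorizer(),
                      SGDClassifier(loss="hinge", random_state=42))
model.fit(texts, labels)

# (Evaluating on the training texts just to keep the sketch short.)
predicted = model.predict(texts)
print(confusion_matrix(labels, predicted))       # raw counts of hits and misses
print(classification_report(labels, predicted))  # precision, recall, F1-score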
Plotting the decision boundary of the SGD Classifier
https://www.theclickreader.com/stochastic-gradient-descent-sgd-classifier/
Logistic regression
•Logistic regression is a linear model for classification. It is also known in the literature as logit regression, maximum-entropy classification (MaxEnt), or the log-linear classifier.
•In this model, the probabilities describing the possible outcomes of a single trial are modelled using a logistic function.
https://medium.com/analytics-vidhya/applying-text-classification-using-logistic-regression-a-comparison-between-bow-and-tf-idf-1f1ed1b83640
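A minimal sketch of this idea with scikit-learn, where predict_proba exposes the modelled per-class probabilities of a single trial (the toy corpus is invented for illustration):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled corpus (invented for illustration).
texts = ["cheap pills, click now", "meeting moved to 3pm",
         "win a free prize today", "lunch tomorrow?",
         "exclusive offer just for you", "see attached project report"]
labels = ["spam", "ham", "spam", "ham", "spam", "ham"]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The probabilities of each outcome, as modelled by the logistic
# function over a linear combination of the features.
print(model.classes_)
print(model.predict_proba(["free prize meeting"]))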
Support vector machines
https://rpubs.com/shubhangi_12/Classification_of_Reviews_using_NLP_and_Random_Forest_Algorithm
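Since this slide only points to the reference, here is a minimal sketch of an SVM text classifier using scikit-learn's LinearSVC (the toy reviews are invented for illustration):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy review corpus (invented for illustration).
reviews = ["product works perfectly", "broke after one day",
           "excellent build quality", "very disappointing purchase"]
labels = [1, 0, 1, 0]  # 1 = positive review, 0 = negative review

# LinearSVC fits a linear support vector machine, a strong
# default for high-dimensional sparse TF-IDF features.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(reviews, labels)
print(model.predict(["quality is excellent", "it broke immediately"]))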