
CONTENTS

Declaration i
Abstract ii
Acknowledgement iii
Contents iv
List of Figures vii

Chapter No Chapter Title Page No


1 COMPANY PROFILE 1-3
1.1 History of the Organization 1
1.1.1 Objectives 1
1.1.2 Operations of the Organization 1
1.2 Major Milestones 2

1.3 Structure of the Organization 2


1.4 Services Offered 3

2 ABOUT THE DEPARTMENT 4-5


2.1 Specific Functionalities of the Department 4
2.2 Roles and Responsibilities of Individuals 4
2.3 Testing 5

3 TASK PERFORMED 6-8

4 REFLECTION NOTES 9-19


4.1 Experience 9
4.2 Technical Outcomes 9
4.2.1 System Requirement Specification 9
4.3 System Analysis and Design 9
4.3.1 Existing System 10
4.3.2 Disadvantages of the Existing System 10

4.3.3 Proposed System 11
4.3.4 Advantages of the Proposed System 11
4.4 System Architecture 12
4.4.1 Data Flow Diagram 12
4.4.2 UML Diagram 12
4.4.3 USE CASE Diagram 13
4.4.4 Class Diagram 13
4.4.5 Sequence Diagram 14
4.4.6 Activity Diagram 15
4.5 Implementation 16
4.5.1 Modules 17
4.6 Screen Shots 18
5 CONCLUSION 20

BIBLIOGRAPHY 21

APPENDIX 22
Appendix A: Abbreviations 23

LIST OF FIGURES

Figure No Name of the figure Page No


Figure 4.4.1.1 Dataflow Diagram 12
Figure 4.4.2.1 UML Diagram 12
Figure 4.4.3.1 Use Case Diagram 13

Figure 4.4.4.1 Class Diagram 13

Figure 4.4.5.1 Sequence Diagram 14

Figure 4.4.6.1 Activity Diagram 15

Figure 4.6.1 Static Output(True) 18


Figure 4.6.2 Static Output(False) 18
Figure 4.6.3 Fake News Detector(Home page) 19
Figure 4.6.4 Fake News Detector(Output page) 19

Fake News Detection

CHAPTER - 1
COMPANY PROFILE

1.1 History of the Organization

ProgMaster Private Limited - Company History

ProgMaster Private Limited was established with a vision to become a leading force in the
software industry. Incorporated as a private, non-government company, ProgMaster Pvt Ltd is
headquartered in Govardhana Giri, Chittoor, Andhra Pradesh. Since its inception, the company
has focused on delivering cutting-edge technology solutions and high-quality education in the IT
domain.

Core Activities:

 Corporate Training: ProgMaster has trained over 5,000 students, equipping them with
industry-relevant skills.
 Real-Time Projects: Through its unique RealTechWorld system, ProgMaster enables
students to work alongside experienced developers on real-time projects, enhancing their
practical experience.
 Software Development: The company specializes in creating custom software solutions,
web applications, and full-stack development services for diverse clients.
 Project Consultancy: ProgMaster collaborates with various companies, providing
technical expertise and building innovative software products tailored to their needs.

Mission:
ProgMaster's mission is to bridge the gap between theoretical knowledge and practical
application, empowering students and clients to excel in the rapidly evolving IT industry.

Vision:
To become a globally recognized name in software development and IT education, fostering
innovation and creating future-ready professionals.

ProgMaster continues to invest in research, explore new technologies, and contribute to the
growth of the software ecosystem, setting new benchmarks in quality and excellence.

1.1.1 Objectives
 To be a world-class research and development organization committed to enhancing
stakeholders' value.
 To build the best products that are socially innovative, with high-quality attributes, and
to provide excellent education to all.
 To maintain zeal to excel and zest for change, and respect for the dignity and potential of
individuals.
 To continuously research futuristic technologies and find ways to simplify them for
clients.


1.1.2 Operations of the Organization


 The organization is focused on providing quality education on the latest technologies
and developing products that are of great value to society.
 It is also involved in the distribution and sale of the latest innovative electronic
products from around the globe to its customers.
 It runs a project consultancy, undertaking projects from a wide range of companies,
assisting them technically, and building products and providing services for them.
 It is continuously involved in researching futuristic technologies and finding ways to
simplify them for clients.

1.2 Major Milestones


ProgMaster is a reliable organization engaged in providing a qualitative range of
industrial products and is among the leading companies offering such a range. A team
of experts keeps vigil on the quality of the products, and every single piece of work
passes through proper quality assurance, as the company continually improves its
quality to serve clients better. Use of modern technology, adherence to industry
standards, timely and quality deliveries, and an experienced workforce are its USPs.
In today's competitive marketplace, it is important to bring businesses and
technologies together so that organizations can deliver on their promises. More than
ever, ProgMaster Pvt Ltd is committed to delivering on its promise so that its clients
can deliver on theirs.

Through the years, the company has successfully delivered value to its customers,
truly believing that a customer's success is its own success. It sees itself not merely as
a vendor for projects but as a partner, and it works hard in the interest of the success
of its customers.


1.3 Services Offered


 Trained students can avail premium job recommendations through Job Square's Super
Match feature.
 Offers training on trending technologies such as Cyber Security, Full Stack Web
Development, Internet of Things, and Artificial Intelligence and Machine Learning.
 Provides a flexible learning platform, with expert tutors explaining trending
technologies to the interns.
 Provides certifications once the interns are trained and have successfully completed
the projects assigned to them.

CHAPTER-2
ABOUT THE DEPARTMENT

2.1 Specific Functionalities of the Department


There are several departments in the organization. Tequed Labs provides online courses
related to IT technology and aptitude. In the IT sector, it offers several services, including
Cloud Computing, Cyber Security, Full Stack Web Development, Internet of Things, and
Artificial Intelligence and Machine Learning. It also provides additional technical courses
such as Data Structures, Java, Python, MongoDB, Bug Bounty, Design and Analysis of
Programs, Robotic Process Automation, Programming in C++, and many more, making
hands-on experience with trending technologies available at an affordable price.
The department is focused on providing quality education on the latest technologies and
developing products that are of great value to society. It is also involved in the distribution
and sale of the latest innovative electronic products from around the globe to its customers.
The intern is honored to take part in the internship program under this curriculum. The
program has enhanced the skills and enthusiasm of the students, giving them knowledge of
company environments and of the different working mechanisms that prevail in
organizations. The company continually improves its quality to serve clients better; use of
modern technology, industry standards, timely and quality deliveries, and an experienced
workforce are its USPs.

2.2 Roles and Responsibilities of Individuals


Since the internship was online, to ensure easy onboarding of interns, the company assigned
additional individuals to take care of the smooth running of the online training.
 Operation and Strategy Head- Ensured there were no difficulties for interns while
onboarding; the best mentors and doubt-clarifying sessions were arranged.
 Technical Lead- Ensured the technical side of the online training ran smoothly; the
best platforms were arranged for our meetings and trainings.
 Mentors- Helped us understand the concepts, gave us tasks for practical takeaways,
and clarified doubts to the best of their ability.


 Interns- Worked through the tasks given, either individually or in a group.

2.3 Testing
Testing was done according to corporate standards. As each component was built, unit
testing was performed to check that the desired functionality was obtained. Each component
was tested with multiple test cases to verify that it worked properly. These unit-tested
components were then integrated with the existing components, and integration testing was
performed. Here again, multiple test cases were run to ensure that each newly built
component ran in coordination with the existing ones. Unit and integration testing were
performed iteratively until the complete product was built.

Once the complete product was built, it was again tested against multiple test cases covering
all the functionalities. A product may work fine in the developer's environment but not
necessarily in all the other environments users may be working in, so the product was also
tested under multiple environments (various operating systems and devices). At every step,
if a flaw was observed, the component was rebuilt to fix the bugs. In this way, testing was
done hierarchically and iteratively.
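As a minimal sketch of the unit-testing step described above, a component can be exercised with plain assertions before it is integrated. The clean_text() helper and its tests are hypothetical illustrations, not the company's actual test suite.

```python
# Illustrative only: unit-testing a hypothetical text-cleaning component.
import re

def clean_text(text):
    # Hypothetical component under test: strip punctuation, lower-case.
    return re.sub(r"[^\w\s]", "", text).lower()

def test_removes_punctuation():
    assert clean_text("How are you?") == "how are you"

def test_empty_input():
    assert clean_text("") == ""

# Run the unit tests for this component before integrating it.
test_removes_punctuation()
test_empty_input()
print("all unit tests passed")
```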

CHAPTER-3
TASK PERFORMED

Training Program: The internship is a platform where trainees are assigned specific
tasks. In the initial days of the internship, I was trained on the following:
 Python Programming
 Machine Learning Algorithms
A. Pre-processing Data:
Social media data is highly unstructured – the majority of it is informal communication
with typos, slang, and bad grammar. The quest for increased performance and reliability has
made it imperative to develop techniques for utilizing these resources to make informed
decisions. To achieve better insights, the data must be cleaned before it can be used for
predictive modeling. For this purpose, basic pre-processing was done on the News training
data. This step comprised:
Data Cleaning:
While reading data, we get it in a structured or unstructured format. A structured format
has a well-defined pattern, whereas unstructured data has no proper structure. In between the
two, a semi-structured format is comparably better structured than the unstructured format.
Cleaning the text data is necessary to highlight the attributes we want our machine
learning system to pick up on. Cleaning (or pre-processing) the data typically consists of a
number of steps:
a) Remove punctuation
Punctuation can provide grammatical context to a sentence, which supports our
understanding. But for a vectorizer that counts words rather than context, it adds no
value, so we remove all special characters. e.g., "How are you?" -> "How are you"
b) Tokenization
Tokenizing separates text into units such as sentences or words, giving structure to
previously unstructured text. e.g., "Plata o Plomo" -> ['Plata', 'o', 'Plomo'].
c) Remove stopwords


Stopwords are common words that are likely to appear in any text. They don't tell us
much about our data, so we remove them. e.g., "silver or lead is fine for me" ->
['silver', 'lead', 'fine'].
d) Stemming
Stemming reduces a word to its stem form; it often makes sense to treat related
words in the same way. It removes suffixes such as "ing", "ly", and "s" by a simple
rule-based approach. This reduces the corpus of words, but the actual words often get
neglected. e.g., "Entitling", "Entitled" -> "Entitle". Note: some search engines treat
words with the same stem as synonyms.
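The four cleaning steps above can be sketched in a few lines of Python. This is a deliberately minimal stand-in: the tiny stopword list and the crude suffix-stripping stemmer are illustrative assumptions, where a real pipeline would use a library such as NLTK.

```python
# Minimal sketch of the cleaning pipeline: punctuation removal, tokenization,
# stopword removal, and naive suffix-stripping stemming.
import re

STOPWORDS = {"is", "for", "me", "or", "the", "a", "an"}  # illustrative subset

def preprocess(text):
    text = re.sub(r"[^\w\s]", "", text.lower())          # a) remove punctuation
    tokens = text.split()                                # b) tokenization
    tokens = [t for t in tokens if t not in STOPWORDS]   # c) remove stopwords
    stemmed = []
    for t in tokens:                                     # d) rule-based stemming
        for suffix in ("ing", "ly", "ed", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed

print(preprocess("Silver or lead is fine for me"))  # ['silver', 'lead', 'fine']
```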
B. Feature Generation:
We can use text data to generate a number of features such as word count, frequency of large
words, frequency of unique words, and n-grams. By creating a representation of words that
captures their meanings, semantic relationships, and the numerous contexts they are used in,
we can enable a computer to understand text and perform clustering, classification, and so on.
Vectorizing Data: Vectorizing is the process of encoding text as integers, i.e., in numeric form,
to create feature vectors so that machine learning algorithms can understand our data.
1. Vectorizing Data: Bag-of-Words. Bag of Words (BoW), or CountVectorizer, describes the
presence of words within the text data. It records 1 if a word is present in the sentence and 0
if it is not, thereby creating a document-term matrix of word occurrences across the text
documents.
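A toy bag-of-words encoder illustrates the presence/absence idea described above. It is written in pure Python for clarity; in practice scikit-learn's CountVectorizer builds the document-term matrix.

```python
# Minimal bag-of-words sketch: 1 if a vocabulary word appears in a document,
# 0 if it does not.
def bag_of_words(documents):
    vocab = sorted({w for doc in documents for w in doc.lower().split()})
    matrix = [[1 if w in doc.lower().split() else 0 for w in vocab]
              for doc in documents]
    return vocab, matrix

vocab, matrix = bag_of_words(["fake news spreads fast", "real news"])
print(vocab)   # ['fake', 'fast', 'news', 'real', 'spreads']
print(matrix)  # [[1, 1, 1, 0, 1], [0, 0, 1, 1, 0]]
```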
2. Vectorizing Data: N-Grams. N-grams are simply all combinations of adjacent words or
letters of length n found in the source text. N-grams with n=1 are called unigrams; similarly,
bigrams (n=2), trigrams (n=3), and so on can also be used. Unigrams usually don't contain
much information compared to bigrams and trigrams. The basic principle behind n-grams is
that they capture which letter or word is likely to follow a given one. The longer the n-gram
(the higher the n), the more context you have to work with.
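Extracting word n-grams can be sketched directly (the sample sentence is illustrative; scikit-learn's vectorizers expose the same idea through their ngram_range parameter):

```python
# All runs of n adjacent words in a text.
def word_ngrams(text, n):
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

sentence = "fake news spreads fast"
print(word_ngrams(sentence, 1))  # unigrams: ['fake', 'news', 'spreads', 'fast']
print(word_ngrams(sentence, 2))  # bigrams: ['fake news', 'news spreads', 'spreads fast']
```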
3. Vectorizing Data: TF-IDF. TF-IDF computes the "relative frequency" with which a word
appears in a document compared to its frequency across all documents. The TF-IDF weight
represents the relative importance of a term in the document and the entire corpus. TF stands
for Term Frequency: it measures how frequently a term appears in a document. Since
document sizes vary, a term may appear more often in a long document than in a short one,
so the term frequency is often divided by the document length.
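The TF and IDF components described above can be combined in a short sketch. This uses one common unsmoothed formulation (tf = count / document length, idf = log(N / document frequency)); scikit-learn's TfidfVectorizer applies a slightly different smoothed weighting.

```python
# Minimal TF-IDF sketch over a toy corpus.
import math

def tf_idf(documents):
    docs = [d.lower().split() for d in documents]
    vocab = sorted({w for d in docs for w in d})
    n = len(docs)
    # idf: rarer terms across the corpus get larger weights.
    idf = {w: math.log(n / sum(1 for d in docs if w in d)) for w in vocab}
    # tf: count in the document divided by document length, times idf.
    return [{w: (d.count(w) / len(d)) * idf[w] for w in d} for d in docs]

weights = tf_idf(["fake news spreads", "real news"])
# 'news' appears in every document, so its idf (and weight) is 0;
# 'fake' is rarer, so it carries a positive weight.
print(weights[0]["news"], weights[0]["fake"])
```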


C. Algorithms used for Classification


 Naïve Bayes Classifier:
This classification technique is based on Bayes' theorem, with the assumption that the presence
of a particular feature in a class is independent of the presence of any other feature. It provides
a way to calculate the posterior probability.
 Random Forest:
Random Forest is a trademark term for an ensemble of decision trees. To classify a new object
based on its attributes, each tree gives a classification, and we say the tree "votes" for that
class; the forest chooses the classification having the most votes over all the trees. When
building each individual tree, the algorithm uses bagging and feature randomness to create an
uncorrelated forest of trees whose prediction by committee is more accurate than that of any
individual tree. The reason the random forest model works so well is that a large number of
relatively uncorrelated models (trees) operating as a committee will outperform any of the
individual constituent models.
 Logistic Regression:
It is a classification algorithm, not a regression algorithm. It is used to estimate discrete values
(binary values such as 0/1, yes/no, true/false) from a given set of independent variables. In
simple terms, it predicts the probability of occurrence of an event by fitting the data to a logit
function, which is why it is also known as logit regression. Since it predicts a probability, its
output values lie between 0 and 1, as expected. Mathematically, the log odds of the outcome
are modeled as a linear combination of the predictor variables.
 Passive Aggressive Classifier:
The Passive Aggressive algorithm is an online algorithm, ideal for classifying massive
streams of data (e.g., Twitter). It is easy to implement and very fast. It works by taking an
example, learning from it, and then discarding it. The algorithm remains passive on a correct
classification outcome and turns aggressive in the event of a misclassification, updating and
adjusting the weights. Unlike most other algorithms, it does not converge; its purpose is to
make updates that correct the loss while causing very little change to the norm of the weight
vector.
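The passive/aggressive behaviour just described can be sketched as a single hinge-loss update on one example, with labels in {-1, +1}. This is a toy illustration of the update rule only; a real system would use a library implementation such as scikit-learn's PassiveAggressiveClassifier.

```python
# One Passive-Aggressive update: do nothing if the example is classified
# with margin >= 1 (passive); otherwise take the smallest weight step that
# corrects the loss (aggressive).
def pa_update(w, x, y):
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - margin)
    if loss == 0.0:
        return w                              # passive: already correct
    tau = loss / sum(xi * xi for xi in x)     # aggressive: minimal correcting step
    return [wi + tau * y * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
w = pa_update(w, [1.0, 2.0], +1)  # misclassified, so weights move toward x
print(w)                          # [0.2, 0.4]
```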

CHAPTER-4

REFLECTION NOTES
4.1 Experience
As per our experience during the internship, Tequed Labs follows a good work culture and
has friendly employees, from the staff level to the management level. The trainers are well
versed in their fields and treat everyone equally; there is no distinction between fresh
graduates and corporate employees, and everyone is respected equally. There is a lot of
teamwork in every task, be it hard or easy, and a very calm and friendly atmosphere is
maintained at all times. There is much scope for self-improvement thanks to the great
communication and support available. Interns have been treated and taught well, and all our
doubts and concerns regarding the training or the company have been properly answered.
All in all, Tequed Labs is a great place for a fresher to start a career, and also for a corporate
employee to boost his or her career. It has been a great experience to be an intern in such a
reputed organization.

4.2 Technical Outcomes


4.2.1 System Requirement Specification

HARDWARE REQUIREMENTS:
 Processor: x86 or x64
 Hard Disk: 500 GB or more
 RAM: 512 MB (minimum), 1 GB (recommended)

SOFTWARE REQUIREMENTS:
Operating System: Windows or Linux
Platform used: Anaconda Navigator (Jupyter Notebook)

4.3 System Analysis and Design


4.3.1 Existing System
There exists a large body of research on machine learning methods for deception detection,
most of it focused on classifying online reviews and publicly available social media posts.
Particularly since the American presidential election in late 2016, the question of
determining 'fake news' has been the subject of particular attention within the literature.

Conroy, Rubin, and Chen outline several approaches that seem promising towards the aim of
correctly classifying misleading articles. They note that simple content-related n-grams and
shallow part-of-speech (POS) tagging have proven insufficient for the classification task,
often failing to account for important context information; rather, these methods have been
shown useful only in tandem with more complex methods of analysis. Deep syntax analysis
using Probabilistic Context-Free Grammars (PCFGs) has been shown to be particularly
valuable in combination with n-gram methods. Feng, Banerjee, and Choi are able to achieve
85%-91% accuracy in deception-related classification tasks using online review corpora.
Feng and Hirst implemented a semantic analysis looking at 'object:descriptor' pairs for
contradictions with the text, on top of Feng's initial deep syntax model, for additional
improvement. Rubin, Lukoianova, and Tatiana analyze rhetorical structure using a vector
space model with similar success. Ciampaglia et al. employ language pattern similarity
networks requiring a pre-existing knowledge base.

4.3.2 Disadvantages of the Existing System


 Information was not clear, and the correct information could not be extracted from
the bulk of news.
 Defamation is among the disadvantages of fake news.
 Fake news creates false perceptions.
 Fake news may lead to social unrest.


4.3.3 Proposed System


In this work, a model is built based on a count vectorizer or a TF-IDF matrix (i.e., word
tallies relative to how often the words are used in other articles in the dataset). Since this
problem is a kind of text classification, implementing a Naive Bayes classifier is a natural
choice, as it is standard for text-based processing. The actual goal lies in choosing the text
transformation (count vectorizer vs. TF-IDF vectorizer) and which type of text to use
(headlines vs. full text). The next step is to extract the most optimal features for the count
vectorizer or TF-IDF vectorizer. This is done by using the n most frequently used words
and/or phrases, lower-casing or not, removing the stop words (common words such as
"the", "when", and "there"), and only using words that appear at least a given number of
times in the text dataset.

4.3.4 Advantages of the Proposed System


 Information is very clear and understandable.
 It gives accurate predictions that are clear to the user.
 It is user-friendly and fast.

4.4 System Architecture


4.4.1 Data flow diagram
The DFD takes an input-process-output view of a system: data objects flow into the
software, are transformed by processing elements, and resultant data objects flow out of the
software. The system is developed in three parts. The first part is static and works on a
machine learning classifier: we studied and trained the model with four different classifiers
and chose the best classifier for the final execution. The second part is dynamic: it takes a
keyword or text from the user and searches online for the truth probability of the news. The
third part provides the authenticity of a URL input by the user.
In this work, we have used Python and its Sci-kit libraries [14]. Python has a huge set of
libraries and extensions which can easily be used in machine learning. The Sci-kit Learn
library is the best source for machine learning algorithms, where nearly all types of machine
learning algorithms are readily available for Python; thus, easy and quick evaluation of ML
algorithms is possible.

Dept. of CSE, SJCIT 11 2020-2021



Figure 4.4.1.1: Dataflow diagram to check the truth probability of the URL

4.4.2 UML Diagram

Figure 4.4.2.1: UML Diagram


4.4.3 Use Case Diagram

Figure 4.4.3.1: Use case diagram

4.4.4 Class Diagram

Figure 4.4.4.1: Class Diagram


4.4.5 Sequence Diagram

Figure 4.4.5.1: Sequence Diagram


4.4.6 Activity Diagram


Figure 4.4.6.1: Activity Diagram

4.5 Implementation
A. Static Search Implementation
In the static part, we trained and used three of the four algorithms for classification: Naïve
Bayes, Random Forest, and Logistic Regression.


Step 1: First, we extracted features from the already pre-processed dataset. These features
are bag-of-words, TF-IDF features, and n-grams.
Step 2: We built all the classifiers for predicting fake news. The extracted features were fed
into different classifiers: Naive Bayes, Logistic Regression, and Random Forest from
sklearn. Each of the extracted features was used in all of the classifiers.
Step 3: Once the models were fitted, we compared the F1 scores and checked the confusion
matrices.
Step 4: After fitting all the classifiers, the two best-performing models were selected as
candidate models for fake news classification.
Step 5: We performed parameter tuning by applying GridSearchCV to these candidate
models and chose the best-performing parameters for each classifier.
Step 6: The finally selected model was used for fake news detection with the probability of
truth.
Step 7: Our finally selected and best-performing classifier was Logistic Regression, which
was then saved to disk. It is used to classify fake news.

The system takes a news article as input from the user; the model is then used for the final
classification, which is shown to the user along with the probability of truth.
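The tuning-and-persist flow of Steps 5-7 can be sketched as follows. The parameter grid and the cv_score() helper are hypothetical stand-ins for GridSearchCV's cross-validated scoring; only the select-best-then-serialize flow is illustrated.

```python
# Sketch of Steps 5-7: search a parameter grid, keep the best setting,
# and serialize the result (stand-in for GridSearchCV + saving the model).
import itertools
import pickle

def cv_score(params):
    # Hypothetical stand-in for a cross-validated F1 score; it peaks at
    # C = 1.0 purely for illustration.
    return 1.0 / (1.0 + abs(params["C"] - 1.0))

grid = {"C": [0.01, 0.1, 1.0, 10.0], "penalty": ["l2"]}
combos = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]

best = max(combos, key=cv_score)  # Step 5: pick the best-scoring parameters
print(best)                       # {'C': 1.0, 'penalty': 'l2'}

blob = pickle.dumps(best)         # Step 7: serialize the chosen configuration
```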

B. Dynamic Search Implementation


Our dynamic implementation contains three search fields:
1) Search by article content.
2) Search using key terms.
3) Search for a website in the database.

For the first search field, we used Natural Language Processing to arrive at a proper solution
to the problem: we attempted to create a model that can classify fake news according to the
terms used in newspaper articles. Our application applies NLP techniques such as count
vectorization and TF-IDF vectorization before passing the result through a Passive
Aggressive Classifier, which outputs the authenticity of an article as a percentage
probability.


The second search field of the site asks for specific keywords to be searched on the net,
upon which it provides a suitable output for the percentage probability of that term actually
being present in an article, or in a similar article with those keyword references in it.
The third search field of the site accepts a specific website domain name, upon which the
implementation looks for the site in our true-sites database or the blacklisted-sites database.

The true-sites database holds the domain names that regularly provide proper and authentic
news, and vice versa. If the site isn't found in either of the databases, the implementation
doesn't classify the domain; it simply states that the news aggregator does not exist.
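The third search field's lookup can be sketched as two set-membership tests. The sample domains below are illustrative placeholders, not the project's actual databases.

```python
# Sketch of the domain lookup: true sites, blacklisted sites, or unknown.
TRUE_SITES = {"bbc.com", "reuters.com"}                  # illustrative entries
BLACKLISTED_SITES = {"totally-real-news.example"}        # illustrative entries

def check_source(domain):
    if domain in TRUE_SITES:
        return "authentic"
    if domain in BLACKLISTED_SITES:
        return "blacklisted"
    # Unknown domains are not classified at all.
    return "news aggregator does not exist"

print(check_source("reuters.com"))           # authentic
print(check_source("unknown-site.example"))  # news aggregator does not exist
```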

Working:
The problem can be broken down into three statements:
1) Use NLP to check the authenticity of a news article.
2) If the user has a query about the authenticity of a search term, he or she can search
directly on our platform, and our custom algorithm outputs a confidence score.
3) Check the authenticity of a news source.
These statements are realized as search fields taking inputs in three different forms in our
implementation of the problem statement.


4.6 Screen Shots

Figure 4.3: Information related to the subjects present in the dataset.

Figure 4.4 : Analysing fake and real news from the dataset.


Figure 4.5 : Confusion matrix



CHAPTER-5

CONCLUSION
In the 21st century, the majority of tasks are done online. Newspapers that were earlier
preferred as hard copies are now being substituted by applications like Facebook and
Twitter, and by news articles read online; WhatsApp forwards are also a major source. The
growing problem of fake news only makes things more complicated, and it tries to change
or hamper people's opinion of and attitude towards the use of digital technology. When a
person is deceived by fake news, he or she starts believing that assumed perceptions about
a particular topic are true. Thus, in order to curb this phenomenon, we have developed a
Fake News Detection system that takes input from the user and classifies it as true or fake.
To implement this, various NLP and machine learning techniques have been used.
The model is trained using an appropriate dataset, and performance evaluation is done
using various performance measures. The best model, i.e., the model with the highest
accuracy, is used to classify the news headlines or articles. As is evident above, for static
search our best model came out to be Logistic Regression, with an accuracy of 65%. We
then used grid search parameter optimization to increase the performance of logistic
regression, which gave us an accuracy of 75%. Hence, we can say that if a user feeds a
particular news article or its headline into our model, there is a 75% chance that it will be
classified according to its true nature.
We intend to build our own dataset, which will be kept up to date with the latest news. All
the live news and latest data will be kept in a database using a web crawler and an online
database.


BIBLIOGRAPHY

[1] Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu, "Fake News
Detection on Social Media: A Data Mining Perspective," arXiv:1708.01967v3 [cs.SI],
3 Sep 2017.
[2] M. Granik and V. Mesyura, "Fake news detection using naive Bayes classifier," 2017
IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON),
Kiev, 2017, pp. 900-903.
[3] Fake news websites. (n.d.) Wikipedia. [Online]. Available:
https://en.wikipedia.org/wiki/Fake_news_website. Accessed Feb. 6, 2017.
[4] Cade Metz. (2016, Dec. 16). The bittersweet sweepstakes to build an AI that
destroys fake news.
Websites referred
www.google.com
www.w3schools.com
www.youtube.com
www.freecode.com


APPENDIX
Appendix A: Abbreviation

IDE: An integrated development environment (IDE) is software for building applications that
combines common developer tools into a single graphical user interface (GUI).

CSS: Cascading Style Sheets, fondly referred to as CSS, is a simply designed language
intended to simplify the process of making web pages presentable. CSS allows you to apply
styles to web pages. More importantly, CSS enables you to do this independent of the HTML
that makes up each web page.

HTML: HTML stands for Hyper Text Markup Language. It is used to design the front-end
portion of web pages using a markup language. HTML is the combination of hypertext and
markup language: hypertext defines the links between web pages, while the markup language
is used to define the text document within tags, which define the structure of web pages.

JS: JavaScript is a well-known scripting language used to make sites interactive for the
user. It is used for everything from enhancing the functionality of a website to running
games and web-based software.

TF-IDF : TF-IDF stands for Term Frequency-Inverse Document Frequency. It can be
defined as a calculation of how relevant a word in a series or corpus is to a text. The
weight increases proportionally to the number of times a word appears in the text but is
offset by the frequency of the word across the corpus (dataset).
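The weighting described above can be written compactly. This is one common formulation; library implementations (e.g., scikit-learn's TfidfVectorizer) use slightly smoothed variants.

```latex
\mathrm{tfidf}(t, d) =
\underbrace{\frac{f_{t,d}}{\sum_{t'} f_{t',d}}}_{\text{term frequency}}
\times
\underbrace{\log \frac{N}{\lvert \{ d' : t \in d' \} \rvert}}_{\text{inverse document frequency}}
```

where f_{t,d} is the count of term t in document d and N is the number of documents in the corpus.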

NLP : Natural language processing (NLP) is a subfield of Artificial Intelligence (AI). It
is a widely used technology for personal assistants across various business fields. The
technology works on the speech provided by the user, breaking it down for proper
understanding and processing it accordingly. It is a recent and effective approach, and as
a result is in very high demand in today's market. Natural Language Processing is a
growing field in which many transitions, such as compatibility with smart devices and
interactive conversations with humans, have already been made possible.


4.7 Services Offered

• Full Stack Web Development


• Embedded System and IOT
• Blockchain
• Cyber Security and Ethical Hacking
• Artificial Intelligence and Machine Learning
• Augmented and Virtual Reality

These are the services offered by Tequed Labs, of which I opted for Artificial Intelligence
and Machine Learning, the most popular domain in use nowadays in every field.

