Batch No 14


EVALUATION OF MACHINE LEARNING ALGORITHMS FOR THE

DETECTION OF FAKE BANK CURRENCY

ABSTRACT
Fake currency is money produced without the approval of the government, and its creation is considered a serious offence. Advances in colour printing technology have greatly increased the rate at which fake currency notes are printed. Years ago, such printing could only be done in a print house, but now anyone can print a currency note with considerable accuracy using a simple laser printer. As a result, the circulation of fake notes in place of genuine ones has increased enormously. This is one of the biggest problems faced by many countries, including India. Although banks and other large organizations have installed automatic machines to detect fake currency notes, it remains genuinely difficult for an average person to distinguish between the two. This has contributed to the increase of corruption in our country, hindering the country's growth. Some of the methods used to detect fake currency are watermarking, optically variable ink, security threads, latent images, and techniques such as counterfeit-detection pens. We hereby propose an application system for detecting fake currency in which image processing is used to detect fake notes. We will detect the variation in the barcode between real and fake notes, and we will also find dissimilarities between the image under consideration and the prototype. CNN classifiers will be used to detect fake currency. The proposed app for fake currency detection will be simple, accurate and easy to use.

TABLE OF CONTENTS

TOPICS

● Certificates
● Acknowledgement
● Abstract
● Figures/Tables

CHAPTER-1: INTRODUCTION

CHAPTER-2: LITERATURE SURVEY

CHAPTER-3: SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

3.2 PROPOSED SYSTEM

CHAPTER-4: SYSTEM REQUIREMENTS

4.1 FUNCTIONAL REQUIREMENTS

4.2 NON-FUNCTIONAL REQUIREMENTS

CHAPTER-5: SYSTEM STUDY

5.1 FEASIBILITY STUDY

5.2 FEASIBILITY ANALYSIS

CHAPTER-6: SYSTEM DESIGN

6.1 DATA FLOW DIAGRAM

6.2 UML DIAGRAMS

CHAPTER-7: INPUT AND OUTPUT DESIGN

7.1 INPUT DESIGN

7.2 OUTPUT DESIGN

CHAPTER-8: IMPLEMENTATION

8.1 MODULES

8.1.1 MODULE DESCRIPTION

CHAPTER-9: SOFTWARE ENVIRONMENT

9.1 PYTHON

9.2 SOURCE CODE

CHAPTER-10: RESULTS/DISCUSSIONS

10.1 SYSTEM TEST

10.2 OUTPUT SCREENS

CHAPTER-11: CONCLUSION

CHAPTER-12: REFERENCES/BIBLIOGRAPHY

LIST OF FIGURES

1 Architecture of system
2 Data flow diagram
3 UML diagrams
4 Use case
5 Class
6 Sequence
7 Collaboration
8 Activity
9 Unsupervised learning
10 CNN
11 Command prompt
12 HTTP request link
13 App in the browser
14 The web application front page
15 Python application installation
16 Python setup and success logs
17 Output screen 1
18 Output screen 2
19 Output screen 3
20 Output screen 4

EVALUATION OF MACHINE LEARNING ALGORITHMS FOR THE
DETECTION OF FAKE BANK CURRENCY

MINI Project Report submitted to


Jawaharlal Nehru Technological University Hyderabad
in partial fulfillment for the award of degree of

Bachelor of Technology
in
Computer Science & Engineering
by

VANGALA SANTHOSH REDDY


Roll No:20X31A6249

Under the Guidance of


INTERNAL GUIDE NAME
DESIGNATION

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

SRI INDU INSTITUTE OF ENGINEERING & TECHNOLOGY


(Affiliated to JNTUH, Hyderabad, Approved by AICTE, New Delhi)
Sheriguda (V), Ibrahimpatnam (M), R.R.Dist., Telangana- 501510.

CERTIFICATE


This is to certify that the dissertation entitled “EVALUATION OF MACHINE LEARNING ALGORITHMS FOR THE DETECTION OF FAKE BANK CURRENCY”, being submitted by Vangala Santhosh Reddy, bearing Roll No: 20X31A6249, to Jawaharlal Nehru Technological University Hyderabad in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science & Engineering, is a record of bonafide work carried out by him. The results of the investigations enclosed in this report have been verified and found satisfactory. The results embodied in this dissertation have not been submitted to any other University or Institute for the award of any other degree.

INTERNAL GUIDE HEAD OF THE DEPARTMENT

SRI INDU INSTITUTE OF ENGINEERING & TECHNOLOGY
(Affiliated to JNTUH, Hyderabad, Approved by AICTE, New Delhi)
Sheriguda (V), Ibrahimpatnam (M), R.R.Dist., Telangana- 501510.

DECLARATION

I, VANGALA SANTHOSH REDDY, bearing Roll No: 20X31A6249, hereby certify that the dissertation entitled “EVALUATION OF MACHINE LEARNING ALGORITHMS FOR THE DETECTION OF FAKE BANK CURRENCY”, carried out under the guidance of INTERNAL GUIDE NAME, DESIGNATION, is submitted to Jawaharlal Nehru Technological University Hyderabad in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science & Engineering. This is a record of bonafide work carried out by me, and the results embodied in this dissertation have not been reproduced or copied from any source. The results embodied in this dissertation have not been submitted to any other University or Institute for the award of any other degree.

Date: Vangala Santhosh Reddy

Roll No: 20X31A6249


Department of CSE, SIIET

CHAPTER-1

INTRODUCTION

Computers and mobile phones have become an unavoidable part of our lives, and there is a great deal we can do with these technologies. The rapid development of mobile phones and related technologies has brought several services with it. Application creation refers to the process of making application software for handheld and desktop devices such as mobile phones, personal computers and personal digital assistants; through the use of apps, the user is provided with various features that enable him to fulfil his needs and much more, and apps should be interactive for their users. Camera/webcam services include the use of camera services for processing various aspects of an image. Fake Currency Detection is a system that can be used to overcome the difficulty most people, and even our institutions of higher learning, face in distinguishing between counterfeit currency (imitation currency produced without the legal sanction of the state or government, usually in a deliberate attempt to imitate that currency and deceive its recipient) and real currency. The project makes use of the digital image processing domain: digital image processing is the use of computer algorithms to perform image processing on digital images.

CHAPTER-2

LITERATURE SURVEY
The paper titled “Fake currency Detection using Basic Python Programming and Web Framework” (2020), presented by Prof. Chetan More, Monu Kumar, Rupesh Chandra and Raushan Singh, proposes a system that makes use of the Flask web framework (Flask is a micro web framework for Python and web programming) and is written in the Python programming language.

The paper titled “Detection of Counterfeit Indian Currency Note Using Image Processing”, presented by Vivek Sharan and Amandeep Kaur in 2019, describes the detection of counterfeit Indian currency notes using image processing. In this paper, three major features were taken into consideration: the latent image, the logo of the RBI, and the denomination numeral with the Rupee symbol together with the colour part of the currency note. Using these three features, they applied an algorithm which detects counterfeit Indian currency notes.

The paper titled “Indian Paper Currency Detection”, presented by Aakash S. Patil in 2019, introduced a new technique to improve the recognition ability and the transaction speed when classifying Indian currency. It involved making use of OpenCV, a library of computer-vision functions mainly aimed at real-time computer vision, covering functions such as note identification, segmentation and recognition; the NumPy module of Python for numerical processing; argparse to parse command-line arguments; and cv2 for the OpenCV bindings.

The paper titled “Identification of fake notes and denomination recognition”, presented by Archana M R, Kalpitha C P, Prajwal S K and Pratiksha N in 2018, proposed the identification of fake notes and denomination recognition to reduce human effort. The system is mainly divided into two halves: a currency recognition system and a conversion system. They made use of a software interface which could be utilized for different types of monetary standards.

The paper titled “Fake currency detection using Image processing”, presented by S. Atchaya, K. Harini, G. Kaviarasi and B. Swathi in 2017, gave a technique called the performance matrix for fake currency detection using the MATLAB image processing system. Neural networks and model-based reasoning are the two methods behind this technique. Various methods like watermarking, optically variable ink, fluorescence, etc. are used to detect fake currency in this paper.

CHAPTER -3

SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

From the observation of these papers, we can say that certain stages are very important in the existing system architecture. First comes the step called image acquisition, meaning the input image is taken only through a scanner; no digital camera is used to capture the image in the real-time system. In this existing architecture, only the front side of the note is taken into consideration, not the rear side. After that comes the next step, called the pre-processing method, which involves roughly three to four sub-stages such as grayscale conversion, edge detection and segmentation.
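The grayscale conversion, edge detection and segmentation sub-stages above can be sketched in plain NumPy. This is an illustrative sketch, not the report's actual code: a real system would more likely call OpenCV (cv2.cvtColor, cv2.Sobel, cv2.threshold), and the 8×8 test image and threshold value are assumptions made purely for demonstration.

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def sobel_edges(gray):
    """Approximate gradient magnitude with 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def segment(edges, thresh):
    """Binary segmentation: 1 where edge strength exceeds the threshold."""
    return (edges > thresh).astype(np.uint8)

# Synthetic 8x8 "note" with a vertical intensity step at column 4
img = np.zeros((8, 8, 3))
img[:, 4:, :] = 255.0
gray = to_grayscale(img)
mask = segment(sobel_edges(gray), thresh=100.0)
```

The binary mask isolates exactly the columns around the intensity step, which is the behaviour the segmentation sub-stage relies on to outline regions of a note.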

3.1.1 DISADVANTAGES OF EXISTING SYSTEM:

Existing fake bank currency detection systems have their limitations and disadvantages, which can include:
False Positives: One of the most significant drawbacks of fake currency detection systems
is the potential for false positives. Legitimate currency notes may be flagged as fake, causing inconvenience
to users and businesses.

False Negatives: Conversely, there is also the risk of false negatives, where counterfeit
notes go undetected by the system, leading to the circulation of counterfeit currency.

Limited Detection Methods: Many fake currency detection systems rely on a limited set of
detection methods, such as UV (Ultraviolet) and magnetic ink detection. Counterfeiters may use more
sophisticated methods, making it difficult for these systems to catch advanced counterfeit bills.

Cost: High-quality, multi-modal detection systems can be costly, especially for smaller
businesses. This cost may deter some from investing in effective counterfeit detection technology.

Maintenance: The systems require regular maintenance and calibration to ensure accuracy.
Neglecting maintenance can lead to decreased effectiveness over time.

3.2 PROPOSED SYSTEM


The proposed system retains the advantages of the existing system while eliminating its disadvantages. The project centers on the design and implementation of a Fake Currency Detection Application for the Department of Computer Science, Pillai College of Engineering. The scope of the project is to provide approaches and strategies which have proved to be suitable when accessing the image of the desired currency note.
The scope of this project includes:
● Studying existing image detection schemes, focusing on recognition-based types.
● Studying the usability features of the existing fake currency detection methods, drawing on both general and ISO features.
● Mapping between the recognition-based image detection methods and the usability features, and extracting a collection of usability features to be built into the new system prototype.

The basic plan behind the working of the project includes:

● Applying one of the machine learning algorithms recognized for image detection and processing.
● Training the machine using an already prepared dataset of currency notes, which will contain sample images of fake and real currency notes.
● Analyzing the content of the dataset, using the applied algorithm to extract the required features which will help in recognizing other input images of similar format.
● Interpreting a given set of input images, to identify a proportion or distribution of features in it.

3.2.1 ADVANTAGES

Detecting fake bank currency using machine learning algorithms offers several advantages:
Improved Accuracy: Machine learning algorithms can be trained on a large dataset of genuine and
counterfeit currency notes, enabling them to learn intricate patterns and features that may be difficult for
human operators or traditional detection methods to discern. This leads to higher accuracy in identifying
counterfeit notes.

Real-time Detection: Machine learning systems can process and analyze currency notes in real-time,
providing rapid results. This is particularly beneficial in high-volume environments like banks, retail stores,
and ATMs.

Adaptability: Machine learning algorithms can adapt to new counterfeit techniques and variations as they
encounter them. This adaptability makes them more robust against evolving counterfeit methods.

Reduced False Positives: ML algorithms can be fine-tuned to minimize false positives, reducing the chances
of genuine notes being flagged as counterfeit and causing inconvenience to users.

Scalability: Machine learning systems can be easily scaled to accommodate different currency types and
denominations, making them versatile for use in various countries and settings.

Multimodal Detection: ML algorithms can utilize multiple modalities for detection, including visual
analysis, infrared, ultraviolet, and more, enhancing their ability to spot counterfeit currency.

Continuous Improvement: As more data is collected and more counterfeit notes are detected, machine
learning algorithms can continuously improve their performance, refining their ability to identify fake
currency over time.

Reduced Human Error: ML-based systems are less prone to human error, making them more reliable in
detecting counterfeit currency consistently.

Integration with Existing Systems: Machine learning algorithms can be integrated with existing
point-of-sale (POS) systems, ATMs, and other financial equipment, making it easier for businesses to
implement counterfeit detection measures.
Cost Savings: Over time, machine learning systems can be cost-effective, as they require less human
intervention, resulting in potential cost savings for businesses.

CHAPTER-4

SYSTEM REQUIREMENTS

4.1 FUNCTIONAL REQUIREMENTS

The functional requirements for detecting fake bank currency using machine learning algorithms should
encompass a range of features and capabilities to ensure effective counterfeit detection. Here are some key
functional requirements:
Data Collection: Ability to collect and maintain a comprehensive dataset of genuine and counterfeit currency samples for training and testing.

Training and Model Development: Capable of training machine learning models using the collected data to distinguish between genuine and counterfeit currency, with the ability to fine-tune and update models as new counterfeit techniques emerge.

Currency Compatibility: Support for various currency types, denominations, and designs, ensuring versatility in different regions.

Multimodal Sensing: Ability to incorporate multiple modalities for detection, including visual analysis, ultraviolet (UV), infrared (IR), magnetic ink, and other relevant detection techniques.

Real-time Processing: Capability to process currency notes in real-time to ensure swift detection and minimize delays in transactions.

Adaptability: Ability to adapt and evolve to new counterfeit techniques and variations over time.

Accuracy: High accuracy in distinguishing between genuine and counterfeit currency, with a low rate of false positives and false negatives.

User Interface: User-friendly interface for operators or end-users to interact with the system, providing clear feedback on detected currency authenticity.

Integration: Compatibility with existing point-of-sale (POS) systems, ATMs, and other financial equipment for seamless integration into banking and retail operations.

Scalability: Ability to handle high transaction volumes in busy environments, such as banks and retail stores.

4.2 NON-FUNCTIONAL REQUIREMENTS

4.2.1 Hardware Requirements


Processor - Pentium IV or i5

RAM - 4 GB (min)

Hard Disk - 500 GB

4.2.2 SOFTWARE REQUIREMENTS


Python - 3.7 or above
numpy==1.18.1
matplotlib==3..3
pandas==0.25.3
keras==2.3.1
tensorflow==1.14.0
opencv-contrib-python==3.4.15.55
scikit-learn==0.22.2.post1

CHAPTER-5

SYSTEM STUDY

5.1 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase and a business proposal is put forth with a very
general plan for the project and some cost estimates. During system analysis the feasibility study of the
proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the
company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are,

● ECONOMICAL FEASIBILITY
● TECHNICAL FEASIBILITY
● SOCIAL FEASIBILITY

5.2 FEASIBILITY ANALYSIS

ECONOMIC FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would in turn place high demands on the client. The developed system must therefore have modest requirements, with only minimal or no changes needed for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

CHAPTER-6

SYSTEM DESIGN

6.1 DATA FLOW DIAGRAM

6.2 UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group.

The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.

The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of a software system, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems.

The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.

GOALS:

The Primary goals in the design of the UML are as follows:


1. Provide users with a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns and components.
7. Integrate best practices.

6.2.1 USE CASE

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show what system functions are performed for which actor. Roles of the actors in the system can also be depicted.

6.2.2 CLASS DIAGRAM

In software engineering, a class diagram in the Unified Modeling Language (UML) is a type
of static structure diagram that describes the structure of a system by showing the system's classes, their
attributes, operations (or methods), and the relationships among the classes. It explains which class contains
information.

6.2.3 SEQUENCE DIAGRAM

A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that

shows how processes operate with one another and in what order. It is a construct of a Message
Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing
diagrams.

6.2.4 COLLABORATION DIAGRAM

In UML diagrams, collaboration is a type of structured classifier in which roles and attributes co-operate to
define the internal structure of a classifier. You use a collaboration when you want to define only the roles
and connections that are required to accomplish a specific goal of the collaboration.

6.2.5 ACTIVITY DIAGRAM

Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.

CHAPTER-7

INPUT AND OUTPUT DESIGN

7.1 INPUT DESIGN

Designing a system to detect fake bank currency using machine learning algorithms involves multiple steps.
Here's an outline of the process, from data collection to model deployment:

1. Data Collection:

Gather a diverse dataset of both genuine and fake banknotes. This dataset should include images or data
points related to various features of the banknotes that can be used for analysis. These features might include
security features like watermarks, holograms, and UV patterns.

2. Data Preprocessing:

Clean and preprocess the data, ensuring that it is consistent and free from noise. Data preprocessing steps
may include resizing images, converting them to grayscale, normalizing pixel values, and extracting relevant
features.
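A minimal sketch of this preprocessing step, using NumPy arrays as the image format. Production code would usually rely on cv2.resize and cv2.cvtColor; the 64×64 target size, the nearest-neighbour resizing, and the random placeholder batch are illustrative assumptions.

```python
import numpy as np

def preprocess(rgb, out_h=64, out_w=64):
    """Grayscale conversion + nearest-neighbour resize + [0, 1] normalisation."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # collapse RGB to luminance
    h, w = gray.shape
    rows = np.arange(out_h) * h // out_h           # nearest-neighbour row indices
    cols = np.arange(out_w) * w // out_w           # nearest-neighbour column indices
    resized = gray[np.ix_(rows, cols)]
    return resized / 255.0                         # normalise pixel values

# Placeholder batch of five 128x96 RGB "scans"
batch = np.random.randint(0, 256, size=(5, 128, 96, 3)).astype(float)
processed = np.stack([preprocess(img) for img in batch])
```

After this step, every note image has the same shape and value range, which is what the later feature-extraction and training stages require.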

3. Feature Engineering:

Extract relevant features from the banknote images or data points. These features might include texture,
colour, shape, or statistical characteristics. You can use techniques like edge detection, texture analysis, or
colour histograms.
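As one concrete example of the features listed above, a per-channel colour histogram can be computed with NumPy alone; the 8-bin resolution and the all-black test image are illustrative assumptions.

```python
import numpy as np

def colour_histogram(rgb, bins=8):
    """Concatenate normalised per-channel histograms into one feature vector."""
    feats = []
    for c in range(3):  # R, G, B channels
        hist, _ = np.histogram(rgb[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())  # normalise so image size does not matter
    return np.concatenate(feats)

img = np.zeros((32, 32, 3))   # an all-black test image
vec = colour_histogram(img)   # 3 channels x 8 bins = 24-dimensional feature
```

For the all-black image, every pixel of every channel falls into the first bin, so each channel contributes a histogram of [1, 0, ..., 0].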

4. Data Splitting:

Divide the dataset into three subsets: training, validation, and test sets. Typically, an 80-20 or 70-30 split is
used, with the majority of the data allocated for training.
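This split can be performed with scikit-learn's train_test_split, shown here on placeholder random features: a 70/30 train/test split as in the text, followed by carving a validation set out of the training portion. The feature dimensions and seeds are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 24)        # 100 placeholder feature vectors
y = np.random.randint(0, 2, 100)   # 1 = genuine, 0 = fake

# First carve out the 30% test set, then a validation set from the remainder.
X_tmp, X_test, y_tmp, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.2, random_state=42)
```

This yields 56 training, 14 validation and 30 test samples; the test set is never touched until the final evaluation step.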

5. Model Selection:

Choose an appropriate machine learning algorithm for the task. Common choices include Support Vector
Machines (SVM), Random Forests, Convolutional Neural Networks (CNNs), or Gradient Boosting.

6. Model Training:

Train the selected model on the training data. Ensure that you use the appropriate loss function, and monitor
the model's performance on the validation set. You may need to adjust hyperparameters like learning rates
and regularisation terms.
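A hedged sketch of training one of the candidate models, an SVM from scikit-learn, while monitoring validation performance. The synthetic clusters standing in for genuine/fake feature vectors, the seed, and the C value are illustrative assumptions, not project data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in features: genuine notes cluster near 1.0, fakes near 0.0.
genuine = rng.normal(1.0, 0.15, size=(100, 4))
fake = rng.normal(0.0, 0.15, size=(100, 4))
X = np.vstack([genuine, fake])
y = np.array([1] * 100 + [0] * 100)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVC(kernel="rbf", C=1.0)     # C and the kernel are tunable hyperparameters
model.fit(X_tr, y_tr)
val_acc = model.score(X_val, y_val)  # monitor performance on the validation set
```

The same fit/score loop applies unchanged if the SVM is swapped for a Random Forest or a CNN feature classifier.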

7. Model Evaluation:

Assess the model's performance using various evaluation metrics, such as accuracy, precision, recall,
F1-score, and ROC-AUC. Additionally, consider confusion matrices to understand false positives and false
negatives.
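These metrics can all be computed with scikit-learn; the label arrays below are fabricated purely to illustrate the calculation (one false negative and one false positive among eight notes).

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # 1 = genuine, 0 = fake
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # one FN (index 3), one FP (index 6)

acc = accuracy_score(y_true, y_pred)     # 6 correct of 8 = 0.75
prec = precision_score(y_true, y_pred)   # 3 TP / (3 TP + 1 FP) = 0.75
rec = recall_score(y_true, y_pred)       # 3 TP / (3 TP + 1 FN) = 0.75
f1 = f1_score(y_true, y_pred)            # harmonic mean of the two = 0.75
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
```

The unravelled confusion matrix makes the false-positive and false-negative counts explicit, which is exactly the breakdown the text recommends examining.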

8. Hyperparameter Tuning:

Fine-tune the model's hyperparameters based on the validation performance. This process may require
multiple iterations.
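Grid search is one common way to run these tuning iterations; a sketch with scikit-learn's GridSearchCV over a small SVM grid. The synthetic data and the particular parameter values tried are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 0.2, (60, 4)), rng.normal(0.0, 0.2, (60, 4))])
y = np.array([1] * 60 + [0] * 60)

# Try every combination in the grid, scoring each by 5-fold cross-validation.
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
best_params = search.best_params_
best_cv_accuracy = search.best_score_
```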

9. Model Testing:

Once the model's performance is satisfactory on the validation set, evaluate it on the test set to assess its
real-world performance.

10. Deployment:

Deploy the trained model in a production environment. This can be done through web applications, APIs, or
integration into banknote processing machines.
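Whichever deployment route is chosen, the trained model must first be serialised so the serving process (web app, API, or banknote machine software) can load it without retraining. A minimal sketch using Python's pickle on an illustrative scikit-learn model; real deployments would also version the artifact.

```python
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(1.0, 0.2, (50, 4)), rng.normal(0.0, 0.2, (50, 4))])
y = np.array([1] * 50 + [0] * 50)
model = LogisticRegression().fit(X, y)

# Serialise the trained model; the serving process deserialises it at startup
# instead of retraining, and must make identical predictions.
blob = pickle.dumps(model)
restored = pickle.loads(blob)
agree = np.array_equal(model.predict(X), restored.predict(X))
```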

7.1.1 INPUT OBJECTIVE

The primary objective for the detection of fake bank currency using machine learning
algorithms is to create a reliable and accurate system for distinguishing genuine currency
from counterfeit currency. The following are specific objectives associated with this task:

1. High Accuracy:
Develop a machine learning model that can achieve a high level of accuracy in distinguishing genuine currency from counterfeit currency.

2. Real-time Detection:
Enable real-time detection of fake banknotes, ensuring that the system can quickly process and verify currency within a reasonable time frame.

3. Versatility:
Create a system that can detect various types of counterfeit techniques, including those involving image manipulation, printing errors, and fraudulent security features.

4. Generalization:
Ensure that the model can generalize well to detect fake banknotes from different countries and with various denominations.

5. Robustness:
Design a system that remains effective in the presence of variations in lighting, angles, and conditions commonly encountered in real-world scenarios.

6. False Positive Minimization:
Minimize false positives to avoid inconveniencing users with genuine banknotes.

7. Data Security:
Implement strong data security measures to protect sensitive information, ensuring that banknote images are not misused or compromised.

8. Scalability:
Develop a system that can be easily integrated into various platforms, such as ATM machines, point-of-sale systems, and mobile apps.

9. Adaptability:
Allow for regular updates and adaptability to new counterfeit methods as counterfeiters continually evolve their techniques.

10. User-Friendly Interface:
Provide a user-friendly interface for operators and end-users to easily interact with the system and understand the results.
7.2 OUTPUT DESIGN

The output design for the detection of fake bank currency using machine learning algorithms involves the
presentation and communication of results to users, administrators, and relevant stakeholders. Here's how
you can design the output to effectively convey the findings of the currency authentication system:

1. Authentication Decision:
Clearly communicate the authentication decision for each banknote, indicating whether it is genuine or potentially fake. This can be a binary "Genuine/Fake" classification.

2. Confidence Score:
Provide a confidence score or probability associated with the decision. This score helps users understand the system's level of confidence in its decision.

3. Image Visualization:
Display the banknote image or a representation of it with any pertinent annotations, such as areas of suspicion or security features that have been examined.

4. Log and Audit Trail:
Maintain a log or audit trail of all authentication decisions for future reference and auditing purposes. This log should include timestamps and details of the banknote in question.

5. Alerts and Notifications:
If a potentially fake banknote is detected, generate alerts or notifications to the appropriate parties, such as administrators or security personnel.

6. User Feedback:
Provide feedback to end-users, such as a display message or indicator, indicating the outcome of the verification process. For example, "Authentic" or "Please contact a supervisor."

7. Reports and Statistics:
Generate reports and statistics for administrators and management to analyze the performance of the system. This may include metrics on detection rates, false positives, and system uptime.

8. Explanation of Decision:
Offer an explanation of why the system made a particular decision. This can involve highlighting specific features or patterns that led to the decision, helping to build trust in the system's accuracy.

9. Integration with Existing Systems:
Ensure that the output can be integrated into the existing systems used by financial institutions or businesses. This might involve APIs, database integration, or communication with cash handling equipment.

10. User Interface:
Design a user-friendly interface with intuitive visuals and instructions for operators, cashiers, or end-users interacting with the system.

7.2.1 OUTPUT OBJECTIVE


The output objectives for the detection of fake bank currency using machine learning algorithms are crucial
to ensuring that the system effectively communicates its findings and supports the goals of currency
authentication. Here are the specific objectives associated with the output:

1. Clear and Accurate Authentication Results:
Objective: Present clear and accurate authentication results to users, ensuring that genuine and counterfeit banknotes are distinguished without ambiguity.

2. Confidence Level Indication:
Objective: Provide a confidence level or probability score alongside the authentication decision, enabling users to gauge the system's certainty in its verdict.

3. Visualization of Examined Banknotes:
Objective: Display visual representations of the banknotes that have been examined, allowing users to visually confirm the findings and identify potential issues.

4. Auditable Log and Records:
Objective: Maintain an auditable log and records of all authentication decisions, including timestamps, images, and related data, for accountability and auditing purposes.

5. Alerts and Notifications:
Objective: Generate alerts and notifications for administrators or security personnel when potentially counterfeit banknotes are detected, facilitating timely responses.

6. User Feedback and Guidance:
Objective: Provide clear and informative feedback to end-users, such as cashiers, to guide their actions based on the authentication results (e.g., "Authentic" or "Please consult a supervisor").

7. Comprehensive Reports and Statistics:
Objective: Generate comprehensive reports and statistics for administrators and management, offering insights into system performance, including detection rates, false positives, and system uptime.

8. Explanation of Decision:
Objective: Offer explanations of the system's decision, highlighting specific features or patterns that contributed to the verdict to build trust and understanding.

9. Seamless Integration:
Objective: Ensure seamless integration of the output with existing systems used by financial institutions or businesses, facilitating data flow and real-time updates.

10. User-Friendly Interface:
Objective: Design a user-friendly interface that simplifies the interpretation of results and instructions for operators or end-users.

CHAPTER-8

IMPLEMENTATION

8.1 MODULES

Detecting fake banknotes using machine learning algorithms is a challenging yet important task. To create a
system that can evaluate the authenticity of banknotes, you will need various modules and techniques. Here's
an outline of the key modules and steps you might consider:
Data Collection and Preprocessing:
● Gather a dataset of genuine and counterfeit banknotes. Ensure that the dataset is diverse and includes
different denominations and variations of counterfeit banknotes.
● Preprocess the images, removing noise, standardizing dimensions, and enhancing the quality if
necessary.
Feature Extraction:
● Extract relevant features from the banknote images. Common features include texture, color, and
patterns.
● Use techniques like Gabor filters, Histogram of Oriented Gradients (HOG), and Local Binary Pattern
(LBP) to capture distinctive characteristics.
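For illustration, the LBP descriptor mentioned above can be sketched in pure NumPy. This simplified 8-neighbour variant (the function name lbp_histogram is ours, not from any library, and it is not necessarily the exact descriptor a production system would use) produces a 256-bin texture histogram:

```python
import numpy as np

def lbp_histogram(gray):
    """Simplified 8-neighbour Local Binary Pattern histogram.

    gray: 2-D uint8 array (grayscale image). Returns a normalised
    256-bin histogram usable as a texture feature vector.
    """
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # Offsets of the 8 neighbours, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        # Set bit if the neighbour is at least as bright as the centre pixel
        codes |= ((neighbour >= center).astype(np.int32) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

# Example on a synthetic 64x64 "texture"
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
feat = lbp_histogram(img)
print(feat.shape)  # (256,)
```

The resulting histogram can be concatenated with colour and HOG features to form the feature vector handed to a classifier.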
Machine Learning Model Selection:
● Choose an appropriate machine learning algorithm for classification. Some popular choices are
Support Vector Machines (SVM), Random Forest, Decision Trees, and Neural Networks.
Model Training and Validation:
● Split your dataset into training and validation sets. Train your model on the genuine and counterfeit
banknote samples.
● Employ techniques like cross-validation to ensure the model's robustness.
Model Evaluation Metrics:
● Use appropriate evaluation metrics such as accuracy, precision, recall, F1-score, and ROC-AUC to
assess the model's performance.
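These metrics can be computed directly from the confusion-matrix counts; a minimal sketch for binary labels (1 = counterfeit), using made-up toy labels purely for illustration:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = counterfeit)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy example: 6 notes, two mistakes (one missed fake, one false alarm)
acc, prec, rec, f1 = binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
print(acc, prec, rec, f1)
```

For counterfeit detection, recall on the counterfeit class is usually the metric to watch, since a missed fake note is costlier than a false alarm.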
Hyperparameter Tuning:
● Fine-tune the model's hyperparameters to achieve better accuracy and generalization.
Data Augmentation:
● Augment the dataset to include variations of genuine and counterfeit banknotes. This can help
improve the model's ability to generalize to different scenarios.
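A few simple augmentations (horizontal flip, brightness shift, additive noise) can be sketched with NumPy; a real pipeline would typically also rotate, crop and rescale:

```python
import numpy as np

def augment(image, rng):
    """Return simple variants of a banknote image: horizontal flip,
    brightness shift and additive Gaussian noise (illustrative only)."""
    flipped = image[:, ::-1]                                              # mirror left-right
    brighter = np.clip(image.astype(np.int16) + 30, 0, 255).astype(np.uint8)
    noisy = np.clip(image + rng.normal(0, 5, image.shape), 0, 255).astype(np.uint8)
    return [flipped, brighter, noisy]

rng = np.random.default_rng(42)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # synthetic RGB "note"
variants = augment(img, rng)
print(len(variants))  # 3
```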

Deployment:
● Integrate the trained model into a software application or system. A library such as Tkinter can be used to implement the GUI.
User Interface:
● Create a user-friendly interface for users to input banknote images and receive results. This interface
may include features like uploading images, displaying results, and providing user feedback.
Real-time Processing:
● If needed, implement real-time processing by integrating the system with cameras or image capture
devices.

8.1.1 MODULES DESCRIPTION

Preprocessing

Before an image can be processed successfully, any unnecessary artifacts it contains must be
removed. Image pre-processing is therefore the initial step of image processing. Pre-
processing involves operations such as conversion to a grayscale image, noise removal and
image reconstruction. Conversion to a grayscale image is the most common pre-processing
practice; after the image is converted to grayscale, excess noise is removed using different
filtering methods.
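The two steps above, grayscale conversion and noise filtering, might look like this in NumPy (luminosity weights assumed; in practice OpenCV's cvtColor and blur functions would do the same job):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to grayscale (luminosity method)."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb.astype(np.float64) @ weights).astype(np.uint8)

def mean_filter(gray, k=3):
    """Suppress high-frequency noise with a k x k mean (box) filter."""
    pad = k // 2
    padded = np.pad(gray.astype(np.float64), pad, mode='edge')
    out = np.zeros_like(gray, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return np.rint(out / (k * k)).astype(np.uint8)

rgb = np.full((8, 8, 3), 200, dtype=np.uint8)   # synthetic uniform image
gray = to_grayscale(rgb)
smooth = mean_filter(gray)
print(gray.shape, smooth.shape)  # (8, 8) (8, 8)
```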

Image segmentation

Segmentation of images is important because large numbers of images are generated during
scanning, and dividing these images manually in a reasonable time is impractical. Image
segmentation refers to the segregation of a given image into multiple non-overlapping
regions. Segmentation represents the image as sets of pixels that are more significant and
easier to analyse. It is applied to approximately locate the boundaries or objects in an image,
and the resulting segments collectively cover the complete image. Segmentation
algorithms work on one of the two basic characteristics of image intensity: similarity and
discontinuity.
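As one concrete intensity-similarity method, Otsu's global thresholding picks the threshold that maximises the between-class variance of the histogram. The sketch below (pure NumPy, illustrative only) segments a synthetic two-population image:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximising between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()     # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / w0          # class means
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity populations: threshold should land between them
img = np.concatenate([np.full(100, 50), np.full(100, 200)]).astype(np.uint8).reshape(10, 20)
t = otsu_threshold(img)
mask = img >= t          # binary segmentation: foreground vs background
print(t)
```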

Feature extraction

Feature extraction is an important step in the construction of any pattern classifier and
aims at extracting the relevant information that characterises each class. In this process,
relevant features are extracted from objects to form feature vectors. These feature
vectors are then used by classifiers to match the input unit to a target output unit. These
features make it easier for the classifier to distinguish between different classes. Feature
extraction is the process of retrieving the most important information from the raw data.

Classification

Classification is used to classify each item in a set of data into one of a predefined set of classes
or groups. In this project it is the technique used to differentiate genuine and counterfeit
currency images. In the data analysis task of classification, a model or classifier is constructed
to predict categorical labels. Classification is a data mining function that assigns items in a
collection to target categories or classes. The goal of classification is to accurately predict the
target class for each case in the data.
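As a toy illustration of assigning items to predefined classes, a nearest-centroid rule (a deliberately simple stand-in, not the CNN used in this project) can be written as:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Compute one centroid (mean feature vector) per class label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Toy 2-D feature vectors; labels 0 = genuine, 1 = counterfeit (made up)
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
centroids = nearest_centroid_fit(X, y)
pred_label = nearest_centroid_predict(centroids, np.array([0.95, 0.9]))
print(pred_label)  # 1
```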

CHAPTER-9

SOFTWARE ENVIRONMENT

9.1 PYTHON

Python is a general-purpose interpreted, interactive, object-oriented, and high-level programming language.
An interpreted language, Python has a design philosophy that emphasizes code readability (notably using
whitespace indentation to delimit code blocks rather than curly brackets or keywords), and a syntax that
allows programmers to express concepts in fewer lines of code than might be used in languages such as
C++ or Java. It provides constructs that enable clear programming on both small and large scales. Python
interpreters are available for many operating systems. CPython, the reference implementation of Python, is
open-source software and has a community-based development model, as do nearly all of its variant
implementations. CPython is managed by the non-profit Python Software Foundation. Python features a
dynamic type system and automatic memory management. It supports multiple programming
paradigms, including object-oriented, imperative, functional and procedural, and has a large and
comprehensive standard library.
What is Python
Python is a popular programming language. It was created by Guido van Rossum, and released in 1991.

It is used for:
● web development (server-side),
● software development,
● mathematics,
● system scripting.
● Python can be used on a server to create web applications.
● Python can be used alongside software to create workflows.
● Python can connect to database systems. It can also read and modify files.
● Python can be used to handle big data and perform complex mathematics.
● Python can be used for rapid prototyping, or for production-ready software development.
● Python works on different platforms (Windows, Mac, Linux, Raspberry Pi, etc).
● Python has a simple syntax similar to the English language.
● Python has syntax that allows developers to write programs with fewer lines than some other
programming languages.
● Python runs on an interpreter system, meaning that code can be executed as soon as it is written.
This means that prototyping can be very quick.

Python can be treated in a procedural way, an object-oriented way or a functional way.


Good to know
The most recent major version of Python is Python 3, which is used in this project. However,
Python 2, although no longer receiving anything other than security updates, is still quite popular.
Python can be written in a plain text editor. It is also possible to write Python in an Integrated
Development Environment, such as Thonny, PyCharm, NetBeans or Eclipse, which are particularly useful
when managing large collections of Python files.
Python Syntax compared to other programming languages
Python was designed for readability, and has some similarities to the English language with influence from
mathematics.
Python uses new lines to complete a command, as opposed to other programming languages which often
use semicolons or parentheses.

Python relies on indentation, using whitespace, to define scope; such as the scope of loops, functions and
classes. Other programming languages often use curly-brackets for this purpose.

MACHINE LEARNING

Machine learning is a type of computer technology that allows computers to learn and make predictions or
decisions without being explicitly programmed. In simple terms, it's about teaching computers to learn from
data and use that knowledge to perform tasks or solve problems. Here's how it works:
Data Collection: First, you gather a lot of data related to the task you want the computer to perform. This data
can be anything from images and text to numbers and sensor readings.
Training: You feed this data into a machine learning algorithm. During the training phase, the algorithm looks
for patterns, relationships, or rules in the data. It tries to figure out how the input data (e.g., the features of an
image) relates to the desired output (e.g., whether the image contains a cat or a dog).
Learning: The machine learning algorithm adjusts its internal parameters based on the patterns it finds in the
training data. It essentially learns from the data.
Prediction or Decision: Once the algorithm has learned from the data, it can be used to make predictions or
decisions on new, unseen data. For example, it can classify new images as either cats or dogs based on what
it learned during training.
Here's a simple analogy: Think of machine learning like teaching a computer to recognize different fruits.
You show it a bunch of apples, oranges, and bananas, and it learns to distinguish them by their size, color,
and shape. Once it's learned, you can give it a new, unlabeled fruit, and it can tell you whether it's an apple,
orange, or banana based on what it learned from the training data.
In machine learning, tasks are generally classified into broad categories. These categories are based on
how learning is received or how feedback on the learning is given to the system developed.

Two of the most widely adopted machine learning methods are supervised learning which trains algorithms
based on example input and output data that is labeled by humans, and unsupervised learning which provides
the algorithm with no labeled data in order to allow it to find structure within its input data. Let’s explore
these methods in more detail.

Supervised Learning

In supervised learning, the computer is provided with example inputs that are labeled with their desired
outputs. The purpose of this method is for the algorithm to be able to “learn” by comparing its actual output
with the “taught” outputs to find errors and modify the model accordingly. Supervised learning therefore uses
patterns to predict label values on additional unlabeled data. For example, with supervised learning, an
algorithm may be fed data with images of sharks labeled as fish and images of oceans labeled as water. By
being trained on this data, the supervised learning algorithm should be able to later identify unlabeled shark
images as fish and unlabeled ocean images as water.

A common use case of supervised learning is to use historical data to predict statistically likely future
events. It may use historical stock market information to anticipate upcoming fluctuations, or be employed
to filter out spam emails. In supervised learning, tagged photos of dogs can be used as input data to classify
untagged photos of dogs.

Unsupervised Learning
In unsupervised learning, data is unlabeled, so the learning algorithm is left to find commonalities among
its input data. As unlabeled data are more abundant than labeled data, machine learning methods that
facilitate unsupervised learning are particularly valuable. The goal of unsupervised learning may be as
straightforward as discovering hidden patterns within a dataset, but it may also have a goal of feature
learning, which allows the computational machine to automatically discover the representations that are
needed to classify raw data.
Unsupervised learning is commonly used for transactional data. You may have a large dataset of customers
and their purchases, but as a human you will likely not be able to make sense of what similar attributes can
be drawn from customer profiles and their types of purchases. With this data fed into an unsupervised
learning algorithm, it may be determined that women of a certain age range who buy unscented soaps are
likely to be pregnant, and therefore a marketing campaign related to pregnancy and baby products can be
targeted to this audience in order to increase their number of purchases.
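The idea can be sketched with k-means, a standard unsupervised clustering algorithm, on synthetic 2-D points (a bare-bones version that initialises from the first k points and assumes no cluster goes empty on this toy data):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    centroids = X[:k].astype(float).copy()     # simple deterministic init
    for _ in range(iters):
        # Assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# Two obvious groups of unlabeled points
X = np.array([[0, 0], [0.1, 0.2], [0.2, 0.1],
              [5, 5], [5.1, 5.2], [5.2, 4.9]], dtype=float)
labels, centroids = kmeans(X, k=2)
print(labels)  # [0 0 0 1 1 1]
```

No labels were supplied, yet the algorithm recovers the two groups; this is exactly the "find structure in unlabeled data" behaviour described above.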
Approaches

As a field, machine learning is closely related to computational statistics, so background knowledge in
statistics is useful for understanding and leveraging machine learning algorithms.

Support Vector Machine (SVM) is one of the most popular supervised learning algorithms, used for
classification as well as regression problems. Primarily, however, it is used for classification problems
in machine learning. The goal of the SVM algorithm is to create the best line or decision boundary that
can segregate n-dimensional space into classes so that new data points can easily be placed in the correct
category in the future. This best decision boundary is called a hyperplane. SVM chooses the extreme
points/vectors that help in creating the hyperplane. These extreme cases are called support vectors, and
hence the algorithm is termed a Support Vector Machine. Consider the below diagram in which there are
two different categories that are classified using a decision boundary or hyperplane:
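A bare-bones linear SVM can be trained by sub-gradient descent on the hinge loss; the sketch below is illustrative only (in practice a library implementation such as scikit-learn's SVC would be used):

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=500):
    """Fit w, b minimising hinge loss + L2 penalty; labels y must be -1/+1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (xi @ w + b)
            if margin < 1:                       # inside the margin: push it out
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                                # correctly classified: only shrink w
                w -= lr * lam * w
    return w, b

# Two linearly separable toy classes
X = np.array([[1.0, 1.0], [1.5, 0.5], [4.0, 4.0], [4.5, 3.5]])
y = np.array([-1, -1, 1, 1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)      # the hyperplane w.x + b = 0 separates the classes
print(pred)
```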

Convolutional neural network (CNN/ConvNet) is a class of deep neural networks, most commonly applied to
analyze visual imagery. Now when we think of a neural network we think about matrix multiplications but
that is not the case with ConvNet. It uses a special technique called Convolution. Now in mathematics
convolution is a mathematical operation on two functions that produces a third function that expresses how
the shape of one is modified by the other.
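That operation can be written out directly: slide a small kernel over a 2-D input and sum the element-wise products (valid padding, stride 1, in the cross-correlation form actually used inside CNN layers):

```python
import numpy as np

def conv2d(image, kernel):
    """2-D convolution (cross-correlation form, as in CNNs), valid padding."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise product of the window with the kernel, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0]])   # simple horizontal-gradient filter
result = conv2d(image, edge_kernel)
print(result.shape)  # (4, 3)
```

Here the kernel responds to horizontal intensity changes; a CNN learns many such kernels automatically instead of hand-designing them.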

CNN(CONVOLUTIONAL NEURAL NETWORK)

CNN stands for Convolutional Neural Network, which is a class of deep learning neural networks commonly
used for image and video analysis, as well as in various other applications like natural language processing.
CNNs are designed to automatically and adaptively learn patterns, features, and hierarchies from data,
particularly in the context of grid-like data, such as images.

Here are the key components and concepts associated with CNNs:

1. Convolutional Layer: Convolutional layers are the building blocks of CNNs. They consist of a set of
learnable filters (also known as kernels) that slide over the input data to perform convolution operations. This
operation extracts local patterns or features from the input data. Convolution helps the network recognize
spatial hierarchies and patterns in the data.

2. Pooling Layer: Pooling layers are used to reduce the spatial dimensions of the data while retaining
important information. Common pooling operations include max-pooling and average-pooling. Pooling helps
to make the network more robust to variations in the input data and reduces the number of parameters.

3. Activation Function: After each convolutional and pooling operation, an activation function is applied,
typically the Rectified Linear Unit (ReLU) function. This introduces non-linearity into the model, allowing it
to learn complex patterns and features.

4. Fully Connected Layer: CNNs often conclude with one or more fully connected layers, which act as
traditional neural network layers. These layers help combine high-level features and make predictions based
on the learned features. In the case of image classification, the final fully connected layer typically outputs
class probabilities.

5. Stride: Stride determines the step size at which the filter moves across the input data during convolution. A
larger stride reduces the spatial dimensions of the output feature maps.

6. Padding: Padding is the addition of extra rows and columns of zeros around the input data before
convolution. It helps control the spatial dimensions of the feature maps and prevent them from shrinking too
quickly.
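The combined effect of kernel size, stride and padding on the feature-map size follows the standard formula floor((n + 2p - k) / s) + 1, for example:

```python
def conv_output_size(n_in, kernel, stride=1, padding=0):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n_in + 2 * padding - kernel) // stride + 1

# A 64x64 banknote image with a 3x3 filter:
print(conv_output_size(64, 3))                       # 62  (no padding, stride 1)
print(conv_output_size(64, 3, padding=1))            # 64  ("same" padding)
print(conv_output_size(64, 3, stride=2, padding=1))  # 32
```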

7. Filters/Kernels: Filters are small, learnable matrices that are applied during convolution. These filters are
responsible for recognizing different features within the input data, such as edges, textures, or more complex
structures.

8. Hierarchical Feature Learning: CNNs are designed to learn hierarchical features, starting from low-level
features (e.g., edges and corners) in the early layers to more complex and abstract features in the deeper
layers. This ability to capture hierarchical features makes CNNs highly effective in image analysis tasks.

9. Transfer Learning: CNNs often benefit from transfer learning, which involves using a pre-trained network
(e.g., on a large dataset like ImageNet) as a starting point for a new task. This can save time and
computational resources, as the lower layers of the network have already learned useful features.

CNNs are widely used in various applications, including image classification, object detection, image
segmentation, facial recognition, medical image analysis, and more. They have revolutionized the field of
computer vision and have made significant contributions to the field of deep learning.

Python Installation

Many PCs and Macs will have Python already installed. To check if you have Python installed on a Windows
PC, search in the start bar for Python or run the following on the Command Line (cmd.exe):

C:\Users\Your Name>python --version

To check if you have python installed on a Linux or Mac, then on linux open the command line or on Mac
open the Terminal and type:
python --version

Download the Correct version into the system
Step 1: If you find that you do not have python installed on your computer, then you can download it for
free from the following website: https://www.python.org/

Now, check for the latest and the correct version for your operating system.

Step 2: Click on the Download Tab.

Step 3: You can either select the yellow Download Python 3.7.4 for Windows button, or scroll further
down and click on the download for your respective version. Here, we are downloading the most recent
Python version for Windows, 3.7.4.

Step 4: Scroll down the page until you find the Files option.

Step 5: Here you see a different version of python along with the operating system.

• To download Windows 32-bit python, you can select any one from the three options: Windows x86
embeddable zip file, Windows x86 executable installer or Windows x86 web-based installer.

•To download Windows 64-bit python, you can select any one from the three options: Windows
x86-64 embeddable zip file, Windows x86-64 executable installer or Windows x86-64 web-based
installer.

Here we will use the Windows x86-64 web-based installer. This completes the first part, choosing which
version of Python to download. Now we move on to the second part: installation.

Installation of Python

Step 1: Go to Download and Open the downloaded python version to carry out the installation process.

Step 2: Before you click on Install Now, Make sure to put a tick on Add Python 3.7 to PATH.

Step 3: Click on Install Now. After the installation is successful, click on Close.

With the above three steps, you have successfully installed Python. Now it is time to verify the
installation. Note: the installation process might take a couple of minutes.

Python Quickstart:

Python is an interpreted programming language; this means that as a developer you write Python (.py) files in
a text editor and then pass those files to the Python interpreter to be executed.

The way to run a python file is like this on the command line:

C:\Users\Your Name>python helloworld.py


Where "helloworld.py" is the name of your python file.

Let's write our first Python file, called helloworld.py, which can be done in any text editor.
helloworld.py

print("Hello, World!")

Simple as that. Save your file.

Open your command line, navigate to the directory where you saved your file, and run: C:\Users\Your
Name>python helloworld.py
The output should read: Hello, World!

Congratulations, you have written and executed your first Python program.

The Python command line can be used to test a short amount of code, because sometimes it is quickest and
easiest not to write the code in a file. This is made possible because Python can be run as a command line
itself. Type the following on the Windows, Mac or Linux command line:

C:\Users\Your Name>python

Or, if the "python" command did not work, you can try "py":

C:\Users\Your Name>py

From there you can write any Python, including our hello world example from earlier in the tutorial:

C:\Users\Your Name>python
Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:04:45) [MSC v.1900 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> print("Hello, World!")
Hello, World!

Whenever you are done in the Python command line, you can simply type the following to quit the
Python command line interface: exit()

DJANGO

Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic
design. Built by experienced developers, it takes care of much of the hassle of Web development, so you
can focus on writing your app without needing to reinvent the wheel. It’s free and open source. Django's
primary goal is to ease the creation of complex, database-driven websites. Django emphasizes reusability
and "pluggability" of components, rapid development, and the principle of don't repeat yourself. Python
is used throughout, even for settings files and data models.

PYTHON LIBRARIES

Tensorflow

TensorFlow is a free and open-source software library for dataflow and differentiable programming across a
range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural
networks. It is used for both research and production at Google. TensorFlow was developed by the Google
Brain team for internal Google use. It was released under the Apache 2.0 open-source license on November
9, 2015.

Numpy
Numpy is a general-purpose array-processing package. It provides a high-performance multidimensional
array object, and tools for working with these arrays. It is the fundamental package for scientific computing
with Python. It contains various features including these important ones:

▪ A powerful N-dimensional array object


▪ Sophisticated (broadcasting) functions

▪ Tools for integrating C/C++ and Fortran code

▪ Useful linear algebra, Fourier transform, and random number capabilities

Besides its obvious scientific uses, Numpy can also be used as an efficient multi-dimensional container of
generic data. Arbitrary data-types can be defined using Numpy which allows Numpy to seamlessly and
speedily integrate with a wide variety of databases.
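For instance, the N-dimensional array object and broadcasting look like this:

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])            # a 2x3 N-dimensional array
print(a.shape)                       # (2, 3)
print(a * 10)                        # element-wise arithmetic, no explicit loop

col_means = a.mean(axis=0)           # [2.5, 3.5, 4.5]
centered = a - col_means             # broadcasting: (2, 3) minus (3,)
print(centered)
```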

Pandas
Pandas is an open-source Python library providing high-performance data manipulation and analysis tools
built on its powerful data structures. Python was previously used mainly for data munging and preparation, and
contributed little to data analysis itself; Pandas solved this problem. Using Pandas, we can accomplish five
typical steps in the processing and analysis of data, regardless of its origin: load, prepare, manipulate,
model, and analyze. Python with Pandas is used in a wide range of fields, including academic and commercial
domains such as finance, economics, statistics and analytics.
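A minimal load-manipulate-analyze sketch (the column names and values are hypothetical, for illustration only):

```python
import pandas as pd

# Hypothetical scan log: one row per banknote checked
df = pd.DataFrame({
    "denomination": [100, 500, 100, 2000, 500],
    "predicted":    ["real", "fake", "real", "real", "fake"],
})

fakes = df[df["predicted"] == "fake"]           # filter rows
counts = df.groupby("denomination").size()      # aggregate per denomination
print(len(fakes))     # 2
print(counts[500])    # 2
```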

Matplotlib
Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy
formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python
and IPython shells, the Jupyter Notebook, web application servers, and four graphical user interface toolkits.
Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power
spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see the sample
plots and thumbnail gallery. For simple plotting the pyplot module provides a MATLAB-like interface,
particularly when combined with IPython. For the power user, you have full control of line styles, font
properties, axes properties, etc, via an object-oriented interface or via a set of functions familiar to MATLAB
users.

Scikit – learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in
Python. It is licensed under a permissive simplified BSD license and is distributed with many Linux
distributions, encouraging academic and commercial use.

• Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to compile your
program before executing it. This is similar to PERL and PHP.

• Python is Interactive − you can actually sit at a Python prompt and interact with the interpreter directly to
write your programs.

Python also acknowledges that speed of development is important. Readable and terse code is part of this,
and so is access to powerful constructs that avoid tedious repetition of code. Maintainability also ties into
this: it may be an imperfect metric, but it does say something about how much code you have to scan, read
and/or understand to troubleshoot problems or tweak behaviors. This speed of development, the ease with
which a programmer of other languages can pick up basic Python skills, and the huge standard library are
key to another area where Python excels: all its tools have been quick to implement, have saved a lot of time,
and several of them have later been patched and updated by people with no Python background, without
breaking.

9.2 SOURCE CODE

from tkinter import *
import tkinter
from tkinter import filedialog
import numpy as np
from tkinter.filedialog import askdirectory
from tkinter import simpledialog
import cv2
from keras.utils.np_utils import to_categorical
from keras.layers import Input
from keras.models import Model
from keras.layers import MaxPooling2D
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D
from keras.models import Sequential
import keras
import pickle
import matplotlib.pyplot as plt
import os
from keras.models import model_from_json

main = tkinter.Tk()
main.title("fake currency detection using image processing")  # designing main screen
main.geometry("1000x700")

global filename
global classifier

def upload():
    global filename
    filename = filedialog.askdirectory(initialdir=".")
    text.delete('1.0', END)
    text.insert(END, filename + ' Loaded\n')
    text.insert(END, "Dataset Loaded")

def processImages():
    global X_train, Y_train
    text.delete('1.0', END)
    X_train = np.load('model/features.txt.npy')
    Y_train = np.load('model/labels.txt.npy')
    text.insert(END, 'Total images found in dataset for training = ' + str(X_train.shape[0]) + "\n\n")

def generateModel():
    global classifier
    text.delete('1.0', END)
    if os.path.exists('model/model.json'):
        # Reload the previously trained model and its weights
        with open('model/model.json', "r") as json_file:
            loaded_model_json = json_file.read()
        classifier = model_from_json(loaded_model_json)
        classifier.load_weights("model/model_weights.h5")
        classifier._make_predict_function()
        print(classifier.summary())
        f = open('model/history.pckl', 'rb')
        data = pickle.load(f)
        f.close()
        acc = data['accuracy']
        accuracy = acc[9] * 100
        text.insert(END, "CNN Training Model Accuracy = " + str(accuracy) + "\n")
    else:
        # Build and train a new CNN; the 3-channel 64x64 input shape matches
        # the images prepared in predict()
        classifier = Sequential()
        classifier.add(Convolution2D(32, 3, 3, input_shape=(64, 64, 3), activation='relu'))
        classifier.add(MaxPooling2D(pool_size=(2, 2)))
        classifier.add(Convolution2D(32, 3, 3, activation='relu'))
        classifier.add(MaxPooling2D(pool_size=(2, 2)))
        classifier.add(Flatten())
        classifier.add(Dense(256, activation='relu'))
        classifier.add(Dense(2, activation='softmax'))  # two classes: fake / real
        print(classifier.summary())
        classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
        hist = classifier.fit(X_train, Y_train, batch_size=16, epochs=10, shuffle=True, verbose=2)
        classifier.save_weights('model/model_weights.h5')
        model_json = classifier.to_json()
        with open("model/model.json", "w") as json_file:
            json_file.write(model_json)
        f = open('model/history.pckl', 'wb')
        pickle.dump(hist.history, f)
        f.close()
        f = open('model/history.pckl', 'rb')
        data = pickle.load(f)
        f.close()
        acc = data['accuracy']
        accuracy = acc[9] * 100
        text.insert(END, "CNN Training Model Accuracy = " + str(accuracy) + "\n")

def predict():
    name = filedialog.askopenfilename(initialdir="testImages")
    img = cv2.imread(name)
    img = cv2.resize(img, (64, 64))
    im2arr = np.array(img)
    im2arr = im2arr.reshape(1, 64, 64, 3)
    XX = np.asarray(im2arr)
    XX = XX.astype('float32')
    XX = XX / 255
    preds = classifier.predict(XX)
    print(str(preds) + " " + str(np.argmax(preds)))
    predict = np.argmax(preds)
    print(predict)
    img = cv2.imread(name)
    img = cv2.resize(img, (450, 450))
    msg = ''
    if predict == 0:
        cv2.putText(img, 'Fake', (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 255), 2)
        msg = 'Fake'
    else:
        cv2.putText(img, 'Real', (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 255), 2)
        msg = 'Real'
    cv2.imshow(msg, img)
    cv2.waitKey(0)

def graph():
    f = open('model/history.pckl', 'rb')
    data = pickle.load(f)
    f.close()
    accuracy = data['accuracy']
    loss = data['loss']
    plt.figure(figsize=(10, 6))
    plt.grid(True)
    plt.xlabel('Iterations')
    plt.ylabel('Accuracy/Loss')
    plt.plot(loss, 'ro-', color='red')
    plt.plot(accuracy, 'ro-', color='green')
    plt.legend(['Loss', 'Accuracy'], loc='upper left')
    plt.title('CNN Accuracy & Loss')
    plt.show()

font = ('times', 16, 'bold')
title = Label(main, text='detection of fake currency', justify=LEFT)
title.config(bg='deep skyblue', fg='white')
title.config(font=font)
title.config(height=3, width=120)
title.place(x=100, y=5)
title.pack()

font1 = ('times', 13, 'bold')
uploadButton = Button(main, text="Upload Dataset", command=upload)
uploadButton.place(x=10, y=100)
uploadButton.config(font=font1)

processButton = Button(main, text="Image Preprocessing", command=processImages)
processButton.place(x=280, y=100)
processButton.config(font=font1)

cnnButton = Button(main, text="Generate CNN Model", command=generateModel)
cnnButton.place(x=10, y=150)
cnnButton.config(font=font1)

predictButton = Button(main, text="Upload Test Image", command=predict)
predictButton.place(x=280, y=150)
predictButton.config(font=font1)

graphButton = Button(main, text="Accuracy & Loss Graph", command=graph)
graphButton.place(x=10, y=200)
graphButton.config(font=font1)

font1 = ('times', 12, 'bold')
text = Text(main, height=20, width=120)
scroll = Scrollbar(text)
text.configure(yscrollcommand=scroll.set)
text.place(x=10, y=250)
text.config(font=font1)

main.config(bg='LightSteelBlue3')
main.mainloop()
CHAPTER-10

RESULTS/DISCUSSIONS

10.1 SYSTEM TESTING


The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable
fault or weakness in a work product. It provides a way to check the functionality of components,
subassemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of
ensuring that the software system meets its requirements and user expectations and does not fail in an
unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.

10.1.1 TEST CASES


Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, performed after the completion of each unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and exercise a specific business process, application, or system configuration. They ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
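As a minimal sketch, a unit test for the Real/Fake labelling step shown in the code above could isolate the decision logic in a small helper and assert on its output. The helper name `label_from_score` is illustrative, not part of the project code:

```python
# Hypothetical helper mirroring the Real/Fake labelling in predict();
# the name label_from_score is an assumption, not the project's actual function.
def label_from_score(score, threshold=0.5):
    """Map a CNN output probability to the label drawn on the image."""
    return 'Real' if score >= threshold else 'Fake'

# Simple unit checks for the labelling logic
assert label_from_score(0.92) == 'Real'
assert label_from_score(0.12) == 'Fake'
assert label_from_score(0.5) == 'Real'   # boundary case: threshold counts as Real
```

Extracting the decision into a pure function like this makes the branch testable without loading OpenCV or the trained model.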

Integration testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is concerned primarily with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
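For this project, an integration check could verify that the preprocessing output is a valid input for the model. The sketch below uses stubs: `preprocess` and `StubModel` are illustrative stand-ins, not the project's actual functions:

```python
import numpy as np

# Stub components standing in for the real preprocessing step and the
# trained CNN (names here are illustrative, not project code).
def preprocess(img):
    # Normalise pixel values to [0, 1]; resizing is omitted for brevity.
    return img.astype('float32') / 255.0

class StubModel:
    def predict(self, batch):
        # Pretend every note scores 0.9 ("Real").
        return np.full((batch.shape[0], 1), 0.9)

# Integration check: the preprocessing output must flow through the model.
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
batch = preprocess(img)[np.newaxis, ...]
score = StubModel().predict(batch)[0, 0]

assert batch.shape == (1, 64, 64, 3)
assert 0.0 <= score <= 1.0
```

Replacing the stubs with the real functions turns this into an end-to-end check of the upload-preprocess-predict pipeline.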

Functional testing
Functional tests provide systematic demonstrations that functions tested are available as specified by the
business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:

● Valid Input: identified classes of valid input must be accepted.
● Invalid Input: identified classes of invalid input must be rejected.
● Functions: identified functions must be exercised.
● Output: identified classes of application outputs must be exercised.
● Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
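The valid/invalid input items above can be demonstrated with a small input validator. This is a hypothetical sketch: the function `is_supported_image` and the extension set are assumptions, not part of the project code:

```python
import os

# Hypothetical validator: the "Upload Test Image" step could reject files
# that are not images before they reach the CNN.
SUPPORTED_EXTENSIONS = {'.png', '.jpg', '.jpeg', '.bmp'}

def is_supported_image(path):
    return os.path.splitext(path)[1].lower() in SUPPORTED_EXTENSIONS

# Valid input: identified classes of valid input must be accepted
assert is_supported_image('note_500.jpg')
assert is_supported_image('scan.PNG')

# Invalid input: identified classes of invalid input must be rejected
assert not is_supported_image('report.pdf')
assert not is_supported_image('notes.txt')
```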

White Box Testing

White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

Black Box Testing

Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is a test in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.

Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase of the software
lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests will be written in detail.

Test objectives
● All field entries must work properly.
● Pages must be activated from the identified link.
● The entry screen, messages and responses must not be delayed.

Features to be tested
● Verify that the entries are of the correct format
● No duplicate entries should be allowed
● All links should take the user to the correct page.

Integration Testing

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications (e.g., components in a software system or, one step up, software applications at the company level) interact without error.

Acceptance Testing
User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Test results: all of the test cases described above passed successfully; no defects were encountered.

10.2 OUTPUT SCREENS

CHAPTER-11

CONCLUSION

11.1 CONCLUSION

We commenced with a brief introduction to our system and discussed the scope and objectives of our project. During the literature survey we had the opportunity to look closely into the problems that people face in the current environment: we reviewed multiple research papers, narrowed them down to ten, and selected five papers as our base research papers. We analyzed the existing architectures of our base papers and, by understanding their working, discovered some flaws in the currently existing systems. We have kept the prime features of the existing systems as a primary focus and added several new features in our proposed system.

11.2 FUTURE SCOPE

Many different adaptations, tests and innovations have been left for the future due to lack of time. Future work concerns deeper analysis of particular mechanisms and new proposals to try different methods.
1. In the future we intend to include a module for currency conversion.
2. The system can be extended to support foreign currencies.
3. The location of the device through which a currency note is scanned can be tracked and maintained in the database.

CHAPTER-12

REFERENCES

[1] Chetan More, Monu Kumar, Rupesh Chandra, Raushan Singh, “Fake Currency Detection using Basic Python Programming and Web Framework,” International Research Journal of Engineering and Technology (IRJET), Vol. 7, Issue 4, ISSN: 2395-0056, April 2020.

[2] Vivek Sharan, Amandeep Kaur, “Detection of Counterfeit Indian Currency Note Using Image Processing,” International Journal of Engineering and Advanced Technology (IJEAT), Vol. 9, Issue 1, ISSN: 2249-8958, October 2019.

[3] Aakash S. Patel, “Indian Paper Currency Detection,” International Journal for Scientific Research & Development (IJSRD), Vol. 7, Issue 6, ISSN: 2321-0613, June 2019.

[4] Archana M., Kalpitha C. P., Prajwal S. K., Pratiksha N., “Identification of Fake Notes and Denomination Recognition,” International Journal for Research in Applied Science & Engineering Technology (IJRASET), Vol. 6, Issue 5, ISSN: 2321-9653, May 2018.

[5] S. Atchaya, K. Harini, G. Kaviarasi, B. Swathi, “Fake Currency Detection Using Image Processing,” International Journal of Trend in Research and Development (IJTRD), ISSN: 2394-9333, 2017.
