
A

Project Report on

CRIME PREVENTION USING MACHINE LEARNING

Submitted for partial fulfilment of the requirements for the award of the degree of
BACHELOR OF TECHNOLOGY
in

INFORMATION TECHNOLOGY

Submitted by
GUMMERLA NIKHIL 21K81A1286

GUNTUPALLI HARSHITH 21K81A1287


JANGA VIKAS 21K81A1288

KALLUTLA TARUN KUMAR REDDY 21K81A1289

Under the Guidance of

Mr. M. HARIKUMAR
ASSISTANT PROFESSOR

DEPARTMENT OF INFORMATION TECHNOLOGY

St. MARTIN'S ENGINEERING COLLEGE

UGC Autonomous

Affiliated to JNTUH, Approved by AICTE,


Accredited by NBA & NAAC A+, ISO 9001-2008 Certified
Dhulapally, Secunderabad - 500 100
www.smec.ac.in
AUGUST – 2023

St. MARTIN'S ENGINEERING COLLEGE
UGC Autonomous
Affiliated to JNTUH, Approved by AICTE
NBA & NAAC A+ Accredited
Dhulapally, Secunderabad - 500 100
www.smec.ac.in

CERTIFICATE

This is to certify that the project entitled “Crime Prevention Using Machine
Learning” is being submitted by G. Nikhil (21K81A1286), G. Harshith (21K81A1287),
J. Vikas (21K81A1288) and K. Tarun (21K81A1289) in fulfilment of the requirements for
the award of the degree of BACHELOR OF TECHNOLOGY IN INFORMATION
TECHNOLOGY, and is a record of bonafide work carried out by them. The results embodied
in this report have been verified and found satisfactory.

Guide Head Of the Department


Mr. M. HARIKUMAR Dr. V. K. SENTHIL RAGAVAN
Assistant Professor Head of the Department
Department of Information Technology Department of Information Technology

Internal Examiner External Examiner

Place:

Date:

St. MARTIN'S ENGINEERING COLLEGE
UGC Autonomous
NBA & NAAC A+ Accredited
Dhulapally, Secunderabad - 500 100
www.smec.ac.in

DEPARTMENT OF INFORMATION TECHNOLOGY

DECLARATION

We, the students of ‘Bachelor of Technology in the Department of Information
Technology’, session 2019 - 2023, St. Martin’s Engineering College, Dhulapally,
Kompally, Secunderabad, hereby declare that the work presented in this Project
Work entitled “Crime Prevention Using Machine Learning” is the outcome of our
own bonafide work, is correct to the best of our knowledge, and has been undertaken
with due care for Engineering Ethics. The results embodied in this project report
have not been submitted to any other university for the award of any degree.

GUMMERLA NIKHIL 21K81A1286

GUNTUPALLI HARSHITH 21K81A1287


JANGA VIKAS 21K81A1288

KALLUTLA TARUN KUMAR REDDY 21K81A1289

ACKNOWLEDGEMENT

The satisfaction and euphoria that accompany the successful completion of
any task would be incomplete without mentioning the people who made it possible
and whose encouragement and guidance have crowned our efforts with success.
First and foremost, we would like to express our deep sense of gratitude and
indebtedness to our College Management for their kind support and permission to use
the facilities available in the Institute.
We especially would like to express our deep sense of gratitude and
indebtedness to Dr. P. SANTOSH KUMAR PATRA, Principal, St. Martin’s
Engineering College, Dhulapally, for permitting us to undertake this project.
We are also thankful to Dr. V. K. SENTHIL RAGAVAN, Head of the
Department, Information Technology, St. Martin’s Engineering College, Dhulapally,
Secunderabad, for his support and guidance throughout our project, as well as to our
Project Coordinator, Mr. M. HARIKUMAR, Assistant Professor, Information
Technology Department, for his valuable support.
We would like to express our sincere gratitude and indebtedness to our project
supervisor, Mr. M. HARIKUMAR, Assistant Professor, Information Technology, St.
Martin's Engineering College, Dhulapally, for his support and guidance throughout
our project.
Finally, we express our thanks to all those who have helped us in successfully
completing this project. Furthermore, we would like to thank our family and friends
for their moral support and encouragement.

GUMMERLA NIKHIL 21K81A1286

GUNTUPALLI HARSHITH 21K81A1287


JANGA VIKAS 21K81A1288

KALLUTLA TARUN KUMAR REDDY 21K81A1289

ABSTRACT

There is an abnormal increase in the crime rate, and the number of criminals is
also increasing, which leads to great concern about security. Crime prevention and
criminal identification are the primary issues facing police personnel. The processes
that law enforcement officials traditionally follow, such as thumbprint verification,
can fail when criminals plant misleading thumbprints or take care not to leave any at
the scene, and such activities prevent the officials from moving ahead with the case.
With the advancement of technology, CCTV cameras have been placed in many public
places to capture crimes, and by using these images and videos criminals can be
identified very easily. In these scenarios our application, a facial recognition and
detection technique, will be helpful.

LIST OF FIGURES

Figure Description Page Number

Figure 1.1 Types of Identification System 1

Figure 3.1 System Architecture 9

Figure 4.1 Level 0 DFD diagram of a System 13

Figure 4.2 Level 1 DFD diagram of a System 14


Figure 4.3 Level 2 DFD diagram of a System 15

Figure 4.4 Use Case diagram of System 16

Figure 4.5 Class Diagram of System 17

Figure 4.6 Sequence Diagram of System 18


Figure 5.1 Input Image from Dataset 21

LIST OF SCREENS

Screen Description Page Number

Screen 5.1 Dataset 21


Screen 5.2 Comparison Of Images 22

Screen 5.3 Loading Of Camera 24

Screen 5.4 Finding Face Locations 25

Screen 5.5 To Show Name and Rectangle 26

Screen 5.6 Output 1: Image is Detected 28

Screen 5.7 Output 2: Image is Detected 29

CONTENTS

Acknowledgement i

Abstract ii

List of Figures iii

List of Screens iv

Contents
1. INTRODUCTION
1.1 Motivation 2

1.2 Problem Definition 2

1.3 Objective of Project 3

1.4 Limitations of Project 3

1.5 Organization of Documentation 3

2. LITERATURE SURVEY
2.1 Introduction 4

2.2 Existing System 4

2.3 Disadvantages of Existing system 6

2.4 Proposed System 6

3. ANALYSIS

3.1 Introduction 7

3.2 Software Requirement Specification 7

3.2.1 User Requirements 7

3.2.2 Software Requirements 8

3.2.3 Hardware Requirements 8

3.3 Architecture of the System 9

3.4 Algorithms and Flowcharts 10

3.4.1 Face Recognition 10
3.4.2. Open CV 11
4. DESIGN

4.1 Introduction 12

4.2 UML Diagrams 12


4.2.1 Data Flow Diagram 12
4.2.2 Use Case Diagram 15
4.2.3 Class Diagram 16
4.2.4 Sequence Diagram 17
4.3 Module design and organization 18

5. IMPLEMENTATION AND RESULTS

5.1 Introduction 20

5.2 Explanation of Key functions 20

5.2.1 Installation 20

5.2.2 Face Recognition of Images 20

5.2.3 Face Recognition in real-time on a webcam 23

5.3 Method of Implementation

5.3.1 Output Screens 28

5.3.2 Result Analysis 29

6. TESTING AND VALIDATION

6.1 Introduction 30

6.2 Design of test cases and scenarios 31

6.3 Validation 31

7. CONCLUSION AND FUTURE WORK

7.1 Conclusion 33

7.2 Future Work 33

8. REFERENCES 34

Chapter 1
INTRODUCTION

Criminal identification is one of the most important tasks for the police, but it is
also a difficult and time-consuming task, as officers have to search everywhere. It
becomes even more difficult in cities and public places with high population density.
In some cases, manual identification does give a chance to gather more information
about the criminals. Hence this project proposes an automatic criminal identification
system that detects the faces of criminals. This will help the police identify and catch
criminals in public places.
Criminal identification can be done in two ways, as shown in Figure 1.1. In a
Manual Identification System (MIS), identification is done by police officers searching
for suspects in public places. This takes a lot of time and attention, and there is a
chance of missing criminals, since they are alerted on seeing the police and can easily
escape. The MIS therefore takes more time, and it is impossible to focus properly on
everyone. In an Automated Identification System (AIS), by contrast, there is no need
for manual observation in public places; every process involved in the system is
automated.

Figure 1.1: Types of Identification System

Some important aspects of the automated criminal identification and monitoring system are shown below:
1. Criminal Enrolment: Criminal images, each labelled with the person's name, are added to
the criminal database so that captured images can be compared with the database
entries.
2. Criminal Confirmation: If a person is found in a public place using this
system, the officials can check who the criminal is using a dedicated folder available
on the desktop.

1.1 Motivation
The primary motivation for this project is to build a model that identifies criminals
using facial recognition. The project detects criminals using the details present in the
database, i.e., criminal records. Usually, officials identify and detect criminals through
fingerprints or manual checking, but with growing numbers of crimes and criminals these
methods are no longer efficient: criminals may plant misleading thumbprints, leave no
thumbprints at all, and so on, which makes them difficult to catch. In such times, a model
that easily identifies and detects criminals from images and video footage is very useful.

1.2 Problem Definition


 Manual identification of criminals is a very time-consuming process, and even then it is
not certain that the officials will catch them.
 Officials basically identify criminals by their thumbprints, but nowadays criminals either
leave no thumbprints or plant misleading ones, which can divert the whole case. These
scenarios can be addressed by this project.

1.3 Objective of Project
The main objective of the criminal identification application based on face
recognition is to help police personnel identify criminals by providing
information about them. Police personnel can use this application anytime, anywhere,
to find a criminal, and criminals can also be found from live CCTV surveillance cameras.
The application is fast, robust, accurate, and relatively simple and easy to understand.

1.4 Limitations of project


This project has some limitations. The system can identify and
detect only the faces of criminals that are already present in the database.
To overcome this problem, the criminal database must be updated frequently so that
new criminals' faces are uploaded into the database and the problem is resolved.

1.5 Organization of Documentation


This section gives the gist of what the other chapters cover. Chapter 2
presents the literature survey, the background work done before starting the project.
Chapter 3 analyses what must be done and how, and explains in detail the requirements
for completing the project. Chapter 4 gives a pictorial view of the project, i.e., the
design phase. Chapter 5 is the complete implementation of the project, with key
functions and methods of implementation. Chapter 6 discusses testing, including the
test cases and test scenarios. Chapter 7 presents the conclusion and future work,
explaining what has been done and what else can be done on this project, followed by
the references.

Chapter 2
LITERATURE SURVEY

2.1 Introduction
OpenCV: OpenCV is the Open-Source Computer Vision Library.
The library contains more than 2,500 optimized algorithms, which include
a comprehensive set of both classic and state-of-the-art computer vision
and machine learning techniques. It has C++, Python, Java, and MATLAB
interfaces and supports Windows, Linux, Android, and macOS. OpenCV is
free for both commercial and non-commercial use. In this project, OpenCV
is used for capturing images and videos in public places.
Face Detection: The primary function of this step is to capture
the faces of the people in front of the camera. The outputs of this step are
patches that contain each face in the input image. To design a reliable face
recognition system, face alignment is performed to normalize the scale and
orientation of these patches. After the face detection step, the human face
patches are extracted.
Face Recognition: Face recognition is a method of identifying
or verifying the identity of an individual using their face. After faces are
represented, the next step is to identify them by comparing the detected face
image with the images in our database on the basis of face encodings. A
facial recognition system maps facial features from an image or video using
biometrics and compares these details against a database to find matching
known faces. Facial recognition can aid in establishing personal identity, but
it also introduces privacy concerns. It is used in commercial applications for
a variety of purposes ranging from security to promotions.

2.2 Existing System


In various previous studies, several researchers have demonstrated
their efforts in developing better models for identifying criminals. This
section explores research work similar to the proposed methodology.
[1] In this paper, the authors take the help of CCTV footage and compare the
images from the footage with the criminal database when no fingerprint is found at
the crime scene. The system consists of five stages. For criminal identification, the
authors used the PCA technique to find features of the database images similar to the
captured footage images. The machine uses a database containing each person's
personal information, so that when the system (FRCI) identifies a face it can display
that person's information. The system interface is implemented using Visual Studio
Code, and the database and coding use MATLAB R2013b. They achieved 60%
accuracy using the proposed model.
[2] In this paper, using the passport database, the authors identify whether a traveller
is an authorized passport holder or not. They use image processing techniques as well
as the LBPH mathematical model. The method consists of six steps for airport security:
a) Capture an image using a webcam.
b) The captured image is sent to the Django server.
c) The LBPH feature set is extracted from the image.
d) The image is compared with database images by applying different classifiers.
e) If a match is found, the user's details are fetched from the database.
f) The predicted details of the user are sent to the admin via mail.
This also helps to catch criminals who travel from one country to another; if the
traveller has an outstanding bank loan, the traveller's detailed information is sent to
the police station for verification.
[3] In this paper, the authors discuss that an attendance monitoring system is very
important in the teaching and learning process. When a student enters the classroom,
his or her image is captured, and preprocessing and face region extraction are performed
on the captured image for further processing. A face recognition algorithm marks the
student present if they came to school and absent otherwise: the student's image is
captured using a camera and, after preprocessing, compared with the student database
to mark attendance.
[4] This paper consists of four steps: the first is real-time image training, and the
second is the use of a Haar classifier for face detection. The third step is the comparison
of surveillance camera images with real-time images, and the last is the result based on
that comparison. The authors use the Haar classifier in OpenCV for face detection;
Haar cascading is one of the standard algorithms for face detection, and on the OpenCV
platform face tracking is performed with the help of Haar-like classifiers. More than
one person can be identified by this system, and it can be used to find the suspects
being sought. The authors also note that by using the Aadhaar database one can easily
distinguish Indians from foreigners and further investigate whether a person is a
criminal or not; the system can be used with the citizenship database that is already
available.

2.3 Disadvantages of Existing System


The existing systems have some disadvantages. Such a system identifies
and detects only the faces of criminals present in the database; i.e., only the
persons whose details are present in the database are detected, while a new person,
or anyone not present in the database, is not detected.
Usually, officials identify and detect criminals through
their fingerprints or manual checking, but with
growing numbers of crimes and criminals these methods are not efficient:
criminals may plant misleading thumbprints, leave no thumbprints at all,
and so on, which makes them difficult to catch.

2.4 Proposed System


In this application, we develop a facial recognition
system for a criminal database using the machine learning software
library OpenCV (Open-Source Computer Vision Library).
We can detect and recognize the faces of criminals in an image and in
a video stream obtained from a camera in real time.
Using this application, we can detect suspects of previous
crimes, which helps prevent further criminal activity by keeping
watch on their movements.

Chapter 3
ANALYSIS
3.1 Introduction
This section provides information regarding the software
dependencies used to build the proposed system and the hardware
capabilities needed to run them. It also defines the software the end user
must have to run the application and check the detection of persons. This
section also describes the architecture of the proposed system and
explains the algorithms used to build the models and to identify the
most accurate model among them for use in the application. In
total, this chapter defines all the functional and non-functional
requirements included in the software requirement specification,
providing a complete description and overview of the system
requirements before the implementation is carried out.

3.2 Software Requirement Specification


The software requirement specification (SRS) describes the
software system to be developed. In this case, we are developing a
machine learning model for crime prevention using facial recognition.
The SRS provides functional and non-functional requirements, and it
may include a set of use cases that define the interactions the software
should provide. Here, we provide a folder of images, i.e., a dataset,
against which users can compare and detect the criminal faces found at
the crime scene. The application then reports whether the person is a
criminal or not.
The difference between functional and non-functional
requirements is that functional requirements describe what the
system should do, whereas non-functional requirements describe
how the system works.

3.2.1 User Requirements


The user requirements document (URD) usually
specifies the functionality of the software that the user expects. Once
the URD is agreed, the customer cannot demand extra features not
mentioned in it, while the developer cannot claim that the product is
complete if it fails to meet even one feature of the URD.
For this project, the user requirements are as simple as
providing a usable interface for the end user to check whether a
person's image is detected or not.
The user requires a PC with the Visual Studio Code editor, or any
Python editor such as PyCharm, to compare the images present in the
dataset with the images we want to detect; the result of the detection
of the persons is then reported.

3.2.2 Software Requirements


The software requirements comprise the software required to
fulfil the user requirements mentioned above. The software
technologies used in building the proposed system include:
• Operating system: Windows 10 or higher
• Programming language: Python version 3.7 or higher
• IDE: Visual Studio Code, Jupyter Notebook
• Libraries: OpenCV, SimpleFacerec, face_recognition

3.2.3 Hardware Requirements

The hardware requirements specify the hardware capability
required to develop the proposed system and to support the
software listed above:
• RAM: a minimum of 4 GB
• Processor: dual-core processor with a minimum clock speed of 3.0 GHz
• Storage capacity: a minimum of 100 GB

3.3 Architecture of the System

Figure 3.1: System Architecture
According to Figure 3.1, the first step is to create the face database
that serves as the match template for the system. The face database is
created by acquiring a collection of photos of people; each photo should
be a half-body photo with the face facing front. To verify the identity in
an image, the image captured by the digital camera is processed: the
face is detected and extracted, ready for the next stage. The next stage
is pre-processing, where unnecessary features are eliminated to reduce
processing effort. In feature extraction, the images collected from the
crime scene are compared with the images in the database. The installed
libraries, OpenCV, SimpleFacerec and face_recognition, are then used
in the recognition phase, which tries to match the face with the correct
image in the database. If a match is found, the identification of the
image will be verified, else it will stop.
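
The flow in Figure 3.1 can be summarised in a short Python sketch (a simplified illustration only; file names such as images/known_person.jpg and probe.jpg are placeholders, and the full implementation is given in Chapter 5):

# Sketch of the Figure 3.1 pipeline: enrol, acquire, detect, recognise, verify
import cv2
import face_recognition

# Enrolment: encode one known (database) face
known = cv2.cvtColor(cv2.imread("images/known_person.jpg"), cv2.COLOR_BGR2RGB)
known_encoding = face_recognition.face_encodings(known)[0]

# Acquisition and detection: load the probe image captured at the scene
probe = cv2.cvtColor(cv2.imread("probe.jpg"), cv2.COLOR_BGR2RGB)
probe_encodings = face_recognition.face_encodings(probe)

# Recognition: compare every detected face against the database encoding
for encoding in probe_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Identification verified" if match else "No match - stop")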

3.4 Algorithms and Flowcharts


In machine learning, classification is defined as the problem of
identifying which of a set of categories an observation belongs to.
Machine learning algorithms are usually of the following types:
1. Supervised Learning:
It is a machine learning paradigm for problems where the
datasets are labelled, implying that each data point contains features
and an associated label. The goal of these algorithms is to learn a
function that maps feature vectors to labels.
Supervised learning is further divided into regression and
classification problems. Regression involves a set of statistical
processes for estimating the relationships between dependent variables
and one or more independent variables.

2. Unsupervised Learning:
Unsupervised learning algorithms take a set of data that
contains only inputs, and finds the patterns in the data, like grouping or
clustering points. Hence, these algorithms learn from the data that isn’t
labelled, categorized, or classified. This implies that the unsupervised
algorithms identify the commonalities between the data and react based
upon the presence or absence of such patterns in the new data.
3. Semi-supervised Learning:
Semi-supervised learning falls between supervised and
unsupervised learning: some of the training examples are missing
labels. Several researchers have observed that using unlabelled data
together with a small amount of labelled data can produce a considerable
improvement in the accuracy of the models.
4. Reinforcement Learning:
Reinforcement learning is the area of machine learning
concerned with taking appropriate actions to maximize the cumulative
reward. It is used by several software systems and machines to identify
the best behaviour or path to follow in a specific situation.

3.4.1 Face Recognition


The face_recognition library lets us recognize and manipulate faces from
Python or from the command line; it describes itself as the world's simplest
face recognition library. It is built using dlib's state-of-the-art face
recognition, which is based on deep learning, and the underlying model has an
accuracy of 99.38% on the Labeled Faces in the Wild benchmark.
The library also provides a simple face_recognition command-line tool that lets
you do face recognition on a folder of images from the command line.
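
For instance, the command-line tool can be pointed at a folder of known faces and a folder of images to check (the folder names below are placeholders, not part of this project's code); it typically prints one line per detected face in the form <image path>,<matched name>, or unknown_person when no match is found:

face_recognition ./known_people/ ./unknown_pictures/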

3.4.2 OpenCV
OpenCV-Python is a library of Python bindings designed to
solve computer vision problems.
OpenCV (Open-Source Computer Vision Library) is an open-
source computer vision and machine learning software library.
OpenCV was built to provide a common infrastructure for computer
vision applications and to accelerate the use of machine perception in
commercial products.
cv2 is the module import name for opencv-python, the "unofficial pre-built CPU-
only OpenCV packages for Python". Installing the traditional OpenCV involves many
complicated steps, including building the module from scratch, which these packages
make unnecessary.
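
As a small aside (an illustrative sketch, not code from this project; the file name example.jpg is a placeholder), the reason every snippet in Chapter 5 calls cv2.cvtColor before encoding is that OpenCV loads images in BGR channel order, while face_recognition expects RGB:

import cv2
import face_recognition

img_bgr = cv2.imread("example.jpg")                 # OpenCV reads images as BGR
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)  # face_recognition expects RGB
print(face_recognition.face_locations(img_rgb))     # list of (top, right, bottom, left) boxes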

Chapter 4
DESIGN
4.1 Introduction
The design phase of the proposed ML model consists of the
UML diagrams that were created to plan out the implementation. These
diagrams were useful in identifying the components and modules used
to build the proposed system, which consists of an application along
with the classifier model. The design phase also incorporates the
relevant modules and UML diagrams needed to implement this
application.

4.2 UML Diagrams


UML Diagrams are a set of graphical notations used by
developers to describe and design software systems. These models help
us to visualize a system as it is or as we want it to be, they permit us to
specify the structure or behavior of a system, they give a template that
guides us in constructing a system. They are also used to document the
decisions we have made.
UML is a language for visualizing, i.e., we can visualize a
system before development with the help of graphical notations. UML
is also a language for specifying, implying that the building blocks it
describes are precise, unambiguous, and complete. It is also a language
for construction, which means it is not only a visual language: its models
can be directly connected to a variety of programming languages.
Generation of code from a UML model into a programming language is
called forward engineering, and the reverse is also possible. Finally,
UML is a language for documentation, i.e., it produces artifacts in
addition to raw executable code, such as requirements, architecture,
design, tests, prototypes, and releases.

4.2.1 Data Flow Diagram


The data flow diagram (DFD) provides the flow of information
through the system and the activities that are processed through this
information. These DFD diagrams distinctly indicate the system scope
and boundaries. It is a visual representation of the flow of information
for a process or a system. It also gives insight into the inputs and
outputs of each entity and the process itself.

Face Recognition System DFD Level 0

The context diagram is an alternative name for the Face
Recognition System DFD Level 0. Users, the main process, and data
flows make up its parts, and the project concept is demonstrated using
a single-process visualization.
DFD Level 0 shows the entities that interact with the system and
defines the border between the system and its environment.

Figure 4.1: Level 0 DFD Diagram of the System
Face Recognition System DFD Level 1

The Face Recognition System DFD Level 1 is the “exploded view” of the
context diagram. Its function is to deepen the concept derived from the
context diagram.

Specifically, Level 1 shows the broader details of the Face
Recognition System DFD Level 0, clarifying the paths (flows) of
data and their transformation from input to output.

Figure 4.2: Level 1 DFD Diagram of the System

Face Recognition System DFD Level 2

Face Recognition System DFD Level 2 is the most detailed level of the
data flow diagram. It broadens the idea from DFD Level 1 and includes
the sub-processes from Level 1 as well as the data that flows between them.

Figure 4.3: Level 2 DFD Diagram of the System

4.2.2 Use Case Diagram


A use case is a description of a set of sequences of actions that a
system performs to yield a result of value to an actor. Use case diagrams
address the static use-case view of the system: they represent what a
system does, but not how it is done. It is also possible to organize use
cases by specifying generalization, include, and extend relationships
among them.
• An <<include>> relationship between use cases means that the base
use case explicitly incorporates the behavior of another use case at a location specified
in the base.
• An <<extend>> relationship between use cases means that the base
use case implicitly incorporates the behavior of another use case at a location
specified indirectly by the extending use case.

Figure 4.4: Use Case Diagram of the System

4.2.3 Class Diagram


Class diagrams define sets of classes, interfaces, and
collaborations and their relationships. Class diagrams are used to model
the static design view of the system, which involves modeling the
vocabulary of the system, collaborations, or schemas. These
diagrams are used not only for visualizing, specifying, and documenting
structural models but also for constructing executable systems through
forward and reverse engineering.

Figure 4.5: Class Diagram of the System

4.2.4 Sequence Diagram


A sequence diagram simply depicts interaction between objects
in a sequential order i.e., the order in which these interactions take
place. We can also use the terms event diagrams or event scenarios to
refer to a sequence diagram. Sequence diagrams describe how and in
what order the objects in a system function. These diagrams are widely
used by businessmen and software developers to document and
understand requirements for new and existing systems.

Figure 4.6: Sequence Diagram of the System

4.3 Module Design and Organization


For the proposed criminal prevention system and the corresponding
offline application, the modules were segregated based on the tasks
that need to be performed.
• User Module:
This module refers to the offline application that takes the input values, i.e.,
images with the criminals' details, from the user, passes them to the
classifier built into the installed libraries, and returns the results generated
by the model to the user.
o Running the code
o Loading, Updating and Deleting the Data
o Gets the Result
• System Controller Module:
o Takes the input as images and also as videos
o Retrieves the data from the folder
o Compares the images and gives the result
• Folder Module:
o Contains the data of the Images
o Creating, Deleting, Updating, Retrieving is done in
this module only

Chapter 5
IMPLEMENTATION AND RESULTS
5.1 Introduction
The implementation phase presents the code used to build
the proposed system, i.e., the model and the application that the end
user utilizes to obtain the output. The implementation phase also
describes the exploratory data analysis (EDA) of the dataset; EDA
provides ample visualizations to understand patterns, spot anomalies,
and carry out initial investigations. The results section of this chapter
provides information about the evaluation measures and performance
analysis of the model. This chapter also describes the end-user screens
and the process of interaction between the end user and the application.

5.2 Explanation of Key Functions


There are several functions involved, from data pre-processing to building
the model, and from reading the input from the end user to rendering the
output screen. The code used for identification and detection is given
below:

5.2.1 Installation
For convenience, we have downloaded the package with all
the code and photos, i.e., the required images collected from Google, and
then we proceed with the installation of the basic libraries.
The first library to install is opencv-python; as always, run
the command from the terminal:
pip install opencv-python
Then proceed with face_recognition, which is also installed with pip:
pip install face_recognition
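
As a quick sanity check (a small sketch we add here, not part of the original package), the following lines confirm that both libraries import correctly before moving on; note that face_recognition depends on dlib, so its installation may take some time to build:

import cv2
import face_recognition

print("OpenCV version:", cv2.__version__)
print("face_recognition imported successfully")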

5.2.2 Face Recognition on Images


To make face recognition work, we need a dataset of
photos composed of a single image per person, plus a photo to compare
against it. In our example, the dataset consists of one photo each of
Elon Musk, Jeff Bezos, Lionel Messi, Yashwanth G and several others.

Screen 5.1: Dataset

For the comparison, we will use another photo of Jeff Bezos.

Figure 5.1: Input Image from


Dataset

#Call the Libraries


The first step is always to import the libraries we have
installed, OpenCV and face_recognition, into our project.

import cv2
import face_recognition

#Face encoding first image


With the usual OpenCV procedure, we load the image, in
this case Jeff bezoz1.webp, and convert it into RGB color format.
Then we compute the “face encoding” with the functions of the
face_recognition library.
img = cv2.imread("Jeff bezoz1.webp")
rgb_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_encoding = face_recognition.face_encodings(rgb_img)[0]

#Face Encoding Second image


We follow the same procedure for the second image, changing only the
variable names and, of course, the path of the second image, in this
case images/Jeff Bezoz2.webp.
img2 = cv2.imread("images/Jeff Bezoz2.webp")
rgb_img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
img_encoding2 = face_recognition.face_encodings(rgb_img2)[0]

Screen 5.2: Comparison of images

#Comparing the images

With a single line, we make a simple face comparison and
print the result. If the images show the same person, it prints True,
otherwise False.
result = face_recognition.compare_faces([img_encoding], img_encoding2)
print("Result: ", result)

#Encode all faces in the dataset


Now we must encode all the images in our database, so that
when the webcam video stream finds a match it shows the person's
name, and otherwise reports the face as unknown.
This is done by a function of the file we have prepared
(simple_facerec.py); it simply takes all the images contained in the
images/ folder and encodes them. In our case, there are 23 images.
# Encode faces from a folder
sfr = SimpleFacerec()
sfr.load_encoding_images("images/")

5.2.3 Face Recognition in real-time on a webcam


For face recognition in real time, the procedure is similar to
that for a single image, with a few additions. As a first step,
remember to download the files mentioned above; among the
various Python files you will also find simple_facerec.py. Note
that this is not an installable library, so it is not imported as one:
the file must be placed in the same folder as the project, and these
are the correct lines of code to start.
import cv2
from simple_facerec import SimpleFacerec

#Take Webcam Stream

With a simple OpenCV function, we take the webcam stream
and loop over its frames.

# Load Camera (index 2 in our setup; use 0 for the default webcam)
cap = cv2.VideoCapture(2)

while True:
    ret, frame = cap.read()

Screen 5.3: Loading of camera

#Face Location and Face Recognition

Now we identify the faces by passing each webcam frame to
the function detect_known_faces(frame). It gives us the name
of each recognized person and an array with the face position at
every moment of the movement.

# Detect Faces
face_locations, face_names = sfr.detect_known_faces(frame)

for face_loc, name in zip(face_locations, face_names):
    y1, x2, y2, x1 = face_loc[0], face_loc[1], face_loc[2], face_loc[3]

As an intermediate step, the output is shown in the screen below.

Screen 5.4: Finding Face Locations

#Show Name and Rectangle

cv2.putText(frame, name, (x1, y1 - 10), cv2.FONT_HERSHEY_DUPLEX, 1, (0, 0, 200), 2)
cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 200), 4)

cv2.imshow("Frame", frame)

key = cv2.waitKey(1)
if key == 27:
    break


Screen 5.5: To Show Name and Rectangle
#SimpleFacerec File (simple_facerec.py)

import face_recognition
import cv2
import os
import glob
import numpy as np


class SimpleFacerec:
    def __init__(self):
        self.known_face_encodings = []
        self.known_face_names = []

        # Resize frame for a faster speed
        self.frame_resizing = 0.25

    def load_encoding_images(self, images_path):
        """
        Load encoding images from path
        :param images_path:
        :return:
        """
        # Load Images
        images_path = glob.glob(os.path.join(images_path, "*.*"))

        print("{} encoding images found.".format(len(images_path)))

        # Store image encoding and names
        for img_path in images_path:
            img = cv2.imread(img_path)
            rgb_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

            # Get the filename only from the initial file path.
            basename = os.path.basename(img_path)
            (filename, ext) = os.path.splitext(basename)

            # Get encoding
            img_encoding = face_recognition.face_encodings(rgb_img)[0]

            # Store file name and file encoding
            self.known_face_encodings.append(img_encoding)
            self.known_face_names.append(filename)
        print("Encoding images loaded")

    def detect_known_faces(self, frame):
        small_frame = cv2.resize(frame, (0, 0), fx=self.frame_resizing, fy=self.frame_resizing)
        # Find all the faces and face encodings in the current frame of video
        # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
        rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(self.known_face_encodings, face_encoding)
            name = "Unknown"

            # # If a match was found in known_face_encodings, just use the first one.
            # if True in matches:
            #     first_match_index = matches.index(True)
            #     name = known_face_names[first_match_index]

            # Or instead, use the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(self.known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = self.known_face_names[best_match_index]
            face_names.append(name)

        # Convert to numpy array to adjust coordinates with frame resizing quickly
        face_locations = np.array(face_locations)
        face_locations = face_locations / self.frame_resizing
        return face_locations.astype(int), face_names

5.3 Method of Implementation


The implementation consists of an offline application whose code is
run in Visual Studio Code and gives the desired output. The model
detects faces in the input and passes the result to the client-side/user
screen, which then displays the output.

5.3.1 Output Screens

Screen 5.6: Output 1: Image is detected

Screen 5.7: Output 2: Image is detected

5.3.2 Result Analysis


We have proposed a criminal detection system for face
images and videos. CCTV cameras continuously capture video and
images, and the main screen shows which image from the database is
matching. When a database image matches a CCTV-captured image,
the name of the criminal is displayed on the main screen along with a
"criminal found" message.
When a new criminal face is detected, a description of the face is
entered and added to the database: the authorized user enters this
information about the criminal along with the source of the input. This
can further be used to create datasets to train the model.
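
A minimal sketch of how such an enrolment step could be scripted with the SimpleFacerec class from Section 5.2 (the file name new_capture.jpg and the name John Doe are illustrative placeholders, not data from this project):

import shutil
from simple_facerec import SimpleFacerec

# Copy the new criminal's photo into the dataset folder, named after the person;
# SimpleFacerec uses the file name (without extension) as the displayed name.
shutil.copy("new_capture.jpg", "images/John Doe.jpg")

# Reload the encodings so the new face is recognised from now on
sfr = SimpleFacerec()
sfr.load_encoding_images("images/")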

Chapter 6
TESTING AND VALIDATION
6.1 Introduction
Testing is defined as the process of analyzing a software item to
detect the differences between existing and required conditions (i.e.,
defects, errors, or bugs) and to evaluate the features of the software item.
The importance of testing lies in knowing the accuracy and the working
functionality of the model developed.
In machine learning, it is a common task to study and
construct models that can learn from and make predictions on data.
Such algorithms work by making data-driven predictions or decisions,
through building a mathematical model from input data. The data used
to build the final model usually comes from multiple datasets. Four
datasets are commonly used in different stages of creation of the
model.
Software testing is a critical element of software quality
assurance and represents the ultimate review of specification, design,
and code generation.
The purpose of testing is to discover errors. Testing is the
process of trying to discover every conceivable fault or weakness in a
work product. It provides a way to check the functionality of
components, sub-assemblies, assemblies, and/or a finished product. It is
the process of exercising software with the intent of ensuring that the
software system meets its requirements and user expectations and does
not fail in an unacceptable manner. There are various types of tests, and
each test type addresses a specific testing requirement.

Testing Objectives:
The goals and objectives of software testing are numerous; when
achieved, they help developers build defect-free and error-free
software and applications with exceptional performance, quality,
effectiveness, and security, among other things. Though the objectives
of testing can vary from company to company and project to project,
some goals are common to all. One such objective is verification: the
process of evaluating the work products of a development phase to
determine whether they meet the specified requirements.

6.2 Design of Test Cases and Scenarios


Testing methodologies in software engineering are the testing
strategies, approaches, or methods used to test a specific product to
ensure its usability. They make sure that the product works as per the
given specifications and has no side effects when used outside the
design parameters. Software testing methodologies encompass
everything from unit testing to integration testing and specialized forms
of testing such as security or performance testing.
 Functional Testing involves testing the application against business requirements;
it includes multiple test types designed to guarantee that each part of the
software behaves as expected by the users.
 Unit Testing is a software testing methodology which makes sure that
individual software components at the code level work perfectly for the
purpose they are designed for.
 Integration Testing: Once each unit is tested thoroughly, it is integrated with
other units to create modules or components designed to perform
specific activities or tasks.
 System testing is the black box testing method used to evaluate the integrated
system as a whole and ensures it meets all specific requirements.
 Non-Functional Testing methods incorporate different test types focused on
the operational aspects of a piece of software.
 Performance testing is the non-functional testing technique used to determine
how an application will behave under different conditions.
 Security Testing: The goal of security testing is to find loopholes and security
risks in the system.
 Usability testing is a way to see how easy something is to use by testing it with
real users.

6.3 Validation
Validation is the process of checking whether the outputs of the
test cases match the originally expected results. In the case of a
machine learning model, validation against the testing dataset can be
done by calculating the score of the model, comparing the predicted
output with the original output of the dataset. The validation step helps
identify the classifier model with the highest accuracy to be used in
the application.
Usually, validation of software is done during testing
processes such as feature testing, integration testing, system testing,
load testing, compatibility testing, etc. The validation step is performed
at the end of the development process and is carried out after
verifications are completed.
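
A minimal sketch of how such a score could be computed for this system (the test folder, file names, and expected labels below are assumptions for illustration, not part of the original report):

import cv2
from simple_facerec import SimpleFacerec

sfr = SimpleFacerec()
sfr.load_encoding_images("images/")

# (expected name, test image path) pairs - an illustrative hand-labelled test set
test_cases = [
    ("Jeff Bezoz", "test/jeff_test.jpg"),
    ("Unknown", "test/stranger.jpg"),
]

correct = 0
for expected, path in test_cases:
    frame = cv2.imread(path)                 # detect_known_faces expects a BGR frame
    _, names = sfr.detect_known_faces(frame)
    predicted = names[0] if names else "Unknown"
    correct += int(predicted == expected)

print("Validation accuracy: {:.0%}".format(correct / len(test_cases)))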

SL. NO | ACTION              | INPUT         | EXPECTED OUTPUT                          | ACTUAL OUTPUT
1      | Running application | Photos, video | Recognized (photo is identified)         | Gets the name
2      | Running application | Photos, video | Not recognized (photo is not identified) | Does not get the name
3      | Running application | Photos, video | Recognized (photo is identified)         | Gets the name
Chapter 7
CONCLUSION AND FUTURE WORK
7.1 Conclusion
This upgraded version of the criminal detection system not only
provides great convenience to the police in the identification of
criminals but also saves them time, as the processes are automated in
the system. The novelty of this mini project is that face detection is
done using face encodings.

7.2 Future Work


For future work, we can add alarms to the criminal
detection system. The alarm will ring only when a match is found, so
that even if no one is keeping watch in the CCTV room, they will come
to know that someone from the database has been found in that public
place. The project could thus grow into a surveillance system that gives
alerts when any controversy, fight, or intruder is detected in the CCTV
footage.

We can also enhance this project by adding further details of the
person instead of only the name, i.e., age, the crime committed by that
criminal, the present status of the criminal, etc. These attributes can be
attached to the details of the images along with the name, and it will
then be easier for the officials to catch them.
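
One possible shape for the proposed alarm (a sketch under the assumption of a Windows machine, matching Section 3.2.2; it is not implemented in this project) is a small helper called from the webcam loop of Section 5.2.3:

import winsound

def alert_if_match(face_names):
    # Sound a short alarm whenever a face from the database is detected
    for name in face_names:
        if name != "Unknown":
            print("ALERT: {} detected".format(name))
            winsound.Beep(1000, 500)  # 1 kHz tone for 500 ms

# Inside the webcam loop:
#   face_locations, face_names = sfr.detect_known_faces(frame)
#   alert_if_match(face_names)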

Chapter-8
REFERENCES
[1] P. Kowsalya, J. Pavithra, G. Sowmiya and C. K. Shankar, “ATTENDANCE
MONITORING SYSTEM USING FACE DETECTION & FACE RECOGNITION”,
International Research Journal of Engineering and Technology (IRJET), Vol. 06,
Issue 03, March 2021.

[2] Alireza Chevelewalla, Ajay Gurav, Sachin Desai and Prof. Sumitra Sadhukhan, “Criminal
Recognition System”, pp. 47-50, Vol. 4, Issue 03, March 2021.

[3] K. H. Teoh, R. C. Ismail, S. Z. M. Naziri, R. Hussin, M. N. M. Isa and M. S. S. M. Basir,
“FACE RECOGNITION AND IDENTIFICATION USING DEEP LEARNING
APPROACH”, pp. 1-10, 5th International Conference on Electronic Design (ICED)
2020, IOP Publishing.

[4] Apoorva. P, Impana. H.C, Siri. S.L, Varshitha. M.R and Prof. Ramesh. B,
“AUTOMATED CRIMINAL IDENTIFICATION BY FACE RECOGNITION USING
OPEN COMPUTER VISION CLASSIFIERS”, Third International Conference on Computing
Methodologies and Communication (ICCMC 2020), DOI: 10.1109/ICCMC.2019.8819850.

[5] R. Prashanth Kumar, Abdul Majeed, Farhan Pasha and A. Sujith, “REAL-TIME
CRIMINAL IDENTIFICATION SYSTEM BASED ON FACE RECOGNITION”,
pp. 320-328, Vol. 26, No. 05, May 2020.

[6] Vikram Mohanty, David Thames, Sneha Mehta and Kurt Luther, “Photo Sleuth:
Combining Human Expertise and Face Recognition to Identify Historical Portraits”,
24th International Conference on Intelligent User Interfaces (IUI), March 2019,
https://doi.org/10.1145/3301275.3302301.

[7] Nurul Azma Abdullah, Md. Jamri Saidi and Nurul Hidayah Ab Rahman, “Face
Recognition for Criminal Identification: An implementation of Principal Component
Analysis for face recognition”, pp. 1-7, AIP Conference Proceedings 1891, 020002
(2017), published online 03 October 2017.

[8] Kian Raheem Qasim and Sara Salman Qasim, “Force Field Feature Extraction
Using Fast Algorithm for Face Recognition Performance”, Iraqi Academics Syndicate
International Conference for Pure and Applied Sciences,
https://doi.org/10.1088/1742-6596/1818/1/012195.

