
STUDENT ATTENDANCE SYSTEM

Using

MACHINE LEARNING

By

Avadhesh Chamola (1509110036)


Harshal Kumar (1509110050)
Manish Verma (1509110063)
Jayant Mawai (1509110057)

Under the Supervision of

Mr. S.S. Rawat

Department of Computer Science & Engineering

JSS Academy of Technical Education

C-20/1, Sector-62, NOIDA

Uttar Pradesh-201301, India

May, 2019
VISION OF THE DEPARTMENT:

To spark the imagination of the Computer Science Engineers with values, skills and creativity to
solve the real-world problems.

MISSION OF THE DEPARTMENT:


Mission 1: To inculcate creative thinking and problem-solving skills through effective teaching,
learning and research.
Mission 2: To empower professionals with core competency in the field of Computer Science
and Engineering.
Mission 3: To foster independent and life-long learning with ethical and social responsibilities.

PROGRAMME EDUCATIONAL OBJECTIVES:


PEO1: To empower students with effective computational and problem-solving skills.

PEO2: To enable students with core skills for employment and entrepreneurship.

PEO3: To imbibe students with ethical values and leadership qualities.

PEO4: To foster students with research-oriented ability which helps them in analysing and
solving real life problems and motivate them for pursuing higher studies.

PROGRAMME OUTCOMES:
Engineering Graduates will be able to:

PO1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.
PO2. Problem analysis: Identify, formulate, review research literature, and analyse complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.

PO3. Design/development of solutions: Design solutions for complex engineering problems
and design system components or processes that meet the specified needs with
appropriate consideration for the public health and safety, and the cultural, societal, and
environmental considerations.
PO4. Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data,
and synthesis of the information to provide valid conclusions.
PO5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modelling to complex
engineering activities with an understanding of the limitations.
PO6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.
PO7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and
need for sustainable development.
PO8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities
and norms of the engineering practice.
PO9. Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
PO10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and
write effective reports and design documentation, make effective presentations, and give
and receive clear instructions.
PO11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member
and leader in a team, to manage projects and in multidisciplinary environments.
PO12. Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological
change.

PROGRAMME SPECIFIC OUTCOMES:

PSO1: An ability to apply the foundations of Computer Science and Engineering, algorithmic principles and theory in designing and modelling computation-based systems.

PSO2: The ability to demonstrate software development skills.

COURSE OUTCOMES

C410.1: Identify, formulate, design and analyse a research-based/web-based problem to address societal and environmental issues.

C410.2: Communicate effectively in verbal and written form.

C410.3: Apply appropriate computing, engineering principles and management skills to obtain an effective solution to the formulated problem within a stipulated time.

C410.4: Work effectively as part of a team in multi-disciplinary areas.

CO-PO-PSO MAPPING

COs PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2

C410.1 3 3 3 3 2 3 3 3 3 3 2 3 3 3

C410.2 2 2 2 2 2 2 2 2 2 3 2 3 2 2

C410.3 3 3 3 3 3 3 2 3 3 3 3 3 3 3

C410.4 3 3 3 3 2 3 2 3 3 3 3 3 3 3

DECLARATION

We hereby declare that this submission is our own work and that, to the best of our
knowledge and belief, it contains no material previously published or written by another
person nor material which to a substantial extent has been accepted for the award of any
other degree or diploma of the university or other institute of higher learning, except
where due acknowledgment has been made in the text.

Signature: Signature:

Name : Avadhesh Chamola Name : Harshal Kumar

Roll No.: 1509110036 Roll No.: 1509110050

Date : Date :

Signature: Signature:

Name : Manish Verma Name : Jayant Mawai

Roll No.: 1509110063 Roll No.: 1509110057

Date : Date :

CERTIFICATE

This is to certify that the Project Report entitled “Student Attendance System Using Machine Learning”, which is submitted by Manish Verma, Avadhesh Chamola, Harshal Kumar and Jayant Mawai in partial fulfillment of the requirement for the award of the degree B. Tech. in the Department of Computer Science and Engineering of Dr. APJ Abdul Kalam Technical University, Uttar Pradesh, Lucknow, is a record of the candidates’ own work carried out by them under my supervision. The matter embodied in this thesis is original and has not been submitted for the award of any other degree.

Mr S.S. Rawat

(Asst. Professor)

Supervisor

Date

ACKNOWLEDGEMENT

It gives us a great sense of pleasure to present the report of the B. Tech. project undertaken during the final year of the B. Tech. programme. We owe a special debt of gratitude to Professor S.S. Rawat, Department of Computer Science & Engineering, JSSATE, Noida, for his constant support and guidance throughout the course of our work. His sincerity, thoroughness and perseverance have been a constant source of inspiration for us. It is only because of his cognizant efforts that our endeavours have seen the light of day.

We also take the opportunity to acknowledge the contribution of Professor Vikram Bali, Head, Department of Computer Science & Engineering, JSSATE, Noida, for his full support and assistance during the development of the project.

We would also like to acknowledge the contribution of all faculty members of the department for their kind assistance and cooperation during the development of our project. Last but not least, we acknowledge our friends for their contribution to the completion of the project.

Signature: Signature:

Name : Avadhesh Chamola Name : Harshal Kumar

Roll No.: 1509110036 Roll No.: 1509110050

Date : Date :

Signature: Signature:

Name : Manish Verma Name : Jayant Mawai

Roll No.: 1509110063 Roll No.: 1509110057

Date : Date :

ABSTRACT

Authentication is one of the significant issues of the information era. Among other techniques, human face recognition is a well-known method of user authentication. As an important branch of biometric verification, face recognition has been widely used in many applications, such as video monitoring and surveillance systems, human-computer interaction, door access control systems and network security.

In this report we introduce a system for student attendance in classes and lecture halls that uses OpenCV (Open Source Computer Vision) together with face detection and face recognition algorithms. The aim is to count the number of people present in the class and mark attendance for each of them, using face detection algorithms to locate faces and face recognition algorithms to establish the identity of each person present. Compared to traditional attendance marking, this system saves time and also helps to monitor the students.

TABLE OF CONTENTS

Page

DECLARATION ………………………………………………………………………....... 4

CERTIFICATE ……………………………………………………………………………. 5

ACKNOWLEDGEMENT ………………………………………………………………… 6

ABSTRACT ………………………………………………………………………………. 7

TABLE OF CONTENTS ………………………………………………………………….. 8

LIST OF FIGURES ……………………………………………………………………….. 11

LIST OF TABLES ……………………………………………………………………….. 12

CHAPTER 1: INTRODUCTION ………………………………………………………. 13-16

1.1 INTRODUCTION TO AUTOMATIC CLASS ATTENDANCE ……………….. 13

1.2 MOTIVATION AND PROBLEM DEFINITION ……………………………… 14

1.3 PROJECT AIMS AND OBJECTIVES ………………………………………….. 15

1.4 SCOPE OF THE PROJECT ……………………………………………………… 15

1.5 JUSTIFICATION ………………………………………………………………… 16

CHAPTER 2: LITERATURE SURVEY ………………………………………………. 17-35

2.1 DIGITAL IMAGE PROCESSING ……………………………………………… 17

2.1.1 HUMAN PERCEPTION ………………………………………………… 17

2.1.2 MACHINE VISION APPLICATION …………………………………… 17

2.2 IMAGE REPRESENTATION IN A DIGITAL COMPUTER ……………………. 18

2.3 STEPS IN DIGITAL IMAGE PROCESSING ………………………………… 18

2.4 DEFINITION OF TERMS AND HISTORY ………………………………….. 19

2.4.1 FACE DETECTION …………………………………………………….. 19

2.4.2 FACE RECOGNITION …………………………………………………. 19

2.4.3 DIFFERENCE BETWEEN FACE DETECTION AND FACE RECOGNITION …. 20

2.5 FACE DETECTION …………………………………………………………………....... 20

2.6 HAAR-CASCADES …………………………………………………………………..... 20

2.7 HOW THE HAAR LIKE FEATURE WORK ……………………………………….. 22

2.7.1 CASCADED CLASSIFIER …………………………………………………... 23

2.7.2 INTEGRAL IMAGE ………………………………………………………….. 23

2.8 IMPROVING FACE DETECTION ………………………………………………… 25

2.8.1 SCALE INCREASE RATE …………………………………………………... 25

2.8.2 MINIMUM NEIGHBORS THRESHOLD ……………………………………... 25

2.8.3 CANNY PRUNING FLAG ………………………………………………….. 26

2.8.4 MINIMUM DETECTION SCALE ………………………………………….. 26

2.8.5 EXPECTED OUTPUT OF FACE DETECTOR ON TEST IMAGES ………. 26

2.9 WHY BIOMETRIC IDENTIFICATION? …………………………………………. 27

2.10 WHY FACE RECOGNITION IN LIEU OF OTHER BIOMETRIC METHODS? … 27

2.11 HISTORY OF FACE RECOGNITION …………………………………………… 28

2.12 FACE RECOGNITION CONCEPTS ……………………………………………… 29

2.13 FACE RECOGNITION-LBPH ALGORITHM ………………………………… 30

2.13.1 INTRODUCTION ………………………………………………………… 31

2.13.2 STEP-BY-STEP …………………………………………………………… 31

CHAPTER 3: METHODOLOGY AND DESIGN …………………………………………. 36-42

3.1 SYSTEM DESIGN ………………………………………………………………… 36

3.2 GENERAL OVERVIEW …………………………………………………………… 36

3.3 TRAINING SET MANAGER SUB SYSTEM ……………………………………... 37


3.4 FACE RECOGNITION SUBSYSTEM …………………………………………….. 37

3.5 SYSTEM ARCHITECTURE ………………………………………………………. 37

3.6 FUNCTIONS OF TWO SUB-SYSTEM …………………………………………… 38

3.7 FULL SYSTEMS LOGICAL DESIGN …………………………………………… 39

3.8 FEASIBILITY STUDY …………………………………………………………… 39

3.9 USE CASE DIAGRAM …………………………………………………………. 40

3.9.1 THE USER STORY ………………………………………………………… 40

3.10 PROJECT PLAN ………………………………………………………………… 42

CHAPTER 4: FACILITIES REQUIRED FOR PROPOSED WORK ……………………. 43-43

4.1 HARDWARE REQUIREMENTS ………………………………………………… 43

4.2 SOFTWARE REQUIREMENTS ………………………………………………… 43

CHAPTER 5: RESULTS AND ANALYSIS ……………………………………………… 44-47

5.1 USER INTERFACE OF THE SYSTEM ………………………………………….. 44

5.2 TRAINING SET COLLECTION …………………………………………………. 44

5.3 FACE RECOGNIZER ……………………………………………………………. 46

5.4 ATTENDANCE SHEET …………………………………………………………. 47

CHAPTER 6: CONCLUSION AND RECOMMENDATION …………………………… 48-48

REFERENCES …………………………………………………………………………….. 49

APPENDIX ………………………………………………………………………………… 50-58

REVIEW PAPER …………………………………………………………………………… 50

CODES ……………………………………………………………………………………… 55

• FRONTPAGE.PY ………………………………………………………….. 55
• RECOGNISER.PY ………………………………………………………… 56
• DATASET_CAPTURE.PY ……………………………………………….. 57
• TRAINING_DATASET.PY ………………………………………………. 58
LIST OF FIGURES

Page
2.1 STEP IN DIGITAL IMAGE PROCESSING 19
2.2 HAAR LIKE FEATURE 21
2.3 HAAR LIKE FEATURE WITH DIFFERENT SIZE AND VARIATION 21
2.4 HOW THE HAAR LIKE FEATURE CAN BE USED TO SCALE THE EYES 22
2.5 SEVERAL CLASSIFIERS COMBINED TO ENHANCE FACE DETECTION 23
2.6 PIXEL COORDINATES OF AN INTEGRAL IMAGE 23
2.7 INTEGRAL IMAGE CALCULATION 24
2.8 VALUES OF THE INTEGRAL IMAGE ON A RECTANGLE 24
2.9 LENA’S IMAGE SHOWING LIST OF RECTANGLES 25
2.10 EXPECTED RESULT OF FACE DETECTION 26
2.11 LBP OPERATION ON AN IMAGE 32
2.12 BILINEAR INTERPOLATION 33
2.13 EXTRACTING THE HISTOGRAMS 34
3.1 SEQUENCE OF EVENTS IN THE CLASS ATTENDANCE SYSTEM 36
3.2 THE LOGICAL DESIGN OF THE DESKTOP MODULE SUBSYSTEMS 37
3.3 BLOCK DIAGRAM SHOWING FUNCTIONS OF COMPONENTS 38
3.4 A LOGICAL DESIGN OF THE WHOLE SYSTEM 39
3.5 USE CASE DIAGRAM TO ILLUSTRATE SYSTEM BEHAVIOR 41
3.6 GANTT CHART SHOWING PROJECT PLAN 42
5.1 USER INTERFACE OF THE SYSTEM 44
5.2 MAINTAINING ID OF STUDENT 45
5.3 FACE DETECTION AND DATASET CAPTURE 45
5.4 TRAINING DATASET 46
5.5 FACE RECOGNITION AND MARKING ATTENDANCE IN ATTENDANCE SHEET 46
5.6 MAINTAINING ATTENDANCE IN SPREADSHEET 47

LIST OF TABLES

1.1 PROJECT AIM AND OBJECTIVES 15


2.1 BRIEF HISTORY OF EXISTING FACE RECOGNITION TECHNIQUES 28

CHAPTER 1

INTRODUCTION

1.1 Introduction to Automatic Class Attendance


Face recognition is an important application of image processing owing to its use in many fields. Identification of individuals in an organization for the purpose of attendance is one such application. Maintenance and monitoring of attendance records play a vital role in analysing the performance of any organization. The purpose of developing an attendance management system is to computerize the traditional way of taking attendance. An automated attendance management system performs the daily activities of attendance marking and analysis with reduced human intervention. The prevalent techniques and methodologies for detecting and recognizing faces fail to overcome issues such as scaling, pose, illumination variations, rotation and occlusion. The proposed system aims to overcome the pitfalls of existing systems and provides features such as detection of faces, extraction of features, detection of extracted features, and analysis of students' attendance. The system integrates techniques such as image contrast, integral images, colour features and a cascading classifier for feature detection. The system provides increased accuracy due to the use of a large number of facial features (shape, colour, LBP, wavelet, auto-correlation). Better accuracy is attained because the system takes into account the changes that occur in a face over a period of time and employs suitable learning algorithms.
In face detection and recognition systems, the flow starts by detecting and recognising frontal faces from an input device such as a mobile phone camera. It has been shown that students engage better during lectures only when there is effective classroom control, so the need for high-level student engagement is very important. An analogy can be made with that of pilots, as described by Mundschenk et al. (2011, p. 101): “Pilots need to keep in touch with an air traffic controller, but it would be annoying and unhelpful if they called in every 5 minutes.” In the same way, students need to be continuously engaged during lectures, and one of the ways to do this is to recognise and address them by their names. A system like this will therefore improve classroom control. In our own experience of teaching, calling a student by his or her name gives more control of the classroom, and this draws the attention of the other students to engage during lectures.

Face detection and recognition are not new to the society we live in. The capacity of the human mind to recognize particular individuals is remarkable, and it is amazing how the human mind can persist in identifying certain individuals through the passage of time, despite slight changes in appearance.

Anthony (2014, p. 1) reports that the remarkable ability of the human mind to generate near-positive identification of images and faces has drawn considerable attention from researchers, who have invested time in finding algorithms that replicate effective face recognition on electronic systems.

Wang et al. (2015, p. 318) state that “the process of searching for a face is called face detection. Face detection is to search for faces with different expressions, sizes and angles in images with complicated lighting and backgrounds, and to feed back the parameters of the face.”

Face recognition processes images and identifies one or more faces in an image by analysing patterns and comparing them. This process uses algorithms which extract features and compare them to a database to find a match. Furthermore, in one of the most recent pieces of research, Nebel (2017, p. 1) suggests that DNA techniques could transform facial recognition technology: video analysis software can be improved by borrowing from advances in DNA sequence analysis, treating a video as a scene that evolves the same way a DNA sequence does, in order to detect and recognize human faces.

1.2 Motivation and Problem Definition:


This project is being carried out due to concerns that have been raised about the methods lecturers use to take attendance during lectures. The use of clickers, ID card swiping and manually writing names on a sheet of paper as ways of tracking student attendance has prompted this project. This is not in any way to criticize the various methods used for student attendance, but to build a system that will detect the number of faces present in a classroom as well as recognize them. A teacher will also be able to tell whether a student has been honest: the methods mentioned above can be used by anyone to mark attendance records, but with a face detection and recognition system in place it will be easy to tell whether a student is actually present in the classroom or not.

This system will not only improve classroom control during lectures; it will also detect faces for student attendance purposes. We will use Python and OpenCV to build and implement this system.

1.3 Project Aims and Objectives:

The aim and objectives of this project were acquired after meeting with the client, and are summarized in Table 1.1.

Table 1.1. Project aim and objectives

AIM: To develop a prototype that will facilitate classroom control and attendance by face detection and recognition of students' faces in a digital image taken by a mobile phone camera.

OBJECTIVES:

 The system should be able to detect students' frontal faces in a classroom within 30% accuracy.

 The system should be able to automatically reveal the number of students present on a GUI.

 The system should recognise students stored on a database of faces by matching them to images on the database, with an accuracy within 30%.

 The system should be able to match detected student faces cropped from an image to those on a database on the system.

 The system should be able to process an image within 10 minutes, so as to achieve the objective of recognition by the end of a lecture, i.e. 5 names per hour per lecture.

 The algorithm implemented for the system's functionality will achieve system accuracy within 20%.

 The positive prediction should be within 20%.

 The system designed will be user friendly, with a Graphical User Interface that serves as access to the functionalities of the system.

1.4 Scope of the project.

We are setting up to design a system comprising two modules. The first module (face detector) is
a mobile component, which is basically a camera application that captures student faces and stores
them in a file using computer vision face detection algorithms and face extraction techniques. The
second module is a desktop application that does face recognition of the captured images (faces) in
the file, marks the students register and then stores the results in a database for future analysis.

1.5 Justification.

This project serves to automate the prevalent traditional methods of marking student attendance in classrooms, which are tedious and time-wasting. The use of automatic attendance through face detection and recognition will increase the effectiveness of attendance monitoring and management.

This method could also be extended for use in examination halls to curb cases of impersonation, as the system will be able to single out impostors, who will not have been captured during the enrollment process. Applications of face recognition are spreading widely in areas such as criminal identification, security systems, and image and film processing. The system could also find application in all authorized-access facilities.

CHAPTER 2

LITERATURE REVIEW

2.1 Digital Image Processing.


Digital Image Processing is the processing of images which are digital in nature by a digital computer. Digital image processing techniques are motivated by three major applications:

 Improvement of pictorial information for human perception


 Image processing for autonomous machine application
 Efficient storage and transmission.

2.1.1 Human Perception

This application employs methods capable of enhancing pictorial information for human interpretation and analysis. Typical applications include noise filtering, content enhancement (mainly contrast enhancement or deblurring) and remote sensing.

2.1.2 Machine Vision Applications

Here, the interest is in procedures for extracting image information suitable for computer processing. Typical applications include:

 Industrial machine vision for product assembly and inspection.

 Automated target detection and tracking.

 Fingerprint recognition.

 Machine processing of aerial and satellite imagery for weather prediction and crop
assessment.
Facial detection and recognition falls within the machine vision application of digital image
processing.

2.2 Image Representation in a Digital Computer.

An image is a 2-dimensional light intensity function

f(x, y) = r(x, y) × i(x, y)

where r(x, y) is the reflectivity of the surface at the corresponding image point, and i(x, y) represents the intensity of the incident light.

A digital image f(x, y) is discretized both in spatial co-ordinates by grids and in brightness by quantization. Effectively, the image can be represented as a matrix whose row and column indices specify a point in the image, and whose element value identifies the grey-level value at that point. These elements are referred to as pixels or pels.

Typical image sizes used in image processing applications are 256 × 256, 640 × 480 or 1024 × 1024 pixels. Quantization of these matrix pixels is done at 8 bits for black-and-white images and 24 bits for coloured images (because of the three colour planes Red, Green and Blue, each at 8 bits).
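To make the matrix view concrete, the short Python/OpenCV sketch below (the file name is an assumed example, not part of the project) loads an image and inspects its dimensions and bit depth:

import cv2

# A digital image is just a matrix (assumed input file "sample.jpg";
# cv2.imread returns None if the file is missing).
img = cv2.imread("sample.jpg")                # colour: H x W x 3, 8 bits per plane
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # grayscale: H x W, 8 bits per pixel

print(img.shape, img.dtype)    # e.g. (480, 640, 3) uint8 -> 24-bit colour
print(gray.shape, gray.dtype)  # e.g. (480, 640) uint8 -> 256 grey levels
print(gray[0, 0])              # grey-level value of the pixel at row 0, column 0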

2.3 Steps in Digital Image Processing.


Digital image processing involves the following basic tasks:

 Image Acquisition - An imaging sensor and the capability to digitize the signal produced
by the sensor.
 Pre-processing – Enhances the image quality, filtering, contrast enhancement etc.

 Segmentation – Partitions an input image into constituent parts of objects.

 Description/feature Selection – extracts the description of image objects suitable for


further computer processing.
 Recognition and Interpretation – Assigning a label to the object based on the information
provided by its descriptor. Interpretation assigns meaning to a set of labelled objects.

 Knowledge Base – This helps for efficient processing as well as inter module
cooperation.

Figure 2-1. A diagram showing the steps in digital image processing

2.4 Definition of Terms and History

2.4.1 Face Detection

Face detection is the process of identifying and locating all the faces present in a single image or video, regardless of their position, scale, orientation, age and expression. Furthermore, the detection should be irrespective of extraneous illumination conditions and of the image and video content.

2.4.2 Face Recognition

Face Recognition is a visual pattern recognition problem, where the face, represented as a
three dimensional object that is subject to varying illumination, pose and other factors,
needs to be identified based on acquired images.

Face Recognition is therefore simply the task of identifying an already detected face as a
known or unknown face and in more advanced cases telling exactly whose face it is.

2.4.3 Difference between Face Detection and Face Recognition

Face detection answers the question “Where is the face?” It identifies an object as a face and locates it in the input image. Face recognition, on the other hand, answers the question “Who is this?” or “Whose face is it?” It decides whether the detected face belongs to someone known or unknown, based on the database of faces it uses to validate the input image.

It can therefore be seen that the face detector's output (the detected face) is the input to the face recognizer, and the face recognizer's output is the final decision, i.e. face known or face unknown.

2.5 Face Detection

A face detector has to tell whether an image of arbitrary size contains a human face and, if so, where it is.

Face detection can be performed based on several cues: skin colour (for faces in colour images and videos), motion (for faces in videos), facial/head shape, facial appearance, or a combination of these parameters. Most face detection algorithms are appearance-based, without using other cues.

An input image is scanned at all possible locations and scales by a sub-window. Face detection is posed as classifying the pattern in the sub-window as either a face or a non-face. The face/non-face classifier is learned from face and non-face training examples using statistical learning methods.

Most modern algorithms are based on the Viola-Jones object detection framework, which is based on Haar cascades.

2.6 Haar – Cascades.

Haar-like features are rectangular patterns in data. A cascade is a series of Haar-like features that are combined to form a classifier. A Haar wavelet is a mathematical function that produces square wave output.

Figure 2-2. Haar like Features

Figure 2.2 shows Haar like features, the background of a template like (b) is painted grey to
highlight the pattern’s support. Only those pixels marked in black or white are used when the
corresponding feature is calculated.

Since no objective distribution can describe the actual prior probability for a given image to
have a face, the algorithm must minimize both the false negative and false positive rates in order
to achieve an acceptable performance. This then requires an accurate numerical description of
what sets human faces apart from other objects. Characteristics that define a face can be
extracted from the images with a remarkable committee learning algorithm called Adaboost.
Adaboost (Adaptive boost) relies on a committee of weak classifiers that combine to form a
strong one through a voting mechanism. A classifier is weak if, in general, it cannot meet a
predefined classification target in error terms. The operational algorithm to be used must also
work with a reasonable computational budget. Such techniques as the integral image and
attentional cascades have made the Viola-Jones algorithm highly efficient: fed with a real time
image sequence generated from a standard webcam or camera, it performs well on a standard
PC.

Figure 2-3. Haar-like features with different sizes and orientation

The size and position of a pattern’s support can vary provided its black and white rectangles
have the same dimension, border each other and keep their relative positions. Thanks to this
constraint, the number of features one can draw from an image is somewhat manageable: a 24
× 24 image, for instance, has 43200, 27600, 43200, 27600 and 20736 features of category (a),
(b), (c), (d) and (e) respectively as shown in Figure 2.3, hence 162336 features in all.

In practice, five patterns are considered. The derived features are assumed to hold all the
information needed to characterize a face. Since faces are large and regular by nature, the use
of Haar-like patterns seems justified.

2.7 How the Haar – like Features Work.


A scale is chosen for the features, say 24 × 24 pixels. The feature is then slid across the image. The average pixel values under the white area and the black area are then computed. If the difference between the areas is above some threshold, the feature matches.

In face detection, since the eyes are of a different colour tone from the nose, the Haar feature (b) from Figure 2.3 can be scaled to fit that area, as shown below.

Figure 2-4. How the Haar like feature of Figure 2.3 can be used to scale the eyes

One Haar feature is, however, not enough, as several image patterns could match it (like the zip drive and the white areas in the background of the image in Figure 2.4). A single classifier therefore isn't enough to match all the features of a face; it is called a “weak classifier”. Haar cascades, the basis of the Viola-Jones detection framework, therefore consist of a series of weak classifiers whose accuracy is at least 50%. If an area passes a single classifier, it moves on to the next weak classifier, and so on; otherwise, the area does not match.

2.7.1 Cascaded Classifier

Figure 2-5. Several classifiers combined to enhance face detection

From Figure 2.5, a 1-feature classifier achieves a 100% face detection rate with about a 50% false positive rate. A 5-feature classifier achieves a 100% detection rate and a 40% false positive rate (20% cumulative). A 20-feature classifier achieves a 100% detection rate with a 10% false positive rate (2% cumulative). Combining several weak classifiers improves the accuracy of detection.

A training algorithm called Adaboost, short for adaptive boosting, is used to combine a series of weak classifiers into a strong classifier. Adaboost tries out multiple weak classifiers over several rounds, selecting the best weak classifier in each round and combining the best weak classifiers to create a strong classifier. Adaboost can even use classifiers that are consistently wrong, by reversing their decision. In design and development, it can take weeks of processing time to determine the final cascade sequence.

After the final cascade had been constructed, there was a need for a way to quickly compute the
Haar features i.e. compute the differences in the two areas. The integral image was instrumental
in this.

2.7.2 Integral Image

The integral image, also known as the “summed area table”, was developed in 1984 and came into widespread use in 2001 with the Haar cascades. A summed area table is created in a single pass. This makes the Haar cascades fast, since the sum of any region in the image can be computed using a single formula.

Figure 2-6. Pixel coordinates of an integral image

Figure 2-7 Integral image calculation.

The integral image computes a value at each pixel (x, y) as is shown in Figure 2.6, that is the
sum of the pixel values above and to the left of (x, y), inclusive. This can quickly be computed
in one pass through the image.

Let A, B, C and D be the values of the integral image at the corners of a rectangle, as shown in Figure 2.7.

The sum of the original image values within the rectangle can be computed as

Sum = A − B − C + D     (2.1)

Only three additions are required for any size of rectangle. This face detection approach minimizes computation time while achieving high detection accuracy. It is now used in many areas of computer vision.

Figure 2-8. Values of the integral image on a rectangle
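As an illustration of Equation 2.1, the NumPy sketch below (not part of the project code) builds a padded integral image and sums an arbitrary rectangle with four look-ups; the corner naming follows the padded-array convention rather than the exact labelling of Figure 2.7:

import numpy as np

# Toy image standing in for a grayscale frame.
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Padded integral image: ii[y, x] = sum of img[0:y, 0:x].
ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(y0, x0, y1, x1):
    # Sum of img[y0:y1, x0:x1] from four corner look-ups (cf. Equation 2.1).
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

assert rect_sum(10, 20, 50, 60) == img[10:50, 20:60].sum()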

2.8 Improving Face Detection
Face detection can be improved by tuning the detectors parameters to yield satisfactory results.
The parameters to be adjusted are explained as follows.

2.8.1 Scale Increase Rate.

The scale increase rate specifies how quickly the face detector function should increase the scale for face detection with each pass it makes over an image. Setting the scale increase rate high makes the detector run faster by making fewer passes, but if it is set too high it may jump too quickly between scales and miss faces. The default increase rate in OpenCV is 1.1, which implies that the scale increases by a factor of 10% on each pass. The parameter typically assumes a value of 1.1, 1.2, 1.3 or 1.4.

2.8.2 Minimum Neighbours Threshold

The minimum neighbours threshold sets the cut-off level for discarding or keeping rectangle groups as either faces or not. This is based on the number of raw detections in the group, and its value ranges from zero to four.

When the face detector is called, behind the scenes each positive face region generates many hits from the Haar detector, as in Figure 2.9. The face region itself generates a large cluster of rectangles that to a large extent overlap. Isolated detections are usually false detections and are discarded, while the multiple face-region detections are merged into a single detection. The face detection function does all this before returning the list of detected faces. The merge step groups rectangles that contain a large number of overlaps and then finds the average rectangle for the group. It then replaces all the rectangles in the group with that average rectangle.

Figure 2-9. Lena’s image showing the list of rectangles


2.8.3 Canny Pruning Flag

The Canny pruning flag is a detection parameter that, when set, enables the face detector to skip regions of the image that are unlikely to contain a face. The regions to be skipped are identified by running an edge detector (the Canny edge detector) over the image before running the face detector. This greatly reduces computational overhead and eliminates false positives. Whether to set the flag is usually a trade-off between speed and detecting more faces.

2.8.4 Minimum Detection Scale

This detection parameter sets the size of the smallest face to search for in the input image. The most commonly used size is 24 × 24. Depending on the resolution of the input image, the smallest size may be a tiny portion of it; searching at that scale would not be helpful, as the detection would take up Central Processing Unit (CPU) cycles that could be utilized for other purposes.
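The four parameters of Sections 2.8.1 to 2.8.4 map directly onto the detectMultiScale call in OpenCV. The sketch below is a minimal illustration assuming the opencv-python package (which ships the pretrained Haar cascades) and a hypothetical input file:

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(cv2.imread("class_photo.jpg"), cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,                     # scale increase rate (Section 2.8.1)
    minNeighbors=4,                      # minimum neighbours threshold (Section 2.8.2)
    flags=cv2.CASCADE_DO_CANNY_PRUNING,  # Canny pruning flag (Section 2.8.3)
    minSize=(24, 24))                    # minimum detection scale (Section 2.8.4)

for (x, y, w, h) in faces:               # draw a box around each detected face
    cv2.rectangle(gray, (x, y), (x + w, y + h), 255, 2)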

2.8.5 Expected Output of Face Detector on Test Images.

Figure 2.10 shows the expected results after a successful face detection using the Viola-Jones face classifier.

Figure 2-10. Expected result on images from the CMU – MIT faces database

2.9 Why Biometric Identification?
Human identification is a basic societal requirement for proper functioning of a nation. By
recognizing a face, you could easily detect a stranger or identify a potential breach of security.

In today's larger, more complex society, with its growing number of electronic interactions, it is not that simple, so it becomes even more important to have an electronic verification of a person's identity.

Until recently, electronic verification was done based either on something the person had in their possession, like an ID card, or on something they knew, like a password. The major problem is that these forms of electronic identification are not very secure: they can be faked by hackers, maliciously given away, stolen or even lost.

Therefore, the ultimate form of electronic verification of a person's identity is biometrics, that is, using a physical attribute of a person to make an affirmative identification. Attributes such as a person's fingerprint, iris or face cannot be lost, given away, stolen or forged by hackers.

2.10 Why Face Recognition in lieu of other Biometric Methods?


While traditional biometric methods of identification such as fingerprints, iris scans and voice recognition are viable, they are not always best suited to every setting. In applications such as surveillance and monitoring of public places, such methods would fail because they are time-consuming and inefficient, especially in situations involving many people. The cost of implementation is also a hindrance, as some components often have to be imported, which makes the setup of the system expensive.

In general, we cannot ask everyone to line up and put a finger on a slide or an eye in front of a camera, or do something similar. Hence the intuitive need for an affordable and mobile system, much like the human eye, that can identify a person.

2.11 History of Face Recognition
Table 2.1. A table showing the brief history of the existing face recognition techniques

Year Authors Method

1973 Kanade First Automated System

1987 Sirovich & Kirby Principal Component Analysis (PCA)

1991 Turk & Pentland Eigenface

1996 Etemad & Chellapa Fisher face

2001 Viola & Jones Adaboost + Haar Cascade

2007 Naruniec & Skarbek Gabor Jets

Takeo Kanade, a Japanese computer scientist and one of the world's foremost researchers in computer vision, came up with a program which extracted facial feature points (such as the nose, eyes, ears and mouth) from photographs. These were then compared to reference data.

A major milestone that reinvigorated research was the application by Sirovich and Kirby in 1987 of Principal Component Analysis, a standard linear algebra technique, to the face recognition problem. They showed that fewer than one hundred values were required to accurately code a suitably aligned and normalized face image.

Turk and Pentland discovered that, when using the eigenfaces technique, the residual error could be used to detect faces in images, a discovery that enabled reliable, real-time automatic face recognition systems.

Although this approach was somewhat constrained by environmental factors, it nonetheless created significant interest in furthering the development of automated face recognition techniques.

The Viola-Jones Adaboost and Haar cascade method brought together new algorithms and insights to construct a framework for robust and extremely rapid visual detection. This system was most clearly distinguished from previous approaches by its ability to detect faces extremely rapidly: operating on 384 × 288-pixel images, faces were detected at 15 frames per second on a 700 MHz Intel Pentium 3 processor.

All identification or authentication technologies operate using the following four stages:

• Capture: a physical or behavioural sample is captured by the system during enrolment and also during the identification or verification process.
• Extraction: unique data is extracted from the sample and a template is created.
• Comparison: the template is then compared with a new sample.
• Match/non-match: the system decides whether the features extracted from the new sample are a match or a non-match.

2.12 Face Recognition Concepts


Although different approaches have been tried by several groups of people across the world to
solve the problem of face recognition, no particular technique has been discovered that yields
satisfactory results in all circumstances.

The different approaches of face recognition for still images can be categorized in to three main
groups namely:

 Holistic Approach – In this, the whole face region is taken as an input in face detection
system to perform face recognition.
 Feature-based Approach – where local features of the face, such as the nose and eyes, are segmented and then fed to the face detection system to ease the task of face recognition.
 Hybrid Approach – In hybrid approach, both the local features and the whole face are
used as input to the detection system, this approach is more similar to the behaviour of
human beings in recognizing faces.

There are two main types of face Recognition Algorithms:

 Geometric – this algorithm focuses on the distinguishing features of a face.

 Photometric – a statistical approach that distils an image into values and compares the values with templates to eliminate variances.

The Most Popular algorithms are:

1. Principal Component Analysis based Eigenfaces.


2. Linear Discriminant Analysis.

3. Elastic Bunch Graph Matching using the Fisherface algorithm.
4. The Hidden Markov Model
5. Neuronal Motivated Dynamic Link Matching.
6. Local Binary Patterns Histograms (LBPH)

2.13 Face Recognition: LBPH Algorithm

Human beings perform face recognition automatically every day and practically with no effort.

Although it sounds like a very simple task for us, it has proven to be a complex task for a computer, as there are many variables that can impair the accuracy of the methods, for example illumination variation, low resolution and occlusion, among others.

In computer science, face recognition is basically the task of recognizing a person based on their facial image. It has become very popular in the last two decades, mainly because of the new methods developed and the high quality of current videos and cameras.

Note that face recognition is different from face detection:

 Face Detection: has the objective of finding the faces (location and size) in an image and possibly extracting them to be used by the face recognition algorithm.

 Face Recognition: with the facial images already extracted, cropped, resized and usually
converted to grayscale, the face recognition algorithm is responsible for finding
characteristics which best describe the image.

The face recognition systems can operate basically in two modes:

 Verification or authentication of a facial image: it basically compares the input facial image with the facial image of the user who is requesting authentication. It is basically a 1x1 comparison.

 Identification or facial recognition: it basically compares the input facial image with all
facial images from a dataset with the aim to find the user that matches that face. It is basically
a 1xN comparison.

Each method has a different approach to extracting the image information and performing the matching with the input image. However, the Eigenfaces and Fisherfaces methods have a similar approach, as do the SIFT and SURF methods.

2.13.1 Introduction
Local Binary Pattern (LBP) is a simple yet very efficient texture operator which labels the pixels
of an image by thresholding the neighbourhood of each pixel and considers the result as a binary
number.

It was first described in 1994 and has since been found to be a powerful feature for texture classification. It has further been determined that when LBP is combined with the histograms of oriented gradients (HOG) descriptor, detection performance improves considerably on some datasets.

Using the LBP combined with histograms we can represent the face images with a simple data
vector.
As LBP is a visual descriptor it can also be used for face recognition tasks, as can be seen in the
following step-by-step explanation.

2.13.2 Step-by-Step

Now that we know a little more about face recognition and the LBPH, let’s go further and see the
steps of the algorithm:

1. Parameters: the LBPH uses 4 parameters:

 Radius: the radius is used to build the circular local binary pattern and represents the
radius around the central pixel. It is usually set to 1.
 Neighbours: the number of sample points to build the circular local binary pattern.
Keep in mind: the more sample points you include, the higher the computational cost.
It is usually set to 8.
 Grid X: the number of cells in the horizontal direction. The more cells, the finer the
grid, the higher the dimensionality of the resulting feature vector. It is usually set to 8.
 Grid Y: the number of cells in the vertical direction. The more cells, the finer the grid,
the higher the dimensionality of the resulting feature vector. It is usually set to 8.
2. Training the Algorithm: First, we need to train the algorithm. To do so, we need to use a
dataset with the facial images of the people we want to recognize. We need to also set an ID
(it may be a number or the name of the person) for each image, so the algorithm will use this
information to recognize an input image and give you an output. Images of the same person
must have the same ID. With the training set already constructed, let’s see the LBPH
computational steps.

3. Applying the LBP operation: The first computational step of the LBPH is to create an intermediate image that describes the original image in a better way, by highlighting the facial characteristics. To do so, the algorithm uses the concept of a sliding window, based on the parameters radius and neighbours.

The image below shows this procedure:

Figure 2-11 LBP operation on an image

Based on the image above, let’s break it into several small steps so we can understand it easily:

 Suppose we have a facial image in grayscale.


 We can get part of this image as a window of 3x3 pixels.
 It can also be represented as a 3x3 matrix containing the intensity of each pixel (0~255).
 Then, we need to take the central value of the matrix to be used as the threshold.
 This value will be used to define the new values from the 8 neighbours.
 For each neighbour of the central value (the threshold), we set a new binary value: 1 for values equal to or higher than the threshold, and 0 for values lower than the threshold.
 Now, the matrix will contain only binary values (ignoring the central value). We need to
concatenate each binary value from each position from the matrix line by line into a new

binary value (e.g. 10001101). Note: some authors use other approaches to concatenate the
binary values (e.g. clockwise direction), but the final result will be the same.
 Then, we convert this binary value to a decimal value and set it to the central value of the
matrix, which is actually a pixel from the original image.
 At the end of this procedure (the LBP procedure), we have a new image which better represents the characteristics of the original image.

Note: the LBP procedure was later expanded to use different numbers of radii and neighbours; this is called Circular LBP.

FIGURE 2-12 Bilinear interpolation

Circular LBP can be computed using bilinear interpolation: if a sample point lies between pixels, the values of the 4 nearest pixels (2x2) are used to estimate the value of the new data point.
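A minimal NumPy sketch of the basic (non-circular) 3x3 LBP operation just described, assuming a clockwise reading order for the neighbours:

import numpy as np

def lbp_code(window):
    # LBP code for one 3x3 grayscale window.
    center = window[1, 1]
    # The 8 neighbours, read clockwise from the top-left corner.
    neighbours = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                  window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    bits = ['1' if n >= center else '0' for n in neighbours]
    return int(''.join(bits), 2)      # e.g. '10110101' -> 181

window = np.array([[90, 12, 64],
                   [200, 50, 73],
                   [10, 80, 45]], dtype=np.uint8)
print(lbp_code(window))               # 181: the new value of the central pixel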

4. Extracting the Histograms: Now, using the image generated in the last step, we can use
the Grid X and Grid Y parameters to divide the image into multiple grids, as can be seen
in the following image:

Figure 2-13 Extracting the Histograms

Based on the image above, we can extract the histogram of each region as follows:

 As we have an image in grayscale, each histogram (from each grid) will contain only 256 positions (0~255), representing the occurrences of each pixel intensity.

 Then, we need to concatenate the histograms to create a new, bigger one. Supposing we have 8x8 grids, we will have 8x8x256 = 16,384 positions in the final histogram. The final histogram represents the characteristics of the original image.
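This step can be sketched in a few lines of NumPy (the 8x8 grid above is assumed); np.array_split divides the LBP image into the grid cells and np.bincount builds each 256-bin histogram:

import numpy as np

lbp_image = np.random.randint(0, 256, (200, 200), dtype=np.uint8)  # stand-in LBP image
grid_x = grid_y = 8

rows = np.array_split(lbp_image, grid_y, axis=0)
histograms = [np.bincount(cell.ravel(), minlength=256)
              for row in rows
              for cell in np.array_split(row, grid_x, axis=1)]

feature_vector = np.concatenate(histograms)
print(feature_vector.shape)          # (16384,) = 8 x 8 x 256 positions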

5. Performing the face recognition: In this step, the algorithm has already been trained. Each histogram created is used to represent one image from the training dataset. So, given an input image, we perform the steps again for this new image and create a histogram which represents it.

 So, to find the image that matches the input image, we just need to compare two histograms and return the image with the closest histogram.

 We can use various approaches to compare the histograms (i.e. to calculate the distance between two histograms), for example Euclidean distance, chi-square or absolute value. In this example, we can use the well-known Euclidean distance:

d(H1, H2) = sqrt( Σ_i (H1(i) − H2(i))² )

 So the algorithm output is the ID of the image with the closest histogram. The algorithm should also return the calculated distance, which can be used as a 'confidence' measurement. Note: don't be fooled by the name 'confidence'; lower confidence values are better, because they mean the distance between the two histograms is smaller.

 We can then use a threshold on the 'confidence' to automatically estimate whether the algorithm has correctly recognized the image. We can assume that the algorithm has successfully recognized the face if the confidence is lower than the defined threshold.

 LBPH is one of the easiest face recognition algorithms.

 It can represent local features in the images.

 It is possible to get great results (mainly in a controlled environment).

 It is robust against monotonic grey scale transformations.

 It is provided by the OpenCV library (Open Source Computer Vision Library).
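Since LBPH is provided by OpenCV, the whole train-and-predict cycle can be sketched briefly. The example below assumes the opencv-contrib-python package; the random arrays are placeholders for real, equal-sized grayscale face crops, and the threshold is an assumed tuning value:

import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create(
    radius=1, neighbors=8, grid_x=8, grid_y=8)   # the four LBPH parameters

# Placeholder training data: two students with two face samples each.
faces = [np.random.randint(0, 256, (100, 100), dtype=np.uint8) for _ in range(4)]
labels = np.array([1, 1, 2, 2], dtype=np.int32)
recognizer.train(faces, labels)

label, confidence = recognizer.predict(faces[0])
# Lower 'confidence' (histogram distance) means a closer match.
if confidence < 70:                              # assumed threshold
    print("Recognized student ID:", label)
else:
    print("Unknown face")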

Chapter 3
METHODOLOGY AND DESIGN
3.1 System Design
In this design, several related components in terms of functionality have been grouped to form
sub-systems which then combine to make up the whole system. Breaking the system down to
components and sub-systems informs the logical design of the class attendance system.

3.2 General Overview


The flow diagram of Figure 3.1 depicts the system's operation.

Figure 3-1. Sequence of events in the class attendance system.

From Figure 3.1, it can be observed that most of the components utilized are similar (the image acquisition component for browsing for input images, the face detector, and the faces database for storing the face-label pairs); they are simply employed at different stages of the face recognition process.

3.3 Training Set Manager Sub System


The logical design of the training set management subsystem will consist of an image acquisition component, a face detection component and a training set management component. Together, these components interact with the faces database in order to manage the training set. These are implemented in a desktop application.

3.4 Face Recognizer Sub System.


The logical design of the face recognizer will consist of the image acquisition component, the face recognizer and the face detection component, all working with the faces database. The image acquisition and face detection components are the same as those in the training set manager subsystem, as their functionality is the same. The only difference is the face recognizer component and its user interface controls. This will load the training set again so that it trains the recognizer on the faces added, and show the calculated eigenfaces and average face. It should then show the recognized face in a picture box.

3.5 System Architecture.


The figure below shows the logical design and implementation of the desktop module subsystems.

Figure 3-2. The logical design of the Desktop Module Subsystems

3.6 Functions of the two Sub –Systems

The Training Set Manager connects to the faces database, loads the training set for display, and adds, deletes or updates faces in the training set. The Face Recognizer loads the training set, trains the recognizer on it, recognises the detected faces from the trained data, and shows the calculated average face and eigenfaces. Both share the Image Acquisition component (which gets the input image with the detected faces), the Face Detector component (which detects the faces present) and the Database of Faces (which contains the training set).

Figure 3-3. A block diagram showing the functions of the components

The functionalities of the components are depicted in the block diagram of Figure 3.3. The face recognizer system consists of two major components, i.e. the training set manager and the face recognizer. These two components share the faces database, the image acquisition component and the face detector component, as these are common to their functionality.

We will therefore partition the system into two subsystems and implement their detailed logical designs.

3.7 Full Systems Logical Design

The Training Set Manager and the Face Recognizer each contain an Image Acquisition component and a Face Detector component (which detects faces), and both connect to the shared Faces Database.

Figure 3-4. A logical design of the whole system

3.8 Feasibility Study

Three key considerations are involved in the feasibility analysis:


1. Economic Feasibility
2. Technical Feasibility
3. Social Feasibility
Considering the above, the feasibility of this project can be understood from the following points:
1. Reduces manual effort.
2. Keeps track of a student’s attendance correctly and gives the result.
3. Implementation of the camera and sensors makes this project fully automated.
4. Easy to be implemented in educational or commercial institutes.
5. Real time operations are done.
6. Images that are to be compared with the snaps taken by the camera can be easily stored
in the database.
7. On the basis of this method, results such as a defaulters list and each student's lecture-wise and total attendance (in percentage and count) can be calculated, and access to these results can be made available to teachers as well as students to keep track of their respective attendance.

3.9 Use Case Diagram:


A use case diagram is a representation of a set of descriptions of actions (use cases) that the system can perform in collaboration with an external actor (the user).

3.9.1 The User Story:


As a user, the client wants a system where he/she can load an image and the system will automatically detect the number of faces in the image. The client also requested that the system should have the option to capture an image using a mobile phone or an inbuilt webcam on a laptop. The system should be able to crop the faces in an image after detection and store them in a folder/dataset that will be used for recognition purposes in the second phase of the system. The system should also be able to automatically count the number of faces detected in the image.

As a user, the client requests that the second phase of the system be able to match faces stored in a dataset against input images which are either detected in the first phase or captured by an input device (camera).

The user will start the software used to build this system. The system will provide buttons the user can click to carry out the tasks requested. Because the system has two phases, the second phase will involve training on the images in the dataset that are to be used for recognition.

The proposed system behavior has been captured by the use case diagram in Figure 3.5 below.

Figure 3-5. Use case diagram illustrating system behavior


3.10 Project Plan
This project has been planned and followed using the Gantt chart below. The detailed layout of the plan is attached in Appendix A2 of this report. The detailed layout shows the phases of implementation using the DSDM methodology chosen for this project. The reader can follow these phases on the detailed layout to understand how iteratively this project has been developed.

[Gantt chart: timeline from 9 Oct to 27 Apr, showing start date and days to complete for each phase: Initiate Project Proposal; Prototyping and Initial…; Resource Implementation; Review and Evaluation…; Review and Evaluation…; Review work with…; Evaluation and Analysis of…; Review of Final…; Implementation new…; Review Project Report; Project Report Alongside…; Project Viva.]

Figure 3-6 Gantt chart showing the project plan

CHAPTER 4
Facilities required for proposed work

The project is developed in Python and is supported by a SQL database that stores user-specific
details.
4.1 Hardware Requirements:
● A standalone computer needs to be installed in the office room where the system is to be
deployed.
● A camera must be positioned in the office room to obtain the snapshots.
● Optimum Resolution: 512 by 512 pixels.
● Secondary memory to store all the images and database.

4.2 Software Requirements:

 OpenCV (Open Source Computer Vision), current version 3.4.6, is a library of programming
functions mainly aimed at real-time computer vision. Originally developed by Intel, it was
later supported by Willow Garage and then Itseez (which was in turn acquired by Intel). The
library is cross-platform and free for use under the open-source BSD license.

 PyCharm, current version 2019.1.1, is an integrated development environment (IDE) used in
computer programming, specifically for the Python language. It is developed by the Czech
company JetBrains. It provides code analysis, a graphical debugger, an integrated unit tester,
integration with version control systems (VCSes), and supports web development with
Django.

 TensorFlow 1.6 is a free and open-source software library for dataflow and differentiable
programming across a range of tasks. It is a symbolic math library, and is also used for
machine learning applications such as neural networks [5]. It is used for both research and
production at Google.

 Python version 3 or higher

 Windows 7 or above
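A quick sanity check of the environment can be run before using the system. This is a minimal sketch, assuming the packages above are installed under their standard import names:

import sys
import cv2
import tensorflow as tf

# print the interpreter and library versions for comparison with the list above
print("Python:", sys.version.split()[0])   # expected: 3.x
print("OpenCV:", cv2.__version__)          # expected: 3.4.x
print("TensorFlow:", tf.__version__)       # expected: 1.6.x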

CHAPTER 5
Results and Analysis

5.1 User Interface of the system.

Figure 5-1 User interface of the system

5.2 Training set collection


The faces database editor adds faces to the training set. The image is acquired from the panel as
shown in Figure 5.2. The region of interest (ROI), i.e. the face in the image, is then detected
automatically and marked by a light green rectangular box. We give the grayscale face extracted
from the image a face label and then add it to the training set.

Figure 5-2 Maintaining id of student

Figure 5-3 Face detection and dataset capture

Figure 5-4 Training dataset
5.3 The Face Recognizer.

The face recognizer compares the input face in the captured image with the faces captured during
enrolment. If there is a match, it retrieves the name associated with the input face.
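A minimal sketch of this matching step, assuming a trained LBPH model on disk and an id-to-name map built at enrolment:

import cv2

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')                 # model produced by the training step

names = {1: 'Avadhesh'}                                # hypothetical id-to-name map
face = cv2.imread('test_face.jpg', cv2.IMREAD_GRAYSCALE)  # grayscale ROI as in enrolment
id_, conf = recognizer.predict(face)
if conf < 50:                                          # lower LBPH distance = closer match
    print("Recognized:", names.get(id_, "Unknown"))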

Figure 5-5 Face recognition and marking attendance in attendance sheet

5.4 Attendance sheet

Figure 5-6 Maintaining attendance in spreadsheet

CHAPTER 6
Conclusion and Recommendations

It can be concluded that a reliable, secure, fast and efficient class attendance management
system has been developed, replacing the manual and unreliable system that preceded it. This
face detection and recognition system will save time, reduce the amount of work done by the
administration, and replace the stationery currently in use with already existing electronic
equipment.

There is no need for specialized hardware to install the system, as it only uses a computer and a
camera. The camera plays a crucial role in the working of the system, hence the image quality
and real-time performance of the camera must be tested, especially if the system is operated
from a live camera feed.

The system can also be used in permission-based systems and secure-access authentication
(restricted facilities) for access management, and in home video surveillance systems for
personal security or law enforcement.

The major threat to the system is spoofing. For future enhancements, anti-spoofing techniques
such as eye-blink detection could be utilized to differentiate live faces from static images in the
case where face detection is performed on images captured from the classroom. Given the
overall efficiency of the system, i.e. 83.1%, human intervention could be called upon to make
the system foolproof. A module could thus be included which lists all the unidentified faces so
that the lecturer is able to mark them manually.

Future work could also include adding several well-structured attendance registers for each class
and the capability to generate monthly attendance reports and automatically email them to the
appropriate staff for review.

REFERENCES

[1] P. Viola and M. Jones, “Rapid Object Detection using a Boosted Cascade of Simple Features,”
2001.
[2] V. Kumar and K. Raja, “Face Recognition Based Student Attendance System with OpenCV,”
2016.
[3] S. V and R. Swetha, “Attendance automation using face recognition biometric
authentication,” 2017.
[4] http://bitsearch.blogspot.com/2012/12/overview-of-viola-jones-detector-in.html
[5] https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework
[6] https://dokumen.tips/documents/face-recognition-based-student-attendance-system-
with-recognition-based-student.html
[7] https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258c3d6b
[8] Ahonen, Timo, Abdenour Hadid, and Matti Pietikainen. “Face description with local binary
patterns: Application to face recognition.” IEEE transactions on pattern analysis and
machine intelligence 28.12 (2006): 2037–2041
[9] Ojala, Timo, Matti Pietikainen, and Topi Maenpaa. “Multiresolution gray-scale and
rotation invariant texture classification with local binary patterns.” IEEE Transactions
on pattern analysis and machine intelligence24.7 (2002): 971–987
[10] Ahonen, Timo, Abdenour Hadid, and Matti Pietikäinen. “Face recognition with local
binary patterns.” Computer vision-eccv 2004 (2004): 469–481
[11] https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html#local-
binary-patterns-histograms
[12] http://www.scholarpedia.org/article/Local_Binary_Patterns
[13] Y.-Q. Wang, “An Analysis of the Viola-Jones Face Detection Algorithm,” Image Process. Line,
vol. 4, pp. 128–148, Jun. 2014.
[14] Y. Freund, R. Schapire, and N. Abe, “A short introduction to boosting,” J.-Jpn. Soc. Artif.
Intell., vol. 14, no. 771–780, p. 1612, 1999.
[15] P. Viola and M. J. Jones, “Robust real-time face detection,” Int. J. Comput. Vis., vol. 57, no. 2,
pp. 137–154, 2004.
[16] M. Fuzail, H. M. F. Nouman, M. O. Mushtaq, B. Raza, A. Tayyab, and M. W. Talib, “Face
Detection System for Attendance of Class’ Students.”
[17] Y. Freund and R. E. Schapire, “A desicion-theoretic generalization of on-line learning and
an application to boosting,” in Computational learning theory, 1995, pp. 23–37.

APPENDIX

REVIEW PAPER

Attendance System Using Machine Learning

Abstract— Daily attendance marking is a common and important activity in schools and colleges for checking
the performance of students. Maintaining attendance manually is a difficult process, especially for a large
group of students. Some automated systems developed to overcome these difficulties have drawbacks such as
cost, fake attendance, accuracy and intrusiveness. To overcome these drawbacks, there is a need for a smart
and automated attendance system. Traditional face recognition systems employ methods to identify a face
from the given input, but the results are not usually as accurate and precise as desired. The system described
in this paper aims to deviate from such traditional systems and introduce a new approach to identifying a
student using a face recognition system: the generation of a facial model. This paper describes the working
of the face recognition system that will be deployed as an automated attendance system in a classroom
environment.

Keywords— LBPH, Viola Jones, HAAR features, HOG features, OpenCV, Webcam

I. INTRODUCTION

Let us take an example of an application of the theory we are proposing here. Taking attendance in
schools and colleges is a waste of time and effort for both the students and the lecturers. Nowadays
biometrics is in greater use: fingerprint recognition, facial recognition, iris-scanning recognition,
voice recognition, signature recognition, etc. One of these biometric categories is face detection and
recognition. Based on the captured image we can handle security, safety and attendance, and
sometimes it is useful for decision-making as well. Mostly, this facial detection and recognition
decreases the manual work done by humans. The image is captured from a camera or CCTV
camera, or sometimes from a streaming video feed. From that offline or online data, we capture the
image and then apply the face detection techniques. Face detection means detecting the location
and presence of faces in images; in face detection we mostly look at the nose, hair, ears, mouth and
eyes, and also the different poses of faces in images. Of the many face detection techniques, a few
are the Viola-Jones face detection algorithm, Local Binary Patterns (LBP), AdaBoost for face
detection, and SMQT features with the SNoW classifier method. After applying the face detection
techniques, we detect the faces or objects in the image, crop that image, and apply a face
recognition technique. There are many ways to recognize faces: by applying HOG features, Haar
features, machine learning, deep learning and classification techniques; some other techniques are
also used for recognition of the faces. For recognition of faces we need training data sets. At
capture time, the camera image is checked against the database images. Face recognition of
different people is based on the related images of that person, so we need to take images before
face recognition. If the image is not in the database, then we store that image as a new person in
the database; the next time the same person appears in an image, the face is recognized, or else it is
taken as a new image and stored in the database, and the process repeats. In this paper we carry
out face recognition and detection, giving the results using MATLAB. This requires a system with a
high-end specification in order to get better results; it will not run on low-specification systems, so
it can work only with a small database and compare against the faces required.

II. LITERATURE SURVEY

In [15] the author surveys the different types of face detection for detecting faces in different
poses. Faces are detected in different patterns based on the technique; the basic pattern for
detecting a face is the nose, eyes, hair and ears, and sometimes the tone of the skin. Face detection
detects a face based on the location and presence of the face in the image. The different face
detection techniques discussed are the AdaBoost algorithm for face detection, the Viola-Jones face detection

algorithm, the SMQT features and SNoW classifier method, and Local Binary Patterns (LBP).
Each has advantages and disadvantages, which are discussed in that paper.

Xiang-Yu Li [16] proposed recognizing faces using HOG features and PCA algorithms. By
applying the recognition algorithm to the cropped face images, the similarity between the taken
image and the database image is obtained; in that paper the PCA algorithm is used for face
detection and recognition. Arun Katara [17] shows face recognition of different persons or
students; from the recognition, attendance is uploaded to a database using face detection and
recognition of the students or workers. In this way manual human work is decreased and the
attendance process based on faces is done automatically.

In [14] the authors considered a system based on real-time face recognition which is fast and
which needs improvisation of images in various lighting environments.

III. METHODOLOGY

A face recognition system encompasses three main phases: face detection, feature extraction and
face recognition.
1) Face Detection: faces are acquired and localised in an image using the Viola-Jones
algorithm; in preprocessing, human faces are separated from the other objects present in the
image.
2) Feature Extraction: from the detected face we extract the features through the Local Binary
Pattern Histogram. In LBPH, we first compute the local binary pattern images and then
create the histograms.
3) Face Recognition: the extracted features are fed to a classifier which recognizes or classifies
them using a machine learning algorithm. The classifier's comparison of the test image with
the images saved in the database can be done with a supervised machine learning classifier.

1. Creating the dataset

The datasets are created using the webcam or the camera attached to the computer. We take 100
samples per person and store them in the dataset; innumerable persons' samples can be stored.
We give one ID number to each person in the dataset [6].

2. Face detection

The next step is face detection. The faces are detected using the Viola-Jones face detection
algorithm, which involves four steps:
i. Haar-like features
ii. Integral image
iii. AdaBoost training
iv. Cascading classifier

i. Haar feature selection

All human faces share some comparable properties. Haar-like features are used to detect the
difference between the black and white portions of the image; these regularities may be detected
using Haar features. Some of the types are shown in Fig 3.

[Fig 3: Different types of Haar features]

We use the two-rectangle Haar-like feature. Some properties common to human faces are:
1. The eye region is darker than the upper cheeks.
2. The nose bridge region is brighter than the eyes.

The composition of properties forming matchable facial features:
• Location and size: eyes, mouth, bridge of nose
• Value: oriented gradients of pixel intensities

We take the image, convert it into a 24x24 window, and apply each Haar feature to that window
pixel by pixel. Each feature is related to a specific location in the sub-window. The value obtained
by applying a Haar feature is

Value = Σ (pixels in black area) - Σ (pixels in white area)
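The following minimal NumPy sketch shows how a two-rectangle feature value of this form can be computed cheaply with the integral image introduced in the next step; the window contents and rectangle coordinates are illustrative assumptions:

import numpy as np

def integral(img):
    # summed-area table padded with a zero row/column, so ii[y, x] = sum of img[:y, :x]
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    # sum of pixels in the w-by-h rectangle at (x, y), using only four table lookups
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

window = np.random.randint(0, 256, (24, 24))   # a hypothetical 24x24 detection window
ii = integral(window)
# black (darker) region sum minus white (lighter) region sum, per the formula above
value = rect_sum(ii, 4, 6, 16, 4) - rect_sum(ii, 4, 10, 16, 4)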

ii. Integral images

The second step of the Viola-Jones face detection algorithm is to convert the input image into an
integral image, as shown in Fig 4. The integral image at location (x, y) contains the sum of the
pixels above and to the left of (x, y) [7].

[Fig 4: (a) input image (b) integral image]

This makes it possible to calculate the sum of the pixels within any specified rectangle using only
four values. In the integral image, these values are the pixels that coincide with the edges of the
rectangle in the input image.

iii. AdaBoost machine learning method

The Viola-Jones algorithm [5] uses a 24x24 window as the base window size to evaluate all the
features in an image, which results in 160,000+ features. Not all of these features are important to
us, so we eliminate the features which are not important. AdaBoost is a machine learning
algorithm which helps in selecting only the most outstanding features from the 160,000+. These
selected features form a weighted arrangement which is used in gauging and deciding whether
any given window contains a face or not. These features are called weak classifiers [8].

iv. Face detection

The cascading classifier is an assembly of stages, each of which contains a strong classifier. The
work of each stage is to combine the weak classifiers.

3. Feature Extraction

From the detected face, we extract the features using the Local Binary Pattern Histogram
(LBPH), shown completely in Fig 5. The LBPH feature vector is computed as follows:
 Divide the examined window into cells (e.g. 8x8 pixels for each cell).
 For each pixel in a cell, keeping the center pixel value as the reference, compare the pixel
to each of its 8 neighbors (on its left-top, left-middle, left-bottom, right-top, etc.).
 When the center pixel's value is greater than the neighbor's value, assign "1"; otherwise,
assign "0". This gives an 8-digit binary number (which is usually converted to decimal).
 Compute the histogram, over the cell, of the frequency of each "number" occurring (i.e.,
each combination of which pixels are smaller and which are greater than the center) [9].

[Fig 5: Local Binary Pattern Histogram]

4. Classification (Face Recognition)

The extracted features are fed to the classifier. For classification we use the K-nearest neighbor
classifier; the simplest K-nearest neighbor classifier is the Euclidean distance classifier. The
Euclidean distance is calculated by comparing the test image features with the features stored in
the dataset. The minimum distance between the test image feature values and the feature values
stored in the dataset gives the recognition rate [10].
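A compact sketch of the LBP code and the Euclidean-distance matching just described, in plain NumPy; the array shapes and the bit convention follow the text above:

import numpy as np

def lbp_code(cell, i, j):
    # compare the 8 neighbors of pixel (i, j) against the center pixel;
    # per the text above, a neighbor scores "1" when the center is greater
    center = cell[i, j]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = [1 if center > cell[i + di, j + dj] else 0 for di, dj in offsets]
    return sum(b << k for k, b in enumerate(bits))   # 8-bit pattern as a decimal value

def nearest_neighbor(test_hist, stored_hists):
    # Euclidean distance classifier: index and distance of the closest stored histogram
    dists = np.linalg.norm(stored_hists - test_hist, axis=1)
    return int(np.argmin(dists)), float(dists.min())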
IV. RESULTS AND DISCUSSION

While creating the dataset, we assign an ID number to each person along with his/her name.
During recognition, if the test person is found in the dataset, the classifier shows the name of the
person along with the recognition rate, as shown in Fig 6.

[Fig 6: Recognizing a face with its accuracy]

Through feature extraction using LBPH, the different illuminations of faces are captured over a
certain time period [11].

[Fig 7: A detected image, LBP image, Histogram]

Fig 7 shows a detected image within a full picture; later, in the training stage, the detected image
is converted to grayscale and the system then generates the histogram, which is easy to calculate
through this method [12].

With this proposed system, the results are of 85%-95% recognition rate accuracy even under
different conditions such as illumination, background, rotation, different poses, and the presence
of a beard or glasses.

[Fig 8: a) Chart of recognition rate vs. number of persons, b) Graph comparing different algorithms]

From figure 8 a) above, which presents the chart of recognition rate vs. number of persons for
performance analysis, and from b), it is clear that the proposed LBP algorithm gave better results
and confidence compared to other algorithms. Since the other algorithms do not give accurate
measures and confidence, we have chosen the LBP algorithm [13].

V. CONCLUSION AND FUTURE SCOPE

In this project, face detection is carried out using the Viola-Jones face detection algorithm,
feature extraction using Local Binary Patterns Histograms, and classification using the Euclidean
distance classifier. The proposed system is implemented using OpenCV and Anaconda.

From the above graphs we can conclude that LBPH with the Euclidean distance classifier has a
better recognition rate. The results achieved by these methods are of 85%-95% accuracy.

On the basis of the analysis carried out in the present study, the following suggestions are offered
for further research, and it would be valuable to concentrate on these issues in future work: the
proposed system could be modified to work under all conditions, such as varying brightness, the
wearing of goggles, beards, and the case of twins, using a minimum distance classifier. The
proposed method, tailored to the evolution of genetic properties for facial expressions and studied
for different security metrics, would be helpful in securing government confidential databases
and in criminal detection.

REFERENCES

[1] Dr. Eyad I. Abbas, Mohammed E. Safi, Dr. Khalida S. Rijab, “Face Recognition Rate Using
Different Classifier Methods Based on PCA”, International Conference on Current Research in
Computer Science and Information Technology (ICCIT), 2017, pp 37-40.
[2] Priyanka Dhoke, M. P. Parsai, “A MATLAB based Face Recognition using PCA with Back
Propagation Neural Network”, 2014.
[3] Nawaf Hazim Barnouti, Sinan Sameer Mahmood Al-Dabbagh, Wael Esam Matti, Mustafa
Abdul Sahib Naser, “Face Detection and Recognition Using Viola-Jones with PCA-LDA and
Square Euclidean Distance”, International Journal of Advanced Computer Science and
Applications, Vol. 7, No. 5, 2016.
[4] Ningthoujam Sunita Devi, K. Hemachandran, “Face Recognition Using Principal Component
Analysis”, International Journal of Computer Science and Information Technologies, Vol. 5(5),
2014, pp 6491-6496.
[5] Nisha, Maitreyee Dutta, “Improving the Recognition of Faces using LBP and SVM Optimized
by PSO Technique”, International Journal of Engineering Development and Research, Volume 5,
Issue 4, 2017, ISSN: 2321-9939, pp 297-303.
[6] Paul Viola, Michael Jones, “Rapid Object Detection using a Boosted Cascade of Simple
Features”, Conference on Computer Vision and Pattern Recognition, 2001.
[7] Prabhjot Singh and Anjana Sharma, “Face Recognition Using Principal Component Analysis
in MATLAB”, International Journal of Scientific Research in Computer Science and
Engineering, Vol. 3(1), pp 1-5, Feb 2015.
[8] Ali, A., Hussain, S., Haroon, F., Hussain, S., Khan, M. F., “Face Recognition with Local
Binary Patterns”, Bahria University Journal of Information Communication Technology, pp
46-50, 2012.
[9] Idelette Laure Kambi Beli and Chunsheng Guo, “Enhancing Face Identification Using Local
Binary Patterns and K-Nearest Neighbors”, Journal of Imaging, 2017.
[10] Keerti V, Dr. Sarika Tale, “Implementation of Preprocessing and Efficient Blood Vessel
Segmentation in Retinopathy Fundus Image”, IJRITCC, Volume 3, Issue 6, 2015.
[11] Panchakshari P, Dr. Sarika Tale, “Performance Analysis of Fusion Method for EAR
Biometrics”, IEEE Conference on Recent Trends in Electronics, Information and
Communication, May 2016.
[12] M Madhurekha, Dr. Sarika Raga, “CT Brain Image Enhancement using DWT-SVD Gamma
Correction Method and Diagnosis of Anomalous using CNN”, IEEE, 2018.
[13] Anitha N E, Dr. Sarika Raga, “Lung Carcinoma Identification Through Support Vector
Machines and Convolutional Neural Network”, JETIR, Volume 5, Issue 9, 2018.
[14] Adam Schmidt, Andrzej Kasinski, “The Performance of the Haar Cascade Classifiers
Applied to the Face and Eyes Detection”, Computer Recognition Systems 2.
[15] Ms. Varsha Gupta, Mr. Dipesh Sharma, “A Study of Various Face Detection Methods”,
IJARCCE, Vol. 3, May 2014.
https://www.ijarcce.com/upload/2014/may/IJARCCE7G%20%20a%20varsha%20A%20Study%20of%20Various%20Face.pdf
[16] Xiang-Yu Li and Zhen-Xian Lin, “Face Recognition Based on HOG and Fast PCA
Algorithm”.
[17] Arun Katara, Mr. Sudesh, V. Kolhe, “Attendance System Using Face Recognition and Class
Monitoring System”, IJRITCC, Volume 5, Issue 2, February 2017.
http://www.ijritcc.org/download/browse/Volume_5_Issues/February_17_Volume_5_Issue_2/1489565866_15-03-2017.pdf

CODES

FRONTPAGE.PY
#import module from tkinter for UI
from tkinter import *
from playsound import playsound
import os
from datetime import datetime

#creating instance of Tk
root = Tk()
root.configure(background="white")
#root.geometry("300x300")

def function1():
    #capture face images for a new user
    os.system("py dataset_capture.py")

def function2():
    #train the recognizer on the captured dataset
    os.system("py training_dataset.py")

def function3():
    #run recognition and mark attendance, then play a confirmation sound
    os.system("py recognizer.py")
    playsound('sound.mp3')

def function5():
    os.startfile(os.getcwd() + "/developers/diet1frame1first.html")

def function6():
    root.destroy()

def attend():
    #open today's attendance spreadsheet
    os.startfile(os.getcwd() + "/firebase/attendance_files/attendance" + str(datetime.now().date()) + '.xls')

#setting title for the window
root.title("AUTOMATIC ATTENDANCE MANAGEMENT USING FACE RECOGNITION")

#creating a text label
Label(root, text="FACE RECOGNITION ATTENDANCE SYSTEM", font=("times new roman", 20), fg="white", bg="maroon", height=2).grid(row=0, rowspan=2, columnspan=2, sticky=N+E+W+S, padx=5, pady=5)

#creating first button
Button(root, text="Create Dataset", font=("times new roman", 20), bg="#0D47A1", fg='white', command=function1).grid(row=3, columnspan=2, sticky=W+E+N+S, padx=5, pady=5)

#creating second button
Button(root, text="Train Dataset", font=("times new roman", 20), bg="#0D47A1", fg='white', command=function2).grid(row=4, columnspan=2, sticky=N+E+W+S, padx=5, pady=5)

#creating third button
Button(root, text="Recognize + Attendance", font=('times new roman', 20), bg="#0D47A1", fg="white", command=function3).grid(row=5, columnspan=2, sticky=N+E+W+S, padx=5, pady=5)

#creating attendance button
Button(root, text="Attendance Sheet", font=('times new roman', 20), bg="#0D47A1", fg="white", command=attend).grid(row=6, columnspan=2, sticky=N+E+W+S, padx=5, pady=5)

Button(root, text="Developers", font=('times new roman', 20), bg="#0D47A1", fg="white", command=function5).grid(row=8, columnspan=2, sticky=N+E+W+S, padx=5, pady=5)

Button(root, text="Exit", font=('times new roman', 20), bg="maroon", fg="white", command=function6).grid(row=9, columnspan=2, sticky=N+E+W+S, padx=5, pady=5)

root.mainloop()

1. RECOGNISER.PY

import cv2
import numpy as np
import xlwrite
from firebase import firebase as fire
import time
import sys
from playsound import playsound

start = time.time()
period = 8  #run the recognizer for 8 seconds
face_cas = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
flag = 0
id = 0
filename = 'filename'
dict = {'item1': 1}  #tracks ids already marked present in this session
#font = cv2.InitFont(cv2.cv.CV_FONT_HERSHEY_SIMPLEX, 5, 1, 0, 1, 1)
font = cv2.FONT_HERSHEY_SIMPLEX

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cas.detectMultiScale(gray, 1.3, 7)
    for (x, y, w, h) in faces:
        roi_gray = gray[y:y + h, x:x + w]
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        id, conf = recognizer.predict(roi_gray)
        if conf < 50:  #a lower LBPH distance means a closer match
            if id == 1:
                id = 'Avadhesh'
                if str(id) not in dict:
                    filename = xlwrite.output('attendance', 'class1', 1, id, 'yes')
                    dict[str(id)] = str(id)
            elif id == 2:
                id = 'Shantanu'
                if str(id) not in dict:
                    filename = xlwrite.output('attendance', 'class1', 2, id, 'yes')
                    dict[str(id)] = str(id)
            elif id == 3:
                id = 'Mr Sur Singh Rawat'
                if str(id) not in dict:
                    filename = xlwrite.output('attendance', 'class1', 3, id, 'yes')
                    dict[str(id)] = str(id)
        else:
            id = 'Unknown, can not recognize'
            flag = flag + 1
            break
        cv2.putText(img, str(id) + " " + str(conf), (x, y - 10), font, 0.55, (120, 255, 120), 1)
        #cv2.cv.PutText(cv2.cv.fromarray(img), str(id), (x, y + h), font, (0, 0, 255))
    cv2.imshow('frame', img)
    #cv2.imshow('gray', gray)
    if flag == 10:  #too many unrecognized detections: block and stop
        playsound('transactionSound.mp3')
        print("Transaction Blocked")
        break
    if time.time() > start + period:
        break
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

2. DATASET_CAPTURE.PY

# Import OpenCV2 for image processing
import cv2
import os

def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)

face_id = input('enter your id')

# Start capturing video
vid_cam = cv2.VideoCapture(0)
vid_cam.set(3, 640)  #frame width
vid_cam.set(4, 480)  #frame height

# Detect object in video stream using Haarcascade Frontal Face
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Initialize sample face image count
count = 0

assure_path_exists("dataset/")

# Start looping
while True:

    # Capture video frame
    _, image_frame = vid_cam.read()

    # Convert frame to grayscale
    gray = cv2.cvtColor(image_frame, cv2.COLOR_BGR2GRAY)

    # Detect frames of different sizes, list of face rectangles
    faces = face_detector.detectMultiScale(gray, 1.3, 5)

    # Loop over each detected face
    for (x, y, w, h) in faces:

        # Draw a rectangle around the face in the frame
        cv2.rectangle(image_frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

        # Increment sample face image count
        count += 1

        # Save the captured grayscale face into the datasets folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y + h, x:x + w])

        # Display the video frame, with a bounding rectangle on the person's face
        cv2.imshow('frame', image_frame)

    # To stop taking video, press 'q' for at least 100ms
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

    # If the number of images taken reaches 30, stop taking video
    elif count >= 30:
        print("Successfully Captured")
        break

# Stop video
vid_cam.release()

# Close all started windows
cv2.destroyAllWindows()

3. TRAINING_DATASET.PY

import os, cv2
import numpy as np
from PIL import Image

recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

def getImagesAndLabels(path):
    #get the path of all the files in the folder
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    #create empty face list
    faceSamples = []
    #create empty ID list
    Ids = []
    #now looping through all the image paths and loading the Ids and the images
    for imagePath in imagePaths:
        #loading the image and converting it to grayscale
        pilImage = Image.open(imagePath).convert('L')
        #now we are converting the PIL image into a numpy array
        imageNp = np.array(pilImage, 'uint8')
        #getting the Id from the image filename (User.<id>.<count>.jpg)
        Id = int(os.path.split(imagePath)[-1].split(".")[1])
        #extract the face from the training image sample
        faces = detector.detectMultiScale(imageNp)
        #if a face is there then append it to the list along with its Id
        for (x, y, w, h) in faces:
            faceSamples.append(imageNp[y:y + h, x:x + w])
            Ids.append(Id)
    return faceSamples, Ids

faces, Ids = getImagesAndLabels('dataSet')
recognizer.train(faces, np.array(Ids))
print("Successfully trained")
recognizer.write('trainer/trainer.yml')

