
A MAJOR PROJECT REPORT ON

“Attendance Management through Face Recognition”

In partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY

Submitted by
Md Adil P.Anirudh Mohit Chokda

16911A1236 16911A1247 16911A1237

Under the Esteemed Guidance of


Mr. B ESWAR BABU
Assoc. Professor

DEPARTMENT OF INFORMATION TECHNOLOGY


VIDYA JYOTHI INSTITUTE OF TECHNOLOGY
(Accredited by NBA, Approved by AICTE, Affiliated to JNTU Hyderabad)
Aziz Nagar Gate, C.B.Post, Chilkur Road, Hyderabad – 500075
2019 - 2020
VIDYA JYOTHI INSTITUTE OF TECHNOLOGY
(Accredited by NBA, Approved by AICTE, Affiliated to JNTU Hyderabad)
Aziz Nagar Gate, C.B.Post, Chilkur Road, Hyderabad - 500075

DEPARTMENT OF INFORMATION TECHNOLOGY

CERTIFICATE

This is to certify that the Project Report on “Attendance Management through Face
Recognition” is a bonafide work by P. Anirudh (16911A1247), Md Adil (16911A1236), and
Mohit Chokda (16911A1237), submitted in partial fulfillment of the requirements for the award
of the degree of Bachelor of Technology in INFORMATION TECHNOLOGY from JNTU Hyderabad
during the year 2019 - 2020.

Project Guide Head of the department


Mr. B Eswar Babu, M.Tech,                         Mr. B. Srinivasulu, M.E.,
Associate Professor.                              Professor.

External Examiner
VIDYA JYOTHI INSTITUTE OF TECHNOLOGY
(Accredited by NBA, Approved by AICTE, Affiliated to JNTU Hyderabad)
Aziz Nagar Gate, C.B.Post, Chilkur Road, Hyderabad - 500075
2019 - 2020

DECLARATION

We, P.Anirudh(16911A1247), Md Adil(16911A1236), Mohit Chokda(16911A1237)


hereby declare that the Project Report entitled “Attendance Management System Through Face
Recognition”, submitted in partial fulfillment of the requirements for the award of Bachelor
of Technology in Information Technology to Vidya Jyothi Institute of Technology, affiliated to
JNTU Hyderabad, is an authentic work and has not been submitted to any other university or
institute for the award of any degree.

Md Adil (16911A1236)
P.Anirudh (16911A1247)
Mohit Chokda (16911A1237)

ACKNOWLEDGEMENT

It is a great pleasure to express our deepest sense of gratitude and indebtedness to our
internal guide, Mr. B Eswar Babu, Associate Professor, Department of IT, VJIT, for having been a
source of constant inspiration, precious guidance and generous assistance during the project work. We
deem it a privilege to have worked under his guidance. Without his close monitoring and valuable
suggestions this work would not have taken this shape. We feel that this help is irreplaceable and
unforgettable.
We wish to express our sincere thanks to Dr. P. Venugopal Reddy, Director, VJIT, for
providing the college facilities for the completion of the project. We are profoundly thankful to
Mr. B. Srinivasulu, Professor and Head of the Department of IT, for his cooperation and encouragement.
Finally, we thank all the faculty members, the supporting staff of the IT Department and our friends
for their kind cooperation and valuable help in completing the project.

TEAM MEMBERS

Md Adil(16911A1236)
P. Anirudh (16911A1247)
Mohit Chokda (16911A1237)

Table of Contents Pg. No.

Abstract iii

1. Introduction 1
1.1 Problem Statement 2
1.2 Objective 2
1.3 Existing System 2
1.4 Proposed System 2
1.5 Modules 3
1.6 System Architecture 4
2. Literature Survey 5
2.1. History of Java 6
2.2. History of OpenCV 6
2.3. Operations using JavaCV 7
2.4. Key Features of Java 9
2.5. OpenCV Machine Learning Libraries 9
3. System Analysis 10
3.1.Introduction to project 11
3.2. Hardware Requirements 14
3.3.Software Requirements 14
3.4. Implementation 16
4. System Study 17
4.1.Feasibility study 18
4.2.Economic Feasibility 18
4.3.Technical Feasibility 18
4.4.Social Feasibility 19
5. System Design 20
5.1.UML Diagrams 21


5.2. Process Model used with Justification 31


5.3. Diagrams 39
5.3.1 Class Diagram 39
5.3.2 Use Case Diagram 40
5.3.3 Activity Diagram 41
5.3.4 Sequence Diagram 42
5.3.5 Collaboration Diagram 43
6. Implementation and Result 44
6.1. Coding 45
6.2. Test Cases 70
6.3. Sample Screens 70
7. System Testing 71
7.1. Testing 72
7.2. Levels of Testing 79
7.3. Black box testing 80
7.4. White Box testing 82
7.5. System Testing 84
7.6. Software Testing 85
8. Conclusion 86
8.1. Conclusion 87
8.2. Future Enhancement 87
9. Bibliography 88


ABSTRACT

Attendance Management through Face Recognition

Object recognition refers to a collection of related tasks for identifying objects in photographs
as well as in real time. Region-Based Convolutional Neural Networks (R-CNNs) are a family of
techniques for addressing object localization and recognition tasks, designed for model
performance. You Only Look Once (YOLO) is a second family of techniques for object
recognition designed for speed and real-time use. Object detection applications are easier to
develop than ever before. TensorFlow's Object Detection API is an open-source framework built
on top of TensorFlow that makes it easy to construct, train and deploy object detection models.
TensorFlow is a powerful framework that functions by implementing a series of processing nodes,
each node representing a mathematical operation, with the entire series of nodes being called a
"graph". TensorFlow is designed to be as simple as possible and is configured to work with Python
without any major modifications or configuration.


1. INTRODUCTION


CHAPTER 1
INTRODUCTION

1.1 Problem Statement:


In the traditional approach used in schools and colleges, the professor calls out each student's
name and then marks that student's attendance manually.

1.2 Objective:
The main objective of this application is to develop an automated attendance system using face
recognition. The proposed method also resolves problems such as the lighting of the images, noise
from the camera, and the direction of the students' faces.

1.3 Existing System:


Maintaining student attendance records is one of the most tedious tasks in many institutions. Every
institution has its own method of taking attendance, such as an attendance sheet or some biometric
method, but these methods consume a lot of time. Most often, attendance is taken on an attendance
sheet handed to the faculty member, which requires considerable work and time, and there is no
guarantee that the student who responds is the authenticated student. Calculating consolidated
attendance is another major task that is prone to manual errors. In some cases the attendance sheet
may be lost, or stolen by students. To overcome such troubles, an automated attendance management
system is needed.

1.4 Proposed System:


These disadvantages are overcome by automated attendance management, which does not consume
extra time and does not lose data until it is explicitly erased. In face detection, each face in
the image is marked with a rectangle. A face detected after background subtraction is more accurate
than a face detected from an image that has not been background subtracted. Each detected face is
then cropped. Finally, all the individual faces detected in the image are cropped, each cropped face
is compared against the images in the database, and the attendance is marked accordingly.

1.5 Modules

Face detection
The image obtained after background subtraction is used for face detection. In face detection, each
face in the image is marked with a rectangle or circle. A face detected after background subtraction
is more accurate than a face detected from an image that has not been background subtracted.

Face recognition
Face recognition is used to identify the detected faces. Many methods are available for this step,
but the eigenvalue (eigenface) method is the most suitable here because of its speed, so it is the
method used to recognize the faces.

Marking and Managing Attendance
After a face is detected and recognized, the corresponding individual is given attendance for the
particular class. Logged-in faculty can also manage the students' attendance, which allows further
customization and access to the attendance records. For example, the faculty can view attendance
between two dates and see the percentage for the desired students, as sketched below.
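The attendance-percentage feature mentioned above can be computed directly from the `attendence` table
shown in Chapter 6. The following is a minimal sketch only, assuming a JDBC connection to that database;
the class and method names here are illustrative and are not part of the project code.

import java.sql.*;

public class AttendancePercentage {
    // Returns the attendance percentage for one student between two dates (inclusive),
    // using the `attendence` table defined in Chapter 6 (present = 'p' marks a present entry).
    public static double percentage(Connection con, String sid, Date from, Date to) throws SQLException {
        String sql = "SELECT COUNT(*) AS total, "
                   + "SUM(CASE WHEN present = 'p' THEN 1 ELSE 0 END) AS attended "
                   + "FROM attendence WHERE sid = ? AND attDate BETWEEN ? AND ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, sid);
            ps.setDate(2, from);
            ps.setDate(3, to);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                int total = rs.getInt("total");
                return total == 0 ? 0.0 : 100.0 * rs.getInt("attended") / total;
            }
        }
    }
}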


1.6 System Architecture:

Block Diagram of Proposed System


2. Literature Survey


CHAPTER-2
LITERATURE SURVEY

2.1. History of Java

Java is an object-oriented programming language developed by James Gosling and colleagues at
Sun Microsystems in the early 1990s. Unlike conventional languages, which are generally designed
either to be compiled to native (machine) code or to be interpreted from source code at runtime,
Java is intended to be compiled to bytecode, which is then run (generally using JIT compilation)
by a Java Virtual Machine.

Java was started as a project called "Oak" by James Gosling in June 1991. Gosling's goal was to
implement a virtual machine and a language that had a familiar C-like notation but greater uniformity
and simplicity than C/C++. The first public implementation was Java 1.0 in 1995. It made the promise
of "Write Once, Run Anywhere", with free runtimes on popular platforms. It was fairly secure, and its
security was configurable, allowing network and file access to be limited.

Sun distinguishes between its Software Development Kit (SDK) and Runtime Environment (JRE), which is
a subset of the SDK; the primary distinction is that in the JRE the compiler is not present.
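As a minimal illustration of the compile-to-bytecode cycle described above (the file name and the
commands in the trailing comments are only an example):

// Hello.java -- compiled once to bytecode, then run on any JVM.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from the JVM");
    }
}
// Compile to platform-independent bytecode:  javac Hello.java   (produces Hello.class)
// Run the bytecode on any Java Virtual Machine:  java Hello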

2.2. History of OpenCV

OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at
real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage
and then Itseez (which was later acquired by Intel). The library is cross-platform and free for use
under the open-source BSD license.

OpenCV supports some models from deep learning frameworks such as TensorFlow, Torch/PyTorch
and Caffe, according to a defined list of supported layers. It promotes OpenVisionCapsules,
a portable format compatible with all of these formats.

Computer Vision overlaps significantly with the following fields:

a) Image Processing − It focuses on image manipulation.


b) Pattern Recognition − It explains various techniques to classify patterns.
c) Photogrammetry − It is concerned with obtaining accurate measurements from images.

2.3. Operations using JavaCV

Core Functionality

This module covers basic data structures such as Scalar, Point, Range, etc. In the Java library of
OpenCV, this module is included as a package with the name org.opencv.core.

Image Processing

This module covers various image-processing operations such as image filtering, geometrical image
transformations, color-space conversion, histograms, etc. In the Java library of OpenCV, this module
is included as a package with the name org.opencv.imgproc.

Video

This module covers video-analysis concepts such as motion estimation, background subtraction, and
object tracking. In the Java library of OpenCV, this module is included as a package with the
name org.opencv.video.

Video I/O

This module explains video capturing and video codecs using the OpenCV library. In the Java library
of OpenCV, this module is included as a package with the name org.opencv.videoio.

calib3d

This module includes basic multiple-view geometry algorithms and elements of 3D reconstruction. In
the Java library of OpenCV, this module is included as a package with the name org.opencv.calib3d.

features2d

This module includes the concepts of feature detection and description. In the Java library of
OpenCV, this module is included as a package with the name org.opencv.features2d.

Objdetect

This module includes the detection of objects and instances of predefined classes such as faces,
eyes, mugs, people, cars, etc. In the Java library of OpenCV, this module is included as a package
with the name org.opencv.objdetect.

Highgui

This is an easy-to-use interface with simple UI capabilities. In the Java library of OpenCV, this
functionality is split across the packages org.opencv.imgcodecs and org.opencv.videoio. A short
example using some of these packages follows.
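The sketch below shows a couple of these packages used together: it reads an image from disk and
converts it to greyscale with OpenCV's official Java bindings (org.opencv.*). It assumes the OpenCV
native library is installed and on java.library.path; the file name "sample.jpg" is a placeholder.
Note that the project code in Chapter 6 uses the older JavaCV wrapper rather than these bindings.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class GreyscaleDemo {
    public static void main(String[] args) {
        // Load the native OpenCV library (must be on java.library.path).
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // org.opencv.imgcodecs: read an image from disk ("sample.jpg" is a placeholder path).
        Mat colour = Imgcodecs.imread("sample.jpg");

        // org.opencv.imgproc: convert the colour image to greyscale.
        Mat grey = new Mat();
        Imgproc.cvtColor(colour, grey, Imgproc.COLOR_BGR2GRAY);

        // org.opencv.imgcodecs: write the result back to disk.
        Imgcodecs.imwrite("sample_grey.jpg", grey);
    }
}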


2.4. Key Features of Java

● Easy to learn and use.
● Expressive language: Java code is comparatively understandable and readable.
● Compiled language (compiled to bytecode that runs on the JVM).
● Platform-independent language.
● Free and open source.
● Object-oriented language.
● Extensible.
● Large standard library.
● Automatic garbage collection.

2.5. OpenCV Machine Learning Libraries

• Boosting
• Decision tree learning
• Gradient boosting trees
• Expectation-maximization algorithm
• k-nearest neighbor algorithm
• Naive Bayes classifier
• Artificial neural networks
• Random forest
• Support vector machine (SVM)
• Deep neural networks (DNN)
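Of these, the idea most directly relevant to this project is nearest-neighbour matching, which
Chapter 6 applies to eigenface projections. The library-free sketch below shows the core of that
idea; the class and method names are illustrative only and do not appear in the project code.

// A minimal 1-nearest-neighbour search over projected face vectors
// (the same idea that findNearestNeighbor() in Chapter 6 implements with JavaCV matrices).
public class NearestNeighbourSketch {
    // Returns the index of the training vector closest (by squared Euclidean distance) to the test vector.
    static int nearest(float[][] trainVectors, float[] testVector) {
        int best = -1;
        double leastDistSq = Double.MAX_VALUE;
        for (int t = 0; t < trainVectors.length; t++) {
            double distSq = 0;
            for (int i = 0; i < testVector.length; i++) {
                double d = testVector[i] - trainVectors[t][i];
                distSq += d * d;
            }
            if (distSq < leastDistSq) {
                leastDistSq = distSq;
                best = t;
            }
        }
        return best;
    }
}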


3. System Analysis


CHAPTER-3
SYSTEM ANALYSIS

3.1. Introduction to Project

Image processing is a type of signal processing in which the input is a photograph or an image.
There are two types of image processing: analog and digital. Analog image processing is applied to
hard copies such as photographs and printouts. Nowadays, student attendance plays a significant role
in many colleges, universities and schools. An automated attendance system captures an image when a
person enters the classroom and marks the attendance accordingly. A manual attendance system, on the
other hand, requires verifying and managing every student record on paper, which takes more of the
faculty's or staff's time and effort, and the chances of proxies are also higher. The proposed system
is efficient and more user friendly, as it can run on devices that everyone has these days. This study
is an attempt to provide an automated attendance system that identifies students using face recognition
technology through an image or video stream, records attendance in any classroom environment, and
estimates the efficiency accordingly. By constantly detecting facial information, this method addresses
the low efficiency of existing technologies and improves the accuracy of face recognition [8][6].
We studied and planned a technique that marks attendance using face recognition based on continuous
surveillance. In this proposed method, our aim is to obtain images or video of the students' faces,
their position and their attendance, which are useful information in the lecture or classroom environment.


Project SDLC
• Requirements gathering and analysis
• Application system design
• Practical implementation
• Manual testing of the application
• Application deployment
• Maintenance of the project

Requirements Gathering and Analysis

This is the first and foremost stage of any project. As ours is an academic project, for requirements
gathering we went through IEEE journals, collected a number of related IEEE papers, and finally
selected a paper on “Individual web revisitation by setting and substance importance input”. For the
analysis stage we took references from that paper, carried out a literature survey of related papers,
and gathered all the requirements of the project in this stage.

System Design
System design is divided into three parts: GUI design, UML design, and database design. UML design
helps develop the project in a simple way, describing the different actors and their use cases in a
use case diagram and the flow of the project in a sequence diagram, while the class diagram gives
information about the different classes in the project and the methods used in them. The third and
equally important part of system design is database design, where we design the database based on
the number of modules in the project.

Implementation
Implementation is the phase where we produce the practical output of the work done in the design
stage; most of the coding of the business logic comes into action in this stage. It is the main and
most crucial part of the project.

Testing

Unit Testing
Unit testing is done by the developer at every stage of the project, and fixing bugs module by module
is also done by the developer. Here we resolve all the runtime errors.

Manual Testing
As our project is an academic one, we cannot do automated testing, so we follow manual testing by
trial-and-error methods.

Deployment of System
Once the project is completely ready, we come to the deployment of the system. As this is an academic
project, we deployed it only in our college lab, with all the needed software on Windows OS.

Maintenance

The maintenance of our project is a one-time process only.

3.2 Hardware Requirements:

Processor : any modern processor
RAM : minimum 1 GB
Hard Disk : minimum 100 GB

3.3. Software Requirements:

Operating System : Windows family
Technology : Java, OpenCV, JavaCV
IDE : NetBeans IDE

Non-Functional Requirements

Security: administrator access must be well secured and available, to avoid misuse of the
application from any PC.

Usability: the presentation of this application is easy to use, so it is simple for the end user to
understand and respond to.

Reliability: with the functionalities available in the application, this system has a high
probability of answering the required queries.

Response time: the time taken by the application to complete a task given by the user is very short.

Extensibility: our application can be extended to incorporate the changes made by current
applications, to enhance the performance of the product. This is implied for the future work that
will be done on the application.

Robustness: the project is fault tolerant with respect to invalid user inputs. Error checking has
been built into the platform to prevent failure.


3.4. Implementation
System Architecture

Block Diagram of Proposed System

Methods Used
RegisterFace(): loads the image.
TrainLBHPFaceRecognizer Model(): extracts the face features.
FaceRecognizer(): finds matching faces.
CvHaarClassifierCascade(): loads the Haar cascade used for face detection.

equalizedImg = IplImage.create(new CvSize(sizedImg.width(), sizedImg.height()), 8, 1); // Create an empty greyscale image
cvEqualizeHist(sizedImg, equalizedImg);
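A condensed sketch of how these calls fit together for a single camera frame is shown below; the full
version appears in the code listing in Chapter 6, and the variable names follow that listing.

// Per-frame pipeline sketch (camImg is a frame grabbed from the webcam; see Chapter 6 for the full code).
IplImage greyImg  = convertImageToGreyscale(camImg);           // 1. normalise colour to greyscale
CvRect   faceRect = detectFaceInImage(greyImg, faceCascade);   // 2. Haar-cascade face detection
if (faceRect.width() > 0) {
    IplImage faceImg  = cropImage(greyImg, faceRect);                 // 3. crop the detected face
    IplImage sizedImg = resizeImage(faceImg, faceWidth, faceHeight);  // 4. match the training image size
    IplImage equalizedImg = IplImage.create(new CvSize(sizedImg.width(), sizedImg.height()), 8, 1);
    cvEqualizeHist(sizedImg, equalizedImg);                           // 5. normalise brightness/contrast
    // 6. project onto the eigenface subspace, find the nearest training face,
    //    and mark attendance for the matched student.
}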


4. System Study


CHAPTER-4
SYSTEM STUDY

4.1. FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a
very general plan for the project and some cost estimates. During system analysis the feasibility
study of the proposed system is carried out. This is to ensure that the proposed system is not a
burden to the company. For feasibility analysis, some understanding of the major requirements for
the system is essential.

Three key considerations involved in the feasibility analysis are


♦ ECONOMICAL FEASIBILITY
♦ TECHNICAL FEASIBILITY
♦ SOCIAL FEASIBILITY

4.2. ECONOMICAL FEASIBILITY


This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development of
the system is limited, so the expenditures must be justified. The developed system is well within
the budget, and this was achieved because most of the technologies used are freely available; only
the customized products had to be purchased.

4.3. TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of
the system. Any system developed must not place a high demand on the available technical resources,
since that would in turn place high demands on the client. The developed system must therefore have
modest requirements, so that only minimal or no changes are required for implementing it.


4.4. SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user. This includes the
process of training the user to use the system efficiently. The user must not feel threatened by the
system, but must instead accept it as a necessity. The level of acceptance by the users solely depends
on the methods that are employed to educate the user about the system and to make the user familiar
with it. The user's level of confidence must be raised so that he or she is also able to make some
constructive criticism, which is welcomed, as he or she is the final user of the system.


5. System Design


CHAPTER-5
SYSTEM DESIGN

5.1. UML DIAGRAMS


The System Design Document describes the system requirements, operating environment,
system and subsystem architecture, files and database design, input formats, output layouts,
human-machine interfaces, detailed design, processing logic, and external interfaces.

Global Use Case Diagrams:

Identification of actors:

Actor: Actor represents the role a user plays with respect to the system. An actor interacts with,
but has no control over the use cases.

Graphical representation:

<<Actor name>>

An actor is someone or something that:

Interacts with or uses the system.

● Provides input to and receives information from the system.


● Is external to the system and has no control over the use cases.

Actors are discovered by examining:

● Who directly uses the system?


● Who is responsible for maintaining the system?


● External hardware used by the system.


● Other systems that need to interact with the system.

Questions to identify actors:

● Who is using the system? Or, who is affected by the system? Or, which
groups need help from the system to perform a task?
● Who affects the system? Or, which user groups are needed by the system
to perform its functions? These functions can be both main functions and
secondary functions such as administration.
● Which external hardware or systems (if any) use the system to perform
tasks?
● What problems does this application solve (that is, for whom)?
● And, finally, how do users use the system (use case)? What are they doing
with the system?

The actors identified in this system are:

a. System Administrator
b. Customer
c. Customer Care

Identification of usecases:

1. Usecase: A use case can be described as a specific way of using the system from a user’s
(actor’s) perspective.

Graphical representation:

A more detailed description might characterize a use case as:


● Pattern of behavior the system exhibits


● A sequence of related transactions performed by an actor and the system
● Delivering something of value to the actor

Use cases provide a means to:

● capture system requirements


● communicate with the end users and domain experts
● test the system

Use cases are best discovered by examining the actors and defining what the actor will be able
to do with the system.

Guidelines for identifying use cases:

● For each actor, find the tasks and functions that the actor should be able to perform or that
the system needs the actor to perform. The use case should represent a course of events
that leads to a clear goal.
● Name the use cases.
● Describe the use cases briefly by applying terms with which the user is familiar.

This makes the description less ambiguous

Questions to identify use cases:

● What are the tasks of each actor?


● Will any actor create, store, change, remove or read information in the system?
● What use case will store, change, remove or read this information?
● Will any actor need to inform the system about sudden external changes?
● Does any actor need to inform about certain occurrences in the system?
● What use cases will support and maintain the system?


Flow of Events

A flow of events is a sequence of transactions (or events) performed by the system. They typically
contain very detailed information, written in terms of what the system should do, not how the
system accomplishes the task. Flow of events are created as separate files or documents in your
favorite text editor and then attached or linked to a use case using the Files tab of a model element.

A flow of events should include:

● When and how the use case starts and ends


● Use case/actor interactions
● Data needed by the use case
● Normal sequence of events for the use case
● Alternate or exceptional flows

Construction of Usecase diagrams:

Use-case diagrams graphically depict system behavior (use cases). These diagrams present a high
level view of how the system is used as viewed from an outsider’s (actor’s) perspective. A use-
case diagram may depict all or some of the use cases of a system.

A use-case diagram can contain:

● actors ("things" outside the system)


● use cases (system boundaries identifying what the system should do)
● Interactions or relationships between actors and use cases in the system including the
associations, dependencies, and generalizations.

Relationships in use cases:


1. Communication:

The communication relationship of an actor in a usecase is shown by connecting the actor symbol
to the usecase symbol with a solid path. The actor is said to communicate with the usecase.

2. Uses:

A Uses relationship between the usecases is shown by generalization arrow from the usecase.

3. Extends:

The extend relationship is used when we have one use case that is similar to another use case but
does a bit more. In essence it is like a subclass.

2. CLASS DIAGRAM:

Identification of analysis classes:

A class is a set of objects that share a common structure and common behavior (the same attributes,
operations, relationships and semantics). A class is an abstraction of real-world items.

There are 4 approaches for identifying classes:

a. Noun phrase approach:


b. Common class pattern approach.
c. Use case driven sequence or collaboration approach.
d. Classes, Responsibilities and Collaborators (CRC) approach.

a. Noun Phrase Approach:


The guidelines for identifying the classes:

● Look for nouns and noun phrases in the usecases.


● Some classes are implicit or taken from general knowledge.
● All classes must make sense in the application domain; Avoid computer

implementation classes – defer them to the design stage.

● Carefully choose and define the class names.

After identifying the classes, we have to eliminate the following types of classes:

● Adjective classes.

b. Common class pattern approach:

The following are the patterns for finding the candidate classes:

● Concept class.
● Events class.
● Organization class
● Peoples class
● Places class
● Tangible things and devices class.
c. Use case driven approach:

We have to draw the sequence diagram or collaboration diagram. If there is a need for some classes
to represent some functionality, then add new classes which perform those functionalities.

d.CRC approach:


The process consists of the following steps:

● Identify classes’ responsibilities ( and identify the classes )


● Assign the responsibilities
● Identify the collaborators.

Identification of responsibilities of each class:

The questions that should be answered to identify the attributes and methods of a class respectively
are:

a. What information about an object should we keep track of?


b. What services must a class provide?

Identification of relationships among the classes:

Three types of relationships among the objects are:

Association: How are objects associated?

Super-sub structure: How are objects organized into super classes and sub classes?

Aggregation: What is the composition of the complex classes?

Association:

The questions that will help us to identify the associations are:

a. Is the class capable of fulfilling the required task by itself?


b. If not, what does it need?
c. From what other classes can it acquire what it needs?

Guidelines for identifying the tentative associations:


● A dependency between two or more classes may be an association. Association often


corresponds to a verb or prepositional phrase.
● A reference from one class to another is an association. Some associations are implicit or
taken from general knowledge.

Some common association patterns are:

● Location association: part of, next to, contained in, etc.
● Communication association: talks to, orders, etc.

We have to eliminate unnecessary associations such as implementation associations, ternary or n-ary
associations, and derived associations.

Super-sub class relationships:

Super-sub class hierarchy is a relationship between classes where one class is the parent class of
another class (the derived class). This is based on inheritance.

Guidelines for identifying the super-sub relationship, a generalization are

1. Top-down:

Look for noun phrases composed of various adjectives in a class name. Avoid excessive
refinement. Specialize only when the sub classes have significant behavior.

2. Bottom-up:

Look for classes with similar attributes or methods. Group them by moving the common attributes
and methods to an abstract class. You may have to alter the definitions a bit.

3. Reusability:

Move the attributes and methods as high as possible in the hierarchy.

4. Multiple inheritances:


Avoid excessive use of multiple inheritances. One way of getting benefits of multiple inheritances
is to inherit from the most appropriate class and add an object of another class as an attribute.

Aggregation or a-part-of relationship:

It represents the situation where a class consists of several component classes. A class that is
composed of other classes does not behave like its parts; it behaves very differently. The major
properties of this relationship are transitivity and antisymmetry.

The questions whose answers will determine the distinction between the part and whole
relationships are:

● Does the part class belong to the problem domain?


● Is the part class within the system’s responsibilities?
● Does the part class capture more than a single value?( If not then simply include it
as an attribute of the whole class)
● Does it provide a useful abstraction in dealing with the problem domain?

There are three types of aggregation relationships. They are:

Assembly:

It is constructed from its parts and an assembly-part situation physically exists.

Container:

A physical whole encompasses but is not constructed from physical parts.

Collection member:

A conceptual whole encompasses parts that may be physical or conceptual. The container and
collection are represented by hollow diamonds, while composition is represented by a solid diamond.
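As a small illustrative Java example (the class names below are invented for illustration only),
a super-sub relationship maps to inheritance, while an aggregation maps to a field that holds the
component objects:

// Super-sub (inheritance): a Student "is-a" Person.
class Person {
    String name;
}

class Student extends Person {
    String rollNumber;
}

// Aggregation (a-part-of): a ClassRoom "has" Students as parts.
class ClassRoom {
    java.util.List<Student> students = new java.util.ArrayList<>();
}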

2. SEQUENCE DIAGRAMS


A sequence diagram is a graphical view of a scenario that shows object interaction in a time-based
sequence: what happens first, what happens next. Sequence diagrams establish the roles of objects
and help provide essential information to determine class responsibilities and interfaces.

There are two main differences between sequence and collaboration diagrams: sequence diagrams
show time-based object interaction while collaboration diagrams show how objects associate with
each other. A sequence diagram has two dimensions: typically, vertical placement represents time
and horizontal placement represents different objects.

Object:

An object has state, behavior, and identity. The structure and behavior of similar objects are
defined in their common class. Each object in a diagram indicates some instance of a class. An
object that is not named is referred to as a class instance.

The object icon is similar to a class icon except that the name is underlined:

An object's concurrency is defined by the concurrency of its class.

Message:

A message is the communication carried between two objects that triggers an event. A message
carries information from the source focus of control to the destination focus of control. The
synchronization of a message can be modified through the message specification; a synchronous
message is one where the sending object pauses to wait for results.

Link:

A link should exist between two objects, including class utilities, only if there is a relationship
between their corresponding classes. The existence of a relationship between two classes
symbolizes a path of communication between instances of the classes: one object may send
messages to another. The link is depicted as a straight line between objects or objects and class
instances in a collaboration diagram. If an object links to itself, use the loop version of the icon.


5.2 PROCESS MODEL USED WITH JUSTIFICATION

SDLC stands for Software Development Life Cycle. It is a standard used by the software industry to
develop good software.

SDLC (Spiral Model):

Stages of SDLC:

● Requirement gathering and analysis
● Designing
● Coding
● Testing
● Deployment


Requirements Definition Stage and Analysis:

The requirements gathering process takes as its input the goals identified in the high-level
requirements section of the project plan. Each goal will be refined into a set of one or more
requirements. These requirements define the major functions of the intended application, define
operational data areas and reference data areas, and define the initial data entities. Major functions
include critical processes to be managed, as well as mission critical inputs, outputs and reports. A
user class hierarchy is developed and associated with these major functions, data areas, and data
entities. Each of these definitions is termed a Requirement. Requirements are identified by unique
requirement identifiers and, at minimum, contain a requirement title and textual description.


These requirements are fully described in the primary deliverables for this stage: the
Requirements Document and the Requirements Traceability Matrix (RTM). The requirements
document contains complete descriptions of each requirement, including diagrams and references
to external documents as necessary. Note that detailed listings of database tables and fields are not
included in the requirements document. The title of each requirement is also placed into the first
version of the RTM, along with the title of each goal from the project plan. The purpose of the
RTM is to show that the product components developed during each stage of the software
development lifecycle are formally connected to the components developed in prior stages.

In the requirements stage, the RTM consists of a list of high-level requirements, or goals,
by title, with a listing of associated requirements for each goal, listed by requirement title. In this
hierarchical listing, the RTM shows that each requirement developed during this stage is formally
linked to a specific product goal. In this format, each requirement can be traced to a specific
product goal, hence the term requirements traceability. The outputs of the requirements definition
stage include the requirements document, the RTM, and an updated project plan.

Design Stage:

The design stage takes as its initial input the requirements identified in the approved
requirements document. For each requirement, a set of one or more design elements will be
produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe
the desired software features in detail, and generally include functional hierarchy diagrams, screen
layout diagrams, tables of business rules, business process diagrams, pseudo code, and a complete
entity-relationship diagram with a full data dictionary. These design elements are intended to
describe the software in sufficient detail that skilled programmers may develop the software with
minimal additional input.


When the design document is finalized and accepted, the RTM is updated to show that each
design element is formally associated with a specific requirement. The outputs of the design stage
are the design document, an updated RTM, and an updated project plan.

Development Stage:

The development stage takes as its primary input the design elements described in the
approved design document. For each design element, a set of one or more software artifacts will
be produced. Software artifacts include but are not limited to menus, dialogs, data management
forms, data reporting formats, and specialized procedures and functions. Appropriate test cases
will be developed for each set of functionally related software artifacts, and an online help system
will be developed to guide users in their interactions with the software.


The RTM will be updated to show that each developed artifact is linked to a specific design
element, and that each developed artifact has one or more corresponding test case items. At this
point, the RTM is in its final configuration. The outputs of the development stage include a fully
functional set of software that satisfies the requirements and design elements previously
documented, an online help system that describes the operation of the software, an implementation
map that identifies the primary code entry points for all major system functions, a test plan that
describes the test cases to be used to validate the correctness and completeness of the software, an
updated RTM, and an updated project plan.

Integration & Test Stage:

During the integration and test stage, the software artifacts, online help, and test data are
migrated from the development environment to a separate test environment. At this point, all test
cases are run to verify the correctness and completeness of the software. Successful execution of
the test suite confirms a robust and complete migration capability.


During this stage, reference data is finalized for production use and production users are
identified and linked to their appropriate roles. The final reference data (or links to reference data
source files) and production user list are compiled into the Production Initiation Plan.

The outputs of the integration and test stage include an integrated set of software, an online
help system, an implementation map, a production initiation plan that describes reference data and


production users, an acceptance plan which contains the final suite of test cases, and an updated
project plan.

Installation & Acceptance Stage

During the installation and acceptance stage, the software artifacts, online help, and initial
production data are loaded onto the production server. At this point, all test cases are run to verify
the correctness and completeness of the software. Successful execution of the test suite is a
prerequisite to acceptance of the software by the customer.

After customer personnel have verified that the initial production data load is correct and
the test suite has been executed with satisfactory results, the customer formally accepts the delivery
of the software.


The primary outputs of the installation and acceptance stage include a production
application, a completed acceptance test suite, and a memorandum of customer acceptance of the
software. Finally, the PDR enters the last of the actual labor data into the project schedule and
locks the project as a permanent project record. At this point the PDR "locks" the project by
archiving all software items, the implementation map, the source code, and the documentation for
future reference.


5.3 DIAGRAMS

5.3.1 CLASS DIAGRAM


5.3.2 USECASE DIAGRAM


5.3.3 ACTIVITY DIAGRAM


5.3.4 SEQUENCE DIAGRAM


5.3.5 COLLABORATION DIAGRAM


6. Implementation and Result


CHAPTER-6
IMPLEMENTATION AND RESULT

6.1 Coding

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package Attendence;

import Login.LoginPage;

import com.googlecode.javacv.OpenCVFrameGrabber;
import com.googlecode.javacpp.FloatPointer;
import com.googlecode.javacpp.Pointer;
import com.googlecode.javacpp.PointerPointer;
import com.googlecode.javacv.CanvasFrame;
import com.googlecode.javacv.FrameGrabber;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_legacy.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_objdetect.*;

import java.awt.FlowLayout;
import javax.swing.JOptionPane;

/**
 * @author test
 */
public class MarkAttendence {

    //==================================================
    float MaxConfidence = 0.800f;
    //==================================================

    private static final Logger LOGGER = Logger.getLogger(MarkAttendence.class.getName());

    private int nTrainFaces = 0;

    /** the training face image array */
    IplImage[] trainingFaceImgArr;

    /** the test face image array */
    IplImage[] FaceImgArr;

    /** the person number array */
    IplImage[] testFaceImgArr;
    CvMat personNumTruthMat;

    /** the number of persons */
    int nPersons;

    /** the person names */
    final List<String> personNames = new ArrayList<String>();

    /** the number of eigenvalues */
    int nEigens = 0;

    /** eigenvectors */
    IplImage[] eigenVectArr;

    /** eigenvalues */
    CvMat eigenValMat;

    /** the average image */
    IplImage pAvgTrainImg;

    /** the projected training faces */
    CvMat projectedTrainFaceMat;

    CvMat trainPersonNumMat;

    // Cascade file name
    String faceCascadeFilename = "./HarrClassiifator/haarcascade_frontalface_alt2.xml";

    // Default dimensions for faces in the face recognition database. Added by Shervin.
    int faceWidth = 120;
    int faceHeight = 90;

    //===========
    FrameGrabber grabber;
    CvMemStorage storage;

    //====== Additional classes ==========================
    Attendence attendence;

    public MarkAttendence() {
        try {
            storage = cvCreateMemStorage(0);
            attendence = new Attendence();
        } catch (Exception e) {
        }
    }

    public void recognizeFromCam() {
        try {
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            float projectedTestFace[];
            CvHaarClassifierCascade faceCascade;

            FrameGrabber grabber = new OpenCVFrameGrabber(0);
            grabber.setImageWidth(210);
            grabber.setImageHeight(210);
            grabber.start();

            if (loadTrainingData1() == 1) {
                faceWidth = pAvgTrainImg.width();
                faceHeight = pAvgTrainImg.height();
            } else {
                JOptionPane.showMessageDialog(null, "File Not Found");
                return;
                //throw new Exception();
            }

            projectedTestFace = new float[nEigens];

            // New frame
            CanvasFrame input = new CanvasFrame("Mark Attendence frame");
            input.setLayout(new FlowLayout());
            input.setLocation(580, 150);
            input.setAlwaysOnTop(true);
            input.setDefaultCloseOperation(CanvasFrame.DISPOSE_ON_CLOSE);

            faceCascade = new CvHaarClassifierCascade(cvLoad(faceCascadeFilename));

            while (true) {
                int iNearest, nearest = 0;
                IplImage camImg;
                IplImage greyImg;
                IplImage faceImg;
                IplImage sizedImg;
                IplImage equalizedImg;
                IplImage processedFaceImg;
                CvRect faceRect;
                IplImage shownImg;
                float confidence = 0;

                try {
                    camImg = grabber.grab();
                } catch (Exception e) {
                    continue;
                }

                // Make sure the image is greyscale, since Eigenfaces is only done on greyscale images.
                greyImg = convertImageToGreyscale(camImg);

                // Perform face detection on the input image, using the given Haar cascade classifier.
                faceRect = detectFaceInImage(greyImg, faceCascade);
                // faceRect.width();

                // Make sure a valid face was detected.
                if (faceRect.width() > 0) {
                    faceImg = cropImage(greyImg, faceRect);

                    // Make sure the image is the same dimensions as the training images.
                    sizedImg = resizeImage(faceImg, faceWidth, faceHeight);

                    // Give the image a standard brightness and contrast, in case it was too dark or low contrast.
                    equalizedImg = IplImage.create(new CvSize(sizedImg.width(), sizedImg.height()), 8, 1); // Create an empty greyscale image
                    cvEqualizeHist(sizedImg, equalizedImg);
                    processedFaceImg = equalizedImg;

                    if (processedFaceImg.isNull()) {
                        continue;
                    }

                    // If the face rec database has been loaded, then try to recognize the person currently detected.
                    if (nEigens > 0) {
                        // Project the test image onto the PCA subspace.
                        cvEigenDecomposite(
                                processedFaceImg,
                                nEigens,
                                new PointerPointer(eigenVectArr),
                                0, null,
                                pAvgTrainImg,
                                projectedTestFace);

                        // Check which person it is most likely to be.
                        final FloatPointer pConfidence = new FloatPointer(confidence);
                        iNearest = findNearestNeighbor(projectedTestFace, new FloatPointer(pConfidence));
                        confidence = pConfidence.get();
                        nearest = trainPersonNumMat.data_i().get(iNearest);

                        // Show the data on the screen.
                        shownImg = cvCloneImage(camImg);
                        if (faceRect.width() > 0) { // Check if a face was detected.
                            // Show the detected face region.
                            cvRectangle(shownImg, cvPoint(faceRect.x(), faceRect.y()),
                                    cvPoint(faceRect.x() + faceRect.width() - 1, faceRect.y() + faceRect.height() - 1),
                                    CV_RGB(0, 255, 0), 1, 8, 0);

                            // Check if the face recognition database is loaded and a person was recognized.
                            if (nEigens > 0 && confidence >= MaxConfidence) {
                                // Show the name of the recognized person, overlayed on the image below their face.
                                CvFont font = new CvFont();
                                cvInitFont(font, CV_FONT_HERSHEY_PLAIN, 1.0, 1.0, 0, 1, CV_AA);
                                CvScalar textColor = CV_RGB(255, 0, 0); // red text
                                String text = "";
                                String arr[] = personNames.get(nearest - 1).toString().split("&");
                                String name = "" + arr[0];
                                text = name;
                                cvPutText(shownImg, text,
                                        cvPoint(faceRect.x(), faceRect.y() + faceRect.height() + 15),
                                        font, textColor);

                                String pname = getOrgnlName(name);
                                String std = arr[1];
                                boolean val = attendence.MarkAttendence(attendence.getSid(pname, std));
                                if (val == false) {
                                    JOptionPane.showMessageDialog(null, "No Classes At This Time");
                                    input.dispose();
                                    new LoginPage().setVisible(true);
                                    break;
                                }
                                System.out.println("Attandance Markingggggg ...... ............................... ");
                            }
                        }

                        // Display the image.
                        input.showImage(shownImg);
                        // KeyEvent key = input.waitKey(10);
                    }
                }
            }
        } catch (Exception e) {
            System.out.println("Attendence.MarkAttendence.recognizeFromCam()");
        }
    }

    public IplImage resizeImage(IplImage origImg, int newWidth, int newHeight) {
        try {
            IplImage outImage;
            int origWidth = 0;
            int origHeight = 0;

            if (!(origImg == null)) {
                origWidth = origImg.width();
                origHeight = origImg.height();
            }
            if (newWidth <= 0 || newHeight <= 0 || origWidth <= 0 || origHeight <= 0) {
                return origImg;
            }

            // Scale the image to the new dimensions, even if the aspect ratio will be changed.
            outImage = IplImage.create(cvSize(newWidth, newHeight), origImg.depth(), origImg.nChannels());

            if (newWidth > origImg.width() && newHeight > origImg.height()) {
                // Make the image larger.
                cvResetImageROI(origImg);
                cvResize(origImg, outImage, CV_INTER_LINEAR); // CV_INTER_CUBIC or CV_INTER_LINEAR is good for enlarging.
            } else {
                // Make the image smaller.
                cvResetImageROI(origImg);
                cvResize(origImg, outImage, CV_INTER_AREA); // CV_INTER_AREA is good for shrinking / decimation, but bad at enlarging.
            }
            return outImage;
        } catch (Exception e) {
            return null;
        }
    }

    //============================================================================

    public IplImage cropImage(IplImage img, CvRect region) {
        try {
            IplImage imageTmp;
            IplImage imageRGB;
            CvSize size = new CvSize(img.width(), img.height());
            // size = size.height(img.height());
            // size = size.width(img.width());

            if (img.depth() != IPL_DEPTH_8U) {
                return img;
            }

            // First create a new (color or greyscale) IPL Image and copy the contents of img into it.
            imageTmp = IplImage.create(size, IPL_DEPTH_8U, img.nChannels());
            cvCopy(img, imageTmp);

            // Create a new image of the detected region:
            // set the region of interest to the rectangle surrounding the face.
            cvSetImageROI(imageTmp, region);

            // Copy the region of interest (i.e. the face) into a new IplImage (imageRGB) and return it.
            size = size.width(region.width());
            size = size.height(region.height());
            imageRGB = IplImage.create(size, IPL_DEPTH_8U, img.nChannels());
            cvCopy(imageTmp, imageRGB);

            // cvReleaseImage(imageTmp);
            return imageRGB;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    public CvRect detectFaceInImage(IplImage inputImg, CvHaarClassifierCascade cascade) {
        try {
            IplImage detectImg;
            IplImage greyImg = null;
            // CvMemStorage storage = null;
            CvRect rc;
            double t;
            CvSeq rects;
            int i;

            if (storage == null) {
                storage = cvCreateMemStorage(0);
            }
            cvClearMemStorage(storage);

            detectImg = inputImg;
            if (inputImg.nChannels() > 1) {
                greyImg = IplImage.create(cvGetSize(inputImg), IPL_DEPTH_8U, 1);
                cvCvtColor(inputImg, greyImg, CV_BGR2GRAY);
                detectImg = greyImg; // Use the greyscale version as the input.
            }

            // Detect all the faces.
            rects = cvHaarDetectObjects(detectImg, cascade, storage, 1.1, 1, 0);

            if (rects.total() > 0) {
                rc = new CvRect(cvGetSeqElem(rects, 0));
                // System.out.println(rc.width());
            } else {
                rc = new CvRect(-1, -1, -1, -1);
            }

            if (!(greyImg == null)) {
                cvReleaseImage(greyImg);
            }
            // System.out.println(rc.width());
            // cvReleaseMemStorage(storage);
            return rc;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    public IplImage convertImageToGreyscale(IplImage imageSrc) {
        try {
            IplImage imageGrey;
            // Either convert the image to greyscale, or make a copy of the existing greyscale image.
            // This is to make sure that the user can always call cvReleaseImage() on the output,
            // whether it was greyscale or not.
            if (imageSrc.nChannels() == 3) {
                imageGrey = cvCreateImage(cvGetSize(imageSrc), IPL_DEPTH_8U, 1);
                cvCvtColor(imageSrc, imageGrey, CV_BGR2GRAY);
            } else {
                imageGrey = cvCloneImage(imageSrc);
            }
            return imageGrey;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    private int loadTrainingData1() {
        LOGGER.info("loading training data");
        trainPersonNumMat = null; // the person numbers during training
        CvFileStorage fileStorage;
        int i;

        // Create a file-storage interface.
        fileStorage = cvOpenFileStorage(
                "data/facedata.xml", // filename
                null,                // memstorage
                CV_STORAGE_READ,     // flags
                null);               // encoding

        if (fileStorage == null) {
            LOGGER.severe("Can't open training database file 'data/facedata.xml'.");
            return 0;
        }

        // Load the person names.
        personNames.clear(); // Make sure it starts as empty.
        nPersons = cvReadIntByName(
                fileStorage, // fs
                null,        // map
                "nPersons",  // name
                0);          // default_value
        if (nPersons == 0) {
            LOGGER.severe("No people found in the training database 'data/facedata.xml'.");
            return 0;
        } else {
            LOGGER.info(nPersons + " persons read from the training database");
        }

        // Load each person's name.
        for (i = 0; i < nPersons; i++) {
            String sPersonName;
            String varname = "personName_" + (i + 1);
            sPersonName = cvReadStringByName(
                    fileStorage, // fs
                    null,        // map
                    varname,
                    "");
            personNames.add(sPersonName);
        }
        LOGGER.info("person names: " + personNames);

        // Load the data.
        nEigens = cvReadIntByName(
                fileStorage, // fs
                null,        // map
                "nEigens",
                0);          // default_value
        nTrainFaces = cvReadIntByName(
                fileStorage,
                null,        // map
                "nTrainFaces",
                0);          // default_value

        Pointer pointer = cvReadByName(
                fileStorage,         // fs
                null,                // map
                "trainPersonNumMat", // name
                null);
        trainPersonNumMat = new CvMat(pointer);

        pointer = cvReadByName(
                fileStorage,   // fs
                null,          // map
                "eigenValMat", // name
                null);
        eigenValMat = new CvMat(pointer);

        pointer = cvReadByName(
                fileStorage,             // fs
                null,                    // map
                "projectedTrainFaceMat", // name
                null);
        projectedTrainFaceMat = new CvMat(pointer);

        pointer = cvReadByName(
                fileStorage,
                null,          // map
                "avgTrainImg",
                null);
        pAvgTrainImg = new IplImage(pointer);

        eigenVectArr = new IplImage[nTrainFaces];
        for (i = 0; i < nEigens; i++) {
            String varname = "eigenVect_" + i;
            pointer = cvReadByName(
                    fileStorage,
                    null,    // map
                    varname,
                    null);
            eigenVectArr[i] = new IplImage(pointer);
        }

        // Release the file-storage interface.
        cvReleaseFileStorage(fileStorage);

        LOGGER.log(Level.INFO, "Training data loaded ({0} training images of {1} people)",
                new Object[]{nTrainFaces, nPersons});

        final StringBuilder stringBuilder = new StringBuilder();
        stringBuilder.append("People: ");
        if (nPersons > 0) {
            stringBuilder.append("<").append(personNames.get(0)).append(">");
        }
        for (i = 1; i < nPersons; i++) {
            stringBuilder.append(", <").append(personNames.get(i)).append(">");
        }
        LOGGER.info(stringBuilder.toString());

        return 1;
    }

// Nearest Neighbour:
private int findNearestNeighbor(float[] projectedTestFace, FloatPointer pConfidencePointer) {
    double leastDistSq = Double.MAX_VALUE;
    int i = 0;
    int iTrain = 0;
    int iNearest = 0;

    LOGGER.info(" .......... ");
    LOGGER.info("find nearest neighbor from " + nTrainFaces + " training faces");
    for (iTrain = 0; iTrain < nTrainFaces; iTrain++) {
        // LOGGER.info("considering training face " + (iTrain + 1));
        double distSq = 0;
        for (i = 0; i < nEigens; i++) {
            // LOGGER.debug(" projected test face distance from eigenface " + (i + 1)
            //         + " is " + projectedTestFace[i]);
            float projectedTrainFaceDistance = (float) projectedTrainFaceMat.get(iTrain, i);
            float d_i = projectedTestFace[i] - projectedTrainFaceDistance;
            distSq += d_i * d_i;    // / eigenValMat.data_fl().get(i); // Mahalanobis distance
                                    // (might give better results than Euclidean distance)
            // if (iTrain < 5) {
            //     LOGGER.info(" ** projected training face " + (iTrain + 1)
            //             + " distance from eigenface " + (i + 1) + " is " + projectedTrainFaceDistance);
            //     LOGGER.info(" distance between them " + d_i);
            //     LOGGER.info(" distance squared " + distSq);
            // }
        }
        if (distSq < leastDistSq) {
            leastDistSq = distSq;
            iNearest = iTrain;
            LOGGER.info(" training face " + (iTrain + 1)
                    + " is the new best match, least squared distance: " + leastDistSq);
        }
    }

    // Return the confidence level based on the Euclidean distance,
    // so that similar images should give a confidence between 0.5 and 1.0,
    // and very different images should give a confidence between 0.0 and 0.5.
    float pConfidence = (float) (1.0f - Math.sqrt(leastDistSq / (float) (nTrainFaces * nEigens)) / 255.0f);
    pConfidencePointer.put(pConfidence);
    LOGGER.info("training face " + (iNearest + 1) + " is the final best match, confidence " + pConfidence);
    return iNearest;
}
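To illustrate the confidence mapping above with purely assumed numbers (these values are for illustration only and are not measurements from the project): if nTrainFaces = 20, nEigens = 19 and the least squared distance is 1.0e6, the expression evaluates to roughly 1 - sqrt(1000000 / 380) / 255 = 1 - 51.3 / 255, which is about 0.80, i.e. a fairly close match; a much larger least squared distance drives the confidence toward 0.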

public String modifyName(String name) {
    try {
        name = name.trim();
        name = name.replaceAll("[ ]+", "_");
        return name;
    } catch (Exception e) {
        return name;
    }
}

public String getOrgnlName(String name) {
    try {
        name = name.trim();
        name = name.replaceAll("[_]+", " ");
        return name;
    } catch (Exception e) {
        return name;
    }
}

// public static void main(String[] args) {
//     new MarkAttendence().recognizeFromCam();
// }


Database Tables

CREATE TABLE `attendence` (


`sid` varchar(10) default NULL,
`present` varchar(10) default NULL,
`attDate` date default NULL,
`entryTime` varchar(100) default NULL,
`subject` varchar(100) default NULL,
`fromtime` varchar(100) default NULL,
`totime` varchar(100) default NULL
);

/*Data for the table `attendence` */


insert into `attendence`(`sid`,`present`,`attDate`,`entryTime`,`subject`,`fromtime`,`totime`) values
  ('11111','p','2017-03-21','12.45','MPR','12.00','13.00'),
  ('44444','p','2017-03-21','12.45','MPR','12.00','13.00'),
  ('22222','p','2017-03-21','12.45','MPR','12.00','13.00'),
  ('33333','p','2017-03-21','12.45','MPR','12.00','13.00'),
  ('11111','p','2017-03-21','18.52','GAP','18.00','19.00');

/*Table structure for table `student` */


DROP TABLE IF EXISTS `student`;

CREATE TABLE `student` (


`sid` varchar(100) default NULL,
`Student_Name` varchar(100) default NULL,
`Student_Std` varchar(100) default NULL,
`phoneno` varchar(100) default NULL,
`email` varchar(100) default NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

/*Data for the table `student` */


insert into `student`(`sid`,`Student_Name`,`Student_Std`,`phoneno`,`email`) values


('1414','bgfbbfgb','I.T.(B.E)','8478598598','gnghn@gmail.com');

/*Table structure for table `teacher` */

DROP TABLE IF EXISTS `teacher`;

CREATE TABLE `teacher` (


`username` varchar(100) default NULL,
`password` varchar(100) default NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

/*Data for the table `teacher` */

insert into `teacher`(`username`,`password`) values ('a','a');

/*Table structure for table `timetable` */

DROP TABLE IF EXISTS `timetable`;

CREATE TABLE `timetable` (


`Day` varchar(100) default NULL,
`Subject` varchar(100) default NULL,
`Fromtime` varchar(100) default NULL,
`Totime` varchar(100) default NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
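The application records each recognition result as a row in the `attendence` table shown above. The following is a minimal sketch of how such an insert could be done over JDBC; the database name, connection URL and credentials are assumptions made for illustration (the MySQL Connector/J driver is assumed to be on the classpath), not values taken from the project configuration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class AttendanceDao {

    // Hypothetical connection details; the real project may use a different
    // database name, user and password.
    private static final String URL = "jdbc:mysql://localhost:3306/attendance_db";
    private static final String USER = "root";
    private static final String PASSWORD = "root";

    // Inserts one attendance row matching the `attendence` schema shown above.
    public void markPresent(String sid, String subject, String entryTime,
                            String fromTime, String toTime) throws Exception {
        String sql = "insert into attendence"
                + " (sid, present, attDate, entryTime, subject, fromtime, totime)"
                + " values (?, 'p', CURDATE(), ?, ?, ?, ?)";
        try (Connection con = DriverManager.getConnection(URL, USER, PASSWORD);
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, sid);
            ps.setString(2, entryTime);
            ps.setString(3, subject);
            ps.setString(4, fromTime);
            ps.setString(5, toTime);
            ps.executeUpdate();
        }
    }
}

Using a PreparedStatement keeps the student id and subject safely parameterised rather than concatenated into the SQL string.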


6.2 TEST CASES

6.3 SAMPLE SCREENS

Student Registration


Database Structure

Student Data

ADMIN Login


Face Training

Mark Attendance


Attendance Database

Report


Manage Attendance

Retrieve Data


Report Pie Graph

Defaulter Menu


Set Time Table

Time Table for Classes


7. System Testing


CHAPTER-7

SYSTEM TESTING.

7.1. TESTING

Testing and debugging are among the most critical activities in software development: without a program that works correctly, the system can never produce the output it was designed for. Testing is most effective when the users are asked to assist the developers in identifying errors and bugs. Sample data are used for testing, and it is the quality, not the quantity, of this data that matters. Testing aims to ensure that the system works accurately and efficiently before it goes into live operation.

Testing objectives:

The main objective of testing is to uncover a host of errors, systematically and with minimum effort and time. Stated formally, testing is the process of executing a program with the intent of finding errors.

A successful test is one that uncovers an as-yet-undiscovered error.

A good test case is one that has a high probability of finding an error, if one exists.

A test that finds no defects does not prove the software is error-free; it may simply be inadequate to detect the errors that are present.

Testing shows that the software conforms, to a reasonable degree, to the expected quality and reliability standards.


7.2. Levels of Testing:

In order to uncover the errors present in the different phases of development, we have the concept of levels of testing.

The basic levels of testing, each paired with the development artefact it validates:

Client needs - Acceptance testing

Requirements - System testing

Design - Integration testing

Code - Unit testing

Levels of Testing

Code testing:

This examines the logic of the program. For example, the logic for updating the various sample data, along with the sample files and directories, was tested and verified.


Specification Testing:

This tests the program against its specification, which states what the program should do and how it should perform under various conditions. Test cases covering the various situations and combinations of conditions in all the modules were prepared and executed.

Unit testing:

In unit testing, each module is tested individually before it is integrated with the overall system. Unit testing focuses verification effort on the smallest unit of software design, the module, and is therefore also known as module testing. Each module of the system is tested separately, and this testing is carried out during the programming stage itself. At this step each module is checked to confirm that it works satisfactorily with regard to the expected output. Validation checks are also applied to input fields; for example, the input supplied by the user is validated before it is accepted, which makes it easy to locate and debug errors. The unit testing for this project was done during the implementation stage, and the errors found were resolved during development itself. A small example of such a module-level test is given below.
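As a minimal sketch of a module-level test for this project (assuming JUnit 4 is on the classpath, that modifyName() and getOrgnlName() behave as in the listing in the previous chapter, and that they belong to the MarkAttendence class referenced there; if constructing that class has side effects such as opening the camera, the two helpers could instead be made static):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class NameUtilsTest {

    // Spaces in a student's name should be collapsed to underscores
    // before the name is used as a training-data label.
    @Test
    public void modifyNameReplacesSpacesWithUnderscores() {
        MarkAttendence m = new MarkAttendence();
        assertEquals("Md_Adil", m.modifyName("  Md   Adil "));
    }

    // getOrgnlName() should reverse the transformation for display.
    @Test
    public void getOrgnlNameRestoresSpaces() {
        MarkAttendence m = new MarkAttendence();
        assertEquals("Md Adil", m.getOrgnlName("Md_Adil"));
    }
}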

Each Module can be tested using the following two Strategies:

1. Black Box Testing


2. White Box Testing

7.3. BLACK BOX TESTING

What is Black Box Testing?

Black box testing is a software testing technique in which the functionality of the software under test (SUT) is tested without looking at its internal code structure, implementation details, or internal paths. This type of testing is based entirely on the software requirements and specifications.


In black box testing we focus only on the inputs and outputs of the software system, without concerning ourselves with internal knowledge of the software program.

The black box can be any software system you want to test: for example, an operating system like Windows, a website like Google, a database like Oracle, or even your own custom application. Under black box testing, you can test these applications by focusing only on the inputs and outputs, without knowing their internal code implementation.

Black box testing - Steps

Here are the generic steps followed to carry out any type of black box testing; a small illustration applying them to the teacher login follows the list.

● Initially requirements and specifications of the system are examined.


● Tester chooses valid inputs (positive test scenario) to check whether SUT processes them
correctly. Also some invalid inputs (negative test scenario) are chosen to verify that the
SUT is able to detect them.
● Tester determines expected outputs for all those inputs.
● Software tester constructs test cases with the selected inputs.
● The test cases are executed.
● Software tester compares the actual outputs with the expected outputs.
● Defects if any are fixed and re-tested.
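For instance, applying these steps to the teacher login shown in the sample screens, a black-box test only fixes inputs and expected outputs. The Login.authenticate() method used below is a hypothetical stand-in for whatever login routine the application actually exposes (it is not part of the listings in this report); the sample credentials come from the `teacher` row listed in the database dump, and JUnit 4 is assumed:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class LoginBlackBoxTest {

    // Valid input (positive scenario): the sample teacher record ('a', 'a')
    // should be accepted.
    @Test
    public void validCredentialsAreAccepted() {
        assertTrue(Login.authenticate("a", "a"));
    }

    // Invalid input (negative scenario): a wrong password must be rejected.
    @Test
    public void invalidCredentialsAreRejected() {
        assertFalse(Login.authenticate("a", "wrong-password"));
    }
}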


Types of Black Box Testing

There are many types of black box testing, but the following are the prominent ones:

● Functional testing – This black box testing type is related to functional requirements of a
system; it is done by software testers.
● Non-functional testing – This type of black box testing is not related to testing of a specific
functionality, but non-functional requirements such as performance, scalability, usability.
● Regression testing – Regression testing is done after code fixes, upgrades, or any other system maintenance to check that the new code has not affected the existing code.

7.4. WHITE BOX TESTING

White box testing is the testing of a software solution's internal coding and infrastructure. It focuses primarily on strengthening security, verifying the flow of inputs and outputs through the application, and improving design and usability. White box testing is also known as clear, open, structural, and glass box testing.

It is one of the two parts of the "box testing" approach to software testing. Its counterpart, black box testing, involves testing from an external or end-user perspective. White box testing, on the other hand, is based on the inner workings of an application and revolves around internal testing. The term "white box" comes from the see-through box concept: the clear box or white box name symbolizes the ability to see through the software's outer shell (or "box") into its inner workings. Likewise, the "black box" in "black box testing" symbolizes not being able to see the inner workings of the software, so that only the end-user experience can be tested.

What do you verify in White Box Testing?

White box testing involves the testing of the software code for the following:

● Internal security holes


● Broken or poorly structured paths in the coding processes
● The flow of specific inputs through the code
● Expected output


● The functionality of conditional loops


● Testing of each statement, object and function on an individual basis

The testing can be done at the system, integration, and unit levels of software development. One of the basic goals of white box testing is to verify a working flow for an application. It involves testing a series of predefined inputs against expected or desired outputs, so that when a specific input does not produce the expected output, you have encountered a bug.

How do you perform White Box Testing?

To give you a simplified explanation of white box testing, we have divided it into two basic steps.
This is what testers do when testing an application using the white box testing technique:

STEP 1) UNDERSTAND THE SOURCE CODE

The first thing a tester will often do is learn and understand the source code of the application.
Since white box testing involves the testing of the inner workings of an application, the tester must
be very knowledgeable in the programming languages used in the applications they are testing.
Also, the testing person must be highly aware of secure coding practices. Security is often one of
the primary objectives of testing software. The tester should be able to find security issues and
prevent attacks from hackers and naive users who might inject malicious code into the application
either knowingly or unknowingly.

STEP 2) CREATE TEST CASES AND EXECUTE

The second basic step of white box testing involves testing the application's source code for proper flow and structure. One way is to write more code to test the application's source code: the tester develops small tests for each process, or series of processes, in the application. This method requires that the tester have intimate knowledge of the code, and it is often done by the developer. Other methods include manual testing, trial-and-error testing, and the use of testing tools. A small sketch of such a test for this project is given below.
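As a minimal sketch of this step in the context of the present project, a white-box test can deliberately exercise both branches of convertImageToGreyscale(), since the tester knows from the source that 3-channel and 1-channel inputs take different paths. This assumes JUnit 4 and the JavaCV classes used in the implementation are on the classpath, that the method lives in the MarkAttendence class as suggested by the earlier listing, and that the class can be constructed without side effects; the import package shown is the one used by newer JavaCV releases and may differ for older versions.

import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_8U;
import static org.bytedeco.javacpp.opencv_core.cvCreateImage;
import static org.bytedeco.javacpp.opencv_core.cvSize;
import static org.junit.Assert.assertEquals;

import org.bytedeco.javacpp.opencv_core.IplImage;
import org.junit.Test;

public class GreyscaleWhiteBoxTest {

    // Branch 1: a 3-channel (colour) image must be converted to 1 channel.
    @Test
    public void colourImageIsConvertedToSingleChannel() {
        IplImage colour = cvCreateImage(cvSize(64, 64), IPL_DEPTH_8U, 3);
        IplImage grey = new MarkAttendence().convertImageToGreyscale(colour);
        assertEquals(1, grey.nChannels());
    }

    // Branch 2: an image that is already greyscale is simply cloned.
    @Test
    public void greyImageIsReturnedAsSingleChannelCopy() {
        IplImage alreadyGrey = cvCreateImage(cvSize(64, 64), IPL_DEPTH_8U, 1);
        IplImage grey = new MarkAttendence().convertImageToGreyscale(alreadyGrey);
        assertEquals(1, grey.nChannels());
    }
}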

7.5. System testing


Once the individual module testing is completed, the modules are assembled and integrated to perform as a system. Top-down testing, which proceeds from the upper-level to the lower-level modules, was carried out to check whether the entire system performs satisfactorily.

There are three main kinds of System testing:

i. Alpha Testing
ii. Beta Testing
iii. Acceptance Testing

Alpha Testing:

This refers to the system testing that is carried out by the test team within the organization.

Beta Testing:

This refers to the system testing that is performed by a selected group of friendly customers.

Acceptance Testing:

This refers to the system testing that is performed by the customer to determine whether or not to
accept the delivery of the system.

Integration Testing:

Data can be lost across an interface; one module can have an adverse effect on another; and sub-functions, when combined, may not produce the desired major function. Integration testing is the systematic technique for assembling the modules while uncovering errors associated with their interfaces. The testing was done with sample data, and the developed system ran successfully for this sample data. The purpose of integration testing is to verify the overall system behaviour.


Output testing: After the validation testing is performed, the next step is output testing. The output displayed or generated by the system under consideration is tested by asking the users whether it meets the format they require.

7.6. Software testing

Software testing is one of the main stages of the project development life cycle; it gives the end user information about the quality of the application. In our project we went through several stages of testing: unit testing, carried out during the development stage while the application was being implemented; manual testing with different test cases for all the modules once the project was ready; browser compatibility testing across the different web browsers available in the market; and client-side validation testing of the application.


8. Conclusion


CHAPTER-8
8. CONCLUSION

8.1. Conclusion
This system has been proposed for maintaining the attendance record. The main motive behind developing it is to eliminate the drawbacks associated with the manual attendance system; using this method, the older methods can be replaced altogether. An efficient and automatic attendance management approach is introduced in this report. The method requires only simple hardware for installation, the management of attendance becomes simpler, and attendance is recorded more accurately. The one difficult task in this system is face recognition, for which we proposed recognition using KNN with a Ball Tree.

8.2. Future Enhancement


The system we have developed is able to accomplish the task of marking attendance in the classroom automatically, and the output is obtained in an Excel sheet in real time, as desired. However, in order to develop a dedicated system that can be deployed in an educational institution, a more efficient algorithm that is insensitive to the lighting conditions of the classroom has to be developed, and a camera of optimum resolution has to be used. Another important direction is creating an online database of the attendance and updating it automatically, keeping in mind the growing popularity of the Internet of Things. This can be done by building a standalone module, preferably wireless, that can be installed in the classroom with access to the internet. These developments would greatly widen the applications of the project.


9. Bibliography


BIBLIOGRAPHY

[1] Seema Rao and Prof. K. J. Satoa, "An Attendance Monitoring System Using Biometrics Authentication", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 4, April 2013, ISSN: 2277-128X.
[2] O. Shoewu, "Development of Attendance Management System using Biometrics", The Pacific Journal of Science and Technology, Volume 13, No. 1, May 2012.
[3] Nirmalya Kar, Mrinal Kanti Debbarma, Ashim Saha, and Dwijen Rudra Pal, "Study of Implementing Automated Attendance System Using Face Recognition Technique", International Journal of Computer and Communication Engineering, Vol. 1, No. 2, July 2012.
[4] Arulogun O. T., Olatunbosun A., Fakolujo O. A., and Olaniyi O. M., "RFID-Based Students Attendance Management System", International Journal of Scientific & Engineering Research, Volume 4, Issue 2, February 2013, ISSN: 2229-5518.
[5] Unnati A. Patel and Dr. Swaminarayan Priya R., "Development of a Student Attendance Management System Using RFID and Face Recognition: A Review", International Journal of Advance Research in Computer Science and Management Studies, Volume 2, Issue 8, August 2014, Online ISSN: 2321-7782.
[6] R. L. Hsu, M. Abdel-Mottaleb and A. K. Jain, "Face Detection in Color Images", Proceedings of the International Conference on Image Processing, Oct 2001, pp. 1046-1049.
[7] Toufiq P., Ahmed Elgammal and Anurag Mittal (2006), "A Framework for Feature Selection for Background Subtraction", in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
[8] M. H. Yang, N. Ahuja and D. Kriegman, "Face recognition using kernel eigenfaces", IEEE International Conference on Image Processing, Vol. 1, pp. 10-13, Sept. 2000.
[9] Y. Li, Xiang-lin Qi and Yun-jiu Wang, "Eye detection by using fuzzy template matching and feature-parameter-based judgement", Pattern Recognition Letters 22 (2001), pp. 1111-1124.
[10] Anil K. Jain, Arun Ross and Salil Prabhakar, "An introduction to biometric recognition", IEEE Transactions on Circuits and Systems for Video Technology, Volume 14, Issue 1, Jan 2004.
[11] Kamran Etemad and Rama Chellappa, "Discriminant analysis for recognition of human face images", Journal of the Optical Society of America, Vol. 14, No. 8, August 1997.
