Student Attentiveness
Student attentiveness during class hours is a persistent problem for professors and the college:
a lack of attentiveness in class leads to lower marks at examination time. This work therefore
proposes an efficient plan for detecting student attention from facial emotion. CCTV live
streaming cameras are placed in the class corners with efficient video acquisition planning. The
system identifies the face of each student and checks emotion using facial features. For the exact
extraction of facial features such as the lips, eyes, and cheeks, from the perspective of computer
simulation, a framework combining a facial expression recognition (FER) algorithm with online
course platforms is proposed in this work. The cameras in the devices are used to collect students'
face images, and the facial expressions are analyzed and classified into eight kinds of emotion by
the FER algorithm. A voice-based alert for the whole class is generated using an HMM,
encouraging students to be more attentive during class hours.
PROJECT SUMMARY
CHAPTER 1
INTRODUCTION
1.1 ABOUT THE ORGANIZATION
We are a web design company based in India, offering beautiful, SEO-friendly websites with
optional Google AdWords and other Google search engine optimization boosters to help you
gain advantage and exposure in the online business industry. We have pursued our passion for
digital marketing since 2014. Our values are transparency, respect, honest communication, and
mutual understanding. We have dealt with many kinds of customers, and we assure you that we
listen and will put what is agreed into action to the best of our ability.
Technology We Use
We serve businesses that want a good-looking, modern, feature-rich website with the ability to
maintain the site themselves:
• PHP / Frameworks
• Wordpress / CMS
• E-Commerce / WooCommerce
• Android / iOS
• SEO / SMO
• Digital marketing
• Tally
MISSION
Give us a chance to grow your business online. We build SEO-friendly sites, with optional
Google AdWords and other Google search engine optimization boosters, to help you gain
advantage and exposure in the online business industry.
VISION
Our vision is to be the state's most well-known service provider company, focused on delivering
the maximum to our clients. We believe in the simple, not the complicated. We also give equal
attention to innovation.
CHAPTER 2
SYSTEM ANALYSIS
2.1 INTRODUCTION
The proposed system aims to address the challenge of student attentiveness in classrooms
by leveraging facial emotion recognition technology. With the prevalence of distractions
and diminishing attention spans, maintaining students' focus during class hours is crucial
for academic success. To achieve this, the system integrates CCTV live streaming in class
corners with efficient video acquisition planning to capture students' facial expressions. By
identifying each student's face and analyzing their facial features, including lips, eyes, and
cheeks, the system employs a facial expression recognition (FER) algorithm to classify
emotions. This classification encompasses eight distinct emotional states, providing
insights into students' attentiveness levels. Furthermore, integrating this technology
with online course platforms enhances its utility and accessibility within educational
frameworks. Through synchronized data collection and analysis, the system offers real-
time feedback to educators regarding students' engagement and focus. In instances where
low attentiveness is detected, the system triggers voice-based alerts using Hidden Markov
Models (HMM) to prompt the entire class. These alerts serve as proactive interventions to
encourage students to maintain attention and participation during class hours.
The analysis model for this project encompasses various interconnected components
aimed at improving student attentiveness in classroom settings. The primary focus lies on
leveraging facial emotion recognition technology, integrated with CCTV live streaming
and online courses platforms, to detect and address lapses in attention. At the core of the
model is the utilization of CCTV cameras placed strategically in class corners, supported
by efficient video acquisition planning. These cameras capture students' facial
expressions, enabling the system to identify individual faces and analyze their emotional
states. The extraction of specific facial features such as lips, eyes, and cheeks enhances
the accuracy of emotion detection. A key aspect of the model involves the integration of
a facial expression recognition (FER) algorithm with online course platforms. This
integration enables seamless data collection and analysis, allowing educators to monitor
students' attentiveness in real-time. The FER algorithm classifies facial expressions into
eight distinct emotions, providing nuanced insights into students' engagement levels.
An Internet of Things (IoT) based interoperable infrastructure is a convenient way to support
students' interaction and collaboration. Measuring student attention is an essential part of
educational assessment. As new learning styles develop, new tools and assessment
methods are also needed. The focus of this work is to develop an IoT-based interaction
framework and analysis of the student experience of electronic learning (eLearning). The
learning behaviors of students attending remote video lectures are assessed by logging
their behavior and analyzing the resulting multimedia data using machine learning
algorithms. An attention-scoring algorithm, its workflow, and the mathematical
formulation for the smart assessment of the student learning experience are established.
This setup has a data collection module, which can be reproduced by implementing the
algorithm in any modern programming language. Some faces, eyes, and status of eyes are
extracted from video stream taken from a webcam using this module. The extracted
information is saved in a dataset for further analysis. The analysis of the dataset produces
interesting results for student learning assessments. Modern learning management
systems can integrate the developed tool to take student learning behaviors into account
when assessing electronic learning strategies.
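As an illustration of such an attention-scoring step, a simple weighted combination of logged
behaviors might look like the following sketch; the feature names and weights here are invented
for demonstration and are not the published formulation.
# Illustrative attention-scoring sketch: combine per-frame behaviour ratios
# (each the fraction of recent frames showing that behaviour, in 0..1)
# into a single 0..1 attention score. Weights are assumptions.
def attention_score(eyes_open_ratio, yawn_ratio, facing_screen_ratio):
    w_eyes, w_yawn, w_face = 0.5, 0.2, 0.3
    score = (w_eyes * eyes_open_ratio
             - w_yawn * yawn_ratio
             + w_face * facing_screen_ratio)
    return max(0.0, min(1.0, score))

# Example: eyes open in 90% of frames, yawning in 10%, facing camera in 80%.
print(attention_score(0.9, 0.1, 0.8))   # -> 0.67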
2.4.1 Disadvantages
• The existing process uses IoT devices for the detection of student attentiveness, which
increases hardware cost.
2.5.1 Advantages
• Detection of the facial features by the Haar cascade algorithm is very effective.
1. VIDEO STREAMING
The system's main objective is to improve the videos produced by thermal imagers and to
provide the ability to acquire, stream, and analyze them. Video capturing and processing units will
be installed in the FPGA's PL section. These components will be in charge of receiving PAL video
data at 25 frames per second and of some video processing, including 2D-FFT/IFFT and
bilateral/Gaussian filtering. PAL video at 25 frames per second generates frames at a raw data rate
of 20 MB per second; Ethernet and USB are unable to handle this high pace. As a result, the video
data must be compressed before being streamed over Ethernet. JPEG compression is used, and the
video taken from the surveillance camera is acquired in this way.
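As a minimal sketch of this compression step, each captured frame can be JPEG-encoded in
memory with OpenCV before streaming; the camera index and quality setting are assumptions.
import cv2

cap = cv2.VideoCapture(0)                      # camera source (index assumed)
ret, frame = cap.read()
if ret:
    # Encode the raw frame as JPEG in memory before sending it over Ethernet.
    ok, jpeg = cv2.imencode('.jpg', frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
    if ok:
        payload = jpeg.tobytes()               # compressed bytes ready to stream
        print(len(payload), "bytes after JPEG compression")
cap.release()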
2. FRAME PROCESSING.
Frame rate conversion is typically used when producing content for devices that use
different standards (e.g. NTSC vs. PAL) or different content playback scenarios (e.g. film at 24 fps
vs. television at 25 fps or 29.97 fps). Frame processing allows for the combination of multiple
captured frames into one recorded frame. The combination occurs before the resulting frame is
encoded. You can select the following frame processing settings: No Frame Processing, Frame
Summing, Frame Averaging.
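A minimal sketch of frame summing and frame averaging with OpenCV and NumPy follows;
the number of frames combined is an assumed setting.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
frames = []
for _ in range(4):                              # frames to combine (assumed)
    ret, frame = cap.read()
    if ret:
        frames.append(frame.astype(np.float32))
cap.release()

if frames:
    # Frame Averaging: mean of the captured frames, back to 8-bit.
    averaged = np.mean(frames, axis=0).astype(np.uint8)
    # Frame Summing: clipped sum of the captured frames.
    summed = np.clip(np.sum(frames, axis=0), 0, 255).astype(np.uint8)
    cv2.imwrite('combined.jpg', averaged)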
3. FACE DETECTION.
Face extraction is accomplished by the Haar cascade algorithm. Due to the complex
background, it is not a good choice to locate or detect both eyes in the original image, as this takes
much more time searching the whole window and gives poor results. So we first locate the face
and reduce the range in which both eyes are detected. After doing this we can improve the tracking
speed and correct rate and reduce the effect of the complex background. Besides, we propose a
very simple but powerful method to reduce the computing complexity.
4. FEATURE EXTRACTION.
Feature extraction is the process of extracting feature points from the images. From these
feature points the analysis is made exactly for recognition; with an already trained feature dataset,
the face can be identified in the image. Facial features such as eye blinking, lip movement, and
chin and cheek movement are extracted. Haar cascade is an algorithm that can detect objects in
images, irrespective of their scale and location in the image. The algorithm is not very complex
and can run in real time. We can train a Haar cascade detector to detect various objects such as
cars, bikes, buildings, and fruits. Haar cascade detection is one of the oldest yet most powerful
face detection algorithms invented; it has been around since long before deep learning became
famous. Haar features have been used not only to detect faces, but also eyes, lips, and license
number plates.
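A minimal detection sketch using the frontal-face Haar cascade bundled with OpenCV is shown
below; the input image path is a placeholder.
import cv2

# Load the pre-trained frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('classroom.jpg')               # input image (path assumed)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces at multiple scales; tune scaleFactor/minNeighbors as needed.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(60, 60))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('faces.jpg', img)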
5. EMOTION CLASSIFICATION.
The emotion of the student is classified by the facial expression algorithm. Behaviors such as
yawning, sleeping, laughing, and talking can be detected. After an emotion is detected, it can be
announced as a voice alert.
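As an illustrative sketch only, a trained Keras FER model could be applied to a cropped face as
below; the model file, 48x48 input size, and label order are assumptions rather than this project's
actual artifacts.
import numpy as np
from keras.models import load_model

model = load_model('models/fer_model.h5')       # hypothetical trained FER model
EMOTIONS = ['neutral', 'happy', 'sad', 'angry', 'surprised',
            'fearful', 'disgusted', 'contemptuous']   # 8 classes (order assumed)

def classify_emotion(face_gray_48x48):
    # Normalize and shape a grayscale face crop into a batch of one.
    x = face_gray_48x48.astype('float32') / 255.0
    x = x.reshape(1, 48, 48, 1)
    probs = model.predict(x)[0]
    return EMOTIONS[int(np.argmax(probs))]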
6. EMOTION ALERT
The names of the students are trained into the system. The emotion of the student is analyzed,
and then, using NLP processing, the voice alert is composed with the student's name. This process
is done using the NLP and HMM algorithms. The main core of HMM-based speech recognition
systems is the Viterbi algorithm, which uses dynamic programming to find the best alignment
between the input speech and a given speech model.
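To make the alignment idea concrete, here is a minimal Viterbi decoder over a toy HMM; the
transition and emission probabilities are invented for the example.
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    # Dynamic programming over hidden states: prob[t, s] is the probability of
    # the best state sequence ending in state s at time t; back[] records it.
    n_states = trans_p.shape[0]
    T = len(obs)
    prob = np.zeros((T, n_states))
    back = np.zeros((T, n_states), dtype=int)
    prob[0] = start_p * emit_p[:, obs[0]]
    for t in range(1, T):
        for s in range(n_states):
            cand = prob[t - 1] * trans_p[:, s] * emit_p[s, obs[t]]
            back[t, s] = np.argmax(cand)
            prob[t, s] = np.max(cand)
    # Backtrack from the best final state to recover the alignment.
    path = [int(np.argmax(prob[-1]))]
    for t in range(T - 1, 0, -1):
        path.insert(0, int(back[t, path[0]]))
    return path

# Toy example: 2 hidden states, 3 observation symbols.
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2], start, trans, emit))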
CHAPTER 3
SOFTWARE REQUIREMENTS SPECIFICATION
3.1 INTRODUCTION
The Student Attention Detection System is aimed at addressing the challenge of student
attentiveness in classrooms by leveraging facial emotion recognition technology. This SRS
document outlines the functional and non-functional requirements necessary for the development
and implementation of the system.
Functional Requirements:
Face Detection and Feature Extraction: The system shall utilize CCTV live streaming to
detect and track the faces of each student in the classroom. It shall extract facial features including
lips, eyes, and cheeks for precise analysis of emotions.
Facial Expression Recognition (FER) Algorithm: The system shall incorporate a FER
algorithm capable of analyzing facial expressions and classifying them into eight predefined
emotional states. It shall accurately identify emotions such as happiness, sadness, boredom, etc., to
determine students' attentiveness levels.
Integration with Online Course Platforms: The system shall integrate seamlessly with
existing online course platforms to facilitate data synchronization and analysis. It shall allow
educators to monitor students' attentiveness in real-time and provide actionable insights for
instructional improvement.
Voice-Based Alert Generation: The system shall generate voice-based alerts using Hidden
Markov Models (HMM) when low attentiveness is detected. Alerts shall be broadcast to the
entire class, prompting students to refocus and engage actively.
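A minimal sketch of generating such an alert with the pyttsx3 text-to-speech library (which the
implementation chapter also uses) follows; the message wording is an assumption.
import pyttsx3

def voice_alert(student_name, behaviour):
    # Speak the alert aloud so the whole class hears it.
    engine = pyttsx3.init()
    engine.say(f"{student_name}, please pay attention. Detected: {behaviour}.")
    engine.runAndWait()
    engine.stop()

voice_alert("Guru", "yawning")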
Non-Functional Requirements:
Performance: The system shall be capable of processing live video streams in real-time to
ensure timely detection of student attentiveness. It shall exhibit high accuracy in emotion
recognition to minimize false positives and negatives.
Security and Privacy: The system shall adhere to strict security measures to protect
sensitive student data collected during facial recognition and emotion analysis. It shall comply with
relevant privacy regulations and guidelines to safeguard the privacy rights of students.
Usability: The user interface shall be intuitive and user-friendly, enabling educators to
easily access and interpret student attentiveness data. It shall provide customizable settings for alert
thresholds and frequency to accommodate varying classroom dynamics.
Scalability: The system shall be scalable to accommodate varying class sizes and
configurations. It shall support future enhancements and upgrades to meet evolving educational
needs.
System Constraints:
Hardware Requirements: The system shall require CCTV cameras with high-resolution
capabilities and sufficient coverage of classroom areas. It shall necessitate compatible computing
devices with adequate processing power and memory for real-time video analysis.
CHAPTER 4
SOFTWARE DESCRIPTION
PYTHON
Introduction
Python is a widely used high-level programming language first launched in 1991. Since
then, Python has been gaining popularity and is considered one of the most popular and flexible
server-side programming languages.
Unlike most Linux distributions, Windows does not come with the Python programming
language by default. However, you can install Python on your Windows server or local machine in
just a few easy steps.
PREREQUISITES
The installation procedure involves downloading the official Python .exe installer and running it
on your system.
The version you need depends on what you want to do in Python. For example, if you are
working on a project coded in Python version 2.6, you probably need that version. If you are
starting a project from scratch, you have the freedom to choose.
If you are learning to code in Python, we recommend you download both the latest version of
Python 2 and 3. Working with Python 2 enables you to work on older projects or test new
projects for backward compatibility.
1. Open your web browser and navigate to the Downloads for Windows section of the official
Python website.
2. Search for your desired version of Python. At the time of publishing this article, the latest
Python 3 release is version 3.7.3, while the latest Python 2 release is version 2.7.16.
3. Select a link to download either the Windows x86-64 executable installer or Windows
x86 executable installer. The download is approximately 25MB.
1. Run the Python Installer once downloaded. (In this example, we have downloaded Python
3.7.3.)
2. Make sure you select the Install launcher for all users and Add Python 3.7 to
PATH checkboxes. The latter places the interpreter in the execution path. For older versions of
Python that do not support the Add Python to Path checkbox, see Step 6.
3. Select Install Now – the recommended installation options.
For all recent versions of Python, the recommended installation options include Pip and IDLE.
Older versions might not include such additional features.
4. The next dialog will prompt you to select whether to Disable path length limit. Choosing this
option will allow Python to bypass the 260-character MAX_PATH limit. Effectively, it will
enable Python to use long path names.
The Disable path length limit option will not affect any other system settings. Turning it on will
resolve potential name length issues that may arise with Python projects developed in Linux.
Step 4: Verify Python Was Installed On Windows
1. Navigate to the directory in which Python was installed on the system. In our case, it
is C:\Users\Username\AppData\Local\Programs\Python\Python37 since we have installed
the latest version.
2. Double-click python.exe.
3. The output should be similar to what you can see below:
Step 5: Verify Pip Was Installed
If you opted to install an older version of Python, it is possible that it did not come with
Pip preinstalled. Pip is a powerful package management system for Python software packages.
Thus, make sure that you have it installed.
We recommend using Pip for most Python packages, especially when working in virtual
environments.
Step 6: Add Python Path to Environment Variables (Optional)
We recommend you go through this step if your version of the Python installer does not
include the Add Python to PATH checkbox or if you have not selected that option.
Setting up the Python path to system variables alleviates the need for using full paths. It instructs
Windows to look through all the PATH folders for “python” and find the install folder that
contains the python.exe file.
1. Press the Windows key + R to open the Run dialog.
2. Type sysdm.cpl and click OK. This opens the System Properties window.
3. Navigate to the Advanced tab and select Environment Variables.
4. Under System variables, select the Path variable.
5. Click Edit.
6. Select the Variable value field. Add the path to the python.exe file preceded with
a semicolon (;). For example, in the image below, we have added “;C:\Python34.”
By setting this up, you can execute Python scripts like this: python script.py
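For example, a one-line test script (the file name is arbitrary) confirms the setup:
# script.py -- prints a message to confirm Python runs from any directory
print("Python is on the PATH")
Running python script.py from any folder should then print the message above.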
MYSQL
MySQL is a popular choice of database for use in web applications, and is a central
component of the widely used LAMP open source web application software stack—LAMP is an
acronym for "Linux, Apache, MySQL, Perl/PHP/Python." Free-software-open source projects that
require a full-featured database management system often use MySQL.
For commercial use, several paid editions are available, and offer additional functionality.
Applications which use MySQL databases include: TYPO3, Joomla, WordPress, phpBB, MyBB,
Drupal and other software built on the LAMP software stack. MySQL is also used in many high-
profile, large-scale World Wide Web products, including Wikipedia, Google (though not for
searches), Facebook, Twitter, Flickr, Nokia.com, and YouTube.
Interfaces
MySQL is primarily an RDBMS and ships with no GUI tools to administer MySQL
databases or manage data contained within the databases. Users may use the included command
line tools, or use MySQL "front-ends", desktop software and web applications that create and
manage MySQL databases, build database structures, back up data, inspect status, and work with
data records. The official MySQL front-end tool, MySQL Workbench, is actively developed by
Oracle and is freely available for use.
Command line
MySQL ships with some command line tools. Third-parties have also developed tools to
manage a MySQL server; some are listed below. Maatkit, a cross-platform toolkit for MySQL,
PostgreSQL and Memcached developed in Perl, can be used to prove replication is working
correctly, fix corrupted data, automate repetitive tasks, and speed up servers. Maatkit is included
with several GNU/Linux distributions such as CentOS and Debian, and packages are available for
other distributions.
Platforms
MySQL works on many different system platforms, including AIX, BSDi, FreeBSD,
HP-UX, eComStation, i5/OS, IRIX, Linux, Mac OS X, Microsoft Windows, NetBSD, Novell
NetWare, OpenBSD, OpenSolaris, OS/2 Warp, QNX, Solaris, Symbian, SunOS, SCO Open Server,
SCO UnixWare, Sanos and Tru64. A port of MySQL to OpenVMS also exists.
MySQL is written in C and C++. Its SQL parser is written in yacc, and a home-brewed
lexical analyzer. Many programming languages with language-specific APIs include libraries for
accessing MySQL databases. These include MySQL Connector/Net for integration with Microsoft's
Visual Studio (languages such as C# and VB are most commonly used) and the JDBC driver for
Java. In addition, an ODBC interface called MyODBC allows additional programming languages
that support the ODBC interface to communicate with a MySQL database, such as ASP or
ColdFusion. The HTSQL URL-based query method also ships with a MySQL adapter, allowing
direct interaction between a MySQL database and any web client via structured URLs.
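As a sketch, connecting from Python with the mysql-connector-python driver looks like the
following; the host, credentials, database, and column names are placeholders.
import mysql.connector   # pip install mysql-connector-python

# Placeholder connection parameters -- replace with real credentials.
conn = mysql.connector.connect(host='localhost', user='root',
                               password='secret', database='attentive')
cur = conn.cursor()
cur.execute("SELECT id, name FROM staff_details")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()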
Features
As of April 2009, MySQL offered MySQL 5.1 in two different variants: the open source
MySQL Community Server and the commercial Enterprise Server. MySQL 5.5 is offered under the
same licenses. They have a common code base and include the following features:
Multiple storage engines, allowing one to choose the one that is most effective for each table in the
application (in MySQL 5.0, storage engines must be compiled in; in MySQL 5.1, storage engines
can be dynamically loaded at run time): Native storage engines (MyISAM, Falcon, Merge,
Memory (heap), Federated, Archive, CSV, Blackhole, Cluster, EXAMPLE, Maria, and InnoDB,
which was made the default as of 5.5). Partner-developed storage engines (solidDB, NitroEDB,
ScaleDB, TokuDB, Infobright (formerly Brighthouse), Kickfire, XtraDB, IBM DB2). InnoDB
used to be a partner-developed storage engine, but with recent acquisitions, Oracle now owns both
the MySQL core and InnoDB.
CHAPTER 5
SYSTEM DESIGN
5.1 INTRODUCTION
The system architecture for the Student Attention Detection System is designed to
seamlessly integrate various components to effectively monitor and enhance student attentiveness
during class hours. CCTV cameras are strategically positioned in class corners to provide
comprehensive coverage of the classroom environment. These cameras are equipped with high-
resolution capabilities and are connected to a central processing unit for video acquisition and
processing. Live video streams from the CCTV cameras are acquired and processed in real-time
using efficient video acquisition planning techniques. The system employs computer vision
algorithms to detect and track the faces of each student within the classroom. Once the faces are
detected, the system extracts key facial features such as lips, eyes, and cheeks. This precise
extraction ensures accurate analysis of facial expressions and emotions. A sophisticated FER
algorithm is implemented to analyze the extracted facial features and classify them into eight
predefined emotional states. These emotions include happiness, sadness, boredom, etc., providing
valuable insights into students' attentiveness levels. The system seamlessly integrates with existing
online courses platforms, allowing for synchronized data collection and analysis. Educators can
access real-time information regarding students' attentiveness and engagement, facilitating targeted
interventions as needed.
5.2 SYSTEM ARCHITECTURE
A system architecture or systems architecture is the conceptual model that defines the structure,
behavior, and more views of a system. An architecture description is a formal description and
representation of a system, organized in a way that supports reasoning about the structures and
behaviors of the system. System architects or solution architects are the people who know what
components to choose for the specific use case and make the right trade-offs with awareness of the
bottlenecks in the overall system. Usually, solution architects with more years of experience tend
to be good at system architecting, because system design is an open-ended problem with no one
correct solution; mostly it is trial-and-error experimentation done with the right trade-offs.
Experience teaches what components to choose based on the problem at hand, but in order to gain
that experience you need to start somewhere.
The system architecture is mainly based on fuzzy-nature analytics, where the learner's
behavior while learning a course and his or her style of learning are efficiently predicted using
intuitionistic fuzzy logic. The observer system can be designed by modeling various attributes
such as knowledge credibility, learner aggregation, learner objects, and so on.
Fig: System architecture for Student Attentiveness Analysis Using Face Emotion Detection,
comprising five modules: video streaming, frame processing, face detection, emotion
classification, and emotion alert.
The relations within the system are structured through a conceptual ER diagram, which
specifies not only the existing entities but also the standard relations through which the system
exists and the cardinalities that are necessary for the system state to continue. The Entity
Relationship Diagram (ERD) depicts the relationships between the data objects. The ERD is the
notation used to conduct the data modeling activity; the attributes of each data object noted in the
ERD can be described using a data object description. The primary components identified by the
ERD are data objects, relationships, attributes, and various types of indicators. The primary
purpose of the ERD is to represent data objects and their relationships.
A two-dimensional diagram explains how data is processed and transferred in a system. The
graphical depiction identifies each source of data and how it interacts with other data sources to
reach a common output. Individuals seeking to draft a data flow diagram must identify external
inputs and outputs, determine how the inputs and outputs relate to each other, and explain with
graphics how these connections relate and what they result in. This type of diagram helps business
development and design teams visualize how data is processed and identify or improve certain
aspects.
LEVEL 0
DFD Level 0 is also called a Context Diagram. It’s a basic overview of the whole system or
process being analyzed or modeled. It’s designed to be an at-a-glance view, showing the system as
a single high-level process, with its relationship to external entities. It should be easily understood
by a wide audience, including stakeholders, business analysts, data analysts and developers.
Fig: DFD Level 0 (context diagram). The Admin and Student entities interact with the single
high-level process, Student Attentiveness Analysis Using Face Emotion Detection, which reads
from and writes to the Database.
LEVEL 1
DFD Level 1 provides a more detailed breakout of pieces of the Context Level Diagram.
You will highlight the main functions carried out by the system, as you break down the high-level
process of the Context Diagram into its sub-processes. A level 1 data flow diagram (DFD) is more
detailed than a level 0 DFD but not as detailed as a level 2 DFD. It breaks down the main processes
into sub-processes that can then be analyzed and improved on a more intimate level.
Fig: DFD Level 1. The Student entity feeds process 1.0 (Video streaming), followed by 2.0
(Frame processing), 3.0 (Face detection), 4.0 (Feature identify), and 5.0 (Emotion classify),
ending in 6.0 (Alert); intermediate results are stored in and retrieved from the Database.
A UML diagram is a diagram based on the UML (Unified Modeling Language) with the
purpose of visually representing a system along with its main actors, roles, actions, artifacts or
classes, in order to better understand, alter, maintain, or document information about the system.
ACTIVITY DIAGRAM
Fig: Activity diagram for the Admin workflow.
CHAPTER 6
OUTPUT SCREENS
CHAPTER 7
TESTING
Testing is a series of different tests whose primary purpose is to fully exercise the
computer-based system. Although each test has a different purpose, all of the work should verify
that every system element has been properly integrated and performs its allocated function.
Testing is the process of checking whether the developed system works according to the actual
requirements and objectives of the system. The philosophy behind testing is to find errors. A good
test is one that has a high probability of finding an undiscovered error; a successful test is one that
uncovers such an error. Test cases are devised with this purpose in mind. A test case is a set of
data that the system will process as an input.
SYSTEM TESTING
After a system has been verified, it needs to be thoroughly tested to ensure that every
component of the system is performing in accordance with the specific requirements and that it is
operating as it should including when the wrong functions are requested or the wrong data is
introduced.
Testing measures consist of developing a set of test criteria either for the entire system or for
specific hardware, software and communications components. For an important and sensitive
system such as an electronic voting system, a structured system testing program may be established
to ensure that all aspects of the system are thoroughly tested.
• Applying functional tests to determine whether the test criteria have been met
• Applying qualitative assessments to determine whether the test criteria have been
met.
• Conducting tests in “laboratory” conditions and conducting tests in a variety of
“real life” conditions.
• Conducting tests over an extended period of time to ensure systems can perform
consistently.
• Conducting “load tests”, simulating likely conditions as closely as possible while
using or exceeding the amounts of data that can be expected to be handled in an
actual situation.
• Applying “non-operating” tests to ensure that equipment can stand up to expected levels of
physical handling.
• Testing “hard wired” code in hardware (firmware) to ensure its logical correctness and that
appropriate standards are followed.
▪ Testing all programs to ensure its logical correctness and that appropriate design,
development and implementation standards have been followed.
▪ Conducting “load tests”, simulating as closely as possible a variety of “real life” conditions
using or exceeding the amounts of data that could be expected in an actual situation.
• Verifying that integrity of data is maintained throughout its required manipulation.
Fig 7.1 Register
UNIT TESTING
The first test in the development process is the unit test. The source code is normally
divided into modules, which in turn are divided into smaller pieces called units. These units have
specific behavior, and the test done on these units of code is called a unit test. Unit testing depends
upon the language in which the project is developed.
Unit tests ensure that each unique path of the project performs accurately to the documented
specifications and contains clearly defined inputs and expected results. Unit testing also covers
functional and reliability testing in an engineering environment, producing tests for the behavior
of components (nodes and vertices) of a product to ensure their correct behavior prior to system
integration.
INTEGRATION TESTING
Integration testing is the phase in which modules are combined and tested as a group. Modules
are typically code modules, individual applications, or source and destination applications on a
network. Integration testing follows unit testing and precedes system testing. Beta testing occurs
after the product is code complete; betas are often widely distributed, or even distributed to the
public at large, in the hope that users will buy the final product when it is released.
VALIDATION TESTING
Validation testing is testing in which the tester performs both functional and non-functional
testing. Here functional testing includes Unit Testing (UT), Integration Testing (IT) and System
Testing (ST), and non-functional testing includes User Acceptance Testing (UAT). Validation
testing is also known as dynamic testing, where we ensure that "we have developed the product
right." It also checks that the software meets the business needs of the client. It is a process of
checking the software during or at the end of the development cycle to decide whether the
software follows the specified business requirements, and it validates whether the user accepts the
product.
LOGIN
import tkinter
from tkinter import Button, Canvas, Entry, NW
from PIL import Image, ImageTk
import ar_master

mm = ar_master.master_flask_code()
window = tkinter.Tk()
window.geometry("700x600")
window.title("student_attentive")

image_0 = Image.open('static/class.jpg')
bck_end = ImageTk.PhotoImage(image_0)

def login():
    text1 = entry1.get()
    text2 = entry2.get()
    # laa: credential-check result from the project's helper library; the
    # exact helper method was not shown in the original listing (assumed here).
    laa = mm.check_login(text1, text2)
    if laa:
        window.destroy()
        import staff_home

def back():
    window.destroy()
    import main

canvas = Canvas(window, width=700, height=600)
canvas.pack()
canvas.create_image(-10, -3, anchor=NW, image=bck_end)
entry1 = Entry(window, width=25)                # username field
entry2 = Entry(window, width=25, show='*')      # password field
entry1.place(x=300, y=210)
entry2.place(x=300, y=285)
txt = Button(window, text="Login", command=login)
txt.place(x=160, y=450)
txt = Button(window, text="Back", command=back)
txt.place(x=380, y=450)
window.mainloop()
MAIN
from tkinter import *
import tkinter
from PIL import Image, ImageTk

window = tkinter.Tk()
window.geometry("700x600")
window.title("student_attentive")

text = "Student Attentive"
image_0 = Image.open('static/class.jpg')
bck_end = ImageTk.PhotoImage(image_0)

def login():
    window.destroy()
    import log

def register():
    window.destroy()
    import register

canvas = Canvas(window, width=700, height=600)
canvas.pack()
canvas.create_image(-10, -3, anchor=NW, image=bck_end)
txt = Button(window, width=10, height=0, text="Login", fg="white", bg="#334CAF",
             font=('times', 15, ' bold '), command=login)
txt.place(x=150, y=450)
txt = Button(window, width=10, height=0, text="Register", fg="white", bg="#334CAF",
             font=('times', 15, ' bold '), command=register)
txt.place(x=400, y=450)
window.mainloop()
MAR
from scipy.spatial import distance as dist

def mouth_aspect_ratio(mouth):
    # Vertical mouth-opening distances between upper and lower inner-lip
    # landmarks, and the horizontal mouth width (dlib 68-point indexing).
    A = dist.euclidean(mouth[2], mouth[10])   # landmarks 51, 59
    B = dist.euclidean(mouth[4], mouth[8])    # landmarks 53, 57
    C = dist.euclidean(mouth[0], mouth[6])    # landmarks 49, 55
    mar = (A + B) / (2.0 * C)
    return mar
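For completeness, the companion eye aspect ratio (imported as the EAR module in the video
script later in this report) is conventionally computed from the six eye landmarks; a minimal
sketch:
from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks of one eye in dlib's 68-point ordering.
    A = dist.euclidean(eye[1], eye[5])    # first vertical distance
    B = dist.euclidean(eye[2], eye[4])    # second vertical distance
    C = dist.euclidean(eye[0], eye[3])    # horizontal distance
    return (A + B) / (2.0 * C)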
MODEL
import os
import numpy as np
import random, shutil
from keras.preprocessing import image
from keras.models import Sequential
from keras.layers import Dropout, Conv2D, Flatten, Dense, MaxPooling2D

def generator(dir, gen=image.ImageDataGenerator(rescale=1./255), shuffle=True,
              batch_size=1, target_size=(24, 24), class_mode='categorical'):
    # Stream grayscale eye images from a directory tree, one class per folder.
    return gen.flow_from_directory(dir, batch_size=batch_size, shuffle=shuffle,
                                   color_mode='grayscale', class_mode=class_mode,
                                   target_size=target_size)

BS = 32
TS = (24, 24)
train_batch = generator('data/train', shuffle=True, batch_size=BS, target_size=TS)
valid_batch = generator('data/valid', shuffle=True, batch_size=BS, target_size=TS)
SPE = len(train_batch.classes) // BS     # steps per epoch
VS = len(valid_batch.classes) // BS      # validation steps
print(SPE, VS)
# img,labels = next(train_batch)
# print(img.shape)

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(24, 24, 1)),
    MaxPooling2D(pool_size=(1, 1)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(1, 1)),
    # again
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(1, 1)),
    Dropout(0.25),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    # output a softmax to squash the matrix into output probabilities
    Dense(2, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batch, validation_data=valid_batch, epochs=15,
                    steps_per_epoch=SPE, validation_steps=VS)
model.save('models/cnnCat2.h5', overwrite=True)
FACE DETECT
import smtplib
import imagehash
import cv2
import os
from PIL import Image
from email.mime.multipart import MIMEMultipart
from email.mime.image import MIMEImage

def image_matching(a, b):
    # Pixel-level comparison of two images; the original body was elided and
    # its result is overridden below by the perceptual-hash distance anyway.
    i1 = Image.open(a)
    i2 = Image.open(b)
    if len(i1.getbands()) == 1:
        i2 = i2.convert('L')              # match single-band (grayscale) mode
    else:
        i2 = i2.convert(i1.mode)
    xx = imagehash.average_hash(i1) - imagehash.average_hash(i2)
    return xx

def match_templates(in_image):
    # Compare the captured face against every trained student folder and
    # return the best-matching name, or "unknown" if no match is close enough.
    name = []
    values = []
    entries = os.listdir('train/')
    for x in entries:
        val = 100
        name.append(x)
        x1 = "train/" + x
        arr = os.listdir(x1)
        for x2 in arr:
            path = x1 + "/" + str(x2)
            hash0 = imagehash.average_hash(Image.open(path))
            hash1 = imagehash.average_hash(Image.open(in_image))
            cc1 = hash0 - hash1           # perceptual-hash distance
            print(cc1)
            find = cc1
            if find < val:
                val = find
        values.append(val)
    pos = 0
    pos_val = 100
    for x in range(len(values)):
        if values[x] < pos_val:
            pos = x
            pos_val = values[x]
    if pos_val < 20:                      # distance threshold for a match
        print(pos, pos_val, name[pos])
        return name[pos]
    else:
        return "unknown"

cascPath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
video_capture = cv2.VideoCapture(0)
name = "testing"
if not os.path.exists(name):
    os.mkdir(name)
e_mail = 0
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    if frame is None:
        continue
    faces = faceCascade.detectMultiScale(frame, 1.1, 3, minSize=(100, 100))
    for (x, y, w, h) in faces:
        # Crop a square region centred on the detected face.
        r = max(w, h) / 2
        centerx = x + w / 2
        centery = y + h / 2
        nx = int(centerx - r)
        ny = int(centery - r)
        nr = int(r * 2)
        faceimg = frame[ny:ny + nr, nx:nx + nr]
        str1 = name + '\\tt.jpg'
        lastimg = cv2.resize(faceimg, (100, 100))
        cv2.imwrite(str1, lastimg)
        ar = match_templates(str1)
        print(ar)
        if ar == "unknown":
            e_mail = e_mail + 1           # count consecutive unknown sightings
        else:
            e_mail = 0
        if e_mail >= 30:
            # Mail the unrecognized face. In the full project the recipient is
            # looked up from student records; a placeholder address is used here.
            msg = MIMEMultipart()
            to_mail = "student@example.com"
            password = "egjuabqhwvktwdqf"
            msg['From'] = "serverkey2018@gmail.com"
            msg['To'] = to_mail
            fp = open(str1, 'rb')
            msg.attach(MIMEImage(fp.read()))
            fp.close()
            server = smtplib.SMTP('smtp.gmail.com', 587)
            server.starttls()
            server.login(msg['From'], password)
            server.sendmail(msg['From'], to_mail, msg.as_string())
            server.quit()
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()
STUDENT REGISTER
import tkinter
from tkinter import Button, Canvas, Entry, NW
from PIL import Image, ImageTk
import ar_master

mm = ar_master.master_flask_code()
window = tkinter.Tk()
window.geometry("700x600")
window.title("student_attentive")

image_0 = Image.open('static/class.jpg')
bck_end = ImageTk.PhotoImage(image_0)

def register():
    enry1 = entry1.get()
    enry2 = entry2.get()
    enry3 = entry3.get()
    maxin = mm.find_max_id("staff_details")
    # Build the INSERT statement (string concatenation kept from the original
    # listing, whose tail was truncated; parameterized queries would be safer).
    qry = ("insert into staff_details values('" + str(maxin) + "','" + str(enry1) +
           "','" + str(enry2) + "','" + str(enry3) + "')")
    result = mm.insert_query(qry)
    print(qry)
    window.destroy()
    import main

canvas = Canvas(window, width=700, height=600)
canvas.pack()
canvas.create_image(-10, -3, anchor=NW, image=bck_end)
entry1 = Entry(window, width=25)
entry2 = Entry(window, width=25)
entry3 = Entry(window, width=25)
entry1.place(x=320, y=165)
entry2.place(x=320, y=240)
entry3.place(x=320, y=315)
txt = Button(window, width=10, height=0, text="Register", fg="white", bg="#334CAF",
             font=('times', 15, ' bold '), command=register)
txt.place(x=270, y=400)
window.mainloop()
TEST
import imutils
import time
import dlib
import cv2
import numpy as np
from imutils import face_utils
from EAR import eye_aspect_ratio
from MAR import mouth_aspect_ratio   # MAR listing above (module name assumed)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
vs = cv2.VideoCapture(0)
time.sleep(2.0)

frame_width = 1024
frame_height = 576

# 2D landmark positions used for head-pose estimation; filled in per frame.
image_points = np.zeros((6, 2), dtype="double")

EYE_AR_THRESH = 0.25
MOUTH_AR_THRESH = 0.70
EYE_AR_CONSEC_FRAMES = 3
COUNTER = 0
# Landmark index ranges for the left eye, right eye, and mouth.
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
(mStart, mEnd) = (49, 68)

while True:
    ret, frame = vs.read()
    if frame is None:
        break
    frame = imutils.resize(frame, width=frame_width)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    size = gray.shape
    rects = detector(gray, 0)
    if len(rects) > 0:
        shape = predictor(gray, rects[0])
        shape = face_utils.shape_to_np(shape)
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        ear = (leftEAR + rightEAR) / 2.0
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        if ear < EYE_AR_THRESH:           # eyes closed in this frame
            COUNTER += 1
        else:
            COUNTER = 0
        mouth = shape[mStart:mEnd]
        mouthMAR = mouth_aspect_ratio(mouth)
        mar = mouthMAR
        mouthHull = cv2.convexHull(mouth)
        print(mar)
        # Collect the six reference landmarks for head-pose estimation:
        # 33 nose tip, 8 chin, 36/45 outer eye corners, 48/54 mouth corners.
        for (i, (px, py)) in enumerate(shape):
            if i == 33:
                image_points[0] = np.array([px, py], dtype='double')
            elif i == 8:
                image_points[1] = np.array([px, py], dtype='double')
            elif i == 36:
                image_points[2] = np.array([px, py], dtype='double')
            elif i == 45:
                image_points[3] = np.array([px, py], dtype='double')
            elif i == 48:
                image_points[4] = np.array([px, py], dtype='double')
            elif i == 54:
                image_points[5] = np.array([px, py], dtype='double')
        for p in image_points:
            cv2.circle(frame, (int(p[0]), int(p[1])), 3, (0, 0, 255), -1)
        # (The head-tilt angle overlay from the original project is elided here.)
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
cv2.destroyAllWindows()
vs.release()
IMPORT VIDEO
import os
import time
import math
import imutils
import dlib
import cv2
import numpy as np
import imagehash
import pyttsx3
from PIL import Image
from datetime import datetime
from imutils import face_utils
from keras.models import load_model
from EAR import eye_aspect_ratio
from MAR import mouth_aspect_ratio   # MAR listing above (module name assumed)

engine = pyttsx3.init()

name = "attentive"
if not os.path.exists(name):
    os.mkdir(name)
now = datetime.now()
dt_string = now.strftime("%Y_%m_%d")    # date stamp for the daily log file
student_list = []

def SpeakText(command):
    # Announce an alert through text-to-speech with the student's name.
    engine = pyttsx3.init()
    print(command)
    engine.say(command)
    engine.runAndWait()
    engine.stop()

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

EYE_AR_THRESH = 0.25        # eye aspect ratio below this counts as closed
MOUTH_AR_THRESH = 0.70      # mouth aspect ratio above this counts as a yawn
EYE_AR_CONSEC_FRAMES = 3
COUNTER = 0
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
(mStart, mEnd) = (49, 68)

def match_templates(in_image):
    # Compare the captured face against each trained student folder using
    # perceptual hashing; return the closest name, or "unknown".
    name = []
    values = []
    entries = os.listdir('train/')
    for x in entries:
        val = 100
        name.append(x)
        x1 = "train/" + x
        for x2 in os.listdir(x1):
            path = x1 + "/" + str(x2)
            hash0 = imagehash.average_hash(Image.open(path))
            hash1 = imagehash.average_hash(Image.open(in_image))
            find = hash0 - hash1
            if find < val:
                val = find
        values.append(val)
    pos = 0
    pos_val = 100
    for x in range(len(values)):
        if values[x] < pos_val:
            pos = x
            pos_val = values[x]
    if pos_val < 16:                 # stricter threshold than the mailer script
        print(pos, pos_val, name[pos])
        return name[pos]
    else:
        return "unknown"

yawning = 0
sleeping = 0
model = load_model('models/cnnCat2.h5')   # CNN eye-state model (loaded for optional use)
cascPath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
video_capture = cv2.VideoCapture(0)
time.sleep(2.0)
name = "testing"
if not os.path.exists(name):
    os.mkdir(name)

while True:
    ret, frame = video_capture.read()
    if frame is None:
        continue
    faces = faceCascade.detectMultiScale(frame, 1.1, 3, minSize=(100, 100))
    ar = "unknown"
    for (x, y, w, h) in faces:
        # Crop a square region centred on the detected face and identify it.
        r = max(w, h) / 2
        nx = int(x + w / 2 - r)
        ny = int(y + h / 2 - r)
        nr = int(r * 2)
        faceimg = frame[ny:ny + nr, nx:nx + nr]
        str1 = name + '\\tt.jpg'
        cv2.imwrite(str1, cv2.resize(faceimg, (100, 100)))
        ar = match_templates(str1)
    if ar != "unknown":
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        rects = detector(gray, 0)
        if len(rects) > 0:
            shape = predictor(gray, rects[0])
            shape = face_utils.shape_to_np(shape)
            leftEAR = eye_aspect_ratio(shape[lStart:lEnd])
            rightEAR = eye_aspect_ratio(shape[rStart:rEnd])
            ear = (leftEAR + rightEAR) / 2.0
            if ear < EYE_AR_THRESH:       # eyes closed this frame
                COUNTER += 1
                sleeping += 1
            else:
                COUNTER = 0
                sleeping = 0
            mar = mouth_aspect_ratio(shape[mStart:mEnd])
            if mar > MOUTH_AR_THRESH:     # mouth wide open: possible yawn
                yawning += 1
            else:
                yawning = 0
            if yawning >= 3:
                SpeakText(ar + " is Yawning")
                student_list.append("" + ar + " # " + "Yawning")
                yawning = 0
            if sleeping >= 3:
                SpeakText(ar + " is Sleeping")
                student_list.append("" + ar + " # " + "Sleeping")
                sleeping = 0
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        print(student_list)
        result = ""
        for x in student_list:
            result += x + "\n"
        file = "attentive/" + dt_string + ".txt"   # daily log path (format assumed)
        print(file)
        f = open(file, 'w')
        f.writelines(result)
        f.close()
        break
video_capture.release()
cv2.destroyAllWindows()
CHAPTER 8
SYSTEM SECURITY
8.1 INTRODUCTION
Ensuring the security and privacy of sensitive student data is paramount in the implementation of
the Student Attention Detection System. All data transmitted and stored within the system,
including live video streams and facial recognition data, shall be encrypted using industry-standard
encryption algorithms. This ensures that sensitive information remains confidential and protected
from unauthorized access. Access to the system's functionalities and data shall be restricted based
on role-based access control (RBAC) mechanisms. Only authorized users, such as educators and
administrators, shall have access to specific features based on their roles and permissions.
Communication between system components, including CCTV cameras, central processing units,
and online courses platforms, shall be conducted over secure channels using protocols such as
HTTPS and SSH. This prevents eavesdropping and tampering of data during transmission. Users
accessing the system shall be required to authenticate themselves using strong authentication
methods such as username/password combinations or multi-factor authentication (MFA).
Additionally, authorization mechanisms shall be in place to ensure that users can only access data
and functionalities relevant to their roles. The system shall implement logging and auditing
mechanisms to track user activities and system events. This allows administrators to monitor for
suspicious behavior and detect potential security breaches in real-time. The system shall adhere to
relevant privacy regulations and guidelines, such as GDPR and CCPA, to protect the privacy rights
of students. This includes obtaining explicit consent from students for the collection and processing
of their facial data and ensuring that data handling practices are transparent and compliant.
Security is paramount in the development and deployment of the Student Attention Detection
System to safeguard sensitive student data and ensure the integrity and reliability of system
operations. The following security measures are proposed:
Encryption: Employ encryption techniques to protect data both at rest and in transit. Data collected
from CCTV live streaming and facial recognition processes should be encrypted to prevent
unauthorized access. Secure communication protocols such as HTTPS should be used to encrypt
data transmitted between system components, including CCTV cameras, processing units, and
online courses platforms.
Secure Data Storage: Ensure that collected student face images and emotion analysis data are stored
securely. Data should be stored in encrypted databases or storage systems with access controls in
place to restrict access to authorized personnel only. Regular backups should be performed to
prevent data loss and ensure data availability in the event of a security incident.
Regular Security Audits and Monitoring: Conduct regular security audits and vulnerability
assessments to identify and address potential security vulnerabilities. Implement intrusion detection
and prevention systems (IDPS) to monitor system activities and detect any suspicious behavior or
unauthorized access attempts. Security logs should be generated and monitored to track user
activities and system events for timely response to security incidents.
Compliance with Privacy Regulations: Ensure compliance with relevant privacy regulations such as
GDPR, CCPA, and FERPA to protect student privacy rights. Obtain explicit consent from students
for the collection and processing of their facial data, and clearly communicate data handling
practices to maintain transparency and trust.
Secure Software Development Practices: Follow secure coding standards and conduct regular
code reviews throughout development so that vulnerabilities are not introduced into the system.
CHAPTER 9
CONCLUSION
It is clear that students’ attention does vary during lectures, but the literature does not
support the perpetuation of the 10- to 15-min attention estimate. Perhaps the only valid use of this
parameter is as a rhetorical device to encourage teachers to develop ways to maintain student
interest in the classroom. The first point in responding to this question is to emphasize that the
results are consistent with the existing view that attention is not highest near the start of a lesson
(first ten minutes) and that there is not necessarily a drop in attention that takes place throughout a
class; rather, a wave-like pattern is observed. This consistency with other studies also lends
credibility to the results showing that attention is not particularly low near the end of a lesson (from
the final fifteen minutes to the final five minutes), though there is notably increased tuning-out
and low attention after the final five minutes. Thus, the observations from this case study suggest
that language classes, or studying in a foreign language, follow the same patterns as classes in a
first language, at least for level B2 and higher. In terms of interaction type, student-centered
segments appear to hold attention best. If educators continue to promote such a parameter as an
empirically based estimate, they need to support it with more controlled research. Beyond that,
teachers must do as much as possible to
increase students’ motivation to “pay attention” as well as try to understand what students are really
thinking about during class. Thus, in our system the attentiveness of the student is evaluated
through normal webcam access using facial feature evaluation. Also, in the proposed system the
Haar cascade algorithm is used. Security and authentication are an imperative part of any industry;
in real time, human face recognition can be performed in two stages, face detection and face
recognition.
CHAPTER 10
FUTURE ENHANCEMENT
BOOK REFERENCES
[1] White Belt Mastery, “SQL For Beginners: SQL Guide to Understand How to Work with a
Database”, 2nd edition, 2020.
[3] Mark Lutz, “Python Pocket Reference: Python in Your Pocket”, 5th edition, 2014.
WEB REFERENCES
1. https://www.researchgate.net/publication/347555599_Students'_attention_in_class_Patterns
_perceptions_of_cause_and_a_tool_for_measuring_classroom_quality_of_life
2. https://www.irjweb.com/Face%20Recognition%20Using%20Machine%20Learning.pdf