STUDENT ATTENTIVENESS ANALYSIS USING FACE EMOTION DETECTION

ABSTRACT

Student attentiveness during class hours is a persistent problem for professors and the college. Lack of attention in class lowers students' marks at examination time. This work therefore plans an efficient system for detecting student attention from facial emotion. CCTV live streaming cameras are placed in the class corners with efficient video acquisition planning. The system identifies each student's face and checks their emotion from facial features. For precise extraction of facial features such as the lips, eyes, and cheeks, and from the perspective of computer simulation, a framework combining a facial expression recognition (FER) algorithm with online course platforms is proposed in this work. The cameras collect students' face images, and the facial expressions are analyzed and classified into eight kinds of emotions by the FER algorithm. A voice-based alert using an HMM is generated for the whole class, encouraging students to be more attentive during class hours.
PROJECT SUMMARY

Student inattentiveness during class hours is a problem for professors and the college. Inattention in class lowers students' marks at examination time. This work therefore plans an efficient scheme for detecting attention using students' facial emotion. CCTV live streaming is placed at the class corners with efficient video acquisition planning. The system identifies each student's face and checks their emotions using their facial features. From the perspective of computer simulation, in order to precisely extract facial features such as the lips, eyes, and cheeks, a framework combining a facial expression recognition (FER) algorithm with online course platforms is proposed in this work. The cameras in the devices are used to collect students' face images, and the facial expressions are analyzed by the FER algorithm and classified into eight kinds of emotions. A voice-based alert using an HMM is generated for the whole class, whereby the system encourages students to be more attentive during class hours.
CHAPTER 1

INTRODUCTION
1.1 ABOUT THE ORGANIZATION

We are a web design company based in India offering beautiful, SEO-friendly websites, with optional Google AdWords and other Google search engine optimization boosters to help you gain an advantage and exposure in the online business industry. We have pursued our passion for digital marketing since 2014. Our values are transparency, respect, honest communication and mutual understanding. We have dealt with many kinds of customers, and we assure you that we listen and will put the agreed plan into action to the best of our ability.

1.2 OUR SERVICES

Technology We Use

For businesses that want a good-looking, feature-rich, modern website with the ability to maintain the site themselves, we work with:

• PHP / Frameworks
• WordPress / CMS
• E-Commerce / WooCommerce
• Android / iOS
• SEO / SMO
• Digital marketing
• Tally

MISSION

Give us a chance to grow your business online. We build SEO-friendly websites, with optional Google AdWords and other Google search engine optimization boosters, to help you gain an advantage and exposure in the online business industry.
VISION

Our vision is to be the state's most well-known service-provider enterprise, focused on delivering the maximum to our clients. We believe in the simple, not the complicated. We also give equal attention to innovation.

1.3 OVERVIEW OF THE PROJECT


Students' attention during class draws the thick lines of the learning trajectory. A good portion of the lesson's content may be perceived as relatively easy when alertness is active, while the rest requires extended time to analyze the given material outside the classroom as homework. Active attention equips students with the necessary data and richer information about the addressed lesson during a session; the key concern remains how to keep students' attention at its maximum in the classroom. The paper elaborates on various facts and observations, which may contradict each other in numerical terms. For instance, a great number of researchers support the idea that a student's attention lasts continuously for ten minutes and then switches off, after which the mind wanders outside the frame of the session. Another group of researchers holds that attention lasts for a short period of time, rests for a few minutes, and then becomes active again, a process that recurs periodically throughout the session. Others hold different views, which are also aligned with strong evidence. Above all, the paper concerns how to keep students' attentiveness wide awake during a learning discussion by describing the main idea in different versions and keeping the learning trajectory high so that it reaches its peak. Students of the 21st century are moving to digital education, focusing on teacher-student relations to achieve the goal of meaningful, high-quality and dynamic education. The advent of digitization in education has brought drastic changes to the education system. However, there are still challenges that teachers/instructors face. One of those challenges is examining how well the students/learners are receiving the content delivered in the lecture.
CHAPTER 2

SYSTEM ANALYSIS

2.1 INTRODUCTION
The proposed system aims to address the challenge of student attentiveness in classrooms
by leveraging facial emotion recognition technology. With the prevalence of distractions
and diminishing attention spans, maintaining students' focus during class hours is crucial
for academic success. To achieve this, the system integrates CCTV live streaming in class
corners with efficient video acquisition planning to capture students' facial expressions. By
identifying each student's face and analyzing their facial features, including lips, eyes, and
cheeks, the system employs a facial expression recognition (FER) algorithm to classify
emotions. This classification encompasses eight distinct emotional states, providing
insights into students' attentiveness levels. Furthermore, the integration of this technology
with online course platforms enhances its utility and accessibility within educational
frameworks. Through synchronized data collection and analysis, the system offers real-
time feedback to educators regarding students' engagement and focus. In instances where
low attentiveness is detected, the system triggers voice-based alerts using Hidden Markov
Models (HMM) to prompt the entire class. These alerts serve as proactive interventions to
encourage students to maintain attention and participation during class hours.

2.2 ANALYSIS MODEL

The analysis model for this project encompasses various interconnected components
aimed at improving student attentiveness in classroom settings. The primary focus lies on
leveraging facial emotion recognition technology, integrated with CCTV live streaming
and online courses platforms, to detect and address lapses in attention. At the core of the
model is the utilization of CCTV cameras placed strategically in class corners, supported
by efficient video acquisition planning. These cameras capture students' facial
expressions, enabling the system to identify individual faces and analyze their emotional
states. The extraction of specific facial features such as lips, eyes, and cheeks enhances
the accuracy of emotion detection. A key aspect of the model involves the integration of
a facial expression recognition (FER) algorithm with online course platforms. This
integration enables seamless data collection and analysis, allowing educators to monitor
students' attentiveness in real-time. The FER algorithm classifies facial expressions into
eight distinct emotions, providing nuanced insights into students' engagement levels.

2.3 SYSTEM REQUIREMENTS

HARDWARE AND SOFTWARE SPECIFICATIONS

2.3.1 HARDWARE REQUIREMENTS

• Processor : Intel Core i5 or AMD Ryzen 5 or equivalent


• RAM : 8 GB
• Hard disk : 500 GB
• GPU : Dedicated graphics card with 2 GB of VRAM
• Camera : CCTV camera for capturing the face features of the student

2.3.2 SOFTWARE REQUIREMENTS

• Front End : PYTHON 3.12


• IDE : PyCharm (JetBrains) 2024.1.1
• Platform : Windows 10
• Back End : MYSQL 8.4.0

2.4 EXISTING SYSTEM

An Internet of Things (IoT) based interoperable infrastructure offers a convenient way to support students' interaction and collaboration. Measuring student attention is an essential part of
educational assessment. As new learning styles develop, new tools and assessment
methods are also needed. The focus of this paper is to develop IoT-based interaction
framework and analysis of the student experience of electronic learning (eLearning). The
learning behaviors of students attending remote video lectures are assessed by logging
their behavior and analyzing the resulting multimedia data using machine learning
algorithms. An attention-scoring algorithm, its workflow, and the mathematical
formulation for the smart assessment of the student learning experience are established.
This setup has a data collection module, which can be reproduced by implementing the
algorithm in any modern programming language. Faces, eyes, and the status of the eyes are extracted from the video stream taken from a webcam using this module. The extracted
information is saved in a dataset for further analysis. The analysis of the dataset produces
interesting results for student learning assessments. Modern learning management
systems can integrate the developed tool to take student learning behaviors into account
when assessing electronic learning strategies.

2.4.1 Disadvantages

• The process uses IoT for the detection of student attentiveness, which makes it costly.

• Accuracy is low and implementation is difficult.

2.5 PROPOSED SYSTEM

Attention is best described as the sustained focus of cognitive resources on information,


while ignoring distractions. In the field of education, the terms of sustained attention or
vigilance are used to describe the ability to maintain concentration over prolonged periods
of time, such as during lectures in the classroom. Pedagogical research is often focused on
maintaining student attention (concentration, vigilance) during lectures, because sustained
attention is recognized as an important factor in learning success. However, tracking
of individual students’ attentive state in the classroom by using self-reports is difficult and
interferes with the learning process, which is also the case for using psychophysical data
sensors. Visual observation is a non-intrusive method, and real-time video recording and
encoding can be used for manual attention coding; however, for long-term observations,
automatic computer vision methods should be applied. The literature is lacking a
consistent definition of the student’s attention in the classroom. The review of video
recordings of students during lecture has shown that students’ attention towards the
lecture is manifested by observable behavior such as gaze, writing, and facial mimicry. In our system, using NLP and an HMM, students who are not attentive are recorded and an emotion-based voice alert is issued.

2.5.1 Advantages

• The system is accurate.

• Detection of facial features by the Haar cascade algorithm is very effective.

• Cost efficient process.

2.6 MODULE DESCRIPTION:

1. VIDEO STREAMING

The system's main objective is to improve the videos produced by thermal imagers and to provide the ability to acquire, stream, and analyze video. In the FPGA's PL section, video capturing and processing units will be installed. These components will be in charge of PAL video data reception at 25 frames per second and some video processing, including 2D-FFT/IFFT and bilateral/Gaussian filtering. PAL video at 25 frames per second generates frames at a raw data rate of 20 Mbytes per second; Ethernet and USB are unable to handle this high rate. As a result, the video data must be compressed before being streamed over Ethernet. JPEG compression is utilized, and the video taken from the surveillance camera is acquired.
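
A minimal sketch of the compress-before-streaming idea with OpenCV's JPEG encoder is shown below; the webcam source and the quality setting are assumptions used only to illustrate the size reduction.

import cv2

cap = cv2.VideoCapture(0)  # stand-in for the surveillance camera feed
ok, frame = cap.read()
if ok:
    # Encode the raw frame as JPEG so it can be streamed over Ethernet
    # at a fraction of the ~20 Mbytes/s raw PAL data rate.
    ok, jpeg_buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
    print("raw bytes:", frame.nbytes, "-> jpeg bytes:", len(jpeg_buf))
cap.release()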

2. FRAME PROCESSING.

Frame rate conversion is typically used when producing content for devices that use
different standards (e.g. NTSC vs. PAL) or different content playback scenarios (e.g. film at 24 fps
vs. television at 25 fps or 29.97 fps). Frame processing allows for the combination of multiple
captured frames into one recorded frame. The combination occurs before the resulting frame is
encoded. You can select the following frame processing settings: No Frame Processing, Frame
Summing, Frame Averaging.
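
The sketch below illustrates Frame Summing and Frame Averaging on a handful of captured frames before encoding; the four-frame window and the webcam source are assumptions.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
frames = []
for _ in range(4):  # combine four captured frames into one recorded frame
    ok, frame = cap.read()
    if ok:
        frames.append(frame.astype(np.float32))
cap.release()

if frames:
    averaged = (sum(frames) / len(frames)).astype(np.uint8)   # Frame Averaging
    summed = np.clip(sum(frames), 0, 255).astype(np.uint8)    # Frame Summing
    cv2.imwrite("averaged_frame.jpg", averaged)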
3. FACE DETECTION.

Face extraction is accomplished by the Haar cascade algorithm. Due to the complex background, it is not a good choice to locate or detect both eyes directly in the original image; doing so would take much more time searching the whole window, with poor results. So we first locate the face and reduce the range within which we detect both eyes. After doing this we can improve the tracking speed and the correct-detection rate and reduce the effect of the complex background. Besides, we propose a very simple but powerful method to reduce the computing complexity.
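
The coarse-to-fine idea described above can be sketched with OpenCV's bundled Haar cascades: detect the face first, then search for the eyes only inside the face rectangle. The input image path is an assumption.

import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("classroom_frame.jpg")  # assumed input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 3, minSize=(100, 100)):
    face_roi = gray[y:y + h, x:x + w]          # restrict the eye search to the face window
    eyes = eye_cascade.detectMultiScale(face_roi)
    print("face at", (x, y, w, h), "with", len(eyes), "eye region(s) inside")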

4. FACE FEATURE IDENTIFICATION.

Feature extraction is the process of extracting feature points from the images. From these feature points the analysis is made precisely for the recognition. With an already trained feature dataset, the face can be identified from the image. Facial features such as eye blinking, lip movement, and chin and cheek movement are extracted. Haar cascade is an algorithm that can detect objects in images irrespective of their scale and location in the image. This algorithm is not very complex and can run in real time. We can train a Haar cascade detector to detect various objects such as cars, bikes, buildings, and fruits. Haar Cascade Detection is one of the oldest yet most powerful face detection algorithms invented; it has been around since long before Deep Learning became famous. Haar features were used not only to detect faces, but also eyes, lips, and license number plates.
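
Alongside the Haar cascade, the test script in this report relies on dlib's 68-point shape predictor; a minimal sketch of pulling out the eye, lip and jaw landmark groups with that predictor follows, with the image path as an assumption.

import cv2
import dlib
from imutils import face_utils

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

gray = cv2.cvtColor(cv2.imread("student_face.jpg"), cv2.COLOR_BGR2GRAY)
for rect in detector(gray, 0):
    shape = face_utils.shape_to_np(predictor(gray, rect))
    (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
    (mStart, mEnd) = face_utils.FACIAL_LANDMARKS_IDXS["mouth"]
    left_eye = shape[lStart:lEnd]   # eye landmarks, used for blink detection
    mouth = shape[mStart:mEnd]      # lip landmarks, used for the mouth aspect ratio
    jaw = shape[0:17]               # chin and cheek outline
    print(len(left_eye), len(mouth), len(jaw), "landmark points extracted")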
5. EMOTION CLASSIFICATION.

The student's emotion can be classified by the facial expression algorithm. States such as yawning, sleeping, laughing, and talking can be detected. After an emotion is detected, it can be announced as a voice alert.

6. EMOTION ALERT

The names of the students are trained into the system. The emotion of the student is analyzed, and then, using NLP processing, the voice alert is composed with the student's name. This process is done using NLP and the HMM algorithm. The main core of HMM-based speech recognition systems is the Viterbi algorithm, which uses dynamic programming to find the best alignment between the input speech and a given speech model.
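
To make the alignment idea concrete, a toy Viterbi decoding over a two-state attentive/inattentive model is sketched below; the states, observations and probabilities are made-up illustration values, not the project's HMM.

import numpy as np

states = ["attentive", "inattentive"]
obs = [0, 1, 1]                        # 0 = eyes open, 1 = eyes closed
start = np.array([0.7, 0.3])           # initial state probabilities
trans = np.array([[0.8, 0.2],          # P(next state | current state)
                  [0.3, 0.7]])
emit = np.array([[0.9, 0.1],           # P(observation | state)
                 [0.2, 0.8]])

# Dynamic programming table and backpointers
V = np.zeros((len(obs), len(states)))
back = np.zeros((len(obs), len(states)), dtype=int)
V[0] = start * emit[:, obs[0]]
for t in range(1, len(obs)):
    for s in range(len(states)):
        scores = V[t - 1] * trans[:, s] * emit[s, obs[t]]
        back[t, s] = int(np.argmax(scores))
        V[t, s] = scores.max()

# Backtrack the most probable state sequence
path = [int(np.argmax(V[-1]))]
for t in range(len(obs) - 1, 0, -1):
    path.insert(0, back[t, path[0]])
print([states[s] for s in path])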
CHAPTER 3

SOFTWARE REQUIREMENT SPECIFICATION

3.1 Introduction
The Student Attention Detection System is aimed at addressing the challenge of student
attentiveness in classrooms by leveraging facial emotion recognition technology. This SRS
document outlines the functional and non-functional requirements necessary for the development
and implementation of the system.

Functional Requirements:

Face Detection and Feature Extraction: The system shall utilize CCTV live streaming to
detect and track the faces of each student in the classroom. It shall extract facial features including
lips, eyes, and cheeks for precise analysis of emotions.

Facial Expression Recognition (FER) Algorithm: The system shall incorporate a FER
algorithm capable of analyzing facial expressions and classifying them into eight predefined
emotional states. It shall accurately identify emotions such as happiness, sadness, boredom, etc., to
determine students' attentiveness levels.

Integration with Online Course Platforms: The system shall integrate seamlessly with existing online course platforms to facilitate data synchronization and analysis. It shall allow
educators to monitor students' attentiveness in real-time and provide actionable insights for
instructional improvement.

Voice-Based Alert Generation: The system shall generate voice-based alerts using Hidden
Markov Models (HMM) when low attentiveness is detected. Alerts shall be broadcasted to the
entire class, prompting students to refocus and engage actively.

Non-Functional Requirements:

Performance: The system shall be capable of processing live video streams in real-time to
ensure timely detection of student attentiveness. It shall exhibit high accuracy in emotion
recognition to minimize false positives and negatives.
Security and Privacy: The system shall adhere to strict security measures to protect
sensitive student data collected during facial recognition and emotion analysis. It shall comply with
relevant privacy regulations and guidelines to safeguard the privacy rights of students.

Usability: The user interface shall be intuitive and user-friendly, enabling educators to
easily access and interpret student attentiveness data. It shall provide customizable settings for alert
thresholds and frequency to accommodate varying classroom dynamics.

Scalability: The system shall be scalable to accommodate varying class sizes and
configurations. It shall support future enhancements and upgrades to meet evolving educational
needs.

System Constraints:

Hardware Requirements: The system shall require CCTV cameras with high-resolution
capabilities and sufficient coverage of classroom areas. It shall necessitate compatible computing
devices with adequate processing power and memory for real-time video analysis.

Technical Limitations: The system's performance may be impacted by factors such as


lighting conditions, occlusions, and variations in facial expressions. It shall require periodic updates
and maintenance to address software bugs and ensure optimal functionality.
CHAPTER 4

SYSTEM DEVELOPMENT ENVIRONMENT


4.1 FRONT END (Python)

Introduction

Python is a widely used high-level programming language first launched in 1991. Since
then, Python has been gaining popularity and is considered as one of the most popular and flexible
server-side programming languages.

Unlike most Linux distributions, Windows does not come with the Python programming
language by default. However, you can install Python on your Windows server or local machine in
just a few easy steps.

PREREQUISITES

• A system running Windows 10 with admin privileges


• Command Prompt (comes with Windows by default)
• A Remote Desktop Connection app (use if you are installing Python on a remote Windows
server)
PYTHON INSTALLATION ON WINDOWS

Step 1: Select Version of Python to Install

The installation procedure involves downloading the official Python .exe installer and running it
on your system.
The version you need depends on what you want to do in Python. For example, if you are
working on a project coded in Python version 2.6, you probably need that version. If you are
starting a project from scratch, you have the freedom to choose.
If you are learning to code in Python, we recommend you download both the latest versions of Python 2 and Python 3. Working with Python 2 enables you to work on older projects or test new projects for backward compatibility.

Step 2: Download Python Executable Installer

1. Open your web browser and navigate to the Downloads for Windows section of the official
Python website.

2. Search for your desired version of Python. At the time of publishing this article, the latest
Python 3 release is version 3.7.3, while the latest Python 2 release is version 2.7.16.

3. Select a link to download either the Windows x86-64 executable installer or Windows
x86 executable installer. The download is approximately 25MB.

Step 3: Run Executable Installer

1. Run the Python Installer once downloaded. (In this example, we have downloaded Python
3.7.3.)

2. Make sure you select the Install launcher for all users and Add Python 3.7 to
PATH checkboxes. The latter places the interpreter in the execution path. For older versions of
Python that do not support the Add Python to Path checkbox, see Step 6.
3. Select Install Now – the recommended installation options.

For all recent versions of Python, the recommended installation options include Pip and IDLE.
Older versions might not include such additional features.

4. The next dialog will prompt you to select whether to Disable path length limit. Choosing this
option will allow Python to bypass the 260-character MAX_PATH limit. Effectively, it will
enable Python to use long path names.

The Disable path length limit option will not affect any other system settings. Turning it on will
resolve potential name length issues that may arise with Python projects developed in Linux.
Step 4: Verify Python Was Installed On Windows
1. Navigate to the directory in which Python was installed on the system. In our case, it
is C:\Users\Username\AppData\Local\Programs\Python\Python37 since we have installed
the latest version.
2. Double-click python.exe.
3. The output should be similar to what you can see below:

Step 5: Verify Pip Was Installed

If you opted to install an older version of Python, it is possible that it did not come with
Pip preinstalled. Pip is a powerful package management system for Python software packages.
Thus, make sure that you have it installed.

We recommend using Pip for most Python packages, especially when working in virtual
environments.

To verify whether Pip was installed:

1. Open the Start menu and type “cmd.”


2. Select the Command Prompt application.
3. Enter pip -V in the console. If Pip was installed successfully, you should see the following
output:
Step 6: Add Python Path to Environment Variables (Optional)

We recommend you go through this step if your version of the Python installer does not
include the Add Python to PATH checkbox or if you have not selected that option.

Setting up the Python path to system variables alleviates the need for using full paths. It instructs

Windows to look through all the PATH folders for “python” and find the install folder that
contains the python.exe file.

1. Open the Start menu and start the Run app.

2. Type sysdm.cpl and click OK. This opens the System Properties window.
3. Navigate to the Advanced tab and select Environment Variables.

4. Under System Variables, find and select the Path variable.

5. Click Edit.

6. Select the Variable value field. Add the path to the python.exe file preceded with
a semicolon (;). For example, in the image below, we have added “;C:\Python34.”

7. Click OK and close all windows.

By setting this up, you can execute Python scripts like this: Python script.py

Instead of this: C:/Python34/Python script.py

As you can see, it is cleaner and more manageable.

4.2 Backend (MySQL)


MySQL is the world's most used open source relational database management system (RDBMS) as of 2008. It runs as a server providing multi-user access to a number of databases. The
MySQL development project has made its source code available under the terms of the GNU
General Public License, as well as under a variety of proprietary agreements. MySQL was owned
and sponsored by a single for-profit firm, the Swedish company MySQL AB, now owned by Oracle
Corporation.

MySQL is a popular choice of database for use in web applications, and is a central
component of the widely used LAMP open source web application software stack—LAMP is an
acronym for "Linux, Apache, MySQL, Perl/PHP/Python." Free-software-open source projects that
require a full-featured database management system often use MySQL.

For commercial use, several paid editions are available, and offer additional functionality.
Applications which use MySQL databases include: TYPO3, Joomla, Word Press, phpBB, MyBB,
Drupal and other software built on the LAMP software stack. MySQL is also used in many high-
profile, large-scale World Wide Web products, including Wikipedia, Google (though not for
searches), Facebook, Twitter, Flickr, Nokia.com, and YouTube.

Interfaces

MySQL is primarily an RDBMS and ships with no GUI tools to administer MySQL
databases or manage data contained within the databases. Users may use the included command
line tools, or use MySQL "front-ends", desktop software and web applications that create and
manage MySQL databases, build database structures, back up data, inspect status, and work with
data records. The official set of MySQL front-end tools, MySQL Workbench is actively developed
by Oracle, and is freely available for use.

Graphical

The official MySQL Workbench is a free integrated environment developed by MySQL


AB, which enables users to graphically administer MySQL databases and visually design database
structures. MySQL Workbench replaces the previous package of software, MySQL GUI Tools.
Similar to other third-party packages, but still considered the authoritative MySQL frontend,
MySQL Workbench lets users manage database design & modeling, SQL development (replacing
MySQL Query Browser) and Database administration (replacing MySQL Administrator).MySQL
Workbench is available in two editions, the regular free and open source Community Edition which
may be downloaded from the MySQL website, and the proprietary Standard Edition which extends
and improves the feature set of the Community Edition.
Command line

MySQL ships with some command line tools. Third-parties have also developed tools to
manage a MySQL server, some listed below. Maatkit - a cross-platform toolkit for MySQL,
PostgreSQL and Memcached, developed in Perl Maatkit can be used to prove replication is working
correctly, fix corrupted data, automate repetitive tasks, and speed up servers. Maatkit is included with several GNU/Linux distributions such as CentOS and Debian, and packages are available for other platforms.

Programming

MySQL works on many different system platforms, including AIX, BSDi, FreeBSD,
HP-UX, eComStation, i5/OS, IRIX, Linux, Mac OS X, Microsoft Windows, NetBSD, Novell
NetWare, OpenBSD, OpenSolaris, OS/2 Warp, QNX, Solaris, Symbian, SunOS, SCO Open Server,
SCO UnixWare, Sanos and Tru64. A port of MySQL to OpenVMS also exists.

MySQL is written in C and C++. Its SQL parser is written in yacc, and a home-brewed
lexical analyzer. Many programming languages with language-specific APIs include libraries for
accessing MySQL databases. These include MySQL Connector/Net for integration with Microsoft's
Visual Studio (languages such as C# and VB are most commonly used) and the JDBC driver for
Java. In addition, an ODBC interface called MyODBC allows additional programming languages that support the ODBC interface to communicate with a MySQL database, such as ASP or ColdFusion. HTSQL, a URL-based query method, also ships with a MySQL adapter, allowing
direct interaction between a MySQL database and any web client via structured URLs.
Features
As of April 2009, MySQL offered MySQL 5.1 in two different variants: the open source
MySQL Community Server and the commercial Enterprise Server. MySQL 5.5 is offered under the
same licenses. They have a common code base and include the following features:

• A broad subset of ANSI SQL 99, as well as extensions


• Cross-platform support
• Stored procedures
• Triggers
• Cursors
• Updatable Views
• Information schema
• Strict mode (ensures MySQL does not truncate or otherwise modify data to conform to an
underlying data type, when an incompatible value is inserted into that type)
• X/Open XA distributed transaction processing (DTP) support; two-phase commit as part of
this, using Oracle's InnoDB engine
• Transactions with the InnoDB, and Cluster storage engines
• SSL support
• Query caching
• Sub-SELECTs (i.e. nested SELECTs)
• Replication support (i.e. Master-Master Replication & Master-Slave Replication) with one
master per slave, many slaves per master, no automatic support for multiple masters per slave.
• Full-text indexing and searching using the MyISAM engine
• Embedded database library
• Partitioned tables with pruning of partitions in optimizer
• Shared-nothing clustering through MySQL Cluster
• Hot backup (via mysqlhotcopy) under certain conditions

Multiple storage engines, allowing one to choose the one that is most effective for each table in the application (in MySQL 5.0, storage engines must be compiled in; in MySQL 5.1, storage engines can be dynamically loaded at run time): native storage engines (MyISAM, Falcon, Merge, Memory (heap), Federated, Archive, CSV, Blackhole, Cluster, EXAMPLE, Maria, and InnoDB, which was made the default as of 5.5) and partner-developed storage engines (solidDB, NitroEDB, ScaleDB, TokuDB, Infobright (formerly Brighthouse), Kickfire, XtraDB, IBM DB2). InnoDB used to be a partner-developed storage engine, but with recent acquisitions, Oracle now owns both the MySQL core and InnoDB.
CHAPTER 5
SYSTEM DESIGN
5.1 INTRODUCTION
The system architecture for the Student Attention Detection System is designed to
seamlessly integrate various components to effectively monitor and enhance student attentiveness
during class hours. CCTV cameras are strategically positioned in class corners to provide
comprehensive coverage of the classroom environment. These cameras are equipped with high-
resolution capabilities and are connected to a central processing unit for video acquisition and
processing. Live video streams from the CCTV cameras are acquired and processed in real-time
using efficient video acquisition planning techniques. The system employs computer vision
algorithms to detect and track the faces of each student within the classroom. Once the faces are
detected, the system extracts key facial features such as lips, eyes, and cheeks. This precise
extraction ensures accurate analysis of facial expressions and emotions. A sophisticated FER
algorithm is implemented to analyze the extracted facial features and classify them into eight
predefined emotional states. These emotions include happiness, sadness, boredom, etc., providing
valuable insights into students' attentiveness levels. The system seamlessly integrates with existing
online courses platforms, allowing for synchronized data collection and analysis. Educators can
access real-time information regarding students' attentiveness and engagement, facilitating targeted
interventions as needed.

5.2 NORMALIZATION

Normalization is a crucial process in database design aimed at organizing data efficiently


and ensuring data integrity. Its primary objective is to address issues stemming from data
redundancy, which occurs when the same data is repeated across multiple records in a database. By
eliminating redundancy, normalization helps prevent inconsistencies and anomalies that can arise
during data manipulation operations such as insertion, deletion, and updating. Decomposition, an
integral part of normalization, involves splitting relations (tables) into multiple smaller relations to
eliminate anomalies while maintaining data integrity. This is achieved by adhering to a set of rules
known as normal forms, which provide guidelines for structuring relations in a database. These
normal forms progressively refine the organization of data, starting from the First Normal Form
(1NF) and culminating in higher forms such as the Boyce-Codd Normal Form (BCNF) and Fifth
Normal Form (5NF). Common anomalies addressed by normalization include insertion anomalies,
where data cannot be added due to missing related information; deletion anomalies, resulting in
unintentional loss of data when deleting records; and update anomalies, causing data inconsistency
due to redundancy and partial updates. Overall, normalization ensures that databases are well-
structured, efficient, and free from anomalies, thereby enhancing data reliability and usability.

5.3 SYSTEM ARCHITECTURE DESIGN

A system architecture or systems architecture is the conceptual model that defines the structure,
behavior, and more views of a system. An architecture description is a formal description and
representation of a system, organized in a way that supports reasoning about the structures and
behaviors of the system. System architects or solution architects are the people who know which components to choose for a specific use case and who make the right trade-offs with an awareness of the bottlenecks in the overall system. Solution architects with more years of experience tend to be good at system architecting, because system design is an open-ended problem with no single correct solution; mostly it is trial-and-error experimentation done with the right trade-offs. Experience teaches which components to choose based on the problem at hand, but to gain that experience one needs to start somewhere.

The system architecture is mainly based on fuzzy-nature analytics, where the learner's behavior while learning a course and their style of learning are efficiently predicted using intuitionistic fuzzy logic. The observer system can be designed by modeling various attributes such as knowledge credibility, learner aggregation, learner objects, and so on.
Fig 5.3.1 Architectural Diagram: STUDENT ATTENTIVENESS ANALYSIS USING FACE EMOTION DETECTION (video streaming → frame processing → face detection → face feature identification → emotion classification → emotion alert)


5.4 E – R DIAGRAM:

The relations within the system are structured through a conceptual ER diagram, which specifies not only the existing entities but also the standard relations through which the system exists and the cardinalities that are necessary for the system state to continue. The Entity Relationship Diagram (ERD) depicts the relationships between the data objects. The ERD is the notation used to conduct the data modeling activity; the attributes of each data object noted in the ERD can be described using a data object description. The primary components identified by the ERD are data objects, relationships, attributes, and various types of indicators. The primary purpose of the ERD is to represent data objects and their relationships.

Fig 5.4.1 ER Diagram


5.5 DATA FLOW DIAGRAM:

A two-dimensional diagram explains how data is processed and transferred in a system. The
graphical depiction identifies each source of data and how it interacts with other data sources to
reach a common output. Individuals seeking to draft a data flow diagram must identify external
inputs and outputs, determine how the inputs and outputs relate to each other, and explain with
graphics how these connections relate and what they result in. This type of diagram helps business
development and design teams visualize how data is processed and identify or improve certain
aspects.

Data flow symbols:

• An entity: a source of data or a destination for data.
• A process: a task that is performed by the system.
• A data store: a place where data is held between processes.
• A data flow.
LEVEL 0

DFD Level 0 is also called a Context Diagram. It’s a basic overview of the whole system or
process being analyzed or modeled. It’s designed to be an at-a-glance view, showing the system as
a single high-level process, with its relationship to external entities. It should be easily understood
by a wide audience, including stakeholders, business analysts, data analysts and developers.

Fig 5.5.1 Level 0 DFD: the Admin and Student entities interact with the single high-level process STUDENT ATTENTIVENESS ANALYSIS USING FACE EMOTION DETECTION, which reads from and writes to the Database.

LEVEL 1

DFD Level 1 provides a more detailed breakout of pieces of the Context Level Diagram.
You will highlight the main functions carried out by the system, as you break down the high-level
process of the Context Diagram into its sub – processes. A level 1 data flow diagram (DFD) is more
detailed than a level 0 DFD but not as detailed as a level 2 DFD. It breaks down the main processes
into sub processes that can then be analyzed and improved on a more intimate level.
Fig 5.5.2 Level 1 DFD: the Student feeds 1.0 Video streaming, followed by 2.0 Frame processing, 3.0 Face detection, 4.0 Feature identification, 5.0 Emotion classification and 6.0 Alert, with the Database as the shared data store.


5.6 UML DIAGRAM

A UML diagram is a diagram based on the UML (Unified Modeling Language) with the
purpose of visually representing a system along with its main actors, roles, actions, artifacts or
classes, in order to better understand, alter, maintain, or document information about the system.

USE CASE DIAGRAM


Use case diagrams are usually referred to as behavior diagrams used to describe a set of
actions (use cases) that some system or systems (subject) should or can perform in collaboration
with one or more external users of the system (actors). In software and systems engineering, a use
case is a list of actions or event steps typically defining the interactions between a role (known in
the Unified Modeling Language as an actor) and a system to achieve a goal. The actor can be a
human or other external system. An actor in the Unified Modeling Language (UML) "specifies a
role played by a user or any other system that interacts with the subject." "An Actor models a type
of role played by an entity that interacts with the subject (e.g., by exchanging signals and data), but
which is external to the subject." UML Use Case Include. Use case include is a
directed relationship between two use cases which is used to show that behavior of the included use
case (the addition) is inserted into the behavior of the including (the base) use case.
Fig 5.6.1 Use Case Diagram

ACTIVITY DIAGRAM

An activity diagram is a type of behavioral diagram in Unified Modeling Language (UML)


that visually represents the flow of activities and actions within a system or process. It illustrates the
sequence of actions, decisions, and transitions that occur during the execution of a particular use
case or business process. Activity diagrams are commonly used in software development, business
process modeling, and system analysis to depict the dynamic aspects of a system.
Fig 5.6.2 Activity Diagram
5.7 TABLE DESIGN:

Table structure for table admin

Field     Type          Null   Default
id        int(50)       Yes    NULL
name      varchar(100)  Yes    NULL
action    varchar(100)  Yes    NULL

Table structure for table staff_details

Field     Type          Null   Default
id        int(50)       Yes    NULL
name      varchar(100)  Yes    NULL
class     varchar(100)  Yes    NULL
email     varchar(100)  Yes    NULL

Table structure for table student_register

Field     Type         Null   Default
id        int(50)      Yes    NULL
accno     varchar(50)  Yes    NULL
name      varchar(50)  Yes    NULL
gender    varchar(50)  Yes    NULL
address   varchar(50)  Yes    NULL
email     varchar(50)  Yes    NULL
pnumber   varchar(50)  Yes    NULL
uname     varchar(50)  Yes    NULL
password  varchar(50)  Yes    NULL
date      varchar(50)  Yes    NULL
time      varchar(50)  Yes    NULL
balance   varchar(50)  Yes    NULL
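
For reference, the staff_details table above could be created with a short script like the sketch below; the connection credentials and database name are placeholders, and the project itself routes its queries through the ar_master helper rather than mysql-connector-python.

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root",
                               password="password", database="student_attentive")
cur = conn.cursor()
# Columns follow the table design above (integer display widths omitted).
cur.execute("""
    CREATE TABLE IF NOT EXISTS staff_details (
        id    INT,
        name  VARCHAR(100),
        class VARCHAR(100),
        email VARCHAR(100)
    )
""")
conn.commit()
cur.close()
conn.close()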


CHAPTER 6

OUTPUT SCREENS
CHAPTER 7

SYSTEM TESTING AND IMPLEMENTATION

TESTING

Testing is a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all of them should verify that all system elements have been properly integrated and perform their allocated functions. Testing is the process of checking whether the developed system works according to the actual requirements and objectives of the system. The philosophy behind testing is to find errors. A good test is one that has a high probability of finding an undiscovered error, and a successful test is one that uncovers an undiscovered error. Test cases are devised with this purpose in mind. A test case is a set of data that the system will process as input.

SYSTEM TESTING

After a system has been verified, it needs to be thoroughly tested to ensure that every
component of the system is performing in accordance with the specific requirements and that it is
operating as it should including when the wrong functions are requested or the wrong data is
introduced.

Testing measures consist of developing a set of test criteria either for the entire system or for
specific hardware, software and communications components. For an important and sensitive
system such as an electronic voting system, a structured system testing program may be established
to ensure that all aspects of the system are thoroughly tested.

Testing measures that could be followed include:

• Applying functional tests to determine whether the test criteria have been met
• Applying qualitative assessments to determine whether the test criteria have been
met.
• Conducting tests in “laboratory” conditions and conducting tests in a variety of
“real life” conditions.
• Conducting tests over an extended period of time to ensure systems can perform
consistently.
• Conducting “load tests”, simulating as close as possible likely conditions while
using or exceeding the amounts of data that can be expected to be handled in an
actual situation.

Test measures for hardware may include:

• Applying “non-operating” tests to ensure that equipment can stand up to expected levels of
physical handling.
• Testing “hard wired” code in hardware (firmware) to ensure its logical correctness and that
appropriate standards are followed.

Tests for software components also include:

▪ Testing all programs to ensure its logical correctness and that appropriate design,
development and implementation standards have been followed.
▪ Conducting “load tests”, simulating as close as possible a variety of “real life” conditions
using or exceeding the amounts of data that could be expected in an actual situation.
• Verifying that integrity of data is maintained throughout its required manipulation.
Fig 7.1 Register

UNIT TESTING

The first test in the development process is the unit test. The source code is normally
divided into modules, which in turn are divided into smaller pieces called units. These units have
specific behavior. The test done on these units of code is called unit test. Unit test depends upon the
language on which the project is developed.
Unit tests ensure that each unique path of the project performs accurately to the documented
specifications and contains clearly defined inputs and expected results. Functional and reliability
test in an Engineering environment, producing tests for the behavior of components (nodes and
vertices) of a product to ensure their correct behavior prior to system integration.

Fig 7.2 Login

INTEGRATION TESTING

Integration testing is testing in which modules are combined and tested as a group. Modules are typically code modules, individual applications, source and destination applications on a network, etc. Integration testing follows unit testing and precedes system testing. Beta testing takes place after the product is code complete; betas are often widely distributed, or even distributed to the public at large, in the hope that users will buy the final product when it is released.

Fig 7.3 User Home

VALIDATION TESTING

Validation testing is testing in which the tester performs functional and non-functional testing. Here functional testing includes Unit Testing (UT), Integration Testing (IT) and System Testing (ST), and non-functional testing includes User Acceptance Testing (UAT). Validation testing is also known as dynamic testing, in which we ensure that "we have developed the product right." It also checks that the software meets the business needs of the client. It is a process of checking the software during or at the end of the development cycle to decide whether the software follows the specified business requirements. We can validate whether or not the user accepts the product.

Fig 7.4 Validation Testing


7.2 CODING

LOGIN

from tkinter import *
from PIL import ImageTk, Image
import tkinter
import ar_master

mm = ar_master.master_flask_code()

window = tkinter.Tk()
window.geometry("700x600")
window.title("student_attentive")

class sample:
    name = "guru"

image_0 = Image.open('static/class.jpg')
bck_end = ImageTk.PhotoImage(image_0)

def login():
    # validate the staff name/email pair against the staff_details table
    text1 = entry1.get()
    text2 = entry2.get()
    laa = mm.select_direct_query("select name,email from staff_details where name='" + str(text1) + "' and email='" + str(text2) + "'")
    if laa:
        window.destroy()
        import staff_home

def back():
    window.destroy()
    import main

canvas = tkinter.Canvas(window, width=1550, height=900)
canvas.pack()
canvas.create_image(-10, -3, anchor=NW, image=bck_end)
canvas.create_text(230, 220, text="Name", font=('times', 15, ' bold '), fill="white")
canvas.create_text(230, 300, text="Email", font=('times', 15, ' bold '), fill="white")

entry1 = Entry(width=20, font=('times', 15, ' bold '))
entry1.place(x=300, y=210)
entry2 = Entry(width=20, font=('times', 15, ' bold '))
entry2.place(x=300, y=285)

txt = Button(window, width=10, height=0, text="Login", fg="white", bg="#334CAF", font=('times', 15, ' bold '), command=login)
txt.place(x=160, y=450)
# the Back button returns to the main menu
txt = Button(window, width=10, height=0, text="Back", fg="white", bg="#334CAF", font=('times', 15, ' bold '), command=back)
txt.place(x=380, y=450)

window.mainloop()

MAIN
from tkinter import *
from PIL import ImageTk, Image
import tkinter

window = tkinter.Tk()
window.geometry("700x600")
window.title("student_attentive")

class sample:
    name = "guru"
    text = "Student Attentive"

image_0 = Image.open('static/class.jpg')
bck_end = ImageTk.PhotoImage(image_0)

def login():
    window.destroy()
    import log

def register():
    window.destroy()
    import register

canvas = tkinter.Canvas(window, width=1550, height=900)
canvas.pack()
canvas.create_image(-10, -3, anchor=NW, image=bck_end)
canvas.create_text(350, 100, text=sample.text, font=('times', 50, ' bold '), fill="#0179F8")

txt = Button(window, width=10, height=0, text="Login", fg="white", bg="#334CAF", font=('times', 15, ' bold '), command=login)
txt.place(x=150, y=450)
txt = Button(window, width=10, height=0, text="Register", fg="white", bg="#334CAF", font=('times', 15, ' bold '), command=register)
txt.place(x=400, y=450)

window.mainloop()

MAR

from scipy.spatial import distance as dist

def mouth_aspect_ratio(mouth):
    # compute the euclidean distances between the two sets of
    # vertical mouth landmarks (x, y)-coordinates
    A = dist.euclidean(mouth[2], mouth[10])  # 51, 59
    B = dist.euclidean(mouth[4], mouth[8])   # 53, 57
    # compute the euclidean distance between the horizontal
    # mouth landmark (x, y)-coordinates
    C = dist.euclidean(mouth[0], mouth[6])   # 49, 55
    # compute the mouth aspect ratio
    mar = (A + B) / (2.0 * C)
    # return the mouth aspect ratio
    return mar
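
The EAR helper imported by the test script below is not listed in this report; a minimal sketch consistent with the standard eye aspect ratio formula, mirroring the MAR function above, would be:

from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    # vertical eye landmark distances
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    # horizontal eye landmark distance
    C = dist.euclidean(eye[0], eye[3])
    # the ratio drops toward zero as the eye closes
    return (A + B) / (2.0 * C)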
MODEL

import os
from keras.preprocessing import image
import matplotlib.pyplot as plt
import numpy as np
from keras.utils.np_utils import to_categorical
import random, shutil
from keras.models import Sequential
from keras.layers import Dropout, Conv2D, Flatten, Dense, MaxPooling2D, BatchNormalization
from keras.models import load_model

def generator(dir, gen=image.ImageDataGenerator(rescale=1./255), shuffle=True, batch_size=1, target_size=(24, 24), class_mode='categorical'):
    return gen.flow_from_directory(dir, batch_size=batch_size, shuffle=shuffle, color_mode='grayscale', class_mode=class_mode, target_size=target_size)

BS = 32
TS = (24, 24)
train_batch = generator('data/train', shuffle=True, batch_size=BS, target_size=TS)
valid_batch = generator('data/valid', shuffle=True, batch_size=BS, target_size=TS)
SPE = len(train_batch.classes) // BS
VS = len(valid_batch.classes) // BS
print(SPE, VS)

# img, labels = next(train_batch)
# print(img.shape)

model = Sequential([
    Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(24, 24, 1)),
    MaxPooling2D(pool_size=(1, 1)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(1, 1)),
    # 32 convolution filters used, each of size 3x3
    # again
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(1, 1)),
    # 64 convolution filters used, each of size 3x3
    # choose the best features via pooling
    # randomly turn neurons on and off to improve convergence
    Dropout(0.25),
    # flatten since too many dimensions, we only want a classification output
    Flatten(),
    # fully connected to get all relevant data
    Dense(128, activation='relu'),
    # one more dropout for convergence's sake :)
    Dropout(0.5),
    # output a softmax to squash the matrix into output probabilities
    Dense(2, activation='softmax')
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batch, validation_data=valid_batch, epochs=15, steps_per_epoch=SPE, validation_steps=VS)
model.save('models/cnnCat2.h5', overwrite=True)
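
For completeness, a minimal sketch of using the saved model at prediction time is shown below; the eye-crop image path is an assumption, and the mapping from class index to open/closed label depends on the alphabetical folder order used by flow_from_directory.

import cv2
import numpy as np
from keras.models import load_model

model = load_model('models/cnnCat2.h5')
eye = cv2.imread('eye_crop.jpg', cv2.IMREAD_GRAYSCALE)       # assumed eye crop
eye = cv2.resize(eye, (24, 24)).astype('float32') / 255.0    # match training preprocessing
probs = model.predict(eye.reshape(1, 24, 24, 1), verbose=0)[0]
print('predicted class index:', int(np.argmax(probs)), 'probabilities:', probs)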

FACE DETECT

import smtplib
import imagehash
from PIL import Image
import cv2
import os
from email.mime.multipart import MIMEMultipart
from email.mime.image import MIMEImage
from sample_data import student

def image_matching(a, b):
    # pixel-wise difference between two images, as a percentage
    i1 = Image.open(a)
    i2 = Image.open(b)
    assert i1.mode == i2.mode, "Different kinds of images."
    assert i1.size == i2.size, "Different sizes."
    pairs = zip(i1.getdata(), i2.getdata())
    if len(i1.getbands()) == 1:
        # for gray-scale jpegs
        dif = sum(abs(p1 - p2) for p1, p2 in pairs)
    else:
        dif = sum(abs(c1 - c2) for p1, p2 in pairs for c1, c2 in zip(p1, p2))
    ncomponents = i1.size[0] * i1.size[1] * 3
    xx = (dif / 255.0 * 100) / ncomponents
    return xx

def match_templates(in_image):
    # compare the captured face against every trained student folder
    name = []
    values = []
    entries = os.listdir('train/')
    folder_lenght = len(entries)
    i = 0
    for x in entries:
        val = 100
        directory = x
        name.append(x)
        x1 = "train/" + x
        arr = os.listdir(x1)
        for x2 in arr:
            path = x1 + "/" + str(x2)
            find = image_matching(path, in_image)
            hash0 = imagehash.average_hash(Image.open(path))
            hash1 = imagehash.average_hash(Image.open(in_image))
            cc1 = hash0 - hash1
            print(cc1)
            find = cc1
            if find < val:
                val = find
        values.append(val)
    values_lenght = len(values)
    pos = 0
    pos_val = 100
    for x in range(0, values_lenght):
        if values[x] < pos_val:
            pos = x
            pos_val = values[x]
    if pos_val < 20:
        print(pos, pos_val, name[pos])
        return name[pos]
    else:
        return "unknown"

cascPath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
train = True
video_capture = cv2.VideoCapture(0)

name = "testing"
if os.path.exists(name):
    h = 0
else:
    os.mkdir(name)

e_mail = 0

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if frame is None:
        print("Can't open image file")
    face_cascade = cv2.CascadeClassifier(cascPath)
    faces = face_cascade.detectMultiScale(frame, 1.1, 3, minSize=(100, 100))
    if faces is None:
        print('Failed to detect face')
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    facecnt = len(faces)
    # print("Detected faces: %d" % facecnt)
    i = 0
    height, width = frame.shape[:2]
    for (x, y, w, h) in faces:
        # crop a square region around the detected face and save it
        r = max(w, h) / 2
        centerx = x + w / 2
        centery = y + h / 2
        nx = int(centerx - r)
        ny = int(centery - r)
        nr = int(r * 2)
        faceimg = frame[ny:ny + nr, nx:nx + nr]
        font = cv2.FONT_HERSHEY_SIMPLEX
        str1 = name + '\\tt.jpg'
        # kk=kk+1
        lastimg = cv2.resize(faceimg, (100, 100))
        cv2.imwrite(str1, lastimg)
        ar = match_templates(str1)
        print(ar)
        if ar == "unknown":
            e_mail = e_mail + 1
        else:
            e_mail = 0
        # print e_mail
        if e_mail >= 30:
            # an unrecognized face persisted across frames: mail the snapshot
            msg = MIMEMultipart()
            s = student
            to_mail = s.email
            password = "egjuabqhwvktwdqf"
            msg['From'] = "serverkey2018@gmail.com"
            msg['To'] = to_mail
            msg['Subject'] = "Unknown Face Detected"
            file = str1
            fp = open(file, 'rb')
            img = MIMEImage(fp.read())
            fp.close()
            msg.attach(img)
            server = smtplib.SMTP('smtp.gmail.com: 587')
            server.starttls()
            server.login(msg['From'], password)
            server.sendmail(msg['From'], msg['To'], msg.as_string())
            server.quit()
        cv2.putText(faceimg, (ar), (10, 70), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()

STUDENT REGISTER

from tkinter import *
from PIL import ImageTk, Image
import tkinter
import ar_master

mm = ar_master.master_flask_code()

window = tkinter.Tk()
window.geometry("700x600")
window.title("student_attentive")

class sample:
    name = "guru"

image_0 = Image.open('static/class.jpg')
bck_end = ImageTk.PhotoImage(image_0)

def register():
    # insert the new staff record using the next available id
    enry1 = entry1.get()
    enry2 = entry2.get()
    enry3 = entry3.get()
    maxin = mm.find_max_id("staff_details")
    qry = ("insert into staff_details values('" + str(maxin) + "','" + str(enry1) + "','" + str(enry2) + "','" + str(enry3) + "')")
    result = mm.insert_query(qry)
    print(qry)
    window.destroy()
    import main

canvas = tkinter.Canvas(window, width=1550, height=900)
canvas.pack()
canvas.create_image(-10, -3, anchor=NW, image=bck_end)
canvas.create_text(230, 180, text="Name", font=('times', 15, ' bold '), fill="white")
canvas.create_text(230, 250, text="Class", font=('times', 15, ' bold '), fill="white")
canvas.create_text(230, 330, text="Email", font=('times', 15, ' bold '), fill="white")

entry1 = Entry(window, width=20, font=('times', 15, ' bold '))
entry1.place(x=320, y=165)
entry2 = Entry(window, width=20, font=('times', 15, ' bold '))
entry2.place(x=320, y=240)
entry3 = Entry(window, width=20, font=('times', 15, ' bold '))
entry3.place(x=320, y=315)

txt = Button(window, width=10, height=0, text="Register", fg="white", bg="#334CAF", font=('times', 15, ' bold '), command=register)
txt.place(x=270, y=400)

window.mainloop()

TEST

import imutils
from scipy.spatial import distance as dist
from imutils import face_utils
import argparse
import time
import dlib
import math
import cv2
import numpy as np
from EAR import eye_aspect_ratio
from MAR import mouth_aspect_ratio
from HeadPose import getHeadTiltAndCoords

print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

print("[INFO] initializing camera...")
vs = cv2.VideoCapture(0)
time.sleep(2.0)

frame_width = 1024
frame_height = 576

# 2D image points used for the head-pose estimate
image_points = np.array([
    (359, 391),  # Nose tip 34
    (399, 561),  # Chin 9
    (337, 297),  # Left eye left corner 37
    (513, 301),  # Right eye right corner 46
    (345, 465),  # Left mouth corner 49
    (453, 469)   # Right mouth corner 55
], dtype="double")

(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

EYE_AR_THRESH = 0.25
MOUTH_AR_THRESH = 0.70
EYE_AR_CONSEC_FRAMES = 3
COUNTER = 0
(mStart, mEnd) = (49, 68)

while True:
    ret, frame = vs.read()
    frame = imutils.resize(frame, width=1024, height=576)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    size = gray.shape
    rects = detector(gray, 0)
    if len(rects) > 0:
        text = "{} face(s) found".format(len(rects))
        cv2.putText(frame, text, (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    for rect in rects:
        (bX, bY, bW, bH) = face_utils.rect_to_bb(rect)
        cv2.rectangle(frame, (bX, bY), (bX + bW, bY + bH), (0, 255, 0), 1)
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)
        # eye aspect ratio for closed-eye detection
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        ear = (leftEAR + rightEAR) / 2.0
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
        if ear < EYE_AR_THRESH:
            COUNTER += 1
            if COUNTER >= EYE_AR_CONSEC_FRAMES:
                cv2.putText(frame, "Eyes Closed!", (500, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        else:
            COUNTER = 0
        # mouth aspect ratio for yawn detection
        mouth = shape[mStart:mEnd]
        mouthMAR = mouth_aspect_ratio(mouth)
        mar = mouthMAR
        mouthHull = cv2.convexHull(mouth)
        cv2.drawContours(frame, [mouthHull], -1, (0, 255, 0), 1)
        cv2.putText(frame, "MAR: {:.2f}".format(mar), (650, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        print(mar)
        if mar > MOUTH_AR_THRESH:
            cv2.putText(frame, "Yawning!", (800, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        # update the six head-pose reference points from the detected landmarks
        for (i, (x, y)) in enumerate(shape):
            if i == 33:
                image_points[0] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            elif i == 8:
                image_points[1] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            elif i == 36:
                image_points[2] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            elif i == 45:
                image_points[3] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            elif i == 48:
                image_points[4] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            elif i == 54:
                image_points[5] = np.array([x, y], dtype='double')
                cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)
            else:
                cv2.circle(frame, (x, y), 1, (0, 0, 255), -1)
                cv2.putText(frame, str(i + 1), (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
        for p in image_points:
            cv2.circle(frame, (int(p[0]), int(p[1])), 3, (0, 0, 255), -1)
        (head_tilt_degree, start_point, end_point,
         end_point_alt) = getHeadTiltAndCoords(size, image_points, frame_height)
        cv2.line(frame, start_point, end_point, (255, 0, 0), 2)
        cv2.line(frame, start_point, end_point_alt, (0, 0, 255), 2)
        if head_tilt_degree:
            cv2.putText(frame, 'Head Tilt Degree: ' + str(head_tilt_degree[0]), (170, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

cv2.destroyAllWindows()
# release the camera
vs.release()
IMPORT VIDEO

from time import strftime
from datetime import datetime
import os
import sys
import time
import math
import argparse

import cv2
import numpy as np
import imutils
import imagehash
import dlib
import pyttsx3
from PIL import Image
from keras.models import load_model
from scipy.spatial import distance as dist
from imutils import face_utils

from EAR import eye_aspect_ratio
from MAR import mouth_aspect_ratio
from HeadPose import getHeadTiltAndCoords

# Text-to-speech engine used for the voice alerts
engine = pyttsx3.init()

# Folder that stores the per-day attentiveness logs
name = "attentive"
if os.path.exists(name):
    h = 0
else:
    os.mkdir(name)

now = datetime.now()
dt_string = now.strftime("%Y_%m_%d")
student_list = []

def SpeakText(command):
    # Speak an alert aloud and also print it to the console
    print(command)
    engine.say(command)
    engine.runAndWait()
    engine.stop()

print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

print("[INFO] initializing camera...")
vs = cv2.VideoCapture(0)
time.sleep(2.0)

frame_width = 1024
frame_height = 576

# Initial 2D image points for head-pose estimation (68-landmark indices in comments)
image_points = np.array([
    (359, 391),  # Nose tip 34
    (399, 561),  # Chin 9
    (337, 297),  # Left eye left corner 37
    (513, 301),  # Right eye right corner 46
    (345, 465),  # Left mouth corner 49
    (453, 469)   # Right mouth corner 55
], dtype="double")

(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

EYE_AR_THRESH = 0.25
MOUTH_AR_THRESH = 0.70
EYE_AR_CONSEC_FRAMES = 3
COUNTER = 0
(mStart, mEnd) = (49, 68)

def image_matching(a, b):
    # Pixel-wise difference between two images, expressed as a percentage (0 = identical)
    i1 = Image.open(a)
    i2 = Image.open(b)
    assert i1.mode == i2.mode, "Different kinds of images."
    assert i1.size == i2.size, "Different sizes."
    pairs = zip(i1.getdata(), i2.getdata())
    if len(i1.getbands()) == 1:
        # for gray-scale jpegs
        dif = sum(abs(p1 - p2) for p1, p2 in pairs)
    else:
        dif = sum(abs(c1 - c2) for p1, p2 in pairs for c1, c2 in zip(p1, p2))
    ncomponents = i1.size[0] * i1.size[1] * 3
    xx = (dif / 255.0 * 100) / ncomponents
    return xx
def match_templates(in_image):
    # Compare the captured face against every image in train/<student>/ and return
    # the best-matching student name, or "unknown" when no match is close enough.
    name = []
    values = []
    entries = os.listdir('train/')
    folder_lenght = len(entries)
    i = 0
    for x in entries:
        val = 100
        directory = x
        name.append(x)
        x1 = "train/" + x
        arr = os.listdir(x1)
        for x2 in arr:
            path = x1 + "/" + str(x2)
            find = image_matching(path, in_image)
            # Perceptual (average) hash distance is used as the final similarity score
            hash0 = imagehash.average_hash(Image.open(path))
            hash1 = imagehash.average_hash(Image.open(in_image))
            cc1 = hash0 - hash1
            find = cc1
            if find < val:
                val = find
        values.append(val)

    # Pick the student folder with the smallest hash distance
    values_lenght = len(values)
    pos = 0
    pos_val = 100
    for x in range(0, values_lenght):
        if values[x] < pos_val:
            pos = x
            pos_val = values[x]
    if pos_val < 16:
        print(pos, pos_val, name[pos])
        return name[pos]
    else:
        return "unknown"

# Haar cascade classifiers (forward slashes avoid backslash-escape issues in the paths)
face = cv2.CascadeClassifier('haar cascade files/haarcascade_frontalface_alt.xml')
leye = cv2.CascadeClassifier('haar cascade files/haarcascade_lefteye_2splits.xml')
reye = cv2.CascadeClassifier('haar cascade files/haarcascade_righteye_2splits.xml')

count = 0
score = 0
thicc = 2
rpred = [99]
lpred = [99]
facepred = [99]
yawning = 0
sleeping = 0
ar = "unknown"  # last recognised student name, initialised so alerts never use an undefined name

model = load_model('models/cnnCat2.h5')

cascPath = "haar cascade files/haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
train = True

video_capture = cv2.VideoCapture(0)

# Folder that holds the temporary face crop used for matching
name = "testing"
if os.path.exists(name):
    h = 0
else:
    os.mkdir(name)

e_mail = 0

while True:
    ret, frame = video_capture.read()
    if frame is None:
        print("Can't open image file")
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    ######
    # Stage 1: Haar cascade face detection and identification of each student
    face_cascade = cv2.CascadeClassifier(cascPath)
    faces = face_cascade.detectMultiScale(frame, 1.1, 3, minSize=(100, 100))
    if len(faces) == 0:
        print('Failed to detect face')

    for (x, y, w, h) in faces:
        dd = 0
        facecnt = len(faces)
        i = 0
        height, width = frame.shape[:2]

    for (x, y, w, h) in faces:
        # Crop a square region around the detected face
        r = max(w, h) / 2
        centerx = x + w / 2
        centery = y + h / 2
        nx = int(centerx - r)
        ny = int(centery - r)
        nr = int(r * 2)
        faceimg = frame[ny:ny + nr, nx:nx + nr]
        font = cv2.FONT_HERSHEY_SIMPLEX

        # Save the crop and identify the student against the training folders
        str1 = os.path.join(name, 'tt.jpg')
        lastimg = cv2.resize(faceimg, (100, 100))
        cv2.imwrite(str1, lastimg)
        ar = match_templates(str1)
        if ar == "unknown":
            dd = 0
        else:
            height, width = frame.shape[:2]
            faceimg = frame[ny:ny + nr, nx:nx + nr]
            # cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            cv2.putText(faceimg, ar, (10, 70), cv2.FONT_HERSHEY_SIMPLEX,
                        1, (0, 255, 0), 1, cv2.LINE_AA)

    ########################################
    # Stage 2: dlib landmarks for eye-closure (EAR) and yawning (MAR) checks
    frame = imutils.resize(frame, width=1024, height=576)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    size = gray.shape
    rects = detector(gray, 0)
    if len(rects) > 0:
        text = "{} face(s) found".format(len(rects))
        cv2.putText(frame, text, (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)

    for rect in rects:
        (bX, bY, bW, bH) = face_utils.rect_to_bb(rect)
        cv2.rectangle(frame, (bX, bY), (bX + bW, bY + bH), (0, 255, 0), 1)
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)

        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        ear = (leftEAR + rightEAR) / 2.0

        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)

        if ear < EYE_AR_THRESH:
            COUNTER += 1
            if COUNTER >= EYE_AR_CONSEC_FRAMES:
                cv2.putText(frame, "Eyes Closed!", (500, 20),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                sleeping += 1
        else:
            COUNTER = 0
            sleeping = 0

        mouth = shape[mStart:mEnd]
        mouthMAR = mouth_aspect_ratio(mouth)
        mar = mouthMAR
        mouthHull = cv2.convexHull(mouth)
        cv2.drawContours(frame, [mouthHull], -1, (0, 255, 0), 1)
        # cv2.putText(frame, "MAR: {:.2f}".format(mar), (650, 20),
        #             cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        if mar > MOUTH_AR_THRESH:
            # cv2.putText(frame, "Yawning!", (800, 20),
            #             cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            yawning += 1
        else:
            yawning = 0

        # (The landmark-by-landmark drawing and head-tilt estimation are disabled
        #  in this script; the same logic appears in full in the TEST listing above.)

        # Voice alert and log entry once a state persists for 3 consecutive frames
        if yawning >= 3:
            SpeakText(ar + " is Yawning")
            student_list.append("" + ar + " # " + "Yawning")
        if sleeping >= 3:
            SpeakText(ar + " is Sleeping")
            student_list.append("" + ar + " # " + "Sleeping")

    frame = imutils.resize(frame, width=1024, height=576)
    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        # On exit, write the day's attentiveness log to attentive/<date>.txt
        print(student_list)
        result = ""
        for x in student_list:
            result += x + "\n"
        file = os.path.join("attentive" + "/" + dt_string + '.txt')
        print(file)
        with open(file, 'w') as f:
            f.writelines(result)
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
CHAPTER 8

SYSTEM SECURITY

8.1 INTRODUCTION

Ensuring the security and privacy of sensitive student data is paramount in the implementation of
the Student Attention Detection System. All data transmitted and stored within the system,
including live video streams and facial recognition data, shall be encrypted using industry-standard
encryption algorithms. This ensures that sensitive information remains confidential and protected
from unauthorized access. Access to the system's functionalities and data shall be restricted based
on role-based access control (RBAC) mechanisms. Only authorized users, such as educators and
administrators, shall have access to specific features based on their roles and permissions.
Communication between system components, including CCTV cameras, central processing units,
and online course platforms, shall be conducted over secure channels using protocols such as
HTTPS and SSH. This prevents eavesdropping on and tampering with data during transmission. Users
accessing the system shall be required to authenticate themselves using strong authentication
methods such as username/password combinations or multi-factor authentication (MFA).
Additionally, authorization mechanisms shall be in place to ensure that users can only access data
and functionalities relevant to their roles. The system shall implement logging and auditing
mechanisms to track user activities and system events. This allows administrators to monitor for
suspicious behavior and detect potential security breaches in real-time. The system shall adhere to
relevant privacy regulations and guidelines, such as GDPR and CCPA, to protect the privacy rights
of students. This includes obtaining explicit consent from students for the collection and processing
of their facial data and ensuring that data handling practices are transparent and compliant.
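
To illustrate the role-based access control described above, a minimal sketch is given below; the role names and permission sets are hypothetical examples rather than the system's actual configuration.

# Minimal RBAC sketch (hypothetical roles and permissions, not the project's actual code).
ROLE_PERMISSIONS = {
    "educator": {"view_live_alerts", "view_reports"},
    "administrator": {"view_live_alerts", "view_reports", "manage_students", "export_data"},
}

def is_allowed(role, action):
    """Return True only if the given role is granted the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    print(is_allowed("educator", "export_data"))       # False
    print(is_allowed("administrator", "export_data"))  # True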

8.2 SECURITY IN SOFTWARE

Security is paramount in the development and deployment of the Student Attention Detection
System to safeguard sensitive student data and ensure the integrity and reliability of system
operations. The following security measures are proposed:

Authentication and Authorization: Implement robust authentication mechanisms to verify the
identity of users accessing the system. Users should authenticate using strong credentials such as
usernames and passwords or biometric authentication. Additionally, role-based access control
(RBAC) should be enforced to ensure that users only have access to functionalities and data
relevant to their roles and responsibilities.
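
A minimal sketch of credential verification is shown below, assuming salted PBKDF2 hashing from Python's standard library; the deployed system may use a different password scheme.

# Password verification sketch using PBKDF2 from the standard library (illustrative only).
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Derive a salted hash; never store the plaintext password.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("S3cure!pass")
print(verify_password("S3cure!pass", salt, stored))  # True
print(verify_password("wrong", salt, stored))        # False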

Encryption: Employ encryption techniques to protect data both at rest and in transit. Data collected
from CCTV live streaming and facial recognition processes should be encrypted to prevent
unauthorized access. Secure communication protocols such as HTTPS should be used to encrypt
data transmitted between system components, including CCTV cameras, processing units, and
online course platforms.
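
The following sketch shows how a captured face image could be encrypted at rest using the Fernet recipe from the Python cryptography package; the file paths are placeholders and key management (for example a dedicated key store) is outside the scope of this sketch.

# Encrypt a captured face image at rest with Fernet (symmetric, authenticated encryption).
# Paths are placeholders; in practice the key is loaded from a secure key store.
import os
from cryptography.fernet import Fernet

key = Fernet.generate_key()
fernet = Fernet(key)

os.makedirs("secure_store", exist_ok=True)
with open("testing/tt.jpg", "rb") as fh:              # plaintext face crop
    token = fernet.encrypt(fh.read())
with open("secure_store/tt.jpg.enc", "wb") as fh:     # encrypted copy kept in storage
    fh.write(token)

original = fernet.decrypt(token)                      # authorised recovery of the bytes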

Secure Data Storage: Ensure that collected student face images and emotion analysis data are stored
securely. Data should be stored in encrypted databases or storage systems with access controls in
place to restrict access to authorized personnel only. Regular backups should be performed to
prevent data loss and ensure data availability in the event of a security incident.
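
As an illustration of restricted storage and routine backups, the sketch below tightens directory permissions and produces a dated archive; the directory names are placeholders for wherever the encrypted face data actually lives.

# Sketch: restrict access to the stored data and keep a dated backup copy.
import os
import shutil
from datetime import datetime

DATA_DIR = "secure_store"     # placeholder for the encrypted data directory
BACKUP_DIR = "backups"        # placeholder for the backup location

os.makedirs(DATA_DIR, exist_ok=True)
os.chmod(DATA_DIR, 0o700)     # owner-only access on POSIX systems

os.makedirs(BACKUP_DIR, exist_ok=True)
stamp = datetime.now().strftime("%Y_%m_%d")
# Produce backups/faces_<date>.zip containing the whole data directory
shutil.make_archive(os.path.join(BACKUP_DIR, "faces_" + stamp), "zip", DATA_DIR)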

Regular Security Audits and Monitoring: Conduct regular security audits and vulnerability
assessments to identify and address potential security vulnerabilities. Implement intrusion detection
and prevention systems (IDPS) to monitor system activities and detect any suspicious behavior or
unauthorized access attempts. Security logs should be generated and monitored to track user
activities and system events for timely response to security incidents.
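
A simple audit-logging sketch using Python's standard logging module is given below; the event names and log file location are illustrative only.

# Audit-log sketch with the standard logging module (event names are illustrative).
import logging

logging.basicConfig(
    filename="audit.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
audit = logging.getLogger("attentiveness.audit")

def log_event(user, action, detail=""):
    # One structured line per user action, suitable for later review or alerting
    audit.info("user=%s action=%s detail=%s", user, action, detail)

log_event("staff_01", "LOGIN_SUCCESS")
log_event("staff_01", "VIEW_REPORT", "attentive/2024_01_15.txt")
log_event("unknown", "LOGIN_FAILURE", "bad password x3")  # candidate for an alert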

Compliance with Privacy Regulations: Ensure compliance with relevant privacy regulations such as
GDPR, CCPA, and FERPA to protect student privacy rights. Obtain explicit consent from students
for the collection and processing of their facial data, and clearly communicate data handling
practices to maintain transparency and trust.

Secure Software Development Practices: Follow secure coding guidelines during development,
review code and third-party dependencies for known vulnerabilities, and apply security patches
promptly.
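
A minimal sketch of enforcing the consent requirement before any facial data is processed is shown below; the consent records and student names are hypothetical, and in practice would come from the registration database.

# Consent gate sketch: skip processing for students who have not opted in.
CONSENT = {"guru": True, "priya": False}   # hypothetical consent records

def may_process(student_name):
    """Process facial data only when explicit consent has been recorded."""
    return CONSENT.get(student_name, False)

for student in ["guru", "priya", "unknown"]:
    if may_process(student):
        print("processing face data for " + student)
    else:
        print("skipping " + student + ": no recorded consent")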
CHAPTER 9

CONCLUSION

It is clear that students’ attention does vary during lectures, but the literature does not
support the perpetuation of the 10- to 15-min attention estimate. Perhaps the only valid use of this
parameter is as a rhetorical device to encourage teachers to develop ways to maintain student
interest in the classroom. The first point in responding to this question is to emphasize that the
results are consistent with the existing view that attention is not highest near the start of a lesson
(first ten minutes) and that there is not necessarily a drop in attention that takes place throughout a
class; rather, a wave-like pattern is observed. This consistency with other studies also lends
credibility to the results showing that attention is not particularly low near the end of a lesson (from
the final fifteen minutes to the final five minutes), though there is notable increased tuning-out and
low attention after the final five minutes. Thus, the observations from this case study are suggestive
that language classes or studying in a foreign language follow the same patterns as classes in a first
language, at least for level B2 and higher. If educators continue to promote such a parameter as an
empirically based estimate, they need to support it with more controlled research. Beyond that,
teachers must do as much as possible to increase students' motivation to "pay attention" and try to
understand what students are really thinking about during class. Thus, in our system the
attentiveness of each student is evaluated through a standard webcam by analysing facial features.
The proposed system also uses the Haar cascade algorithm for face detection, and security and
authentication remain an imperative concern in any such deployment. In real time, human face
recognition is performed in two stages: face detection and face recognition.
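
For illustration, the sketch below shows the two stages mentioned above, Haar cascade face detection followed by a recognition step; the recognise() helper is a placeholder for whichever matcher (template/hash comparison or CNN) is actually deployed, and the sample frame path is only an example.

# Two-stage sketch: Haar cascade detection (stage 1) followed by recognition (stage 2).
import cv2

cascade = cv2.CascadeClassifier("haar cascade files/haarcascade_frontalface_default.xml")

def recognise(face_img):
    # Placeholder for the recognition stage (e.g. the match_templates routine above).
    return "unknown"

frame = cv2.imread("static/class.jpg")                # any classroom frame; path is an example
if frame is None:
    raise SystemExit("sample frame not found")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3, minSize=(100, 100))

for (x, y, w, h) in faces:                            # stage 1: detection
    crop = frame[y:y + h, x:x + w]
    print("detected face ->", recognise(crop))        # stage 2: recognition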
CHAPTER 10

FUTURE ENHANCEMENT

Implement real-time feedback mechanisms to provide personalized interventions based on
individual students' emotions and attention levels. Integrate machine learning algorithms to
adaptively refine facial expression recognition and attentiveness detection over time. Explore the
integration of biometric data from additional sensors for a more comprehensive assessment of
student engagement. Develop a mobile application interface for instructors to remotely monitor and
interact with the system. Conduct longitudinal studies to evaluate the long-term impact of the
system on student performance and classroom dynamics.
REFERENCES

BOOK REFERENCES

[1] White Belt Mastery, "SQL for Beginners: SQL Guide to Understand How to Work with a
Database", 2nd edition, 2020.

[2] Eric Matthes, "Python Crash Course", 2nd edition, 2020.

[3] Mark Lutz, "Python Pocket Reference: Python in Your Pocket", 5th edition, 2023.

WEB REFERENCES

1. https://www.researchgate.net/publication/347555599_Students'_attention_in_class_Patterns
_perceptions_of_cause_and_a_tool_for_measuring_classroom_quality_of_life

2. https://www.irjweb.com/Face%20Recognition%20Using%20Machine%20Learning.pdf
