
A

Industry oriented mini project Report


On

“BACKBENCHERS UNITE”
Submitted in partial fulfillment of the

Requirements for the award of the degree of


Bachelor of Technology

In

Computer Science & Engineering

By
Kummari Sowmya – 23R25A0506
Netapalli Adithya – 22R21A0540
Gurugu Shyam Tarun – 22R21A0526
D.N.S HariCharan – 22R21A0517

Under the guidance of

Mr. S. Lingaiah
Associate Professor

Department of Computer Science & Engineering

2025

CERTIFICATE

This is to certify that the project entitled “BackBenchers Unite” has been submitted by

Kummari Sowmya (23R25A0506), Netapalli Adithya (22R21A0540), Gurugu Shyam Tarun

(22R21A0526) and D.N.S HariCharan (22R21A0517) in partial fulfillment of the requirements

for the award of degree of Bachelor of Technology in Computer Science and Engineering from

Jawaharlal Nehru Technological University, Hyderabad. The results embodied in this project have

not been submitted to any other University or Institution for the award of any degree or diploma.

Internal Guide Head of the Department

External Examiner


DECLARATION

We hereby declare that the project entitled “BackBenchers Unite” is the work done during the

period from July 2023 to April 2024 and is submitted in partial fulfillment of the requirements

for the award of degree of Bachelor of Technology in Computer Science and Engineering from

Jawaharlal Nehru Technological University, Hyderabad. The results embodied in this project have

not been submitted to any other university or Institution for the award of any degree or diploma.

Kummari Sowmya 23R25A0506


Netapalli Adithya 22R21A0540
Gurugu Shyam Tarun 22R21A0526
D.N.S HariCharan 22R21A0517


ACKNOWLEDGEMENT

The satisfaction and euphoria that accompany the successful completion of any task

would be incomplete without the mention of people who made it possible, whose constant

guidance and encouragement crowned our efforts with success. It is a pleasant aspect that we

now have the opportunity to express our gratitude to all of them.

First of all, we would like to express our deep gratitude towards our internal guide

Mr. S.Lingaiah, Associate Professor, Department of CSE for his support in the completion

of our dissertation. We wish to express our sincere thanks to Dr. A. BALARAM, HOD,

Department of CSE and also Principal Dr. K. SRINIVAS RAO for providing the facilities to

complete the dissertation.

We would like to thank all our faculty and friends for their help and constructive

criticism during the project period. Finally, we are very much indebted to our parents for their

moral support and encouragement to achieve goals.

Kummari Sowmya 23R21A0506


Netapalli Adithya 22R21A0540
Gurugu Shyam Tarun 22R21A0526
D.N.S HariCharan 22R21A0517

ABSTRACT

In today’s rapidly evolving job market, engineering students face the challenge of balancing academic

demands with career preparation. Traditionally, many students focus on cramming for exams just

before assessments, neglecting long-term skill development and professional growth. As a result, by

the time they reach their third or fourth year, students often begin to worry about their future career

prospects, feeling unprepared for campus placements or the transition into the workforce.

This initiative aims to address this gap by creating a platform that supports both academic success and

career readiness in a cohesive, time-efficient manner. The core of this platform is its approach to exam preparation: a curated collection of important semester exam questions. These questions will be analyzed and organized in a way that allows students to focus on

the most critical topics, thereby making their study time more efficient. This will allow them to retain

more knowledge in less time, improving their academic performance.

Recognizing that engineering students must be ready for the professional world by the time they

graduate, the platform also provides career development tools aimed at preparing students for success

in campus placements and internships. A detailed breakdown of campus placement rounds will be

available, including insight into what to expect during interviews and group discussions, and advice

on the best preparation resources.

Beyond just improving academic performance and placement readiness, the platform also aims to

equip students with the skills necessary for thriving in the fast-paced, tech-driven world of today’s

industries.

LIST OF FIGURES & TABLES

Figure 3.1.1  MobileNetV2
Figure 3.3.1  System Architecture
Figure 3.4.1  Use Case Diagram
Figure 3.4.2  Class Diagram
Figure 3.4.3  Activity Diagram
Figure 3.4.4  Sequence Diagram
Figure 3.4.5  Collaboration Diagram
Figure 3.4.6  Deployment Diagram
Figure 3.4.7  Component Diagram
Figure 5.1.1  Module Diagram
Figure 7.1    Home Page
Figure 7.2    Sign In Page
Figure 7.3    Sign Up Page
Figure 7.4    Image Upload Page
Figure 7.5    Result Page
Figure 7.6    Comparison Graph

Table 7.1     Comparison Table
INDEX
CERTIFICATE
DECLARATION
ACKNOWLEDGEMENT
ABSTRACT
LIST OF FIGURES & TABLES
CHAPTER 1
INTRODUCTION
1.1 Overview
1.2 Purpose of the Project
1.3 Motivation
CHAPTER 2
LITERATURE SURVEY
2.1 Existing System
2.2 Disadvantages of Existing System
CHAPTER 3
PROPOSED SYSTEM
3.1 Proposed System
3.2 Advantages of Proposed System
3.3 System Architecture
3.4 UML Diagrams
CHAPTER 4
SYSTEM REQUIREMENTS
4.1 Software Requirements
4.2 Hardware Requirements
4.3 Functional Requirements
4.4 Non-Functional Requirements
CHAPTER 5
MODULE DESIGN
5.1 Module Design
CHAPTER 6
IMPLEMENTATION
6.1 Source Code
CHAPTER 7
RESULTS
CHAPTER 8
CONCLUSION
FUTURE ENHANCEMENTS AND DISCUSSIONS
REFERENCES
CHAPTER 1

INTRODUCTION

1.1 OVERVIEW
Engineering students today face difficulties in balancing their academic studies and job
preparation after graduation. Many tend to study seriously only before exams, which impacts
both their learning and long-term career planning. This often leads to stress and uncertainty
during campus placements or internships, especially in the later years of college. To address
this issue, this project introduces a platform that provides both important semester exam
questions and career guidance in one place. The questions are carefully selected and organized
to help students focus on key topics, making their study time more effective. Alongside this,
the platform offers career support through interview tips, group discussion strategies, and step-
by-step placement preparation, ensuring students are well-prepared for job opportunities by
graduation.

1.2 PURPOSE OF THE PROJECT


The purpose of this project is to create an all-in-one platform that supports engineering students
in both academic and career aspects. It aims to provide a well-organized collection of important
semester exam questions, allowing students to focus on high-priority topics and prepare
efficiently. In addition to academic help, the platform also offers detailed career guidance,
including resources for campus placements, such as interview tips, group discussion strategies,
and insights into the hiring process. By combining academic preparation with career readiness,
the platform ensures students are better equipped for both exams and future job opportunities.

1.3 MOTIVATION
The motivation behind this project comes from the common struggle engineering students face
in trying to manage academic pressure while also preparing for their careers. Many students
delay career planning until their final year, often feeling overwhelmed and unprepared when
it's time for placements. There is a clear need for a system that helps students stay on track
throughout their academic journey. By offering exam preparation and professional development
tools in a single, easy-to-use platform, this project aims to reduce stress, save time, and improve
student outcomes both academically and professionally.

CHAPTER 2

LITERATURE SURVEY
An extensive literature survey has been conducted. Researchers and educators have explored
various tools to support engineering students in both academic preparation and career
development. Studies show that most students rely on last-minute exam preparation, often
neglecting long-term skill-building. Several platforms offer exam resources or placement
support, but very few integrate both efficiently. Various research papers, journals, and
educational resources have been reviewed to design a unified solution that enhances both
academic performance and career readiness.

2.1 EXISTING SYSTEM


Engineering students often struggle to juggle their studies and career preparation, leading
to last-minute cramming and stress. Many students study without a clear plan, going through
tons of material but not truly understanding or remembering it. This makes exams more difficult
and leaves them feeling unprepared for campus placements.
Right now, there isn’t a single platform that helps students both ace their exams and get ready
for their careers. Some websites offer career advice, while others provide study materials, but
nothing combines both in a simple, effective way. As a result, students find it hard to manage
their time and focus on what truly matters.
An organized system that automatically arranges study materials and placement resources
would make learning more efficient. It would help students focus on important topics, improve
their knowledge retention, and confidently prepare for exams and job opportunities. With the
right support, students can study smarter, not harder, and enter the workforce feeling ready to
succeed.
Hasan Yusefzadeh et al. [1] conducted a comprehensive study on the Effect of Study
Preparation on Test Anxiety & Performance. Their research shows that structured study
schedules and organized learning materials significantly reduce test-related anxiety, resulting
in improved student performance during assessments. This highlights the importance of guided
preparation tools for students who struggle with exam fear.
Lavina Sharma and Asha Nagendra [2] explored Skill Development in India: Challenges &
Opportunities. Their study emphasizes that early exposure to employability skills, particularly
during the second and third years of undergraduate engineering programs, boosts students'
confidence and readiness for industry roles. This supports platforms that integrate soft skill
development alongside academics.
John Dunlosky et al. [3] in their work on Effective Learning Techniques for Students,
introduced techniques such as spaced repetition, interleaving, and retrieval practice. Their
findings suggest that these strategies enhance long-term retention and academic achievement,
making them suitable for digital platforms that promote structured study habits.
[Authors Not Specified] [4] investigated the Impact of Study Strategies on Post-Secondary
Students. The study concluded that students who adopted active recall, note restructuring, and
daily review techniques experienced substantial improvements in grades and concept retention.
This supports incorporating learning strategies into student apps for exam readiness.
Kavita Mehra and P. Basu [5] focused on Digital Interventions for Academic Planning. Their
proposed framework suggests that when students use digital tools to set weekly academic goals
and track study progress, their productivity increases by 25%. The system encourages consistent
study behavior among students who otherwise rely on last-minute cramming.
Dr. Anil Gupta et al. [6] analyzed Academic Burnout and Study Habits in Indian Engineering
Colleges. The paper reveals that unstructured study patterns lead to increased burnout and
disengagement. However, when students adopt a planned and guided study approach, their
emotional well-being and academic consistency improve.
Sneha Reddy and M. Narasimha Rao [7] presented a study titled Bridging Academic Learning
and Career Skills in Engineering Education. They propose an integrated model that combines
academic preparation with career planning modules, leading to better placement outcomes and
student confidence during recruitment processes.
Tanmay Bhat and A. Deshmukh [8] proposed a Career Readiness Framework for Final-Year
Engineering Students. Their findings show that a structured breakdown of placement
preparation — including aptitude, coding, group discussions, and mock interviews —
significantly increases student placement rates and internship conversions.
Dr. Priya Menon and Karthik J. [9] in their paper Role of Peer Learning and Resource Sharing
in Technical Education highlight that collaborative learning through peer communities and
doubt-clearing forums enhances student understanding and reduces academic pressure,
especially among average-performing students.
Vikram S. and Ritu Kalra [10] conducted a longitudinal study titled Long-Term Skill Retention
Through Modular Learning Systems. Their results show that breaking down course content into
micro-modules and including regular MCQs improves both retention and recall during semester
exams, making the approach ideal for student-centric platforms.

2.2 DISADVANTAGES OF EXISTING SYSTEM
Concisely summarizing the disadvantages of currently available academic support and
preparation platforms:
1. Most platforms provide generic material, failing to align with specific university or
branch-wise syllabi followed in different colleges.
2. Existing systems often ignore structured guidance for campus placements, including
round-wise preparation and company-specific strategies.
3. Many platforms do not regularly update their content, resulting in the use of old question
banks and irrelevant materials.
4. Cluttered layouts and confusing navigation reduce student engagement and usability,
especially on mobile devices.
5. Platforms lack intelligent systems to suggest subjects, questions, or preparation paths
based on the user’s branch, performance, or upcoming exams.
6. There is no feature for students to share previous question papers, personal notes, or real-
time insights that could help others in the same college or branch.

2.3 DISADVANTAGES OF EXISTING IMPLEMENTATIONS
Concisely summarizing the disadvantages of the above implementations:
1. Sensitivity to complex scenes may lead to false positives.
2. Limited adaptability to dynamic lighting and drastic viewpoint changes.
3. Susceptibility to errors in user-outlined object boundaries affecting accuracy.
4. Challenges in handling occlusion scenarios impacting object recognition.
5. Dependency on pre-computed matches may hinder real-time adaptability.

CHAPTER 3

PROPOSED SYSTEM

3.1 PROPOSED SYSTEM

The Heritage Identification of Monuments project proposes an innovative system utilizing


MobileNetV2 for automated monument identification. Beginning with comprehensive data
collection and image preprocessing, the project employs MobileNetV2 for feature extraction,
followed by machine learning model training to categorize monuments based on architectural
style, location, and historical context. Continuous learning and improvement are integral, with
user feedback enhancing model accuracy. This holistic approach ensures efficient monument
preservation, promoting global cultural and historical heritage awareness.
MobileNetV2 is a convolutional neural network architecture designed for mobile and embedded
vision applications. It is known for its efficiency and accuracy, making it suitable for tasks
such as image classification and object detection.

Figure 3.1.1 MobileNetV2


It is essential to continuously learn and improve, and user feedback improves the accuracy of
the model. This all-encompassing strategy guarantees effective monument preservation while
raising awareness of the world's cultural and historical heritage.

3.2 ADVANTAGES OF PROPOSED SYSTEM
The proposed system has the following advantages:
1. Rapid identification: MobileNetV2 enables swift monument categorization for efficient
preservation efforts.
2. Comprehensive categorization: Architecture, location, and historical context enhance
precise monument classification.
3. Continuous improvement: User feedback refines models, ensuring evolving accuracy
over time.
4. Global heritage awareness: Promotes cultural preservation, fostering appreciation of
historical significance.

3.3 SYSTEM ARCHITECTURE


The Heritage Identification of Monuments System architecture is a sophisticated framework
designed to effectively identify and document heritage monuments. Creating a system
architecture diagram for the heritage identification of monuments involves outlining the
components and their interactions. This architecture enables efficient data collection,
processing, identification, and management of heritage monuments, supporting their
preservation, research, and public awareness efforts.

Figure 3.3.1 System Architecture


Data collection:
Data collection for a heritage identification system involves gathering information about
various monuments, including their historical significance, architectural features, images, and
geographical locations. Collect images of heritage monuments from online sources, image

repositories, and digital archives. Validate and clean collected data to remove duplicates,
inconsistencies, and inaccuracies.
Image Preprocessing:
It involves applying various techniques to the raw input images to enhance their quality,
reduce noise, and extract meaningful features.
• Image resizing: Resizing the image to a standardized resolution can make processing
more efficient and consistent.

• Grayscale conversion: Converting colour images to grayscale can simplify


processing while preserving essential features for identification.

• Noise reduction: Techniques like Gaussian blur, median filtering, and denoising
algorithms can help reduce noise in the image.
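As a rough sketch of the preprocessing steps above (resizing, grayscale conversion, and noise smoothing), the following uses Pillow and NumPy; the function name and the 224x224 target size are illustrative assumptions, not the report's actual implementation.

```python
import numpy as np
from PIL import Image, ImageFilter

def preprocess(img, size=(224, 224)):
    """Resize, convert to grayscale, median-filter, and scale pixels to [0, 1]."""
    img = img.resize(size)                         # standardize the resolution
    img = img.convert("L")                         # grayscale conversion
    img = img.filter(ImageFilter.MedianFilter(3))  # median filter reduces noise
    return np.asarray(img, dtype=np.float32) / 255.0

# Run a random RGB "photo" through the pipeline
rgb = Image.fromarray(np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8))
out = preprocess(rgb)
print(out.shape)  # (224, 224)
```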

Feature Extraction:
It involves identifying and extracting relevant information or features from raw input images,
which can then be used for tasks such as classification, recognition, or matching.
• Corner detection: Identifying corner points in the image, which are locations with high
variation in intensity in multiple directions.

• Edge detection: Detecting edges in the image, which represent significant changes in
intensity and can outline shapes or structures.

• Colour Histograms: Representing the distribution of colour intensities or colour


channels in the image. Colour histograms are useful for tasks involving colour-based
recognition or segmentation.
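The colour-histogram and edge features described above can be sketched in plain NumPy; this is an illustrative approximation (a finite-difference gradient magnitude stands in for a proper edge detector), not the project's code.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenate per-channel intensity histograms into one feature vector."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(img.shape[-1])]
    return np.concatenate(hists)

def edge_map(gray):
    """Gradient magnitude via finite differences (a crude edge detector)."""
    gy, gx = np.gradient(gray.astype(np.float32))
    return np.hypot(gx, gy)

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
features = color_histogram(img)      # 3 channels x 8 bins = 24 values
edges = edge_map(img.mean(axis=-1))  # same spatial shape as the input
print(features.shape, edges.shape)
```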

Image Splitting:
Image splitting, also known as image segmentation, involves dividing a single image into
multiple smaller images or segments.
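Splitting a class-per-folder image dataset into train and test subsets can be done with the standard library alone, as in this sketch; the 80/20 ratio and names here are assumptions (the source code in Chapter 6 imports the `splitfolders` package for the same purpose).

```python
import os
import random
import shutil

def split_dataset(src, dst, train_ratio=0.8, seed=1234):
    """Copy each class folder under src into dst/train and dst/test."""
    rng = random.Random(seed)
    for cls in sorted(os.listdir(src)):
        files = sorted(os.listdir(os.path.join(src, cls)))
        rng.shuffle(files)
        cut = int(len(files) * train_ratio)
        for subset, names in (("train", files[:cut]), ("test", files[cut:])):
            out_dir = os.path.join(dst, subset, cls)
            os.makedirs(out_dir, exist_ok=True)
            for name in names:
                shutil.copy(os.path.join(src, cls, name), out_dir)
```

Calling `split_dataset("dataset_raw", "dataset")` would produce the `dataset/train` and `dataset/test` layout that the training code in Chapter 6 expects.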
Train data:
Training data refers to the dataset used to train machine learning models. In the context of
heritage identification of monuments, training data would consist of images and associated
labels or annotations that are used to teach a machine learning model to recognize and classify
different monuments accurately.
• Model Training: Train a machine learning model using the annotated training data.
During training, the model learns to recognize patterns and features in the input

images that are indicative of different types of monuments.

• Model Building: Model building typically refers to the process of developing a


machine learning or statistical model to solve a specific problem or make predictions
based on data.

• Model Evaluation: Evaluate the performance of the trained model on the validation set
to assess its accuracy, precision, recall, and other relevant metrics.

Test Data:
Test data refers to a separate dataset that is used to evaluate the performance of a trained
machine learning model. Test data would consist of a collection of images that the model has
not seen during the training phase. Overall, test data plays a crucial role in objectively
evaluating the performance and generalization ability of machine learning models for heritage
identification of monuments.
• Model Testing: Model testing is a critical step in the machine learning pipeline to
evaluate the performance and generalization ability of a trained model on unseen data.
Assess the performance of your trained model and ensure that it performs well on
unseen data, thus demonstrating its reliability and utility for real-world applications.

Classification:
Classification is a fundamental task in machine learning, and it plays a central role in heritage
identification of monuments. In this context, classification involves assigning a label or
category to input images of monuments based on their visual characteristics. Image
classification tasks include CNN-based architectures like VGG, ResNet, Inception,
EfficientNet, Mobile net V2, Xception.

Visualizing Results:
Visualizing results is essential for interpreting the performance of a heritage identification
system. By visualizing the results of monument identification researchers and stakeholders can
gain a better understanding of the system's performance, identify areas for improvement, and
make informed decisions about future developments and applications.

Input Image:
The user can input images of monuments to identify them using the system.

Preprocessing:
Preprocessing is a crucial step in the pipeline of heritage identification of monuments,
especially when dealing with image data. It involves preparing and cleaning the data to ensure
that it is in a suitable format for analysis.
• Image loading: Load the raw image data into memory from storage or from an external
source. This step ensures that the images are accessible for further processing.

Feature Extraction:
Feature extraction is a critical step in heritage identification of monuments, particularly in
computer vision tasks where images of monuments are analyzed to extract meaningful
information.
• Image Representation: Convert each input image into a format suitable for analysis.
This may involve loading the image data from files, resizing images to a consistent size,
and converting them to a standardized colour space.

3.4 UML DIAGRAMS

UML stands for Unified Modelling Language. UML is a standardized general-purpose


modelling language in the field of object-oriented software engineering. The standard is
managed, and was created by, the Object Management Group. The goal is for UML to become
a common language for creating models of object-oriented computer software. The Unified
Modelling Language is a standard language for specifying, visualizing, constructing, and
documenting the artifacts of software system, as well as for business modelling and other non-
software systems.
Use Case Diagram:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram
defined by and created from a Use-case analysis. Its purpose is to present a graphical overview
of the functionality provided by a system in terms of actors, their goals (represented as use
cases), and any dependencies between those use cases. The main purpose of a use case diagram
is to show what system functions are performed for which actor. Roles of the actors in the
system can be depicted.

Figure 3.4.1 Use Case Diagram
Class Diagram:
The class diagram is used to refine the use case diagram and define a detailed design of the
system. The class diagram classifies the actors defined in the use case diagram into a set of
interrelated classes. The relationship or association between the classes can be either an "is-a"
or "has-a" relationship. Each class in the class diagram may be capable of providing certain
functionalities. These functionalities provided by the class are termed "methods" of the class.
Apart from this, each class may have certain "attributes" that uniquely identify the class.

Figure 3.4.2 Class Diagram


Activity Diagram:
The process flows in the system are captured in the activity diagram. Similar to a state diagram,
an activity diagram also consists of activities, actions, transitions, initial and final states, and
guard conditions.

Figure 3.4.3 Activity Diagram
Sequence Diagram:
A sequence diagram represents the interaction between different objects in the system. The
important aspect of a sequence diagram is that it is time-ordered. This means that the exact
sequence of the interactions between the objects is represented step by step. Different objects
in the sequence diagram interact with each other by passing "messages".

Figure 3.4.4 Sequence Diagram

Collaboration Diagram:
A collaboration diagram groups together the interactions between different objects. The
interactions are listed as numbered interactions that help to trace the sequence of the

interactions. The collaboration diagram helps to identify all the possible interactions that each
object has with other objects.

Figure 3.4.5 Collaboration Diagram

Deployment Diagram:
The deployment diagram captures the configuration of the runtime elements of the application.
This diagram is by far most useful when a system is built and ready to be deployed.


Figure 3.4.6 Deployment Diagram

Component Diagram:
The component diagram represents the high-level parts that make up the system. This diagram
depicts, at a high level, what components form part of the system and how they are interrelated.
A component diagram depicts the components culled after the system has undergone the
development or construction phase.

Figure 3.4.7 Component Diagram

CHAPTER 4

SYSTEM REQUIREMENTS

The system requirements for the development and deployment of the project as an application
are specified in this section. These requirements are not to be confused with the end-user system
requirements. There are no specific, end-user requirements as the intended application is cross-
platform and is supposed to work on devices of all form-factors and configurations.

4.1 SOFTWARE REQUIREMENTS


Below are the software requirements for application development:
Software : Anaconda
Primary Language : Python
Web Framework : Flask
Development Environment : Jupyter Notebook
Database : Sqlite3
Front-End Technologies : HTML, CSS, JavaScript and Bootstrap4

4.2 HARDWARE REQUIREMENTS


Hardware requirements for application development are as follows:
• Operating System: Windows Only
• Processor: i5 and above
• RAM: 8 GB and above
• Hard Disk: 25 GB in local drive

4.3 FUNCTIONAL REQUIREMENTS

• Image Recognition: Utilize image recognition technology to analyze visual features of


monuments for identification and validation.

• Geospatial Mapping: Integrate a mapping feature that accurately locates monuments


on a geographical map. Provide users with the ability to explore and navigate the
mapped heritage sites.

• Data Collection and Integration: The system should be able to collect and preprocess
a dataset of labelled images and integrate them into a single dataset for analysis.

• Data Preprocessing: The system should perform data preprocessing tasks, such as data
cleaning and normalization, to ensure the data's quality and reliability.

4.4 NON-FUNCTIONAL REQUIREMENTS


• Accuracy: The classification models must maintain a high level of accuracy in
identifying the heritage of monuments.
• Reliability: The system should demonstrate a high level of reliability, aiming for
uninterrupted availability.
• Usability: The user interface should be highly usable, with a user-friendly design,
clear navigation, and straightforward data input and result presentation.
• Compatibility: The system should be designed to seamlessly integrate with diverse
data sources and systems that are commonly utilized.

CHAPTER 5

MODULE DESIGN

5.1 MODULE DESIGN


Module design for heritage identification of monuments involves breaking down the system
into smaller, more manageable components, each responsible for specific tasks.

Figure 5.1.1 Module Diagram

Image Processing Module:


An image processing module refers to a software component or library designed to perform
various operations on images. These operations can range from basic tasks like resizing and
cropping to more advanced tasks like object recognition, image segmentation, and feature
extraction.
MobileNetV2 Model Module:
MobileNetV2 is a convolutional neural network architecture designed for efficient and mobile-
friendly deep learning applications. It is an improvement over the original MobileNet
architecture, aiming to provide better performance while maintaining low computational cost
and memory footprint.
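A minimal transfer-learning sketch of how MobileNetV2 might be wired up for this project, assuming 24 monument classes and 224x224 RGB inputs as used elsewhere in the report; this is an illustrative setup, not the exact training configuration (weights=None here avoids downloading pretrained weights, whereas weights='imagenet' would be used in practice).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Pretrained backbone; include_top=False drops the ImageNet classifier head
base = tf.keras.applications.MobileNetV2(
    include_top=False, weights=None,
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the backbone, train only the new head

# New classification head for the (assumed) 24 monument classes
outputs = layers.Dense(24, activation="softmax")(base.output)
model = Model(inputs=base.inputs, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 24)
```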
Database Integration Module:
A database integration module refers to a software component or system that facilitates the
interaction between different software applications and databases. Its primary purpose is to

enable seamless communication and data exchange between applications and databases,
allowing them to work together efficiently.
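As a minimal illustration of such integration with the Sqlite3 database named in Chapter 4, the sketch below stores and retrieves an identified monument; the table schema and sample row are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the application would use a file-backed DB
conn.execute("""CREATE TABLE monuments (
                    id INTEGER PRIMARY KEY,
                    name TEXT NOT NULL,
                    style TEXT,
                    location TEXT)""")
conn.execute("INSERT INTO monuments (name, style, location) VALUES (?, ?, ?)",
             ("Charminar", "Indo-Islamic", "Hyderabad"))
conn.commit()
row = conn.execute("SELECT name, location FROM monuments").fetchone()
print(row)  # ('Charminar', 'Hyderabad')
```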
User Interface Module:
The User Interface (UI) module serves as the primary point of interaction between users and
the system. Its main purpose is to present information and functionality in a clear, intuitive, and
aesthetically pleasing manner, facilitating efficient user interaction and task completion.

CHAPTER 6

IMPLEMENTATION

6.1 SOURCE CODE

Importing Libraries

import os
import sys
import glob
import pathlib
import shutil
import warnings

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
import PIL
import splitfolders

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, models, losses
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Activation, Flatten,
                                     Dense, Dropout, BatchNormalization, LSTM)
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.callbacks import Callback, EarlyStopping, ModelCheckpoint
from tensorflow.keras.regularizers import l2
from tensorflow.keras.preprocessing import image
import tensorflow.keras.backend as K

warnings.filterwarnings('ignore')

Importing Dataset
BATCH_SIZE = 64
IMAGE_SHAPE = (224, 224)
TRAIN_PATH = "dataset/train"
VAL_PATH = "dataset/test"
datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255)
train_gen = datagen.flow_from_directory(directory = TRAIN_PATH,
class_mode="categorical",
target_size = IMAGE_SHAPE,
batch_size = BATCH_SIZE,
color_mode='rgb',
seed = 1234,
shuffle = True)
val_gen = datagen.flow_from_directory(directory = VAL_PATH,
class_mode="categorical",
target_size = IMAGE_SHAPE,
batch_size = BATCH_SIZE,
color_mode='rgb',
seed = 1234,
shuffle = True)
class_map = dict([(v, k) for k, v in train_gen.class_indices.items()])
print(class_map)
def recall_m(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def precision_m(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def f1_score(y_true, y_pred):
    precision = precision_m(y_true, y_pred)
    recall = recall_m(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))
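The backend metrics above reduce to the standard precision/recall/F1 definitions with a small epsilon to avoid division by zero. A plain-Python sketch of the same arithmetic, on a made-up batch of 0/1 labels and rounded predictions, illustrates the calculation:

```python
EPS = 1e-7  # stands in for K.epsilon()

def precision_recall_f1(y_true, y_pred):
    # Mirrors K.sum(K.round(K.clip(...))) on flattened 0/1 labels/predictions
    tp = sum(round(min(max(t * q, 0), 1)) for t, q in zip(y_true, y_pred))
    possible = sum(round(min(max(t, 0), 1)) for t in y_true)
    predicted = sum(round(min(max(q, 0), 1)) for q in y_pred)
    precision = tp / (predicted + EPS)
    recall = tp / (possible + EPS)
    f1 = 2 * precision * recall / (precision + recall + EPS)
    return precision, recall, f1

# Toy batch: 3 of 4 true positives found, plus 1 false positive
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.75 0.75 0.75
```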

VGG16
from tensorflow.keras.models import Model
import tensorflow as tf

inc = tf.keras.applications.vgg16.VGG16(include_top=False, weights='imagenet',
                                        input_shape=(224, 224, 3), pooling='max')
x31 = Flatten()(inc.output)
predictionss = Dense(24, activation='softmax')(x31)
modelss = Model(inputs=inc.inputs, outputs=predictionss)
modelss.summary()
modelss.compile(loss='categorical_crossentropy', optimizer='adam',
                metrics=['accuracy', f1_score, recall_m, precision_m])
r2 = modelss.fit(train_gen, validation_data=val_gen, epochs=20,
                 steps_per_epoch=len(train_gen), validation_steps=len(val_gen))
history = r2
modelss.save('models/VGG16.h5')

train_acc = history.history['accuracy']
train_recall = history.history['recall_m']
train_precision = history.history['precision_m']
train_f1 = history.history['f1_score']
val_acc = history.history['val_accuracy']
val_recall = history.history['val_recall_m']
val_precision = history.history['val_precision_m']
val_f1 = history.history['val_f1_score']

fig, axs = plt.subplots(2, 2, figsize=(12, 8))
axs[0, 0].plot(train_acc, label='Train')
axs[0, 0].plot(val_acc, label='Validation')
axs[0, 0].set_title('Accuracy')
axs[0, 0].legend()
axs[0, 1].plot(train_precision, label='Train')
axs[0, 1].plot(val_precision, label='Validation')
axs[0, 1].set_title('Precision')
axs[0, 1].legend()
axs[1, 0].plot(train_recall, label='Train')
axs[1, 0].plot(val_recall, label='Validation')
axs[1, 0].set_title('Recall')
axs[1, 0].legend()
axs[1, 1].plot(train_f1, label='Train')
axs[1, 1].plot(val_f1, label='Validation')
axs[1, 1].set_title('F1 Score')
axs[1, 1].legend()
plt.tight_layout()
plt.show()

a = history.history['accuracy'][-1]
f = history.history['f1_score'][-1]
p = history.history['precision_m'][-1]
r = history.history['recall_m'][-1]
print('Accuracy = ' + str(a * 100))
print('Precision = ' + str(p * 100))
print('F1 Score = ' + str(f * 100))
print('Recall = ' + str(r * 100))
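The train/validation curves plotted above are also what reveals overfitting: when the two accuracy lines diverge, the gap per epoch grows. A small sketch, using made-up accuracy values in the same shape as `history.history` lists, shows how that gap can be inspected:

```python
# Hypothetical per-epoch accuracies (illustrative, not the report's results)
train_acc = [0.55, 0.72, 0.84, 0.91, 0.96]
val_acc   = [0.52, 0.68, 0.77, 0.79, 0.78]

# Per-epoch train/validation gap; a growing gap suggests overfitting
gaps = [round(t - v, 2) for t, v in zip(train_acc, val_acc)]
print(gaps)  # [0.03, 0.04, 0.07, 0.12, 0.18]

# Epoch with the best validation accuracy (a natural checkpoint to keep)
best_epoch = max(range(len(val_acc)), key=val_acc.__getitem__)
print(best_epoch)  # 3
```

In the actual training script, an `EarlyStopping` or `ModelCheckpoint` callback (both already imported) automates this check.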

Each of the remaining backbones reuses the same classification head, compilation, and training code shown for VGG16; only the base model changes:

MobileNet
inc = tf.keras.applications.mobilenet.MobileNet(include_top=False, weights='imagenet',
                                                input_shape=(224, 224, 3), pooling='max')

MobileNetV2
inc = tf.keras.applications.mobilenet_v2.MobileNetV2(include_top=False, weights='imagenet',
                                                     input_shape=(224, 224, 3), pooling='max')

ResNet50
inc = tf.keras.applications.resnet50.ResNet50(include_top=False, weights='imagenet',
                                              input_shape=(224, 224, 3), pooling='max')

InceptionV3
inc = tf.keras.applications.inception_v3.InceptionV3(include_top=False, weights='imagenet',
                                                     input_shape=(224, 224, 3), pooling='max')

Xception
inc = tf.keras.applications.xception.Xception(include_top=False, weights='imagenet',
                                              input_shape=(224, 224, 3), pooling='max')

DenseNet169
inc = tf.keras.applications.densenet.DenseNet169(include_top=False, weights='imagenet',
                                                 input_shape=(224, 224, 3), pooling='max')

ResNet101
inc = tf.keras.applications.resnet.ResNet101(include_top=False, weights='imagenet',
                                             input_shape=(224, 224, 3), pooling='max')

LeNet
model = models.Sequential()
model.add(layers.Conv2D(6, 5, activation='tanh', input_shape=(224,224,3)))
model.add(layers.AveragePooling2D(2))
model.add(layers.Activation('sigmoid'))
model.add(layers.Conv2D(16, 5, activation='tanh'))
model.add(layers.AveragePooling2D(2))
model.add(layers.Activation('sigmoid'))
model.add(layers.Conv2D(120, 5, activation='tanh'))
model.add(layers.Flatten())
model.add(layers.Dense(84, activation='tanh'))
model.add(layers.Dense(24, activation='softmax'))

EfficientNet V2S
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Input(shape=(224, 224, 3)))
model.add(layers.experimental.preprocessing.Rescaling(scale=1./255))
model.add(layers.Conv2D(32, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(24, activation='softmax'))

CNN
model = models.Sequential()  # model object
model.add(Conv2D(filters=32, kernel_size=3, strides=1, padding='same', activation='relu',
                 input_shape=[224, 224, 3]))
model.add(MaxPooling2D(2))
model.add(Conv2D(filters=64, kernel_size=3, strides=1, padding='same', activation='relu'))
model.add(MaxPooling2D(2))
model.add(Flatten())
model.add(Dense(24, activation='softmax'))  # classification head assumed: 24 classes, as in the other models

Accuracy Comparison
import pandas as pd

# a..a10, r..r10, p..p10, f..f10 hold the final-epoch metrics of each model above
results = {
    'Accuracy': [a, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10],
    'Recall': [r, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10],
    'Precision': [p, p1, p2, p3, p4, p5, p6, p7, p8, p9, p10],
    'F1': [f, f1, f2, f3, f4, f5, f6, f7, f8, f9, f10]
}
index = ['VGG16', 'MobileNet', 'MobileNetV2', 'ResNet50', 'InceptionV3', 'Xception',
         'DenseNet169', 'ResNet101', 'LeNet', 'EfficientNet V2S', 'CNN']
results = pd.DataFrame(results, index=index)
print(results)
fig = results.plot(kind='bar', title='Comparison of models', figsize=(19, 19)).get_figure()
fig.savefig('Final Result.png')
results.plot(subplots=True, kind='bar', figsize=(4, 10))
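The same comparison can be reduced to a quick ranking without pandas. The accuracy values below are placeholders standing in for `a..a10`, not the report's measured results:

```python
# Hypothetical final-epoch accuracies for a subset of the models
accuracy = {
    'VGG16': 0.91, 'MobileNet': 0.93, 'MobileNetV2': 0.95,
    'ResNet50': 0.90, 'CNN': 0.86,
}

# Rank the models from best to worst accuracy
ranking = sorted(accuracy, key=accuracy.get, reverse=True)
print(ranking[0])  # MobileNetV2 comes out on top in this toy data

best = ranking[0]
print(f"{best}: {accuracy[best] * 100:.1f}%")
```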
App
import os
import sqlite3
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from sklearn.metrics import f1_score
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from flask import Flask, redirect, url_for, request, render_template

app = Flask(__name__)
UPLOAD_FOLDER = 'static/uploads/'
model_path2 = 'models/mobile.h5'
# Placeholders for the custom training metrics saved with the model; they are
# needed only so load_model can deserialize the compiled model for inference.
custom_objects = {
    'f1_score': f1_score,
    'recall_m': recall_score,
    'precision_m': precision_score
}
model = load_model(model_path2, custom_objects=custom_objects)

def getHeritage(filepath):
    Heritage = ""
    if os.path.exists("Heritage/" + filepath + ".txt"):
        with open("Heritage/" + filepath + ".txt", "r") as file:
            for line in file.readlines():
                Heritage += line + "\n"
    else:
        with open("Heritage/others.txt", "r") as file:
            for line in file.readlines():
                Heritage += line + "\n"
    return Heritage

@app.route('/predict2', methods=['POST'])
def model_predict2():
    if 'files' in request.files:
        image_file = request.files['files']
        if image_file.filename != '':
            image_path = 'temp_image.jpg'
            image_file.save(image_path)
            img = load_img(image_path, target_size=(224, 224))
            img = img_to_array(img)
            img = img / 255
            img = np.expand_dims(img, axis=0)
            result = np.argmax(model.predict(img))
            result_mapping = {0: 'Ajanta Caves', 1: 'Charar-E- Sharif', 2: 'Chhota_Imambara',
                              3: 'Ellora Caves', 4: 'Fatehpur Sikri', 5: 'Gateway of India',
                              6: 'Humayun_s Tomb', 7: 'India gate', 8: 'Khajuraho',
                              9: 'Sun Temple Konark', 10: 'Alai darwaza', 11: 'Alai minar',
                              12: 'basilica_of_bom_jesus', 13: 'charminar', 14: 'golden temple',
                              15: 'hawa mahal', 16: 'iron_pillar', 17: 'jamali_kamali_tomb',
                              18: 'lotus_temple', 19: 'mysore_palace', 20: 'qutub_minar',
                              21: 'tajmahal', 22: 'tanjavur temple', 23: 'victoria memorial'}
            monuments = result_mapping.get(result, "Unknown")
            information = getHeritage(monuments)
            return render_template('after.html',
                                   monuments=monuments, information=information)
    return "No file uploaded."
@app.route("/index")
def index():
return render_template("index.html")
@app.route('/')
@app.route('/home')
def home():
return render_template('home.html')
@app.route('/logon')
def logon():
return render_template('signup.html')
@app.route('/login')

28
def login():
return render_template('signin.html')
@app.route("/signup")
def signup():
username = request.args.get('user','')
name = request.args.get('name','')
email = request.args.get('email','')
number = request.args.get('mobile','')
password = request.args.get('password','')
con = sqlite3.connect('signup.db')
cur = con.cursor()
cur.execute("insert into `info` (`user`,`email`, `password`,`mobile`,`name`) VALUES (?, ?, ?,
?, ?)",(username,email,password,number,name))
con.commit()
con.close()
return render_template("signin.html")
@app.route("/signin")
def signin():
mail1 = request.args.get('user','')
password1 = request.args.get('password','')
con = sqlite3.connect('signup.db')
cur = con.cursor()
cur.execute("select `user`, `password` from info where `user` = ? AND `password` =
?",(mail1,password1,))
data = cur.fetchone()
if data == None:
return render_template("signin.html")
elif mail1 == 'admin' and password1 == 'admin':
return render_template("index.html")
elif mail1 == str(data[0]) and password1 == str(data[1]):
return render_template("index.html")
else:
return render_template("signup.html")
@app.route("/after")

CHAPTER 7

RESULTS
Our website serves as a dedicated academic and career resource for engineering students, aimed
at enhancing both their semester performance and future job readiness.

Figure 7.1 Home Page


The homepage of BackBenchers Unite is clean, modern, and student-friendly. It has a simple
top menu with links like Home, Placement Prep, DSA Sheet, and more. A big headline in the
center asks, "Confused about your engineering career?" followed by a short message to
motivate students.

Figure 7.2 Sign In Page

The sign-in form is prominently displayed, featuring fields for users to enter their username
and password. The form is designed for clarity and ease of use. Beneath the sign-in form, users
can also find a link to create a new account, inviting newcomers to join the heritage community
and start their journey of discovery. Overall, the sign-in page is designed with user convenience
and aesthetic appeal in mind, ensuring a smooth and engaging experience for both new and
returning users as they access the app's heritage resources.

Figure 7.3 Sign Up Page

The sign-up page for our Heritage Identification app welcomes new users with an inviting and
intuitive design. The sign-up form is prominently displayed, featuring fields for users to enter
essential information such as their name, email address, desired username, phone number, and
password. The form uses clear labels and placeholders, making it easy for users to
input their details accurately. Overall, the sign-up page for our Heritage Identification app is
designed to be user-friendly, informative, and visually engaging, encouraging users to sign up
and embark on a journey of discovery and appreciation for global heritage.

Figure 7.4 Image Upload Page

The image upload form is prominently featured, providing users with intuitive options and
fields to submit their heritage images. The form includes a "Choose File" button, allowing users
to easily select and upload their images from their devices. At the bottom of the page, users can
find an upload button to complete the upload process. At the top of the page, users can find a
logout button that gives them a convenient way to securely log out of their accounts.

Figure 7.5 Result Page

The Result Page in the Heritage Identification app serves as a comprehensive and informative
interface where users can view detailed information about the monument or historical site that
was detected through image recognition. The Result Page often integrates an interactive map
link that pinpoints the exact location of the detected monument. Overall, the Result Page in the
Heritage Identification app offers a rich and immersive experience and user-friendly features
to understand the global heritage and historical monuments.

Figure 7.6 Comparison Graph

The machine learning models are evaluated using accuracy, recall, precision, and F1 score.
These metrics measure a model's ability to detect and identify monuments with minimal false
positives and false negatives; high values on all four indicate a reliable and efficient system.
In our experiments, MobileNetV2 outperformed the other algorithms on every metric.
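Concretely, these metrics derive from the confusion counts (true/false positives and negatives) for each class. The counts below are purely illustrative, not the report's measured values:

```python
# Illustrative confusion counts for one monument class
tp, fp, fn, tn = 90, 5, 10, 895

accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction of all correct decisions
precision = tp / (tp + fp)                  # how many flagged images were right
recall = tp / (tp + fn)                     # how many true instances were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(round(accuracy, 3))   # 0.985
print(round(precision, 3))  # 0.947
print(round(recall, 3))     # 0.9
print(round(f1, 3))         # 0.923
```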

Table 7.1 Comparison Table

The table compares the algorithms used for monument detection in the Heritage Identification
system, listing their performance metrics to support an informed choice of algorithm.
MobileNetV2 achieved the best accuracy, recall, precision, and F1 score, detecting monuments
with minimal false positives and negatives.

CHAPTER 8

CONCLUSION

In conclusion, using MobileNetV2 for monument detection in the Heritage Identification
website provides a good balance between accuracy, efficiency, and speed. It enables reliable
monument identification while ensuring smooth performance and responsiveness for users.
Computer vision and machine learning are revolutionizing the preservation and accessibility of
cultural heritage by automating the classification and identification of historical sites. This
technology has the potential to transform travel and education, allowing visitors to interact with
historical monuments in immersive ways. As technology advances, complex algorithms will
improve accuracy, enhancing educational and recreational value for the general population and
aiding academics and environmentalists.

FUTURE ENHANCEMENTS AND DISCUSSIONS


The future of heritage identification and preservation of monuments involves several key
enhancements and discussions. These advancements are driven by technological innovations,
changing perspectives on heritage conservation, and the need for sustainable practices.
Platforms and apps that leverage crowdsourcing enable public participation in heritage
identification, documentation, and monitoring. Citizens can contribute photographs, historical
information, and observations, enhancing the collective knowledge base and fostering a sense
of ownership and responsibility towards cultural heritage. High-resolution imaging
technologies like LiDAR (Light Detection and Ranging), photogrammetry, and drone-based
imaging are revolutionizing the way we document and analyze monuments. These technologies
provide detailed 3D models, allowing for precise measurements, virtual reconstructions, and
damage assessments without physical intervention.

REFERENCES
[1] O. Linde and T. Lindeberg, "Object recognition using composed receptive field
histograms of higher dimensionality," in Proc. 17th Int. Conf. Pattern Recognition
(ICPR 2004), vol. 2, pp. 1-6, 2004, doi: 10.1109/ICPR.2004.1333965.
[2] J. Yu and Y. Ge, "A scene recognition algorithm based on covariance descriptor,"
in Proc. IEEE Int. Conf. Cybern. Intell. Syst. (CIS 2008), pp. 838-842, 2008, doi:
10.1109/ICCIS.2008.4670816.
[3] J. Deng, J. Guo, T. Liu, M. Gong, and S. Zafeiriou, "Sub-center ArcFace: Boosting
face recognition by large-scale noisy web faces," in Lect. Notes Comput. Sci., vol.
12356 LNCS, pp. 741-757, 2020, doi: 10.1007/978-3-030-58621-8_43.
[4] T. Weyand and B. Leibe, "Visual landmark recognition from Internet photo
collections: A large-scale evaluation," Comput. Vis. Image Underst., vol. 135, pp.
1-15, 2015, doi: 10.1016/j.cviu.2015.02.002.
[5] J. Sivic and A. Zisserman, "Video Google: A text retrieval approach to object
matching in videos," in Proc. IEEE Int. Conf. Comput. Vis., vol. 2, pp. 1470-1477,
2003, doi: 10.1109/iccv.2003.1238663.
[6] R. Fergus, P. Perona, and A. Zisserman, "Object class recognition by unsupervised
scale-invariant learning," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern
Recognit., vol. 2, 2003, doi: 10.1109/cvpr.2003.1211479.
[7] D. Parikh, C. L. Zitnick, and T. Chen, "Determining patch saliency using low-level
context," in Computer Vision -- ECCV 2008, pp. 446-459, 2008.
[8] A. Torralba, K. P. Murphy, W. T. Freeman, and M. A. Rubin, "Context-based
vision system for place and object recognition," in Proc. IEEE Int. Conf. Comput.
Vis., 2003.
[9] D. G. Lowe, "Object recognition from local scale-invariant features," in Proc.
Seventh IEEE Int. Conf. Comput. Vis., vol. 2, pp. 1150-1157, 1999, doi:
10.1109/ICCV.1999.790410.
[10] A. Bosch, A. Zisserman, and X. Muñoz, "Scene classification using a hybrid
generative/discriminative approach," IEEE Trans. Pattern Anal. Mach. Intell., vol.
30, no. 4, pp. 712-727, 2008, doi: 10.1109/TPAMI.2007.70716.
[11] L. Lu, K. Toyama, and G. D. Hager, "A two level approach for scene recognition,"
in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR 2005),
vol. I, pp. 688-695, 2005, doi: 10.1109/cvpr.2005.51.
[12] J. Lim, Y. Li, Y. You, and J. Chevallet, "Scene recognition with camera phones
for tourist information access," in Proc. IEEE Int. Conf. Multimedia Expo, Jul.
2007, doi: 10.1109/ICME.2007.4284596.
[13] T. Chen and K. H. Yap, "Discriminative BoW framework for mobile landmark
recognition," IEEE Trans. Cybern., vol. 44, no. 5, pp. 695-706, 2014, doi:
10.1109/TCYB.2013.2267015.
[14] J. Cao et al., "Landmark recognition with sparse representation classification and
extreme learning machine," J. Franklin Inst., vol. 352, no. 10, pp. 4528-4545, 2015,
doi: 10.1016/j.jfranklin.2015.07.002.
[15] D. Nistér and H. Stewénius, "Scalable recognition with a vocabulary tree," in Proc.
IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2, pp. 2161-2168,
2006, doi: 10.1109/CVPR.2006.264.
