Backbenchers Unite Report
“BACKBENCHERS UNITE”
Submitted in partial fulfillment of the requirements for the award of the degree of
BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
By
Kummari Sowmya – 23R25A0506
Netapalli Adithya – 22R21A0540
Gurugu Shyam Tarun – 22R21A0526
D.N.S HariCharan – 22R21A0517
Mr. S. Lingaiah
Associate Professor
2025
Department of Computer Science & Engineering
CERTIFICATE
This is to certify that the project entitled “BackBenchers Unite” has been submitted by Kummari Sowmya (23R25A0506), Netapalli Adithya (22R21A0540), Gurugu Shyam Tarun (22R21A0526), and D.N.S HariCharan (22R21A0517) in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science and Engineering from
Jawaharlal Nehru Technological University, Hyderabad. The results embodied in this project have
not been submitted to any other University or Institution for the award of any degree or diploma.
External Examiner
Department of Computer Science & Engineering
DECLARATION
We hereby declare that the project entitled “BackBenchers Unite” is the work done during the
period from July 2023 to April 2024 and is submitted in partial fulfillment of the requirements
for the award of the degree of Bachelor of Technology in Computer Science and Engineering from
Jawaharlal Nehru Technological University, Hyderabad. The results embodied in this project have
not been submitted to any other University or Institution for the award of any degree or diploma.
Department of Computer Science & Engineering
ACKNOWLEDGEMENT
The satisfaction and euphoria that accompany the successful completion of any task
would be incomplete without the mention of people who made it possible, whose constant
guidance and encouragement crowned our efforts with success. It is a pleasant aspect that we
now have the opportunity to express our gratitude to all of them.
First of all, we would like to express our deep gratitude towards our internal guide
Mr. S.Lingaiah, Associate Professor, Department of CSE for his support in the completion
of our dissertation. We wish to express our sincere thanks to Dr. A. BALARAM, HOD,
Department of CSE and also Principal Dr. K. SRINIVAS RAO for providing the facilities to complete this project.
We would like to thank all our faculty and friends for their help and constructive
criticism during the project period. Finally, we are very much indebted to our parents for their
constant support and encouragement throughout.
Department of Computer Science & Engineering
ABSTRACT
In today’s rapidly evolving job market, engineering students face the challenge of balancing academic
demands with career preparation. Traditionally, many students focus on cramming for exams just
before assessments, neglecting long-term skill development and professional growth. As a result, by
the time they reach their third or fourth year, students often begin to worry about their future career
prospects, feeling unprepared for campus placements or the transition into the workforce.
This initiative aims to address this gap by creating a platform that supports both academic success and
career readiness in a cohesive, time-efficient manner. The core of this platform is its approach to exam
preparation, built around a curated collection of important semester exam questions. These questions will be
analyzed and organized in a way that allows students to focus on the most critical topics, thereby making
their study time more efficient. This will allow them to retain what they learn well beyond the exam itself.
Recognizing that engineering students must be ready for the professional world by the time they
graduate, the platform also provides career development tools aimed at preparing students for success
in campus placements and internships. A detailed breakdown of campus placement rounds will be
available, including insight into what to expect during interviews and group discussions, and advice
on how to prepare for each stage.
Beyond just improving academic performance and placement readiness, the platform also aims to
equip students with the skills necessary for thriving in the fast-paced, tech-driven world of today’s
industries.
LIST OF FIGURES & TABLES
3.1.1 MobileNetV2 6
Table Number    Name of the Table    Page Number
INDEX
CERTIFICATE i
DECLARATION ii
ACKNOWLEDGEMENT iii
ABSTRACT iv
LIST OF FIGURES & TABLES v
CHAPTER 1
INTRODUCTION 1
1.1 Overview 1
1.2 Purpose of the project 1
1.3 Motivation 1
CHAPTER 2
LITERATURE SURVEY 2
2.1 Existing System 2
2.2 Disadvantages of Existing System 4
CHAPTER 3
PROPOSED SYSTEM 6
3.1 Proposed System 6
3.2 Advantages of Proposed System 7
3.3 System Architecture 7
3.4 UML Diagrams 10
CHAPTER 4
SYSTEM REQUIREMENTS 15
4.1 Software Requirements 15
4.2 Hardware Requirements 15
4.3 Functional Requirements 15
4.4 Non-Functional Requirements 16
CHAPTER 5
MODULE DESIGN 17
5.1 Module Design 17
CHAPTER 6
IMPLEMENTATION 19
6.1 Source Code 19
CHAPTER 7
RESULTS 28
CHAPTER 8
CONCLUSION 33
FUTURE ENHANCEMENTS AND DISCUSSIONS 33
REFERENCES 34
CHAPTER 1
INTRODUCTION
1.1 OVERVIEW
Engineering students today face difficulties in balancing their academic studies and job
preparation after graduation. Many tend to study seriously only before exams, which impacts
both their learning and long-term career planning. This often leads to stress and uncertainty
during campus placements or internships, especially in the later years of college. To address
this issue, this project introduces a platform that provides both important semester exam
questions and career guidance in one place. The questions are carefully selected and organized
to help students focus on key topics, making their study time more effective. Alongside this,
the platform offers career support through interview tips, group discussion strategies, and step-
by-step placement preparation, ensuring students are well-prepared for job opportunities by
graduation.
1.3 MOTIVATION
The motivation behind this project comes from the common struggle engineering students face
in trying to manage academic pressure while also preparing for their careers. Many students
delay career planning until their final year, often feeling overwhelmed and unprepared when
it's time for placements. There is a clear need for a system that helps students stay on track
throughout their academic journey. By offering exam preparation and professional development
tools in a single, easy-to-use platform, this project aims to reduce stress, save time, and improve
student outcomes both academically and professionally.
CHAPTER 2
LITERATURE SURVEY
An extensive literature survey has been conducted. Researchers and educators have explored
various tools to support engineering students in both academic preparation and career
development. Studies show that most students rely on last-minute exam preparation, often
neglecting long-term skill-building. Several platforms offer exam resources or placement
support, but very few integrate both efficiently. Various research papers, journals, and
educational resources have been reviewed to design a unified solution that enhances both
academic performance and career readiness.
2.2 DISADVANTAGES OF EXISTING SYSTEM
Concisely summarizing the disadvantages of currently available academic support and
preparation platforms:
1. Most platforms provide generic material, failing to align with specific university or
branch-wise syllabi followed in different colleges.
2. Existing systems often ignore structured guidance for campus placements, including
round-wise preparation and company-specific strategies.
3. Many platforms do not regularly update their content, resulting in the use of old question
banks and irrelevant materials.
4. Cluttered layouts and confusing navigation reduce student engagement and usability,
especially on mobile devices.
5. Platforms lack intelligent systems to suggest subjects, questions, or preparation paths
based on the user’s branch, performance, or upcoming exams.
6. There is no feature for students to share previous question papers, personal notes, or real-
time insights that could help others in the same college or branch.
2.3 DISADVANTAGES OF EXISTING SYSTEM
Concisely summarizing the disadvantages of the above implementations:
1. Sensitivity to complex scenes may lead to false positives.
2. Limited adaptability to dynamic lighting and drastic viewpoint changes.
3. Susceptibility to errors in user-outlined object boundaries affecting accuracy.
4. Challenges in handling occlusion scenarios impacting object recognition.
5. Dependency on pre-computed matches may hinder real-time adaptability.
CHAPTER 3
PROPOSED SYSTEM
3.2 ADVANTAGES OF PROPOSED SYSTEM
The proposed system has the following advantages:
1. Rapid identification: MobileNetV2 enables swift monument categorization for efficient
preservation efforts.
2. Comprehensive categorization: Architecture, location, and historical context enhance
precise monument classification.
3. Continuous improvement: User feedback refines models, ensuring evolving accuracy
over time.
4. Global heritage awareness: Promotes cultural preservation, fostering appreciation of
historical significance.
Data Collection:
It involves gathering images of monuments from various sources such as online repositories and
digital archives. Validate and clean collected data to remove duplicates, inconsistencies, and
inaccuracies.
Image Preprocessing:
It involves applying various techniques to the raw input images to enhance their quality,
reduce noise, and extract meaningful features.
• Image resizing: Resizing the image to a standardized resolution can make processing
more efficient and consistent.
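As a rough illustration of this step, the sketch below resizes and normalizes a single image with OpenCV (cv2 is already imported in the implementation in Chapter 6); the file name, the helper name preprocess_image, and the 224 x 224 target size are assumptions based on the input shape used by the models later in this report.
import cv2
import numpy as np

def preprocess_image(path, size=(224, 224)):
    # Load the raw image, resize it to a standardized resolution, and scale pixels to [0, 1]
    img = cv2.imread(path)
    img = cv2.resize(img, size)
    return img.astype(np.float32) / 255.0

processed = preprocess_image("monument.jpg")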
Feature Extraction:
It involves identifying and extracting relevant information or features from raw input images,
which can then be used for tasks such as classification, recognition, or matching.
• Corner detection: Identifying corner points in the image, which are locations with high
variation in intensity in multiple directions.
• Edge detection: Detecting edges in the image, which represent significant changes in
intensity and can outline shapes or structures.
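A minimal, illustrative sketch of these two operations with OpenCV is given below; the file name is a placeholder and the thresholds are example values, not tuned settings from this project.
import cv2

img = cv2.imread("monument.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Corner detection: points with high intensity variation in multiple directions
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=10)

# Edge detection: significant intensity changes that outline shapes or structures
edges = cv2.Canny(gray, threshold1=100, threshold2=200)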
Image Splitting:
Image splitting, also known as image segmentation, involves dividing a single image into
multiple smaller images or segments.
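The implementation in Chapter 6 imports the splitfolders package, which splits a labelled image directory into training and validation subsets (rather than segmenting individual images). A sketch of that usage is shown below; the folder names and the 80/20 ratio are assumptions, as the report does not state the exact split.
import splitfolders

# Split a folder of class-labelled images into train/val subsets (assumed 80/20 ratio)
splitfolders.ratio("dataset_raw", output="dataset", seed=1234, ratio=(0.8, 0.2))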
Train data:
Training data refers to the dataset used to train machine learning models. In the context of
heritage identification of monuments, training data would consist of images and associated
labels or annotations that are used to teach a machine learning model to recognize and classify
different monuments accurately.
• Model Training: Train a machine learning model using the annotated training data.
During training, the model learns to recognize patterns and features in the input
images that are indicative of different types of monuments.
• Model Evaluation: Evaluate the performance of the trained model on the validation set
to assess its accuracy, precision, recall, and other relevant metrics.
Test Data:
Test data refers to a separate dataset that is used to evaluate the performance of a trained
machine learning model. Test data would consist of a collection of images that the model has
not seen during the training phase. Overall, test data plays a crucial role in objectively
evaluating the performance and generalization ability of machine learning models for heritage
identification of monuments
• Model Testing: Model testing is a critical step in the machine learning pipeline to
evaluate the performance and generalization ability of a trained model on unseen data.
Assess the performance of your trained model and ensure that it performs well on
unseen data, thus demonstrating its reliability and utility for real-world applications.
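A sketch of such an evaluation is shown below, reusing the datagen generator, custom metrics, and the modelss model defined in Chapter 6; the separate dataset/test directory and the test_gen name are assumptions, since the implementation reuses the same folder for validation.
test_gen = datagen.flow_from_directory(directory="dataset/test",
                                       class_mode="categorical",
                                       target_size=(224, 224),
                                       batch_size=64,
                                       shuffle=False)
# Metrics come back in the order given at compile time: accuracy, f1_score, recall_m, precision_m
loss, acc, f1, rec, prec = modelss.evaluate(test_gen, steps=len(test_gen))
print("Test accuracy:", acc)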
Classification:
Classification is a fundamental task in machine learning, and it plays a central role in heritage
identification of monuments. In this context, classification involves assigning a label or
category to input images of monuments based on their visual characteristics. Widely used CNN-based
architectures for image classification include VGG, ResNet, Inception, EfficientNet, MobileNetV2,
and Xception.
Visualizing Results:
Visualizing results is essential for interpreting the performance of a heritage identification
system. By visualizing the results of monument identification researchers and stakeholders can
gain a better understanding of the system's performance, identify areas for improvement, and
make informed decisions about future developments and applications.
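One possible way to visualize per-class results is a confusion matrix, sketched below; it assumes a trained model and a non-shuffled test generator named test_gen, so that predictions line up with the generator's true labels.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Predicted class indices for every image in the (non-shuffled) test generator
preds = np.argmax(model.predict(test_gen), axis=1)
cm = confusion_matrix(test_gen.classes, preds)
ConfusionMatrixDisplay(cm, display_labels=list(test_gen.class_indices.keys())).plot(xticks_rotation=90)
plt.tight_layout()
plt.show()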
Input Image:
The user can input images of monuments to identify them using the system.
Preprocessing:
Preprocessing is a crucial step in the pipeline of heritage identification of monuments,
especially when dealing with image data. It involves preparing and cleaning the data to ensure
that it is in a suitable format for analysis.
• Image Loading: Load the raw image data into memory from storage or from an external
source. This step ensures that the images are accessible for further processing.
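The Flask application in Chapter 6 performs this loading step with Keras utilities; a condensed sketch of the same pattern is shown below (the file path is a placeholder).
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Load a raw image from disk, convert it to an array, normalize it, and add a batch dimension
img = load_img("static/uploads/sample.jpg", target_size=(224, 224))
arr = img_to_array(img) / 255.0
arr = np.expand_dims(arr, axis=0)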
Feature Extraction:
Feature extraction is a critical step in heritage identification of monuments, particularly in
computer vision tasks where images of monuments are analyzed to extract meaningful
information.
• Image Representation: Convert each input image into a format suitable for analysis.
This may involve loading the image data from files, resizing images to a consistent size,
and converting them to a standardized colour space.
Figure 3.4.1 Use Case Diagram
Class Diagram:
The class diagram is used to refine the use case diagram and define a detailed design of the
system. The class diagram classifies the actors defined in the use case diagram into a set of
interrelated classes. The relationship or association between the classes can be either an "is-a"
or "has-a" relationship. Each class in the class diagram may be capable of providing certain
functionalities. These functionalities provided by the class are termed "methods" of the class.
Apart from this, each class may have certain "attributes" that uniquely identify the class.
Figure 3.4.3 Activity Diagram
Sequence Diagram:
A sequence diagram represents the interaction between different objects in the system. The
important aspect of a sequence diagram is that it is time-ordered. This means that the exact
sequence of the interactions between the objects is represented step by step. Different objects
in the sequence diagram interact with each other by passing "messages".
Collaboration Diagram:
A collaboration diagram groups together the interactions between different objects. The
interactions are listed as numbered interactions that help to trace the sequence of the
interactions. The collaboration diagram helps to identify all the possible interactions that each
object has with other objects.
Deployment Diagram:
The deployment diagram captures the configuration of the runtime elements of the application.
This diagram is by far the most useful when a system is built and ready to be deployed.
Component Diagram:
The component diagram represents the high-level parts that make up the system. This diagram
depicts, at a high level, what components form part of the system and how they are interrelated.
A component diagram depicts the components culled after the system has undergone the
development or construction phase.
Figure 3.4.7 Component Diagram
CHAPTER 4
SYSTEM REQUIREMENTS
The system requirements for the development and deployment of the project as an application
are specified in this section. These requirements are not to be confused with the end-user system
requirements. There are no specific end-user requirements, as the intended application is cross-platform
and is supposed to work on devices of all form factors and configurations.
• Data Collection and Integration: The system should be able to collect and preprocess
a dataset of labelled images and integrate them into a single dataset for analysis.
• Data Preprocessing: The system should perform data preprocessing tasks, such as data
cleaning and normalization, to ensure the data's quality and reliability.
CHAPTER 5
MODULE DESIGN
enable seamless communication and data exchange between applications and databases,
allowing them to work together efficiently.
User Interface Module:
The User Interface (UI) module serves as the primary point of interaction between users and
the system. Its main purpose is to present information and functionality in a clear, intuitive, and
aesthetically pleasing manner, facilitating efficient user interaction and task completion.
CHAPTER 6
IMPLEMENTATION
Importing Libraries
import os
import sys
import glob
import glob as gb
import shutil
import pathlib
import warnings

import cv2
import numpy as np
import pandas as pd
import PIL
import splitfolders
import matplotlib.pyplot as plt

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, models, losses
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Activation, Flatten, Dense,
                                     Dropout, BatchNormalization, LSTM)
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.callbacks import Callback, EarlyStopping, ModelCheckpoint
from tensorflow.keras.preprocessing import image
import tensorflow.keras.backend as K

warnings.filterwarnings('ignore')
Importing Dataset
BATCH_SIZE = 64
IMAGE_SHAPE = (224, 224)
TRAIN_PATH = "dataset/train"
VAL_PATH = "dataset/test"
datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255)
train_gen = datagen.flow_from_directory(directory = TRAIN_PATH,
class_mode="categorical",
target_size = IMAGE_SHAPE,
batch_size = BATCH_SIZE,
color_mode='rgb',
seed = 1234,
shuffle = True)
val_gen = datagen.flow_from_directory(directory = VAL_PATH,
class_mode="categorical",
target_size = IMAGE_SHAPE,
batch_size = BATCH_SIZE,
color_mode='rgb',
seed = 1234,
shuffle = True)
class_map = dict([(v, k) for k, v in train_gen.class_indices.items()])
print(class_map)
def recall_m(y_true, y_pred):
    # Recall = true positives / all actual positives
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def precision_m(y_true, y_pred):
    # Precision = true positives / all predicted positives
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def f1_score(y_true, y_pred):
    # F1 = harmonic mean of precision and recall
    precision = precision_m(y_true, y_pred)
    recall = recall_m(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))
VGG16
from tensorflow.keras.models import Model
import tensorflow as tf

# VGG16 backbone pre-trained on ImageNet, with a 24-class softmax head
inc = tf.keras.applications.vgg16.VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3), pooling='max')
x31 = Flatten()(inc.output)
predictionss = Dense(24, activation='softmax')(x31)
modelss = Model(inputs=inc.inputs, outputs=predictionss)
modelss.summary()
modelss.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy', f1_score, recall_m, precision_m])
r2 = modelss.fit(train_gen, validation_data=val_gen, epochs=20, steps_per_epoch=len(train_gen), validation_steps=len(val_gen))
history = r2
x = r2
modelss.save('models/VGG16.h5')
train_acc = history.history['accuracy']
train_recall = history.history['recall_m']
train_precision = history.history['precision_m']
train_f1 = history.history['f1_score']
val_acc = history.history['val_accuracy']
val_recall = history.history['val_recall_m']
val_precision = history.history['val_precision_m']
val_f1 = history.history['val_f1_score']
fig, axs = plt.subplots(2, 2, figsize=(12, 8))
axs[0, 0].plot(train_acc, label='Train')
axs[0, 0].plot(val_acc, label='Validation')
axs[0, 0].set_title('Accuracy')
axs[0, 0].legend()
axs[0, 1].plot(train_precision, label='Train')
axs[0, 1].plot(val_precision, label='Validation')
axs[0, 1].set_title('Precision')
axs[0, 1].legend()
axs[1, 0].plot(train_recall, label='Train')
axs[1, 0].plot(val_recall, label='Validation')
axs[1, 0].set_title('Recall')
axs[1, 0].legend()
axs[1, 1].plot(train_f1, label='Train')
axs[1, 1].plot(val_f1, label='Validation')
axs[1, 1].set_title('F1 Score')
axs[1, 1].legend()
plt.tight_layout()
plt.show()
a = history.history['accuracy'][-1]
f = history.history['f1_score'][-1]
p = history.history['precision_m'][-1]
r = history.history['recall_m'][-1]
print('Accuracy = ' + str(a * 100))
print('Precision = ' + str(p * 100))
print('F1 Score = ' + str(f * 100))
print('Recall = ' + str(r * 100))
MobileNet
inc = tf.keras.applications.mobilenet.MobileNet(include_top=False, weights='imagenet', input_shape=(224, 224, 3), pooling='max')
MobileNetV2
inc = tf.keras.applications.mobilenet_v2.MobileNetV2(include_top=False, weights='imagenet', input_shape=(224, 224, 3), pooling='max')
ResNet50
inc = tf.keras.applications.resnet50.ResNet50(include_top=False, weights='imagenet', input_shape=(224, 224, 3), pooling='max')
InceptionV3
Xception
inc = tf.keras.applications.xception.Xception(include_top=False, weights='imagenet', input_shape=(224, 224, 3), pooling='max')
DenseNet169
ResNet101
LeNet
model = models.Sequential()
model.add(layers.Conv2D(6, 5, activation='tanh', input_shape=(224,224,3)))
model.add(layers.AveragePooling2D(2))
model.add(layers.Activation('sigmoid'))
model.add(layers.Conv2D(16, 5, activation='tanh'))
model.add(layers.AveragePooling2D(2))
model.add(layers.Activation('sigmoid'))
model.add(layers.Conv2D(120, 5, activation='tanh'))
model.add(layers.Flatten())
model.add(layers.Dense(84, activation='tanh'))
model.add(layers.Dense(24, activation='softmax'))
EfficientNet V2S
from tensorflow.keras import layers, models
model = models.Sequential()
model.add(layers.Input(shape=(224,224,3)))
model.add(layers.experimental.preprocessing.Rescaling(scale=1./255))
model.add(layers.Conv2D(32, (3,3), activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64, (3,3), activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(24, activation='softmax'))
CNN
model = models.Sequential() # model object
model.add(Conv2D(filters=32, kernel_size=3, strides=1, padding='same', activation='relu', input_shape=[224, 224, 3]))
model.add(MaxPooling2D(2))
model.add(Conv2D(filters=64, kernel_size=3, strides=1, padding='same', activation='relu'))
model.add(MaxPooling2D(2))
model.add(Flatten())
model.add(Dense(24, activation='softmax'))  # 24-class output head added to complete the truncated snippet, matching the other models
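The report shows the compile/fit calls only for VGG16; presumably the same training pattern is repeated for each of the models defined above. A sketch of that pattern, reusing the generators and custom metrics defined earlier in this chapter, is:
# Presumed training pattern for the Sequential models above, mirroring the VGG16 calls
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy', f1_score, recall_m, precision_m])
history = model.fit(train_gen,
                    validation_data=val_gen,
                    epochs=20,
                    steps_per_epoch=len(train_gen),
                    validation_steps=len(val_gen))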
Accuracy Comparison
import pandas as pd
results = {
'Accuracy': [a, a1, a2, a3, a4, a5, a6, a7,a8,a9,a10],
'Recall': [r, r1, r2, r3, r4, r5, r6, r7,r8,r9,r10],
'Precision': [p, p1, p2, p3, p4, p5, p6, p7,p8,p9,p10],
'F1': [f, f1, f2, f3, f4, f5, f6, f7,f8,f9,f10]
}
index = ['VGG16', 'MobileNet', 'MobileNetV2', 'ResNet50', 'InceptionV3', 'Xception',
         'DenseNet169', 'ResNet101', 'LeNet', 'EfficientNet V2S', 'CNN']
results = pd.DataFrame(results, index=index)
print(results)
fig = results.plot(kind='bar', title='Comparison of models', figsize=(19, 19)).get_figure()
fig.savefig('Final Result.png')
results.plot(subplots=True, kind='bar', figsize=(4, 10))
App
import os
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from sklearn.metrics import f1_score
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from flask import Flask, redirect, url_for, request, render_template
import sqlite3
app = Flask(__name__)
UPLOAD_FOLDER = 'static/uploads/'
model_path2 = 'models/mobile.h5'
custom_objects = {
'f1_score': f1_score,
'recall_m': recall_score,
'precision_m': precision_score
}
model = load_model(model_path2, custom_objects=custom_objects)
def getHeritage(filepath):
    # Read the description text file for the detected monument, falling back to a generic file
    Heritage = ""
    if os.path.exists("Heritage/" + filepath + ".txt"):
        with open("Heritage/" + filepath + ".txt", "r") as file:
            lines = file.readlines()
            for i in range(len(lines)):
                Heritage += lines[i] + "\n"
    else:
        with open("Heritage/others.txt", "r") as file:
            lines = file.readlines()
            for i in range(len(lines)):
                Heritage += lines[i] + "\n"
    return Heritage
@app.route('/predict2', methods=['POST'])
def model_predict2():
    if 'files' in request.files:
        image_file = request.files['files']
        if image_file.filename != '':
            # Save the upload, then preprocess it the same way as the training images
            image_path = 'temp_image.jpg'
            image_file.save(image_path)
            image = load_img(image_path, target_size=(224, 224))
            image = img_to_array(image)
            image = image / 255
            image = np.expand_dims(image, axis=0)
            result = np.argmax(model.predict(image))
            result_mapping = {0: 'Ajanta Caves', 1: 'Charar-E- Sharif', 2: 'Chhota_Imambara',
                              3: 'Ellora Caves', 4: 'Fatehpur Sikri', 5: 'Gateway of India',
                              6: 'Humayun_s Tomb', 7: 'India gate', 8: 'Khajuraho',
                              9: 'Sun Temple Konark', 10: 'Alai darwaza', 11: 'Alai minar',
                              12: 'basilica_of_bom_jesus', 13: 'charminar', 14: 'golden temple',
                              15: 'hawa mahal', 16: 'iron_pillar', 17: 'jamali_kamali_tomb',
                              18: 'lotus_temple', 19: 'mysore_palace', 20: 'qutub_minar',
                              21: 'tajmahal', 22: 'tanjavur temple', 23: 'victoria memorial'}
            if result in result_mapping:
                monuments = result_mapping[result]
            else:
                monuments = "Unknown"
            information = getHeritage(monuments)
            return render_template('after.html', monuments=monuments, information=information)
    return "No file uploaded."
@app.route("/index")
def index():
return render_template("index.html")
@app.route('/')
@app.route('/home')
def home():
return render_template('home.html')
@app.route('/logon')
def logon():
return render_template('signup.html')
@app.route('/login')
28
def login():
return render_template('signin.html')
@app.route("/signup")
def signup():
username = request.args.get('user','')
name = request.args.get('name','')
email = request.args.get('email','')
number = request.args.get('mobile','')
password = request.args.get('password','')
con = sqlite3.connect('signup.db')
cur = con.cursor()
cur.execute("insert into `info` (`user`,`email`, `password`,`mobile`,`name`) VALUES (?, ?, ?,
?, ?)",(username,email,password,number,name))
con.commit()
con.close()
return render_template("signin.html")
@app.route("/signin")
def signin():
mail1 = request.args.get('user','')
password1 = request.args.get('password','')
con = sqlite3.connect('signup.db')
cur = con.cursor()
cur.execute("select `user`, `password` from info where `user` = ? AND `password` =
?",(mail1,password1,))
data = cur.fetchone()
if data == None:
return render_template("signin.html")
elif mail1 == 'admin' and password1 == 'admin':
return render_template("index.html")
elif mail1 == str(data[0]) and password1 == str(data[1]):
return render_template("index.html")
else:
return render_template("signup.html")
@app.route("/after")
CHAPTER 7
RESULTS
Our website serves as a dedicated academic and career resource for engineering students, aimed
at enhancing both their semester performance and future job readiness.
The sign-in form is prominently displayed, featuring fields for users to enter their username
and password. The form is designed for clarity and ease of use. Beneath the sign-in form, users
can also find a link to create a new account, inviting newcomers to join the heritage community
and start their journey of discovery. Overall, the sign-in page for our Heritage Identification
app is designed with user convenience and aesthetic appeal in mind, ensuring a smooth and
engaging experience for both new and returning users as they access the app's rich heritage
resources.
The sign-up page for our Heritage Identification app welcomes new users with an inviting and
intuitive design. The sign-up form is prominently displayed, featuring fields for users to enter
essential information such as their name, email address, desired username, phone number and
password. The form is designed with clear labels and placeholders, making it easy for users to
input their details accurately. Overall, the sign-up page for our Heritage Identification app is
designed to be user-friendly, informative, and visually engaging, encouraging users to sign up
and embark on a journey of discovery and appreciation for global heritage.
The image upload form is prominently featured, providing users with intuitive options and
fields to submit their heritage images. The form includes a "Choose File" button, allowing users
to easily select and upload their images from their devices. At the bottom of the page, users can
find an upload button to complete the upload process. At the top of the image upload page,
users can find a logout button that provides a convenient way to securely log out of
their accounts.
The Result Page in the Heritage Identification app serves as a comprehensive and informative
interface where users can view detailed information about the monument or historical site that
was detected through image recognition. The Result Page often integrates an interactive map
link that pinpoints the exact location of the detected monument. Overall, the Result Page in the
Heritage Identification app offers a rich, immersive experience and user-friendly features
for understanding global heritage and historical monuments.
Machine learning models are evaluated using metrics like accuracy, recall, precision, and F1
score. These metrics measure the model's ability to accurately detect and identify monuments
with minimal false positives and false negatives. High accuracy, recall, precision, and F1 score
indicate a reliable and efficient system. MobileNetV2 outperformed other algorithms in terms
of accuracy, recall, precision, and F1 score in detecting monuments with minimal false
positives and negatives.
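For reference, these metrics follow the standard definitions in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)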
The table compares the algorithms used for monument detection within the Heritage Identification app,
providing key attributes and performance metrics to make informed decisions on algorithm
selection. MobileNetV2 outperformed other algorithms in terms of accuracy, recall, precision,
and F1 score in detecting monuments with minimal false positives and negatives.
CHAPTER 8
CONCLUSION
REFERENCES
[1] O. Linde and T. Lindeberg, "Object recognition using composed receptive field histograms of higher dimensionality," in Proc. 17th Int. Conf. Pattern Recognition (ICPR 2004), vol. 2, pp. 1–6, 2004, doi: 10.1109/ICPR.2004.1333965.
[2] J. Yu and Y. Ge, "A scene recognition algorithm based on covariance descriptor," in Proc. IEEE Int. Conf. Cybernetics and Intelligent Systems (CIS 2008), pp. 838–842, 2008, doi: 10.1109/ICCIS.2008.4670816.
[3] J. Deng, J. Guo, T. Liu, M. Gong, and S. Zafeiriou, "Sub-center ArcFace: Boosting Face Recognition by Large-Scale Noisy Web Faces," in Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 12356 LNCS, pp. 741–757, 2020, doi: 10.1007/978-3-030-58621-8_43.
[4] T. Weyand and B. Leibe, "Visual landmark recognition from Internet photo collections: A large-scale evaluation," Comput. Vis. Image Underst., vol. 135, pp. 1–15, 2015, doi: 10.1016/j.cviu.2015.02.002.
[5] J. Sivic and A. Zisserman, "Video Google: A text retrieval approach to object matching in videos," in Proc. IEEE Int. Conf. Comput. Vis., vol. 2, pp. 1470–1477, 2003, doi: 10.1109/ICCV.2003.1238663.
[6] R. Fergus, P. Perona, and A. Zisserman, "Object class recognition by unsupervised scale-invariant learning," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), vol. 2, 2003, doi: 10.1109/CVPR.2003.1211479.
[7] D. Parikh, C. L. Zitnick, and T. Chen, "Determining Patch Saliency Using Low-Level Context," in Computer Vision – ECCV 2008, pp. 446–459, 2008.
[8] A. Torralba, K. P. Murphy, W. T. Freeman, and M. A. Rubin, "Context-based vision system for place and object recognition," Radiology, vol. 239, no. 1, p. 301, 2006, doi: 10.1148/radiol.2391051085.
[9] D. G. Lowe, "Object recognition from local scale-invariant features," in Proc. Seventh IEEE Int. Conf. Comput. Vis., vol. 2, pp. 1150–1157, 1999, doi: 10.1109/ICCV.1999.790410.
[10] A. Bosch, A. Zisserman, and X. Muñoz, "Scene classification using a hybrid generative/discriminative approach," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 4, pp. 712–727, 2008, doi: 10.1109/TPAMI.2007.70716.
[11] L. Lu, K. Toyama, and G. D. Hager, "A two level approach for scene recognition," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR 2005), vol. 1, pp. 688–695, 2005, doi: 10.1109/CVPR.2005.51.
[12] J. Lim, Y. Li, Y. You, and J. Chevallet, "Scene Recognition with Camera Phones for Tourist Information Access," in Proc. IEEE Int. Conf. Multimedia and Expo (ICME), Jul. 2007, doi: 10.1109/ICME.2007.4284596.
[13] T. Chen and K. H. Yap, "Discriminative BoW framework for mobile landmark recognition," IEEE Trans. Cybern., vol. 44, no. 5, pp. 695–706, 2014, doi: 10.1109/TCYB.2013.2267015.
[14] J. Cao et al., "Landmark recognition with sparse representation classification and extreme learning machine," J. Franklin Inst., vol. 352, no. 10, pp. 4528–4545, 2015, doi: 10.1016/j.jfranklin.2015.07.002.
[15] D. Nistér and H. Stewénius, "Scalable recognition with a vocabulary tree," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2, pp. 2161–2168, 2006, doi: 10.1109/CVPR.2006.264.