
INTERNSHIP REPORT ON

WILD TRACK UTILIZES COMPUTER VISION TECHNOLOGY TO CLASSIFY ANIMAL FOOTPRINTS

A Report Submitted to
Jawaharlal Nehru Technological University Kakinada, Kakinada
in partial fulfillment for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

Submitted by

Name: EDA BHARGAV REDDY Regd No: 21KN1A6118

DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING


NRI INSTITUTE OF TECHNOLOGY
Autonomous
(Approved by AICTE, Permanently Affiliated to JNTUK, Kakinada)
Accredited by NBA (CSE, ECE & EEE), Accredited by NAAC with
‘A’ Grade, ISO 9001:2015 Certified Institution
Pothavarappadu (V), (Via) Nunna, Agiripalli (M), Krishna Dist., PIN: 521212, A.P,
India.
2024-2025
NRI INSTITUTE OF TECHNOLOGY
(An Autonomous Institution, Approved by AICTE, Permanently Affiliated to JNTUK,
Kakinada) Accredited by NBA (CSE, ECE & EEE), Accredited by NAAC with
‘A’ Grade, ISO 9001:2015 Certified Institution
Pothavarappadu (V), (Via) Nunna, Agiripalli (M), Krishna Dist., PIN: 521212, A.P, India.

CERTIFICATE

This is to certify that the “Internship Report” submitted by EDA BHARGAV
REDDY (21KN1A6118) is work done by him and submitted during the 2024-2025 academic
year, in partial fulfillment of the requirements for the award of the degree of BACHELOR OF
TECHNOLOGY in ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING, at
QUALITY THOUGHT PVT. LTD.

INTERNSHIP COORDINATOR HEAD OF THE DEPARTMENT


(Dr. P. RAJENDRA KUMAR)

EXTERNAL EXAMINER
CERTIFICATE OF INTERNSHIP
ACKNOWLEDGEMENT

We take this opportunity to thank everyone who rendered their full support to our
work. The pleasure, the achievement, the satisfaction, and the appreciation that came
with completing this project cannot be expressed in a few words, and we are grateful
for their valuable suggestions.

We express our heartfelt thanks to the Head of the Department, Dr. P.
RAJENDRA KUMAR garu, for his continuous guidance in the completion of our Project work.

We are thankful to the Principal, Dr. C. NAGA BHASKAR garu, for his encouragement to
complete the Project work.

We extend our sincere and honest thanks to the Chairman, Dr. R.
VENKATA RAO garu, and the Secretary, Sri K. Sridhar garu, for their continuous
support in completing the Project work.

Finally, we thank the Administrative Officer, the staff members and faculty of the
Department of AIML, NRI Institute of Technology, and our friends who directly or
indirectly helped us in the completion of this project.

Name: EDA BHARGAV REDDY Regd No: 21KN1A6118


ABSTRACT
To protect job seekers from fraudulent job posts on the internet, this
project proposes an automated tool based on machine learning
classification techniques. Different classifiers are used to check for
fraudulent posts on the web, and their results are compared to
identify the best employment scam detection model. The tool helps
detect fake job posts among an enormous number of postings. Two
major types of classifiers, single classifiers and ensemble classifiers,
are considered for fraudulent job post detection; experimental
results indicate that ensemble classifiers detect scams better than
single classifiers.
In recent years, the proliferation of online job postings has led to a
corresponding increase in fraudulent job advertisements, which can
significantly undermine the job-seeking experience and even lead to
financial loss for applicants. We therefore propose the development
of an automated tool leveraging machine learning classification
techniques for the detection of fraudulent job posts on the internet.
Our approach involves employing various classifiers to analyze and
identify potentially deceptive job listings, allowing for more
informed and secure job-seeking practices. In our study, we focus
on two primary categories of classifiers: single classifiers and
ensemble classifiers. Single classifiers include well-known
algorithms such as Decision Trees, Support Vector Machines, and
Naïve Bayes, which serve as baseline models for detecting
fraudulent posts. Ensemble classifiers, which combine multiple
models to improve robustness, include methods such as Random
Forest, Gradient Boosting, and AdaBoost. To evaluate the
effectiveness of these classification techniques, we conduct a
comprehensive set of experiments using a labeled dataset of job
postings. Performance metrics such as accuracy, precision, recall,
and F1-score are employed to assess the detection capabilities of
each model. Our experimental results demonstrate that ensemble
classifiers significantly outperform single classifiers in overall
classification accuracy and reliability for detecting fraudulent job
postings.
Organization Information:

Quality Thought is a global technology and training services provider specializing in IT
consulting, corporate training, and skill development. The company offers a variety of programs
aimed at enhancing the skills of individuals and teams, including courses in software development,
data science, cloud computing, and other emerging technologies.

With a focus on real-time learning and practical skills, Quality Thought partners with industry
experts and companies to ensure that the training remains aligned with market needs. The
company operates across different countries, with a prominent presence in India and other parts of
Asia.

Programs and opportunities:

This ground-up approach helps the company deliver not only solutions to its clients but also add
value at the core. It operates in five specific domains: TapTap (AI-driven), Post
Graduation Programs, Center of Excellence, Virtual Programming Labs, and Happie Days, a
social networking site for students. TapTap offers campus recruitment services to both
employers and college authorities: recruiters can conduct customized online
assessments secured with best-in-class proctoring and schedule the end-to-end hiring process.
Under each division, the company further provides industry-specific solutions in focused domains
with cutting-edge technologies, and it emphasizes building relationships with clients by
delivering projects on time and within budget.
INDEX

S.no CONTENTS
1. Introduction
2. Analysis
3. Software requirements specifications
4. Architecture
5. Coding
6. Diagram
7. Conclusion
8. References
Learning Objectives/Internship Objectives

Internships are generally thought to be reserved for college students looking to gain
experience in a particular field. However, a wide array of people can benefit from Training
Internships in order to receive real-world experience and develop their skills.

An objective for this position should emphasize the skills you already possess in the area and
your interest in learning more. Internships are utilized in a number of different career fields,
including architecture, engineering, healthcare, economics, advertising, and many more.

Some internships are used to allow individuals to perform scientific research, while others are
specifically designed to allow people to gain first-hand work experience.

Completing internships is a great way to build your resume and develop skills that can be
emphasized when applying for future jobs.

When you are applying for a Training Internship, make sure to highlight any special skills or
talents that can make you stand apart from the rest of the applicants so that you have an
improved chance of landing the position.
WEEKLY OVERVIEW OF INTERNSHIP
ACTIVITIES

DATE DAY NAME OF THE TOPIC/MODULE COMPLETED


15/07/2024 Monday Content delivery
16/07/2024 Tuesday Introduce the Topic & the Problem Statement
17/07/2024 Wednesday Abstract Building
18/07/2024 Thursday Abstract Submission
19/07/2024 Friday Explain your Approach to Solving Problem

DATE DAY NAME OF THE TOPIC/MODULE COMPLETED


22/07/2024 Monday Explain Structure of Project
23/07/2024 Tuesday Data Preprocessing
24/07/2024 Wednesday Perform Analysis
25/07/2024 Thursday PPT Preparation
26/07/2024 Friday PPT Submission

DATE DAY NAME OF THE TOPIC/MODULE COMPLETED


29/07/2024 Monday Mid Review
30/07/2024 Tuesday Building & Applying Algorithm
31/07/2024 Wednesday Building & Applying Algorithm
01/08/2024 Thursday Building & Applying Algorithm
02/08/2024 Friday Building & Applying Algorithm

DATE DAY NAME OF THE TOPIC/MODULE COMPLETED


05/08/2024 Monday Building & Applying Algorithm
06/08/2024 Tuesday Concluding Project
07/08/2024 Wednesday Concluding Project
08/08/2024 Thursday Final Review
09/08/2024 Friday Final Review
CHAPTER 1
INTRODUCTION
INTRODUCTION

Employment scams have emerged as a significant issue in recent times,


particularly within the domain of Online Recruitment Fraud (ORF). With the rise
of digital platforms for job searching, many companies have increasingly turned
to online postings to advertise their vacancies, facilitating easier access for job
seekers. This shift towards online recruitment offers numerous benefits, such as
improved reach and timely dissemination of information. However, it also
creates opportunities for malicious actors to exploit the system, leading to an
alarming increase in fraudulent job advertisements.
These scams often manifest in various forms, with fraudsters posing as
legitimate employers to lure job seekers into their schemes. One common tactic
involves offering enticing job opportunities that require upfront payments for
processing fees, training materials, or background checks. In reality, these
“employment offers” are nothing more than ploys to extract money from
unsuspecting individuals. The consequences can be devastating, as job seekers
not only lose their hard-earned money but may also experience emotional
distress and a significant loss of trust in the job market.
Furthermore, fraudulent job advertisements can severely impact the
credibility and reputation of legitimate companies. Scammers may create fake
listings that appear to be affiliated with well-known organizations, misleading
job seekers and potentially damaging the companies' standing in the eyes of
the public. Such violations of credibility can lead to long-lasting reputational
harm, making it imperative for organizations to protect their brand image and
for job seekers to remain vigilant in their search for legitimate employment
opportunities.
To combat this growing issue, it is essential to implement robust measures that
can detect and flag fraudulent job postings. By leveraging advanced
technologies, such as machine learning and artificial intelligence, we can
develop automated tools that analyze job listings and identify patterns
indicative of scams. This proactive approach can help safeguard job seekers
from falling victim to fraudulent schemes, fostering a safer and more
trustworthy online recruitment environment.
In conclusion, while the shift to online recruitment presents valuable
opportunities for job seekers and employers alike, it also brings the challenge of
combating employment scams. Addressing this issue requires a concerted
effort from all stakeholders involved, including technology developers,
employers, and job seekers, to create a more secure online job market. By
enhancing awareness and implementing effective detection mechanisms, we
can work towards reducing the prevalence of online recruitment fraud and
protecting the interests of all parties involved.
CHAPTER 2
ANALYSIS
2. SYSTEM ANALYSIS

Requirement Analysis

To address the growing issue of fraudulent job postings on the internet, this
project proposes the development of an automated tool that utilizes machine
learning-based classification techniques for effective scam detection. With the
increasing number of job seekers turning to online platforms for employment
opportunities, the risk of encountering deceptive job advertisements has
escalated. These fraudulent postings not only mislead job seekers but can also
result in significant financial and emotional distress. Therefore, there is an
urgent need for innovative solutions that can efficiently sift through vast
quantities of online job listings and identify those that are potentially
fraudulent.
In our approach, we employ a range of classifiers to assess and detect
fraudulent job posts. The effectiveness of different classification methods is
thoroughly evaluated to determine the most reliable employment scam
detection model. This comparative analysis involves the use of both single
classifiers—such as Decision Trees, Support Vector Machines (SVM), and Naïve
Bayes—and ensemble classifiers, which combine the strengths of multiple
models to improve prediction accuracy. Ensemble classifiers, such as Random
Forest, Gradient Boosting, and AdaBoost, are particularly noteworthy for their
ability to reduce overfitting and enhance the robustness of the predictions
made.
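To make this comparison concrete, the sketch below trains the single and ensemble classifiers named above on TF-IDF text features using scikit-learn. It is a minimal illustration only: the file name job_postings.csv and the 'text' and 'fraudulent' column names are placeholders, not the project's actual dataset.

# Minimal sketch of the single-vs-ensemble comparison described above.
# The dataset file and column names below are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.metrics import classification_report

df = pd.read_csv('job_postings.csv')  # assumed: 'text' column, binary 'fraudulent' label
X = TfidfVectorizer(max_features=5000, stop_words='english').fit_transform(df['text'])
y = df['fraudulent']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

models = {
    # single (baseline) classifiers
    'Decision Tree': DecisionTreeClassifier(),
    'Linear SVM': LinearSVC(),
    'Naive Bayes': MultinomialNB(),
    # ensemble classifiers
    'Random Forest': RandomForestClassifier(n_estimators=100),
    'Gradient Boosting': GradientBoostingClassifier(),
    'AdaBoost': AdaBoostClassifier(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))  # precision, recall, F1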
The methodology includes collecting a diverse dataset of job postings,
consisting of both legitimate and fraudulent examples. Various machine
learning techniques are applied to preprocess the data, extract relevant
features, and train the classifiers. Performance metrics, including accuracy,
precision, recall, and F1-score, are employed to evaluate the classification
results. The findings from our experimental results indicate a clear distinction
in performance between the two categories of classifiers. Ensemble classifiers
consistently demonstrate superior capabilities in accurately detecting
fraudulent job postings compared to their single classifier counterparts.
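As a small worked example of the evaluation step, the snippet below computes the four metrics named above with scikit-learn; the label values are toy data for illustration, not results from our experiments.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# toy labels: 1 = fraudulent, 0 = legitimate (illustrative values only)
y_test = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
print('accuracy :', accuracy_score(y_test, y_pred))
print('precision:', precision_score(y_test, y_pred))  # flagged-fake posts that were really fake
print('recall   :', recall_score(y_test, y_pred))     # truly fake posts that were caught
print('f1       :', f1_score(y_test, y_pred))         # harmonic mean of precision and recall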
The implications of this research are significant, as the proposed automated
tool can significantly enhance the job-seeking experience by providing a
means to identify and flag potentially deceptive listings. This advancement not
only protects job seekers from scams but also fosters a safer online job market
overall. By streamlining the detection process and improving accuracy, our tool
has the potential to empower job seekers, enhance trust in online platforms,
and ultimately contribute to a more transparent employment landscape. In
summary, this project highlights the critical role of machine learning in
combating online fraud in the job market. By comparing various classifiers and
identifying the most effective models, we aim to provide a robust solution for
detecting fraudulent job postings, thereby promoting a healthier and more
secure environment for job seekers.

CHAPTER 3
SOFTWARE & HARDWARE REQUIREMENTS SPECIFICATIONS

3. SOFTWARE REQUIREMENTS SPECIFICATIONS

3.1 System configurations


The software requirement specification can be produced at the culmination of the
analysis task.

The function and performance allocated to software as part of system engineering are
refined by establishing a complete information description, a detailed functional
description, a representation of system behavior, an indication of performance and
design constraints, appropriate validation criteria, and other information pertinent to
requirements.

The following software is used to develop this web-based application:

SOFTWARE REQUIREMENTS:

OPERATING SYSTEM : WINDOWS 10 PRO

CODING LANGUAGE : PYTHON 3.10.9, HTML, CSS, JS

WEB FRAMEWORK : DJANGO


3.2 HARDWARE INTERFACE

• RAM : 4 GB and higher
• Processor : Intel i3 and above
• Hard Disk : 500 GB minimum

3.2.1 Client Site:


Processor : Intel Pentium IV
Speed : 8.00GHz
RAM : 4 GB
Hard Disk : 500 GB
Key Board (104 keys) : Standard
Screen Resolution : 1024 x 768 Pixels

3.2.2 Server Site:


Processor : Intel Pentium IV
Speed : 8.00GHz
RAM : 4 GB
Hard Disk : 500 GB
Key Board (104 keys) : Standard
Screen Resolution : 1024 x 768 Pixels
CHAPTER 4
ARCHITECTURE
CHAPTER 5
CODING
5. CODING

5.1 Userapp.py
from django.shortcuts import render, redirect, get_object_or_404
from django.contrib import messages
from django.core.paginator import Paginator
from django.db.models import Q  # Q lives in django.db.models
from jobseakerapp.models import *

def admin_login(req):
    if req.method == "POST":
        username = req.POST.get("username")
        password = req.POST.get("password")
        if username == "admin" and password == "admin":
            messages.success(req, 'Successfully Logged In')
            return redirect('admin_index')
        else:
            messages.warning(req, 'Invalid login')
            return redirect("admin_login")
    return render(req, 'main/main-admin-login.html')

def admin_index(req):
    return render(req, 'admin/admin-index.html')

def admin_all_users(req):
    # list users whose OTP status is verified, accepted, or restricted
    restrict = User.objects.filter(
        Q(user_otp_status='otp verified') |
        Q(user_otp_status='Accepted') |
        Q(user_otp_status='Restricted')).order_by('-user_id')
    paginator = Paginator(restrict, 5)
    page_number = req.GET.get('page')
    post = paginator.get_page(page_number)
    return render(req, 'admin/admin-allusers.html', {'restrict': post})

def change_status_users(req, user_id):
    # toggle a user between Accepted and Restricted
    change_status = get_object_or_404(User, user_id=user_id)
    if change_status.user_otp_status == 'Accepted':
        change_status.user_otp_status = "Restricted"
        messages.warning(req, 'User Restricted')
    else:
        change_status.user_otp_status = "Accepted"
        messages.success(req, 'User Approved')
    change_status.save(update_fields=["user_otp_status"])
    return redirect('admin_all_users')

def remove_users(req, user_id):
    remove = get_object_or_404(User, user_id=user_id)
    remove.delete()
    messages.error(req, 'User Terminated')
    return redirect('admin_all_users')

def admin_user_profile(req):
    restrict = User.objects.all().order_by('-user_id').exclude(user_otp_status='otp is pending')
    paginator = Paginator(restrict, 5)
    page_number = req.GET.get('page')
    post = paginator.get_page(page_number)
    return render(req, 'admin/admin-userprofile.html', {'restrict': post})

def admin_user_profile_view(req, user_id):
    profile = User.objects.get(user_id=user_id)
    return render(req, 'admin/admin-user-profile-view.html', {'user': profile})

def admin_analysis_report(req):
    # each survey question (option1..option12) and the answer choices whose
    # counts feed the analysis charts; the template reads keys 'Aa', 'Ab', ... 'Le'
    survey_options = {
        'A': ('option1', ['Computer Software', 'Information Technology and Services',
                          'Internet', 'Marketing and Advertising', 'Education Management']),
        'B': ('option2', ['Full Time', 'Part Time', 'Intern', 'Contract']),
        'C': ('option3', ['Yes', 'No']),
        'D': ('option4', ['Fresher', 'Associate', 'Internship', 'Mid Senior Level', 'Not Applicable']),
        'E': ('option5', ["Bachelor's Degree", 'High School', "Master's Degree",
                          'Associate Degree', 'Unspecified']),
        'F': ('option6', ['Sales Executive', 'Web DEveloper', 'Project Intern',
                          'Research associate', 'Product Manager']),
        'G': ('option7', ['E-mail', 'Social Media', 'Online Website', 'College', 'Super Set']),
        'H': ('option8', ['Less Than 1000k', '1000k to 5000k', '5000k to 10,000k', '10,000k Above']),
        'I': ('option9', ['Yes', 'No']),
        'J': ('option10', ['Personal Details', 'Credit Card Details', 'Documents',
                           'Photo and Media', 'Money']),
        'K': ('option11', ['Yes', 'No', 'Nill']),
        'L': ('option12', ['Whatsapp', 'Facebook', 'Instagram', 'Indeed', 'Linkedin']),
    }
    context = {}
    for letter, (field, values) in survey_options.items():
        for suffix, value in zip('abcde', values):
            # e.g. context['Aa'] = count of surveys answering 'Computer Software' for option1
            context[letter + suffix] = Survey.objects.filter(**{field: value}).count()
    return render(req, 'admin/admin-analysis-report.html', context)
def admin_feedback(req):
    restrict = Feedback.objects.all().order_by('-feed_id')
    paginator = Paginator(restrict, 5)
    page_number = req.GET.get('page')
    post = paginator.get_page(page_number)
    return render(req, 'admin/admin-feedback.html', {'restrict': post})

def admin_feedback_analysis(req):
    # count feedback entries by the sentiment assigned at submission time
    very_positive = Feedback.objects.filter(feedback_sentiment='Very Positive').count()
    positive = Feedback.objects.filter(feedback_sentiment='Positive').count()
    neutral = Feedback.objects.filter(feedback_sentiment='Neutral').count()
    negative = Feedback.objects.filter(feedback_sentiment='Negative').count()
    context = {
        'very_positive': very_positive,
        'positive': positive,
        'neutral': neutral,
        'negative': negative,
    }
    return render(req, 'admin/admin-feedback-analysis.html', context)
5.2 Jobseaker.py
from django.shortcuts import render, redirect, get_object_or_404
import requests
from bs4 import BeautifulSoup
from collections import defaultdict
import pandas as pd
from jobseakerapp.models import *
from textblob import TextBlob
import random
from django.contrib import messages

def jobseaker_login(req):
    if req.method == 'POST':
        email = req.POST.get('email')
        password = req.POST.get('password')
        try:
            user = User.objects.get(user_email=email, user_password=password)
            req.session['user_id'] = user.user_id
            if user.user_otp_status == 'otp verified' or user.user_otp_status == 'Accepted':
                messages.success(req, 'Successfully Logged In')
                return redirect('jobseaker_index')
            elif user.user_otp_status == 'otp is pending':
                # OTP not yet verified: send the user to the verification page
                return redirect('otp_verification')
            elif user.user_otp_status == 'Restricted':
                messages.warning(req, 'Your request is Restricted, so you cannot login')
                return redirect('jobseaker_login')
        except User.DoesNotExist:
            # the printed listing is truncated here; this error branch and the
            # login template name below are reconstructed assumptions
            messages.warning(req, 'Invalid login details')
            return redirect('jobseaker_login')
    return render(req, 'main/main-user-login.html')
def jobseaker_register(req):
    if req.method == 'POST' and req.FILES["pic"]:
        username = req.POST.get("username")
        email = req.POST.get("email")
        password = req.POST.get("password")
        contact = req.POST.get("contact")
        addresss = req.POST.get("addresss")
        image = req.FILES["pic"]
        gen_otp = 9999  # fixed demo OTP as in the listing; a real deployment would use random.randint(1000, 9999)
        User.objects.create(user_username=username, user_email=email,
                            user_password=password, user_contact=contact,
                            user_addresss=addresss, user_image=image, user_otp=gen_otp)
        # send the OTP to the user's phone through the Fast2SMS bulk-SMS API
        url = "https://www.fast2sms.com/dev/bulkV2"
        message = 'Dear {}. Welcome to Reveal. Here is your One Time Validation {}. For Your First Time Login'.format(username, gen_otp)
        numbers = contact
        payload = f'sender_id=FTWSMS&message={message}&language=english&route=v3&numbers={numbers}'
        headers = {
            'authorization': "xZIssgvbBl4hSeai7mMebAMxcusK4BbhQZGO3v1O0ZlAUjuRFWhLAR5hA2SK",
            'Content-Type': "application/json",
            'Cache-Control': "no-cache",
        }
        response = requests.request("POST", url, data=payload, headers=headers)
        messages.success(req, 'Successfully Registered')
    return render(req, 'main/main-user-register.html')
def otp_verification(req):
    if req.method == 'POST':
        # combine the digits typed into the OTP input boxes; the printed
        # listing breaks off after otp3, so the fourth box is an assumption
        otp1 = req.POST.get('otp1')
        otp2 = req.POST.get('otp2')
        otp3 = req.POST.get('otp3')
        otp4 = req.POST.get('otp4')
        return redirect('otp_validation', otp=f'{otp1}{otp2}{otp3}{otp4}')
    return render(req, 'main/main-otp-verification.html')

def otp_validation(req, otp):
    user_id = req.session['user_id']
    user = User.objects.get(user_id=user_id)
    # compare as strings: the OTP arrives from the URL as text
    if str(user.user_otp) == str(otp):
        ver_otp = get_object_or_404(User, user_id=user_id)
        ver_otp.user_otp_status = "otp verified"
        ver_otp.save(update_fields=["user_otp_status"])
        messages.success(req, 'OTP Verified Successfully')
        return redirect('jobseaker_index')
    else:
        messages.warning(req, 'Invalid OTP')
        return redirect('otp_verification')

def jobseaker_index(req):
    user_id = req.session['user_id']
    user = User.objects.get(user_id=user_id)
    return render(req, 'jobseaker/jobseaker-index.html')

def jobseaker_analyze_job_post(req):
    user_id = req.session['user_id']
    user = User.objects.get(user_id=user_id)
    if req.method == 'POST':
        # store the submitted posting URL so the details page can scrape it
        url = req.POST.get('url')
        URL.objects.create(url=url, user_url=user)
        return redirect('jobseaker_job_details_page')
    return render(req, 'jobseaker/jobseaker-analyze-job-post.html')
def jobseaker_job_details_page(req):
    user_id = req.session['user_id']
    user = User.objects.get(user_id=user_id)
    try:
        # fetch the most recently submitted URL for this user
        url_link = URL.objects.filter(user_url=user_id).order_by('-url_id')[0:1]
        for i in url_link:
            url = i.url
        page = requests.get(url)
        soup = BeautifulSoup(page.content, 'html.parser')

        # scrape the posting's fields from the listing page
        name = soup.find('div', class_="detail_view")
        title = soup.find('span', attrs={'class': 'profile_on_detail_page'}).get_text().strip()
        company = soup.find('a', attrs={'class': 'link_display_like_text view_detail_button'}).get_text().strip()
        details_companylinksrc = soup.find('div', attrs={'class': 'text-container website_link'})
        try:
            details_companylinktext = details_companylinksrc.find('a').get_text().strip()
            details_companylink = details_companylinksrc.find('a').get('href').strip()
        except:
            details_companylinktext = 'No Website Link Available'
            details_companylink = ''
        logosrc = soup.find('div', attrs={'class': 'internship_logo'})
        location = soup.find('p', attrs={'id': 'location_names'}).get_text().strip()
        internshipdetails = soup.find('div', attrs={'class': 'other_detail_item_row'}).get_text().strip()
        tags = soup.find('div', attrs={'class': 'tags_container_outer'}).get_text().strip()
        details = soup.find('div', attrs={'class': 'internship_details'})
        details_about_company_title = soup.find('div', attrs={'class': 'section_heading heading_5_5'}).get_text().strip()
        details_about_company = soup.find('div', attrs={'class': 'text-container about_company_text_container'}).get_text().strip()
        details_company_activity = soup.find('div', attrs={'class': 'activity_section'}).get_text().strip()
        candidates = soup.find('div', attrs={'class': 'activity_container'})
        candidates_hired = candidates.find_all('div', attrs={'class': 'text body-main'})[-1].get_text()

        job_details = ""
        for i in details.find_all('div', attrs={'class': "text-container"})[2:3]:
            job_details += i.get_text().strip()
        openings = 0
        # the last text-container holds the number of openings
        for j in details.find_all('div', attrs={'class': "text-container"})[-1]:
            openings += int(j)
        skills_required = soup.find('div', attrs={'class': 'round_tabs_container'}).get_text().strip()
        salary = soup.find('div', attrs={'class': 'text-container salary_container'}).get_text().strip()

        # simple rule-based checks used to flag a posting as fake
        if 'Immediately' in internshipdetails and 'Work from home' in location:
            status = 'Fake'
        elif details_companylink == '':
            status = 'Fake'
        elif openings >= 50:
            status = 'Fake'
        else:
            status = 'Genuine'

        context = {
            'status': status,
            'job_details': job_details,
            'logosrc': logosrc,
            'job_title': title,
            'company': company,
            'companylinktext': details_companylinktext,
            'company_website_link': details_companylink,
            'company_location': location,
            'internship_deatils': internshipdetails,
            'company_posted_tags': tags,
            'about_company_title': details_about_company_title,
            'about_company': details_about_company,
            'company_hiring_activity': details_company_activity,
            'skills_required': skills_required,
            'salary': salary,
            'openings': openings,
        }
    except:
        messages.warning(req, 'Invalid URL')
        return redirect('jobseaker_analyze_job_post')
    return render(req, 'jobseaker/jobseaker-job-details-page.html', context)

def jobseaker_survey(req):
    user_id = req.session['user_id']
    user = User.objects.get(user_id=user_id)
    if req.method == 'POST':
        # read the twelve radio-button answers from the survey form
        options = [req.POST.get(f"radio{i}") for i in range(1, 13)]
        Survey.objects.create(
            user_id=user,
            option1=options[0], option2=options[1], option3=options[2],
            option4=options[3], option5=options[4], option6=options[5],
            option7=options[6], option8=options[7], option9=options[8],
            option10=options[9], option11=options[10], option12=options[11])
        messages.success(req, 'Survey Submitted Successfully')
    return render(req, 'jobseaker/jobseaker-survey.html')
def jobseaker_analysis_report(req):
    user_id = req.session['user_id']
    user = User.objects.get(user_id=user_id)
    # the survey counting is identical to admin_analysis_report in Userapp.py:
    # every (question, answer) pair is counted into context keys 'Aa' .. 'Le'
    survey_options = {
        'A': ('option1', ['Computer Software', 'Information Technology and Services',
                          'Internet', 'Marketing and Advertising', 'Education Management']),
        'B': ('option2', ['Full Time', 'Part Time', 'Intern', 'Contract']),
        'C': ('option3', ['Yes', 'No']),
        'D': ('option4', ['Fresher', 'Associate', 'Internship', 'Mid Senior Level', 'Not Applicable']),
        'E': ('option5', ["Bachelor's Degree", 'High School', "Master's Degree",
                          'Associate Degree', 'Unspecified']),
        'F': ('option6', ['Sales Executive', 'Web DEveloper', 'Project Intern',
                          'Research associate', 'Product Manager']),
        'G': ('option7', ['E-mail', 'Social Media', 'Online Website', 'College', 'Super Set']),
        'H': ('option8', ['Less Than 1000k', '1000k to 5000k', '5000k to 10,000k', '10,000k Above']),
        'I': ('option9', ['Yes', 'No']),
        'J': ('option10', ['Personal Details', 'Credit Card Details', 'Documents',
                           'Photo and Media', 'Money']),
        'K': ('option11', ['Yes', 'No', 'Nill']),
        'L': ('option12', ['Whatsapp', 'Facebook', 'Instagram', 'Indeed', 'Linkedin']),
    }
    context = {}
    for letter, (field, values) in survey_options.items():
        for suffix, value in zip('abcde', values):
            context[letter + suffix] = Survey.objects.filter(**{field: value}).count()
    # the printed listing is truncated here; the template name below is assumed
    return render(req, 'jobseaker/jobseaker-analysis-report.html', context)
CHAPTER 6
DIAGRAMS
UML DIAGRAMS

USE CASE DIAGRAM

CLASS DIAGRAM

SEQUENCE DIAGRAM
ACTIVITY DIAGRAM
CHAPTER 7
CONCLUSION

Only reputable job offers should reach job seekers, and to ensure this, several machine
learning methods have been proposed to detect employment scams. These fraudulent
schemes have become increasingly sophisticated, requiring the development of
advanced solutions for reliable detection. In this work, we explore a variety of
countermeasures aimed at identifying and filtering out fake job postings before they
reach the public.
The focus of this research is on using a supervised machine learning approach to
demonstrate the effectiveness of several mechanisms for employment scam detection. By
training classifiers with labeled data, the system can learn to distinguish between
legitimate and fraudulent job listings.
Various classifiers are examined in this study, each designed to identify scam patterns
within job postings based on key features, such as suspicious language, requests for
money, and unrealistic job offers.
Among the classifiers tested, the Multi-Layer Perceptron (MLP) demonstrated superior
performance. MLP, a type of artificial neural network, excels at learning complex
relationships in data, making it highly effective in this context. The experimental
results show that MLP outperforms other classifiers, including traditional methods such
as Decision Trees, Support Vector Machines (SVM), and Naïve Bayes. This success can
be attributed to MLP’s ability to capture intricate patterns and relationships in large
datasets, providing more accurate classifications.
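For reference, the sketch below shows how an MLP classifier of this kind can be set up with scikit-learn's MLPClassifier on TF-IDF features. The sample texts, hidden-layer sizes, and other hyperparameters are illustrative assumptions, not the exact configuration behind the reported results.

from sklearn.neural_network import MLPClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# tiny illustrative corpus; in practice the features come from the full
# labeled job-posting dataset described above
texts = [
    'Earn money fast from home, pay a small registration fee to start',
    'No experience needed, instant hiring, send processing charges today',
    'Software engineer, 3+ years of Python experience, full-time, on-site',
    "Research associate position, Master's degree required, apply via portal",
]
labels = [1, 1, 0, 0]  # 1 = fraudulent, 0 = legitimate

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=42)  # assumed sizes
clf.fit(X, labels)
print(clf.predict(vectorizer.transform(['Pay a fee and start earning immediately'])))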
The proposed MLP-based method achieved a remarkable 98 percent accuracy rate in
detecting fraudulent job postings, which is significantly higher than that of existing
approaches. This high level of accuracy underscores the potential of machine learning in
addressing the growing problem of online employment scams. By implementing such
advanced models, online job platforms can ensure that only legitimate offers reach job
seekers, improving trust and safety in the digital recruitment process.
In summary, this research demonstrates the effectiveness of using machine learning,
particularly MLP, for employment scam detection. With a 98 percent accuracy rate,
this approach surpasses traditional methods, offering a robust solution for mitigating
the risks of online recruitment fraud. As fraudulent job posts continue to evolve,
machine learning classifiers like MLP provide a promising path forward for
safeguarding the interests of job seekers and maintaining the integrity of online job
platforms.
CHAPTER 8
REFERENCES
REFERENCES

• S. Anita, P. Nagarajan, G. A. Sairam, P. Ganesh, and G. Deepakkumar, “Fake Job Detection and Analysis Using Machine Learning and Deep Learning Algorithms,” Revista GEINTEC: Gestão Inovação e Tecnologias, vol. 11, no. 2, pp. 642–650, 2021.

• B. Alghamdi and F. Alharby, “An intelligent model for online recruitment fraud detection,” Journal of Information Security, vol. 10, no. 03, p. 155, 2019.

• “Report | Cyber.gov.au.” [Online]. Available: https://www.cyber.gov.au/acsc/report. [Accessed: Jun. 19, 2021].

• A. Pagotto, “Text Classification with Noisy Class Labels,” M.S. thesis, Carleton University, 2020.

• “Employment Scam Aegean Dataset.” [Online]. Available: http://emscad.samos.aegean.gr/. [Accessed: Jun. 19, 2021].

• S. Vidros, C. Kolias, G. Kambourakis, and L. Akoglu, “Automatic detection of online recruitment frauds: Characteristics, methods, and a public dataset,” Future Internet, vol. 9, no. 1, p. 6, 2017.
