MD Kamrul Islam
Supervised by:
Nusrhat Jahan Sarker
Lecturer
Submitted by:
Md Kamrul Islam
Registration No. 17502005078
Session: 2017-18
…………………………… ..………………………..
Examiner Examiner
………………………… ..……………………..
Nusrhat Jahan Sarker Md. Imran Hossain
Project Guide (Supervisor) Head
Lecturer Department of CSE,
Department of CSE, DIIT Daffodil Institute of IT (DIIT)
Declaration
I hereby declare that the project work entitled “Heart Disease Prediction
System Using Machine Learning”, submitted for the degree of
BSc. (Hons.) in Computer Science & Engineering (CSE), is a record of
original work done by me, except as acknowledged in the text, and
that the material has not been submitted, either in whole or in part, for
a degree at this or any other university.
Submitted by:
……………………
Md Kamrul Islam
Registration No. – 17502005078
Session: 2017-18
Acknowledgement
Last but not least, I extend my sincere thanks to my family members
and my friends for their constant support throughout this project.
Abstract
Table of Contents
Approval ii
Declaration iii
Acknowledgement iv
Abstract v
Table of Contents vi
3.6 Javascript 13
3.7 Django 14
List of Figures
1.2 Problem Definition 03
3.1 Confusion Matrix 10
3.1.1 Correlation Matrix 11
4.2 Agile Software Development Method 16
4.3 Advantage of Agile Method 17
5.1 System Architecture 19
5.3.1 Support Vector Machine (SVM) 23
5.3.2 Naive Bayes Algorithm 24
5.3.3 Decision Tree 25
5.4 Dataset Used 26
5.6 Collection of Dataset 28
5.7 Pre-processing Of Data 29
5.8 Balancing Of Data 30
5.9 Prediction of Disease 30
6.1.1.1 Use Case Diagram between ADMIN and SYSTEM 33
6.1.1.2 Use Case Diagram between USER and SYSTEM 33
6.1.1.3 Sequence Diagram for User and Administrator 34
6.2.1 Data Flow Diagram Level-0 35
6.2.2 Data Flow Diagram Level-1 36
6.2.3 Data Flow Diagram Level-2 36
6.3 Work Flow Diagram 37
6.4 Activity Diagram 38
6.5 Project Flow chart 39
6.6 E-R Diagram 40
Chapter 1
Introduction
1.1 Introduction
According to the World Health Organization, every year 12 million deaths occur
worldwide due to heart disease. Heart disease is one of the biggest causes of morbidity
and mortality among the population of the world, and the prediction of cardiovascular
disease is regarded as one of the most important subjects in the field of clinical data
analysis. The burden of cardiovascular disease has been rapidly increasing all over the
world over the past few years. Much research has been conducted in an attempt to
pinpoint the most influential factors of heart disease as well as to accurately predict the
overall risk. Heart disease is even highlighted as a silent killer, leading to death without
obvious symptoms. The early diagnosis of heart disease plays a vital role in making
decisions on lifestyle changes in high-risk patients and in turn reduces complications.
Machine learning proves to be effective in assisting in making decisions and predictions
from the large quantity of data produced by the health care industry. This project aims
to predict future heart disease by analyzing patient data and classifying whether patients
have heart disease or not using machine learning algorithms. Even though heart disease
can occur in different forms, there is a common set of core risk factors that influence
whether someone will ultimately be at risk for heart disease or not. By collecting data
from various sources, classifying it under suitable headings, and finally analysing it to
extract the desired information, this technique can be very well adapted to the prediction
of heart disease. An estimated 17.5 million deaths occur due to cardiovascular diseases
worldwide. More than 75% of deaths due to cardiovascular diseases occur in middle-income
and low-income countries. Also, 80% of the deaths that occur due to CVDs are because of
stroke and heart attack [1].
1.2 Problem Definition
The major challenge with heart disease is its detection. There are instruments available
which can predict heart disease, but they are either expensive or not efficient at
calculating the chance of heart disease in a human. Early detection of cardiac diseases
can decrease the mortality rate and overall complications. However, it is not possible to
monitor patients accurately every day in all cases, and round-the-clock consultation of a
patient by a doctor is not available, since it requires more time, attention, and expertise.
Since we have a good amount of data in today’s world, we can use various machine
learning algorithms to analyze the data for hidden patterns. The hidden patterns can be
used for health diagnosis in medical data.
Figure 1.2: Leading causes of death worldwide, in millions of deaths: ischaemic heart
disease, stroke, diabetes mellitus, road injury, diarrhoeal disease, and tuberculosis
(axis ranging from 0 to 12 million).
1.4 Objective
This research presents a heart disease prediction model for predicting the occurrence
of heart disease.
Due to a lack of resources in the medical field, the prediction of heart disease can
occasionally be a problem.
Utilization of suitable technology support in this regard can prove to be highly
beneficial to the medical fraternity and patients.
Machine learning techniques can be very well adapted to the prediction of heart
disease.
1.5 Goal
Users can search for a doctor’s help at any point of time.
Users can talk about their illness and get an instant diagnosis.
Inform users about the type of disease or disorder they may have.
Doctors get more clients online.
Thal.
Resting Blood Pressure.
Serum Cholesterol.
Thalach (maximum heart rate achieved).
Oldpeak.
Age in Years.
Medical health systems have been concentrating on artificial intelligence techniques for
speedy diagnosis. However, the recording of health data in a standard form still requires
attention so that machine learning can be more accurate and reliable by considering
multiple features. The aim of this study is to develop a general framework for recording
diagnostic data in an international standard format to facilitate prediction of disease
diagnosis based on symptoms using machine learning algorithms. Efforts were made to
ensure error-free data entry by developing a user-friendly interface.
Chapter 2
Proposed System
2.1 Related Work
With growing development in the field of medical science alongside machine learning,
various experiments and research efforts have been carried out in recent years,
producing relevant and significant papers. The paper [2] proposes heart disease prediction
using KStar, J48, SMO, Bayes Net, and Multilayer Perceptron with the WEKA software.
Based on performance across different factors, SMO (89% accuracy) and Bayes Net
(87% accuracy) achieved better performance than the KStar, Multilayer Perceptron,
and J48 techniques under k-fold cross-validation. The accuracy achieved by those
algorithms is still not satisfactory, so the accuracy should be improved further to give
better decision support for diagnosing disease.
In research conducted on the Cleveland heart disease dataset, which contains 303
instances, using 10-fold cross-validation, considering 13 attributes, and implementing 4
different algorithms, the authors concluded that Gaussian Naïve Bayes and Random Forest
gave the maximum accuracy of 91.2 percent [3].
Using the similar dataset from Framingham, Massachusetts, experiments were carried
out using 4 models, trained and tested with maximum accuracies of K-Neighbors
Classifier: 87%, Support Vector Classifier: 83%, Decision Tree Classifier: 79%, and
Random Forest Classifier: 84% [4].
2.3 Our Proposed System
The working of the system starts with the collection of data and the selection of the
important attributes. The required data is then preprocessed into the required format
and divided into two parts: training data and testing data. The algorithms are applied
and the model is trained using the training data. The accuracy of the system is obtained
by testing the system using the testing data. The system is implemented using the
following modules.
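The workflow described above can be sketched as a minimal pipeline. This is an illustrative sketch, not the project's exact code: the column name "target", the 80/20 split, and the choice of Random Forest are assumptions for demonstration.

```python
# Sketch of the proposed pipeline: select attributes -> preprocess ->
# split into training/testing data -> train -> measure accuracy.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def run_pipeline(df: pd.DataFrame, target: str = "target") -> float:
    X = df.drop(columns=[target])      # selected attributes
    y = df[target]                     # disease label
    # Divide the data into two parts: training and testing.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)
    model = RandomForestClassifier(random_state=42)
    model.fit(X_train, y_train)        # train on the training data
    # Accuracy is obtained by testing on the held-out testing data.
    return accuracy_score(y_test, model.predict(X_test))
```

Any of the classifiers named later in the report (SVM, Naive Bayes, Decision Tree) could be substituted for the Random Forest in this sketch.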
2.4 Features
Chapter 3
Requirements
&
System Analysis
3.1 Performance Analysis
In this project, various machine learning algorithms like SVM, Naive Bayes, Decision
Tree, Random Forest, Logistic Regression, Adaboost, XG-boost are used to predict
heart disease. Heart Disease UCI dataset, has a total of 76 attributes, out of those only
14 attributes are considered for the prediction of heart disease. Various attributes of the
patient like gender, chest pain type, fasting blood pressure, serum cholesterol, exang,
etc are considered for this project. The accuracy for individual algorithms has to
measure and whichever algorithm is giving the best accuracy, that is considered for the
heart disease prediction. For evaluating the experiment, various evaluation metrics like
accuracy, confusion matrix, precision, recall, and f1-score are considered. Accuracy-
Accuracy is the ratio of the number of correct predictions to the total number of inputs
in the dataset. It is expressed as:
Confusion Matrix: It gives us a matrix as output and describes the complete
performance of the system, where
TP: True Positive
FP: False Positive
FN: False Negative
TN: True Negative
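As a small worked example of these counts, the lists below are illustrative labels, not data from the project:

```python
# Computing the confusion matrix and accuracy from true vs. predicted labels.
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual disease labels (illustrative)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # labels predicted by a model

# For binary labels scikit-learn lays the matrix out as [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)   # matches the formula above
```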
Correlation Matrix: The correlation matrix in machine learning is used for feature
selection. It represents dependency between various attributes.
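A correlation matrix can be produced directly with pandas; the toy columns and the 0.5 threshold below are illustrative choices, not values taken from the report:

```python
# Correlation matrix for feature selection: keep attributes whose correlation
# with the target exceeds a chosen threshold (0.5 here, an arbitrary example).
import pandas as pd

df = pd.DataFrame({
    "age":    [29, 45, 50, 61, 39, 54],
    "chol":   [180, 240, 260, 300, 200, 280],
    "target": [0, 0, 1, 1, 0, 1],
})
corr = df.corr()   # pairwise Pearson correlations between all attributes
selected = corr["target"].abs()[corr["target"].abs() > 0.5].index.tolist()
```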
Precision: It is the ratio of correct positive results to the total number of positive
results predicted by the system.
Recall: It is the ratio of correct positive results to the total number of samples
that are actually positive.
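These definitions can be checked by hand against scikit-learn; the label lists are illustrative:

```python
# Precision, recall, and F1 computed both by hand and with scikit-learn.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp, fp, fn = 3, 1, 1                 # counted from the two lists above
precision = tp / (tp + fp)           # correct positives / predicted positives
recall    = tp / (tp + fn)           # correct positives / actual positives
f1 = 2 * precision * recall / (precision + recall)
```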
The project has a wide scope, as it is not intended for a particular organization. The
project develops generic software, which can be applied by any business organization.
Moreover, it provides facilities to its users, and the software provides a large amount
of summary data.
3.3 Python
Python is a widely used general-purpose, high-level programming language. It was
initially designed by Guido van Rossum in 1991 and is developed by the Python Software
Foundation. It was designed mainly with an emphasis on code readability, and its syntax
allows programmers to express concepts in fewer lines of code. Python lets you work
quickly and integrate systems more efficiently. Python is dynamically typed and
garbage-collected. It supports multiple programming paradigms, including procedural,
object-oriented, and functional programming. Python is often described as a "batteries
included" language due to its comprehensive standard library.
3.4 HTML
HTML (Hypertext Markup Language) is the set of markup symbols or codes inserted
in a file intended for display on a World Wide Web browser page. The markup tells the
Web browser how to display a Web page's words and images for the user. Each
individual markup code is referred to as an element (but many people also refer to it as
a tag). Some elements come in pairs that indicate when some display effect is to begin
and when it is to end.
3.5 Cascading Style Sheet (CSS)
Cascading Style Sheets (CSS) are a collection of rules we use to define and modify web
pages. CSS are similar to styles in Word. CSS allow Web designers to have much more
control over their pages' look and layout. For instance, you could create a style that
defines the body text to be Verdana, 10 point. Later on, you may easily change the body
text to Times New Roman, 12 point by just changing the rule in the CSS. Instead of
having to change the font on each page of your website, all you need to do is redefine
the style on the style sheet, and it will instantly change on all of the pages that the style
sheet has been applied to. With HTML styles, the font change would be applied to each
instance of that font and have to be changed in each spot. CSS can control the placement
of text and objects on your pages as well as the look of those objects. HTML
information creates the objects (or gives objects meaning), but styles describe how the
objects should appear. The HTML gives your page structure, while the CSS creates the
“presentation”. An external CSS is really just a text file with a .css extension. These
files can be created with Dreamweaver, a CSS editor, or even Notepad. The best
practice is to design your web page on paper first so you know where you will want to
use styles on your page. Then you can create the styles and apply them to your page.
3.6 Javascript
3.7 Django
This framework uses a famous tag line: The web framework for perfectionists with
deadlines.
Chapter 4
Methodology
4.1 Our Used Methodology
In our project, we will use the “Agile Software Development” model. There are a
number of properties of the Agile model that help us to build our project, the “Heart
Disease Prediction System Using Machine Learning”. We will discuss why it is best
for our project throughout this chapter.
Agile software development describes an approach under which requirements and
solutions evolve through the collaborative effort of self-organizing, cross-functional
teams and their customers.
Figure 4.3: Advantage of Agile Method
Chapter 5
Working of System
5.1 System Architecture
Dataset collection means collecting data that contains patient details. The attribute
selection process selects the attributes useful for the prediction of heart disease. After
the available data resources are identified, they are further selected, cleaned, and put
into the desired form. The different classification techniques stated below are then
applied to the preprocessed data to predict heart disease, and the accuracy measure
compares the accuracy of the different classifiers [5].
5.2.1 Supervised Learning
Supervised learning is the type of machine learning in which machines are trained using
well "labelled" training data, and on the basis of that data, machines predict the output.
The labelled data means some input data is already tagged with the correct output.
In supervised learning, the training data provided to the machine works as a
supervisor that teaches the machine to predict the output correctly. It applies the same
concept as a student learning under the supervision of a teacher.
Supervised learning is a process of providing input data as well as correct output data
to the machine learning model. The aim of a supervised learning algorithm is to find a
mapping function to map the input variable(x) with the output variable(y).
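Supervised learning in miniature can be shown with a few labelled points; the toy inputs below are illustrative, not from the project's dataset:

```python
# Labelled inputs x with known outputs y; the model learns the mapping y = f(x).
from sklearn.linear_model import LogisticRegression

X = [[1], [2], [3], [10], [11], [12]]   # input variable (x)
y = [0, 0, 0, 1, 1, 1]                  # "answer key" output labels (y)

model = LogisticRegression().fit(X, y)  # training under supervision
prediction = model.predict([[2], [11]]) # predicted outputs for new inputs
```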
In supervised learning, the training data has the answer key with it, so the model is
trained with the correct answer itself, whereas in reinforcement learning there is no
answer key and the reinforcement agent decides what to do to perform the given task.
In the absence of a training dataset, the agent is bound to learn from its experience.
5.3 Algorithm
5.3.1 Support Vector Machine (SVM)
Support Vector Machine, or SVM, is one of the most popular supervised learning
algorithms, used for classification as well as regression problems. Primarily, however,
it is used for classification problems in machine learning.
The goal of the SVM algorithm is to create the best line or decision boundary that can
segregate n-dimensional space into classes so that we can easily put the new data point
in the correct category in the future. This best decision boundary is called a hyperplane.
SVM chooses the extreme points/vectors that help in creating the hyperplane. These
extreme cases are called support vectors, and hence the algorithm is termed a Support
Vector Machine. SVMs are powerful yet flexible supervised machine learning
algorithms used for both classification and regression, though generally they are used
for classification problems. SVMs were first introduced in the 1960s but were later
refined in the 1990s. They have a unique way of implementation compared to other
machine learning algorithms, and lately they have become extremely popular because
of their ability to handle multiple continuous and categorical variables.
Support Vectors - Data points that are closest to the hyperplane are called
support vectors. The separating line is defined with the help of these data
points.
Hyperplane - As seen in the diagram below, it is the decision plane or space
that divides a set of objects having different classes.
Margin - It may be defined as the gap between two lines at the closest data
points of different classes. It can be calculated as the perpendicular distance
from the line to the support vectors; a large margin is considered a good
margin.
Types of SVM:
Linear SVM: Linear SVM is used for linearly separable data. If a dataset can
be classified into two classes by using a single straight line, then such data is
termed linearly separable data, and the classifier used is called a Linear SVM
classifier.
Non-linear SVM: Non-linear SVM is used for non-linearly separable data. If
a dataset cannot be classified by using a straight line, then such data is termed
non-linear data, and the classifier used is called a Non-linear SVM classifier.
If the number of features is much greater than the number of samples, avoiding
over-fitting in the choice of kernel function and regularization term is crucial.
SVMs do not directly provide probability estimates; these are calculated using
an expensive five-fold cross-validation.
Figure 5.3.1: Support Vector Machine
5.3.2 Naive Bayes Algorithm
It is a machine learning technique that works on the basis of Bayes' Theorem. It
assumes that no attribute depends on any other; it is a family of algorithms sharing
the common principle that every feature is independent of the others. Bayes' Theorem
gives the probability of an event occurring given that another event has already
occurred:
P(A|B) = P(B|A) P(A) / P(B)
where P(A|B) is the posterior probability of class A given predictor B, P(B|A) is the
likelihood, P(A) is the prior probability of the class, and P(B) is the prior probability
of the predictor.
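A Gaussian Naive Bayes classifier applies this independence assumption to continuous features; the two-feature toy data below is illustrative, not the project's dataset:

```python
# Gaussian Naive Bayes: each feature is treated as conditionally independent
# given the class, with a per-class Gaussian likelihood.
from sklearn.naive_bayes import GaussianNB

X = [[120, 180], [130, 200], [170, 300], [160, 280]]  # e.g. [bp, cholesterol]
y = [0, 0, 1, 1]                                      # 0 = healthy, 1 = disease

nb = GaussianNB().fit(X, y)
pred = nb.predict([[125, 190], [165, 290]])   # classify two new patients
```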
Figure 5.3.2: Naive Bayes Algorithm
5.3.3 Decision Tree
Decision trees are tree-like structures that are used to manage large datasets. They are
often depicted as flowcharts, with outer branches representing the results and inner
nodes representing the properties of the dataset. Decision trees are popular because they
are efficient, reliable, and easy to understand. The predicted class label for a decision
tree originates from the tree's root. The next step in the tree is decided by comparing
the value of the root attribute with the attribute in the record; after jumping to the next
node, the branch matching the comparison result is followed. Entropy changes when
training examples are divided into smaller groups by a decision tree node, and the
measurement of this change in entropy is the information gain. An accuracy of 73.0%
has been achieved by the decision tree [6]; in research [7], 72.77% accuracy was
achieved by the decision tree classifier. Decision nodes are used to make decisions and
have multiple branches, whereas leaf nodes are the outputs of those decisions and do
not contain any further branches. The decisions or tests are performed on the basis of
the features of the given dataset. A decision tree is a graphical representation for
getting all the possible solutions to a problem or decision based on given conditions.
It is called a decision tree because, similar to a tree, it starts with a root node, which
expands into further branches to construct a tree-like structure. To build the tree, the
CART algorithm is used, which stands for Classification and Regression Tree. A
decision tree simply asks a question and, based on the answer (yes/no), further splits
the tree into subtrees.
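A small decision tree using scikit-learn's CART implementation, with the entropy criterion so that splits maximize information gain as described above. The features and labels are illustrative:

```python
# A decision tree on toy patient records; criterion="entropy" selects splits
# by information gain, as discussed in the text.
from sklearn.tree import DecisionTreeClassifier

X = [[45, 0], [50, 1], [62, 1], [38, 0], [70, 1], [29, 0]]  # [age, exang]
y = [0, 1, 1, 0, 1, 0]                                      # disease label

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X, y)
pred = tree.predict([[40, 0], [65, 1]])   # follow root-to-leaf comparisons
```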
Figure 5.3.3: Decision Tree
5.4 Datasets
The dataset is publicly available on the Kaggle website [8] and comes from an ongoing
cardiovascular study of residents of the town of Framingham, Massachusetts. It
provides patient information comprising over 4,000 records and 14 attributes. The
attributes include: age, sex, chest pain type, resting blood pressure, serum cholesterol,
fasting blood sugar, resting electrocardiographic results, maximum heart rate, exercise-
induced angina, ST depression induced by exercise, slope of the peak exercise, number
of major vessels, and a target ranging from 0 to 2, where 0 denotes the absence of heart
disease. The dataset is in CSV (Comma-Separated Values) format, which is loaded into
a data frame using the pandas library in Python.
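Loading the CSV into a pandas data frame takes one call; the in-memory CSV text below stands in for the real Kaggle file, whose name is not given in the report:

```python
# Reading a CSV dataset into a pandas DataFrame. In the project this would be
# pd.read_csv("<downloaded Kaggle file>.csv"); here an in-memory stand-in is used.
import io
import pandas as pd

csv_text = "age,sex,chol,target\n63,1,233,1\n37,1,250,0\n56,0,236,1\n"
df = pd.read_csv(io.StringIO(csv_text))   # parse CSV into a data frame
```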
Figure 5.4: Original Dataset Snapshot
We propose the use of binning as a method for converting continuous input, such as
age, into categorical input in order to improve the performance and interpretability of
classification algorithms. By categorizing continuous input into distinct groups or bins,
the algorithm is able to make distinctions between different classes of data based on
specific values of the input variables. For instance, if the input variable is “Age Group”
and the possible values are “Young”, “Middle-aged”, and “Elderly”, a classification
algorithm can use this information to separate the data into different classes or
categories based on the age group of the individuals in the dataset.[9]
Additionally, converting continuous input into categorical input through binning can
also aid in the interpretability of the results, as it is easier to understand and interpret
the relationship between the input variables and the output classes. On the other hand,
continuous input, such as numerical values, can be more difficult to use in classification
algorithms as the algorithm may have to make assumptions about where to draw
boundaries between different classes or categories.[10]
In this study, we applied the method of binning to the age attribute in a dataset of
patients. The age of patients was initially given in days; for better analysis and
prediction, it was converted to years by dividing by 365. The age data was then
divided into bins of 5-year intervals. The minimum age in the dataset is 30 years and
the maximum is 65, so the bin 30–35 is labeled 0, while the last bin, 60–65, is
labeled 6.
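The days-to-years conversion and 5-year binning can be done with pandas; the sample ages in days are illustrative values within the dataset's stated 30–65 year range:

```python
# Binning age (given in days) into 5-year categorical groups labelled 0..6,
# matching the 30-35 ... 60-65 scheme described in the text.
import pandas as pd

age_days = pd.Series([11000, 13000, 16000, 20000, 23700])  # illustrative ages
age_years = age_days // 365                  # convert days to whole years

bins = list(range(30, 70, 5))                # edges 30, 35, ..., 65
labels = list(range(len(bins) - 1))          # bin labels 0..6
age_group = pd.cut(age_years, bins=bins, labels=labels, include_lowest=True)
```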
Furthermore, other attributes with continuous values, such as height, weight, ap_hi, and
ap_lo, were also converted into categorical values. The results of this study demonstrate
that converting continuous input into categorical input through binning can improve the
performance and interpretability of classification algorithms.
The average blood pressure a person has during a single cardiac cycle is known in
medicine as the mean arterial pressure (MAP). MAP is a measure of peripheral
resistance and cardiac output, and has been shown to be linked to significant CVD
events in the ADVANCE study [12,13]. In research including people with type 2
diabetes, it was shown that for every 13 mmHg rise in MAP, the risk of CVD rose by
13%. Additionally, if MAP raises the risk of CVD in people with type 2 diabetes, it
should also result in a higher number of CVD hospitalizations [13]. These findings
suggest a direct relationship between MAP and CVD.
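MAP can be derived from the systolic (ap_hi) and diastolic (ap_lo) attributes mentioned earlier. The formula below is the standard clinical approximation, not one stated in the report:

```python
# Mean arterial pressure from systolic (ap_hi) and diastolic (ap_lo) pressure.
# The heart spends roughly twice as long in diastole as in systole, hence the
# common approximation MAP ~= (SBP + 2 * DBP) / 3.
def mean_arterial_pressure(ap_hi: float, ap_lo: float) -> float:
    return (ap_hi + 2 * ap_lo) / 3

map_value = mean_arterial_pressure(120, 80)   # a typical 120/80 reading
```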
Figure 5.7: Data Pre-processing
Under-Sampling:
In under-sampling, the dataset is balanced by reducing the size of the abundant class.
This process is considered when the amount of data is adequate.
Over-Sampling:
In over-sampling, the dataset is balanced by increasing the size of the scarce class.
This process is considered when the amount of data is inadequate.
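Over-sampling can be done with scikit-learn's resample utility; the toy 8-vs-2 split below is illustrative, not the project's class distribution:

```python
# Balancing a dataset by over-sampling the scarce class with replacement
# until it matches the abundant class size.
import pandas as pd
from sklearn.utils import resample

df = pd.DataFrame({"feature": range(10),
                   "target": [0] * 8 + [1] * 2})   # imbalanced: 8 vs 2
majority = df[df.target == 0]
minority = df[df.target == 1]
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=42)
balanced = pd.concat([majority, minority_up])      # now 8 vs 8
```

Under-sampling is the mirror image: resample the majority class down to `len(minority)` with `replace=False`.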
Figure 5.8: Data Balancing
5.10 Software Requirement
Chapter 6
System Design
6.1 Conceptual Design
6.1.1 Use Case Diagram
6.1.1.3 Sequence Diagram for User and Administrator
The data that users input ultimately has an effect on the structure of the whole system;
how any system is developed, from order to dispatch to restock, can be determined
through a data flow diagram. The appropriate records are saved in the database and
maintained by the appropriate authorities.
Figure 6.2.2: DFD L1
6.3 Work Flow Diagram
6.4 Activity Diagram
6.5 Project Flow chart
6.6 E-R Diagram
Chapter 7
Implementation
7.1 Home Page:
7.4 Patients and Admin Feedback Page:
7.7 Admin Dashboard Page:
7.10 Doctor Suggestion Page:
Chapter 8
Conclusion
8.1 Limitation of Our System
8.3 Conclusion
It has been a great pleasure for me to work on this exciting and challenging project.
The project proved valuable, as it provided practical knowledge not only of
programming a Python and SQLite web-based application, but also of the latest
technologies used in developing web-enabled, client-server applications, which will
be in great demand in the future. This will provide better opportunities and guidance
for developing projects independently in the future.
Future research could focus on addressing the limitations of this study by
comparing the performance of the k-modes clustering algorithm with other commonly
used clustering algorithms, such as k-means [14] or hierarchical clustering [15], to gain
a more comprehensive understanding of its performance. Additionally, it would be
valuable to evaluate the impact of missing data and outliers on the accuracy of the
model and to develop strategies for handling these cases. Furthermore, it would be
beneficial to evaluate the performance of the model on a held-out test dataset in order
to establish its generalizability to new, unseen data. Ultimately, future research should
aim to establish the robustness and generalizability of the results and the interpretability
of the clusters formed by the algorithm, which could aid in understanding the results
and support decision making based on the study's findings.
8.4 References
11. Khan, S., Ning, H., Wilkins, J., Allen, N., Carnethon, M., Berry, J., . . . Lloyd-Jones,
D. (2018, May). Association of body mass index with lifetime risk of
cardiovascular disease and compression of morbidity. JAMA Cardiol, 280–287.
Retrieved April 11, 2023, from
https://jamanetwork.com/journals/jamacardiology/article-abstract/2673289
12. Kengne, A.-P., Czernichow, S., Huxley, R., Grobbee, D., Woodward, M., Neal,
B., . . . al., e. (2009). Blood Pressure Variables and Cardiovascular Risk.
Hypertension, 399–404. Retrieved April 23, 2023, from
https://www.ahajournals.org/doi/full/10.1161/HYPERTENSIONAHA.109.1330
41
13. Yu, D., Zhao, Z., & Simmons, D. (2016, Aug 3). Interaction between Mean
Arterial Pressure and HbA1c in Prediction of Cardiovascular Disease
Hospitalisation: A Population-Based Case-Control Study. J. Diabetes Res.
Retrieved May 1, 2023, from
https://www.hindawi.com/journals/jdr/2016/8714745/
14. Hassan, C., Iqbal, J., Irfan, R., Hussain, S., Algarni, A., Bukhari, S., . . . Ullah, S.
(2022, June). Effectively Predicting the Presence of Coronary Heart Disease
Using Machine Learning Classifiers. Sensors, 1–35. Retrieved May 11, 2023,
from https://www.mdpi.com/1424-8220/22/19/7227
15. Subahi, A., Khalaf, O., Alotaibi, Y., Natarajan, R., Mahadev, N., & Ramesh, T.
(2022). Modified Self-Adaptive Bayesian Algorithm for Smart Heart Disease
Prediction in IoT System. Sustainability, 1–14. Retrieved June 1, 2023, from
https://www.mdpi.com/2071-1050/14/21/14208
Appendix
Add_heartdetails.html
{% extends 'index.html' %}
{% load static %}
{% block body %}
<!-- register -->
<section class="logins py-5">
<div class="container py-xl-5 py-lg-3">
<div class="title-section mb-md-5 mb-4">
<h6 class="w3ls-title-sub"></h6>
<h3 class="w3ls-title text-uppercase text-dark font-weight-bold">Add Heart
Detail</h3>
</div><hr/>
<div class="login px-sm-12" style="width:100%">
<form action="" method="post" enctype="multipart/form-data">
{% csrf_token %}
<div class="form-group row">
<div class="col-md-1">
<label>Age</label>
<input type="text" class="form-control" name="age" required="">
</div>
<div class="col-md-1">
<label>Sex</label>
<select type="text" class="form-control" name="sex" required="">
<option value="M">Male</option>
<option value="F">Female</option>
</select>
</div>
<div class="col-md-1">
<label>CP</label>
<select type="text" class="form-control" name="cp" required="">
<option value="0">typical angina</option>
<option value="1">
Views.py
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from django.shortcuts import render, redirect
import datetime
from sklearn.ensemble import GradientBoostingClassifier
from .forms import DoctorForm
from .models import *
from django.contrib.auth import authenticate, login, logout
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from django.http import HttpResponse
# Create your views here.
doctor.status = 2
messages.success(request, "The selected doctor's approval has been successfully withdrawn.")
else:
doctor.status = 1
messages.success(request, "The selected doctor has been successfully approved.")
doctor.save()
return redirect('view_doctor')
else:
login(request, user)
error="notmember"
else:
error="not"
d = {'error': error}
return render(request, 'login.html', d)
def Login_admin(request):
error = ""
if request.method == "POST":
u = request.POST['uname']
p = request.POST['pwd']
user = authenticate(username=u, password=p)
if user.is_staff:
login(request, user)
error="pat"
else:
sign = Patient.objects.get(user=user)
if sign:
error = "pat"
except:
sign = Doctor.objects.get(user=user)
terror = ""
if request.method=="POST":
n = request.POST['pwd1']
c = request.POST['pwd2']
o = request.POST['pwd3']
if c == n:
u = User.objects.get(username__exact=request.user.username)
u.set_password(n)
pred = "<span style='color:red'>You are Unhealthy, Need to Checkup.</span>"
return redirect('predict_desease', str(rem), str(accuracy))
doc = Search_Data.objects.get(id=pid)
doc.delete()
return redirect('view_search_pat')
@login_required(login_url="login")
def View_Doctor(request):
doc = Doctor.objects.all()
d = {'doc':doc}
return render(request,'view_doctor.html',d)
@login_required(login_url="login")
def View_Patient(request):
patient = Patient.objects.all()
d = {'patient':patient}
return render(request,'view_patient.html',d)
@login_required(login_url="login")
def View_Feedback(request):
dis = Feedback.objects.all()
d = {'dis':dis}
return render(request,'view_feedback.html',d)
@login_required(login_url="login")
def View_My_Detail(request):
terror = ""
user = User.objects.get(id=request.user.id)
error = ""
try:
sign = Patient.objects.get(user=user)
error = "pat"
except:
sign = Doctor.objects.get(user=user)
d = {'error': error,'pro':sign}
return render(request,'profile_doctor.html',d)
@login_required(login_url="login")
def Edit_Doctor(request,pid):
doc = Doctor.objects.get(id=pid)
error = ""
# type = Type.objects.all()
if request.method == 'POST':
f = request.POST['fname']
l = request.POST['lname']
e = request.POST['email']