Submitted in partial fulfillment of the requirements for the award of the degree of
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
This is to certify that the project work entitled "HEART DISEASE PREDICTION USING MACHINE LEARNING", being submitted by E. Sri Lalithadevi (16NN1A0564), B. Kasi Annapurna (16NN1A0558), M. Naga Bhavani (16NN1A0584) and P. Sai Vanaja (16NN1A0592) in partial fulfillment for the award of the degree of Bachelor of Technology in Computer Science and Engineering in Vignan's Nirula Institute of Technology and Science for Women, is a bonafide work carried out by them.
External Examiner
We hereby declare that the work described in this project work, entitled "HEART DISEASE PREDICTION USING MACHINE LEARNING", which is submitted by us in partial fulfilment for the award of the degree of Bachelor of Technology in the Department of Computer Science and Engineering to the Vignan's Nirula Institute of Technology and Science for Women, affiliated to Jawaharlal Nehru Technological University Kakinada, Andhra Pradesh, is the result of work done by us under the guidance of Mr. R. Venkatesh, Assistant Professor.
The work is original and has not been submitted for any Degree of this or any other university.
We also thank all the faculty of the Department of Computer Science and Engineering for their help and guidance on numerous occasions, which gave us the confidence and determination to complete our project thesis. Finally, we thank one and all who directly or indirectly helped us to complete our project thesis successfully.
In the medical field, the diagnosis of heart disease is one of the most difficult tasks, because the decision relies on the grouping of large amounts of clinical and pathological data. Due to this complexity, interest has grown significantly among researchers and clinical professionals in efficient and accurate heart disease prediction. In the case of heart disease, a correct diagnosis at an early stage is important, as time is a critical factor. Heart disease is the principal cause of death worldwide, and predicting heart disease at an early phase is therefore significant. In recent years, machine learning has become an evolving, reliable and supporting tool in the medical domain, and it provides great support for predicting disease when correctly trained and tested. The main idea behind this work is to study diverse prediction models for heart disease and to select important heart disease features using the Random Forest algorithm. Random Forest is a supervised machine learning algorithm which has high accuracy compared to other supervised machine learning algorithms such as logistic regression. Using the Random Forest algorithm, we predict whether a person has heart disease or not.
TABLE OF CONTENTS
Chapter 1 Introduction 1
Chapter 2 Literature Survey 3
Chapter 3 System Analysis 5
3.1 Existing System 5
3.2 Proposed System 5
3.3 Algorithms 6
3.4 Feasibility Study 8
3.5 Effort, Duration, and Cost Estimation using COCOMO Model 9
Chapter 4 Software Requirements Specification 14
4.1 Introduction To Requirement Specification 14
4.2 Requirement Analysis 14
4.3 System Requirements 17
4.4 Software Description 17
Chapter 5 System Design 20
5.1 System Architecture 20
5.2 Modules 20
5.3 Data Flow Diagram 21
5.4 UML Diagram 24
5.4.1 Use Case Diagram 24
5.4.2 Activity Diagram 25
5.4.3 Sequence Diagram 25
5.4.4 Class Diagram 26
Chapter 6 Implementation 27
6.1 Steps for Implementation 27
6.2 Coding 27
Chapter 7 System Testing 29
7.1 White Box Testing 29
7.2 Black Box Testing 33
Chapter 8 Screenshots 36
8.1 Anaconda Prompt 36
8.2 Home Screen for Heart Attack Prediction 36
8.3 Patient Details 37
8.4 Output for Particular Patient Details 37
Chapter 9 Conclusion 38
Chapter 10 Future Scope 39
Chapter 11 References 40
List of Diagrams
3.1 Logistic Regression 7
3.2 Random Forest
4.1 Jupyter Notebook 19
5.1 System Architecture 20
5.2 Data Flow diagram Level 0 22
5.3 Data Flow Diagram Level 1 23
5.4 Use Case Diagram 24
5.5 Activity Diagram 25
5.6 Sequence Diagram 26
List of Tables
2.1 List of attributes 4
3.1 Organic, Semidetached and embedded system values 10
3.2 Project Architecture 12
CHAPTER 1
INTRODUCTION
The heart is a muscular organ which pumps blood throughout the body and is the central part of the body's cardiovascular system, which also contains the lungs. The cardiovascular system also comprises a network of blood vessels, for example veins, arteries, and capillaries. These blood vessels deliver blood all over the body. Abnormalities in the normal blood flow from the heart cause several types of heart diseases, which are commonly known as cardiovascular diseases (CVD). Heart diseases are the main cause of death worldwide. According to a survey of the World Health Organization (WHO), 17.5 million total global deaths occur because of heart attacks and strokes. More than 75% of deaths from cardiovascular diseases occur mostly in middle-income and low-income countries. Also, 80% of the deaths that occur due to CVDs are because of stroke and heart attack. Therefore, prediction of cardiac abnormalities at an early stage, together with tools for the prediction of heart diseases, can save many lives and help doctors design an effective treatment plan, which ultimately reduces the mortality rate due to cardiovascular diseases.
These patterns can be utilized for healthcare diagnosis. However, the available raw medical data are widely distributed, voluminous and heterogeneous in nature. This data needs to be collected in an organized form, and the collected data can then be integrated to form a medical information system. Data mining provides a user-oriented approach to novel and hidden patterns in the data. Data mining tools are useful for answering business questions and provide techniques for predicting various diseases in the healthcare field. Disease prediction plays a significant role in data mining. This work analyzes heart disease prediction using classification algorithms. These invisible patterns can be utilized for health diagnosis in healthcare data.
Data mining technology affords an efficient approach to the latest and previously unknown patterns in the data. The information that is identified can be used by healthcare administrators to improve their services. Heart disease is the most crucial cause of death in countries like India and the United States. In this project we predict heart disease using classification algorithms. Machine learning classification algorithms such as Random Forest and Logistic Regression are used to explore different kinds of heart-related problems.
CHAPTER 2
LITERATURE SURVEY
Machine learning techniques are used to analyze and predict from medical data information resources. Diagnosis of heart disease is a significant and tedious task in medicine. The term heart disease encompasses the various diseases that affect the heart. The detection of heart disease from various factors or symptoms is an issue which is not free from false presumptions, and it is often accompanied by unpredictable effects. The data classification here is based on a supervised machine learning algorithm, which results in better accuracy. We use Random Forest as the training algorithm to train on the heart disease dataset and to predict heart disease. The results showed that the medicinal prescription and the designed prediction system are capable of predicting heart attack successfully. Machine learning techniques have been used to indicate early mortality by analyzing heart disease patients and their clinical records (Richards, G. et al., 2001). (Sung, S.F. et al., 2015) applied two machine learning techniques, a k-nearest neighbor model and an existing multiple linear regression, to predict the stroke severity index (SSI) of patients. Their study shows that k-nearest neighbor performed better than the multiple linear regression model. (Arslan, A. K. et al., 2016) suggested various machine learning techniques, such as support vector machine (SVM) and penalized logistic regression (PLR), to predict heart stroke. Their results show that SVM produced the best prediction performance when compared to the other models. Boshra Brahmi et al. developed different machine learning techniques to evaluate the prediction and diagnosis of heart disease. The main objective was to evaluate different classification techniques such as J48, Decision Tree, KNN and Naive Bayes, and then to evaluate performance measures such as accuracy, precision, sensitivity and specificity.
Data source
Clinical databases have collected a significant amount of information about patients and their medical conditions. Record sets with medical attributes were obtained from the Cleveland Heart Disease database. With the help of this dataset, the patterns significant to heart attack diagnosis are extracted. The records were split equally into two datasets: a training dataset and a testing dataset. A total of 303 records with 76 medical attributes were obtained. All the attributes are numeric-valued. We are working on a reduced set of attributes, i.e. only 14 attributes.
All these restrictions were announced to shrink the number of designs; they are as follows:
1) The features should appear on a single side of the rule.
2) The rule should separate various features into different groups.
3) The count of features available from the rule is organized by the medical history of people having heart disease only.
The following table shows the list of attributes on which we are working.
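A minimal pandas sketch of the reduced attribute set is given below. The 14 attributes are those commonly retained from the Cleveland Heart Disease data; the file name heart.csv and the column names follow the widely used processed version of the dataset and are assumptions, not values taken from the table above:

# Selecting the 14 commonly used Cleveland attributes with pandas.
import pandas as pd

SELECTED_ATTRIBUTES = [
    "age", "sex", "cp", "trestbps", "chol", "fbs", "restecg",
    "thalach", "exang", "oldpeak", "slope", "ca", "thal", "target",
]

df = pd.read_csv("heart.csv")      # 303 records in the processed dataset
df = df[SELECTED_ATTRIBUTES]       # keep only the reduced set of attributes
print(df.shape)                    # expected: (303, 14)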
CHAPTER 3
SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
Clinical decisions are often made based on doctors' intuition and experience rather than on the knowledge-rich data hidden in the database. This practice leads to unwanted biases, errors and excessive medical costs, which affects the quality of service provided to patients. There are many ways that a medical misdiagnosis can present itself. Whether a doctor is at fault, or hospital staff, a misdiagnosis of a serious illness can have very extreme and harmful effects. The National Patient Safety Foundation cites that 42% of medical patients feel they have experienced a medical error or missed diagnosis. Patient safety is sometimes negligently given the back seat to other concerns, such as the cost of medical tests, drugs, and operations. Medical misdiagnoses are a serious risk to our healthcare profession. If they continue, then people will fear going to the hospital for treatment. We can put an end to medical misdiagnosis by informing the public and filing claims and suits against the medical practitioners at fault.
Disadvantages:
• Prediction is not possible at early stages.
• In the existing system, practical use of the collected data is time consuming.
• Any fault by the doctor or hospital staff in predicting would lead to fatal incidents.
• A highly expensive and laborious process needs to be performed before treating the patient to find out whether he/she has any chance of getting heart disease in future.
3.2 PROPOSED SYSTEM
This section depicts the overview of the proposed system and illustrates all of the components, techniques and tools used for developing the entire system. To develop an intelligent and user-friendly heart disease prediction system, an efficient software tool is needed in order to train huge datasets and compare multiple machine learning algorithms. After choosing the robust algorithm with the best accuracy and performance measures, it can be implemented in the development of a smartphone-based application for detecting and predicting heart disease risk level. Hardware components like an Arduino/Raspberry Pi, different biomedical sensors, a display monitor, a buzzer etc. are needed to build the continuous patient monitoring system.
3.3 ALGORITHMS
3.3.1 Logistic Regression
Logistic Regression (LogR) models the data points using the standard logistic function, which is an S-shaped curve, also called the sigmoid curve, given by the equation:
f(x) = 1 / (1 + e^(-x))
• For a binary regression, the factor level 1 of the dependent variable should represent the desired outcome.
Fig 3.1: Logistic Regression (sigmoid curve fitted to true samples)
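A minimal sketch of the logistic (sigmoid) function above and of scikit-learn's LogisticRegression; X_train, X_test, y_train and y_test are assumed to come from a train/test split of the heart dataset:

import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    # Standard logistic function: maps any real value into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([-4.0, 0.0, 4.0])))   # approximately [0.018, 0.5, 0.982]

logr = LogisticRegression(max_iter=1000)
# logr.fit(X_train, y_train)                 # uncomment once the split is available
# print("accuracy:", logr.score(X_test, y_test))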
3.3.2 Random Forest
Random forest is a supervised learning algorithm which is used for both classification and regression, but it is mainly used for classification problems. As we know, a forest is made up of trees, and more trees mean a more robust forest. Similarly, the random forest algorithm creates decision trees on data samples, gets the prediction from each of them, and finally selects the best solution by means of voting. It is an ensemble method which is better than a single decision tree because it reduces over-fitting by averaging the results.
Working of Random Forest, with the help of the following steps (a code sketch follows the figure below):
• First, start with the selection of random samples from a given dataset.
• Next, the algorithm constructs a decision tree for every sample and gets a prediction result from every decision tree.
• Finally, voting is performed for every predicted result, and the most-voted prediction is selected as the final result.
Fig 3.2: Random Forest (decision trees built on random samples, combined by majority voting)
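A hedged scikit-learn sketch of the workflow in the steps above (bootstrap samples, one decision tree per sample, majority voting); X_train and y_train are assumed to be the training split of the heart dataset:

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=100,    # number of decision trees, each grown on a bootstrap sample
    bootstrap=True,      # draw random samples with replacement from the dataset
    random_state=42,
)
# rf.fit(X_train, y_train)
# The forest's prediction is the majority vote of its trees:
# votes = [tree.predict(X_test) for tree in rf.estimators_]
# Feature importances can be used to pick the most informative attributes:
# print(sorted(zip(rf.feature_importances_, X_train.columns), reverse=True))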
3.5 EFFORT, DURATION AND COST ESTIMATION USING COCOMO MODEL
COCOMO estimates the effort in person-months of direct labor. The primary effort factor is the number of source lines of code (SLOC), expressed in thousands of delivered source instructions (KDSI). The model is developed in three versions of different levels of detail: basic, intermediate, and detailed. The overall modeling process takes into account three classes of systems.
1. Embedded: This class of system is characterized by tight constraints, a changing environment, and unfamiliar surroundings. Projects of the embedded type are novel to the company and usually exhibit temporal constraints.
2. Organic: This category encompasses all systems that are small relative to project size and team size, and have a stable environment, familiar surroundings and relaxed interfaces. These are simple business systems, data processing systems, and small software libraries.
3. Semidetached: The software systems falling under this category are a mix of the organic and embedded types in nature. Some examples of software of this class are operating systems, database management systems, and inventory management systems.
For Basic COCOMO:
Effort = a * (KLOC)^b
Development time = c * (Effort)^d
For Intermediate and Detailed COCOMO:
Effort = a * (KLOC)^b * EAF   (EAF = product of cost drivers)

Type of Product    a      b      c      d
Organic            2.4    1.02   2.5    0.39
The Intermediate COCOMO model is a refinement of the basic model, which comes as a function of 15 attributes of the product. For each of the attributes, the user of the model has to provide a rating using the following six-point scale:
VL (Very Low)
LO (Low)
NM (Nominal)
HI (High)
VH (Very High)
XH (Extra High)
The list of attributes is composed of several features of the software and includes product, computer, personnel and project attributes, as follows.
3.5.1 Product Attributes
• Data bytes per DSI (DATA): The lower rating comes with a lower size of the database.
• Complexity (CPLX): This attribute expresses code complexity, ranging from straight batch code (VL) to real-time code with multiple resource scheduling (XH).
• Development turnaround time (TURN): This is the time from when a job is submitted until output is received. LO indicates a highly interactive environment; VH quantifies a situation when this time is longer than 12 hours.
• These attributes describe the skills of the developing team. The higher the skills, the higher the rating.
• These are used to quantify the amount of experience in each area by the development team; more experience, higher rating.
• Modern development practices (MODP): Deals with the amount of use of modern software practices such as structured programming and the object-oriented approach.
• Use of software tools (TOOL): Used to measure the level of sophistication of the automated tools used in software development and the degree of integration among the tools being used. A higher rating describes higher levels in both aspects.
Our project is an organic system, so for intermediate COCOMO:
Effort = a * (KLOC)^b * EAF
KLOC = 115
b = 1.02
Effort = 1.034 person-months
Duration = 2.5 * (1.034)^0.38 = 2.71 months
Cost of programmer = Effort * cost of programmer per month
                   = 1.034 * 20000
                   = 20,680
Total cost = 40,650
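A small Python sketch of the intermediate COCOMO formulas used above. The coefficients are the organic-mode values quoted in the table (a = 2.4, b = 1.02, c = 2.5, d = 0.38) and EAF is assumed to be 1.0; the call below uses an illustrative project size, not the report's own figures:

def cocomo_intermediate(kloc, a=2.4, b=1.02, c=2.5, d=0.38, eaf=1.0):
    effort = a * (kloc ** b) * eaf    # effort in person-months
    duration = c * (effort ** d)      # development time in months
    return effort, duration

effort, duration = cocomo_intermediate(kloc=1.15)   # illustrative size only
cost = effort * 20000                               # cost of programmer per month
print(f"Effort   : {effort:.3f} person-months")
print(f"Duration : {duration:.2f} months")
print(f"Cost     : {cost:.0f}")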
CHAPTER 4
SOFTWARE REQUIREMENTS SPECIFICATION
Pandas is used in a wide range of fields, including academic and commercial domains such as finance, economics, statistics and analytics.
Key Features of Pandas:
• Fast and efficient DataFrame object with default and customized indexing.
• Tools for loading data into in-memory data objects from different file formats.
• Data alignment and integrated handling of missing data.
• Reshaping and pivoting of data sets.
• Label-based slicing, indexing and sub-setting of large data sets.
• Columns from a data structure can be deleted or inserted.
• Group-by data for aggregation and transformations.
• High-performance merging and joining of data.
• Time series functionality.
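A short illustration of some of these pandas features applied to the project's dataset (the file name heart.csv and the column names are assumptions based on the rest of the report):

import pandas as pd

df = pd.read_csv("heart.csv")               # load the CSV into an in-memory DataFrame
print(df.head())                            # default integer indexing
print(df.isnull().sum())                    # integrated handling of missing data
print(df.groupby("target")["age"].mean())   # group-by aggregation
subset = df.loc[:, ["age", "sex", "chol", "target"]]   # label-based sub-setting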
NumPy
NumPy is a general-purpose array-processing package. It provides a high-
performance multidimensional array object, and tools for working with these arrays. It is
the fundamental package for scientific computing with Python.
It contains various features, including these important ones:
• A powerful N-dimensional array object.
• Sophisticated (broadcasting) functions.
• Tools for integrating C/C++ and Fortran code.
• Useful linear algebra, Fourier transform and random number capabilities.
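A minimal NumPy sketch of these features (the values below are illustrative, not data from the project):

import numpy as np

# 2-D array: rows are patients, columns are (age, chol); illustrative values only.
X = np.array([[63, 233],
              [37, 250],
              [41, 204]], dtype=float)
print(X.shape)                       # (3, 2)
print(X.mean(axis=0))                # column-wise means via broadcasting
print((X[:, 1] > 220).astype(int))   # vectorised comparison, no Python loop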
CHAPTER 5
SYSTEM DESIGN
5.1 SYSTEM ARCHITECTURE
The figure below shows the process flow diagram of the proposed work. First we collected the Cleveland Heart Disease database from the UCI website, then pre-processed the dataset and selected 16 important features. For feature selection we used the Recursive Feature Elimination algorithm with the Chi2 method and obtained the top 16 features. After that we applied the ANN and Logistic Regression algorithms individually and computed the accuracy. Finally, we used the proposed ensemble voting method and determined the best method for the diagnosis of heart disease.
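The report names a Chi2-based feature-selection step before training; a common scikit-learn way to express this is SelectKBest with the chi2 score function, sketched below (file and column names are assumptions, and chi2 requires non-negative feature values, which holds for the processed heart dataset):

import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2

df = pd.read_csv("heart.csv")
X, y = df.drop(columns=["target"]), df["target"]

selector = SelectKBest(score_func=chi2, k=10)    # keep the k highest-scoring features (k is illustrative)
X_selected = selector.fit_transform(X, y)
print(list(X.columns[selector.get_support()]))   # names of the retained features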
5.2 MODULES
They are:
a. Data Pre-processing
b. Feature Extraction
c. Classification
d. Prediction
a. Data Pre-processing:
This module contains all the pre-processing functions needed to process all input documents and texts. First we read the train, test and validation data files, then performed some pre-processing like tokenizing, stemming etc. Some exploratory data analysis is also performed, such as checking the response variable distribution, and data quality checks such as null or missing values.
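A sketch of these pre-processing checks for the heart dataset: reading the file, a quick data-quality check, and a train/test split (file and column names are assumptions based on the rest of the report):

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("heart.csv")
print(df["target"].value_counts())   # response-variable distribution
print(df.isnull().sum())             # null / missing-value check

X = df.drop(columns=["target"])
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)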
b. Feature Extraction:
In this module we have performed feature extraction and selection methods from the scikit-learn Python library. For feature selection, we have used methods like a simple bag-of-words and n-grams, and then term-frequency weighting such as tf-idf. We have also used word2vec and POS tagging to extract features, though POS tagging and word2vec have not been used at this point in the project.
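The module above describes text-style features (bag-of-words, n-grams, tf-idf); a minimal scikit-learn sketch of those extractors is given below. The example documents are placeholders, not data from this project:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["chest pain and high cholesterol", "no chest pain, normal ecg"]

bow = CountVectorizer(ngram_range=(1, 2))   # bag-of-words plus bigrams
X_bow = bow.fit_transform(docs)

tfidf = TfidfVectorizer()                   # term frequency / inverse document frequency
X_tfidf = tfidf.fit_transform(docs)
print(X_bow.shape, X_tfidf.shape)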
c. Classification:
Here we have built all the classifiers for heart disease detection. The extracted features are fed into different classifiers. We have used Naive Bayes, Logistic Regression, Linear SVM, Stochastic Gradient Descent and Random Forest classifiers from sklearn. Each of the extracted features was used in all of the classifiers. After fitting a model, we compared the F1 score and checked the confusion matrix. After fitting all the classifiers, the two best-performing models were selected as candidate models for heart disease classification. We performed parameter tuning by implementing GridSearchCV methods on these candidate models and chose the best-performing parameters for these classifiers.
The finally selected model was used for heart disease detection with the probability of truth. In addition to this, we also extracted the top 50 features from our term-frequency tf-idf vectorizer to see which words are most important in each of the classes. We have also used precision-recall and learning curves to see how the training and test sets perform when we increase the amount of data in our classifiers.
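A hedged sketch of the comparison and tuning workflow just described: fit the listed classifiers, compare F1 scores and confusion matrices, then tune one candidate with GridSearchCV. X_train, X_test, y_train and y_test are assumed from the earlier split, and the grid values are illustrative:

from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, confusion_matrix
from sklearn.model_selection import GridSearchCV

models = {
    "naive_bayes": GaussianNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "linear_svm": LinearSVC(),
    "sgd": SGDClassifier(),
    "random_forest": RandomForestClassifier(random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name, f1_score(y_test, pred))
    print(confusion_matrix(y_test, pred))

# Parameter tuning for one candidate model.
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    scoring="f1",
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)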
d. Prediction:
Our finally selected and best-performing classifier was the algorithm that was then saved to disk with the name heart_model.sav. Once you clone this repository, this model will be copied to the user's machine and will be used by the prediction.py file to classify heart disease. It takes the patient details as input from the user; the model is then used for the final classification, and the output is shown to the user along with the probability of truth.
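A sketch of saving the selected model and using it later in prediction.py, as described above; the saved-file name and the 13 example feature values are assumptions for illustration:

import pickle

# After training: persist the chosen classifier (e.g. the fitted Random Forest).
# pickle.dump(rf, open("heart_model.sav", "wb"))

# In prediction.py: load the saved model and classify a new patient record.
model = pickle.load(open("heart_model.sav", "rb"))
sample = [[63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]]   # 13 feature values
print(model.predict(sample))          # 1 = heart disease, 0 = no heart disease
print(model.predict_proba(sample))    # class probabilities ("probability of truth")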
5.3 DATA FLOW DIAGRAM
The data flow diagram (DFD) is one of the most important tools used by system analysts. Data flow diagrams are made up of a number of symbols which represent system components. Most data flow modeling methods use four kinds of symbols: processes, data stores, data flows and external entities. These symbols are used to represent four kinds of system components. Circles in a DFD represent processes, a data flow is represented by a thin line, each data store has a unique name, and a square or rectangle represents an external entity.
Fig 5.3: Data Flow Diagram Level 1
5.4 UML DIAGRAMS
5.4.1 Use Case Diagram
A use case diagram is a diagram that shows a set of use cases and actors and their relationships. A use case diagram is just a special kind of diagram and shares the same common properties as all other diagrams, i.e. a name and graphical contents that are a projection into a model. What distinguishes a use case diagram from all other kinds of diagrams is its particular content.
5.4.2 Activity Diagram
An activity diagram shows the flow from activity to activity. An activity is an ongoing non-atomic execution within a state machine. An activity diagram is basically a projection of the elements found in an activity graph, a special case of a state machine in which all or most states are activity states and in which all or most transitions are triggered by completion of activities in the source.
Fig 5.5 Activity Diagram
5.4.3 Sequence Diagram
A sequence diagram is an interaction diagram that emphasizes the time ordering of
messages. A sequence diagram shows a set of objects and the messages sent and
received by those objects. The objects are typically named or anonymous instances of
classes, but may also represent instances of other things, such as collaborations,
components, and nodes. We use sequence diagrams to illustrate the dynamic view of a
system.
5.4.4 Class diagram
User
+Datasets
+Attributes of Features
+DatasetCollection()
+featureExtraction()
+applyAlgorithms()
+performance()
Fig 5.7: Class Diagram
CHAPTER 6
IMPLEMENTATION
6.1 STEPS FOR IMPLEMENTATION
1. Install the required packages for building the 'Passive Aggressive Classifier'.
6.2 CODING
Sample code:
# importing modules
import pandas as pd
import numpy as np
import seaborn as sns

# reading the dataset (heart.csv)
df = pd.read_csv('heart.csv')

# visualizing how many persons have heart disease, by gender, using the heart disease dataset
sns.countplot(x='sex', hue='target', data=df)

# predicting whether a person has heart disease or not against a new sample
# (sd denotes the trained classifier, e.g. the Random Forest fitted below)
sd.predict(df)

# creating a pickle of the trained model
import pickle
pickle.dump(sd, open('heart1.pkl', 'wb'))

# training the data and predicting accuracy using Random Forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier().fit(X_train, y_train)   # X_train, y_train come from a train/test split
rf.score(X_train, y_train)
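A consolidated, hedged version of the sample code above: load the dataset, split it, train the two classifiers compared in this report, and print their accuracies (file and column names are assumptions):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("heart.csv")
X, y = df.drop(columns=["target"]), df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
logr = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Random Forest accuracy      :", rf.score(X_test, y_test))
print("Logistic Regression accuracy:", logr.score(X_test, y_test))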
CHAPTER 7
SYSTEM TESTING
7.1 WHITE BOX TESTING
The term "white box" was used because of the see-through box concept. The clear box or white box name symbolizes the ability to see through the software's outer shell (or "box") into its inner workings. Likewise, the "black box" in "black box testing" symbolizes not being able to see the inner workings of the software, so that only the end-user experience can be tested.
White box testing involves the testing of the internal structure and working of the software code. The testing can be done at the system, integration and unit levels of software development. One of the basic goals of white box testing is to verify a working flow for an application. It involves testing a series of predefined inputs against expected or desired outputs, so that when a specific input does not result in the expected output, you have encountered a bug.
The first thing a tester will often do is learn and understand the source code of the application. Since white box testing involves testing the inner workings of an application, the tester must be very knowledgeable in the programming languages used in the applications they are testing. Also, the testing person must be highly aware of secure coding practices. Security is often one of the primary objectives of testing software. The tester should be able to find security issues and prevent attacks from hackers and naive users who might inject malicious code into the application either knowingly or unknowingly.
The second basic step in white box testing involves testing the application's source code for proper flow and structure. One way is by writing more code to test the application's source code. The tester will develop small tests for each process or series of processes in the application. This method requires that the tester has intimate knowledge of the code and is often done by the developer. Other methods include manual testing, trial-and-error testing, and the use of testing tools, as explained further on.
Apart from the above, there are numerous coverage types such as Condition Coverage, Multiple Condition Coverage, Path Coverage, Function Coverage, etc. Each technique has its own merits and attempts to test (cover) all parts of the software code. Using statement and branch coverage you generally attain 80-90% code coverage, which is sufficient.
1. Unit Testing:
It is often the first type of testing done on an application. Unit testing is performed on each unit or block of code as it is developed. Unit testing is essentially done by the programmer. As a software developer, you develop a few lines of code, a single function or an object, and test it to make sure it works before continuing. Unit testing helps identify a majority of bugs early in the software development lifecycle. Bugs identified in this stage are cheaper and easier to fix.
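A minimal unit-test sketch in pytest style; the helper function below is a hypothetical illustration of unit testing, not code from this project:

def risk_label(prediction: int) -> str:
    # Map the classifier output to a human-readable label.
    return "heart disease" if prediction == 1 else "no heart disease"

def test_risk_label_positive():
    assert risk_label(1) == "heart disease"

def test_risk_label_negative():
    assert risk_label(0) == "no heart disease"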
In this testing, the tester/developer has full information about the application's source code, detailed network information, the IP addresses involved and all server information the application runs on. The aim is to attack the code from several angles to expose security threats.
Mutation testing is often used to discover the best coding techniques to use for expanding a software solution.
g. White Box Testing Tools
Below is a list of top white box testing tools:
• Parasoft Jtest
• EclEmma
• NUnit
• HtmlUnit
Disadvantages of White Box Testing:
• Developers who usually execute white box test cases detest it. White box testing by developers that is not detailed can lead to production errors.
• White box testing requires professional resources with a detailed understanding of programming and implementation.
• White box testing is time-consuming; bigger programming applications take time to test fully.
Ending Notes:
White box testing can be quite complex. The complexity involved has a lot to do with the application being tested. A small application that performs a single simple operation could be white box tested in a few minutes, while larger programming applications take days, weeks and even longer to fully test. White box testing should be done on a software application as it is being developed, after it is written, and again after each modification.
7.2 BLACK BOX TESTING
The black box can be any software system you want to test. For example, an operating system like Windows, a website like Google, a database like Oracle, or even your own custom application. Under black box testing, you can test these applications by just focusing on the inputs and outputs without knowing their internal code implementation. There are many types of black box testing, but the following are the prominent ones:
1. Functional testing - This black box testing type is related to the functional requirements of a system; it is done by software testers.
2. Non-functional testing - This type of black box testing is not related to testing of specific functionality, but to non-functional requirements such as performance, scalability and usability.
3. Regression testing - Regression Testing is done after code fixes, upgrades or any
other system maintenance to check the new code has not affected the existing code.
d. Tools used for Black Box Testing:
The tools used for black box testing largely depend on the type of black box testing you are doing.
1. Equivalence Testing
It is used to minimize the number of possible test cases to an optimum level while maintaining reasonable test coverage.
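An illustrative equivalence-partitioning sketch for one input field of the prediction form (age); the partitions and bounds are hypothetical examples of the technique, not requirements from this report:

def age_partition(age: int) -> str:
    if age < 0 or age > 120:
        return "invalid"        # out-of-range partition
    if age < 18:
        return "valid-minor"    # one valid partition
    return "valid-adult"        # another valid partition

# One representative test case per partition covers the whole input domain.
for representative in (-5, 10, 45, 130):
    print(representative, "->", age_partition(representative))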
Test Case Development: In this stage, test cases/scripts are created on the basis of the software requirement documents.
Test Execution: In this stage, the test cases prepared are executed. Bugs, if any, are fixed and re-tested.
CHAPTER 8
SCREENSHOTS
CHAPTER 9
CONCLUSION
In this project, we introduced a heart disease prediction system with different classifier techniques for the prediction of heart disease. The techniques are Random Forest and Logistic Regression; we have analyzed that Random Forest has better accuracy as compared to Logistic Regression. Our purpose is to improve the performance of the Random Forest by removing unnecessary and irrelevant attributes from the dataset and only picking those that are most informative for the classification task.
CHAPTER 10
FUTURE SCOPE
As illustrated before, the system can be used as a clinical assistant for any clinician. The disease prediction through the risk factors can be hosted online, and hence any internet user can access the system through a web browser and understand the risk of heart disease. The proposed model can be implemented for any real-time application. Using the proposed model, other types of heart disease can also be determined. Different heart diseases such as rheumatic heart disease, hypertensive heart disease, ischemic heart disease, cardiovascular disease and inflammatory heart disease can be identified.
Other healthcare systems can be formulated using this proposed model in order to identify diseases at an early stage. The proposed model requires an efficient processor with a good memory configuration to implement it in real time. The proposed model has a wide area of application, such as grid computing, cloud computing and robotic modeling. To increase the performance of our classifier in the future, we will work on ensembling two algorithms called Random Forest and AdaBoost. By ensembling these two algorithms we will achieve high performance.
CHAPTER 11
REFERENCES
[1] P.K. Anooj, "Clinical decision support system: Risk level prediction of heart disease using weighted fuzzy rules", Journal of King Saud University - Computer and Information Sciences (2012) 24, 27-40.
[2] Nidhi Bhatla, Kiran Jyoti, "An Analysis of Heart Disease Prediction using Different Data Mining Techniques", International Journal of Engineering Research & Technology.
[3] Jyoti Soni, Ujma Ansari, Dipesh Sharma, Sunita Soni, "Predictive Data Mining for Medical Diagnosis: An Overview of Heart Disease Prediction".
[5] Dane Bertram, Amy Voida, Saul Greenberg, Robert Walker, "Communication, Collaboration, and Bugs: The Social Nature of Issue Tracking in Small, Collocated Teams".
[7] Ankita Dewan, Meghna Sharma, "Prediction of Heart Disease Using a Hybrid Technique in Data Mining Classification", 2nd International Conference on Computing for Sustainable Global Development, IEEE 2015, pp. 704-706.
[9] M. Akhil Jabbar, B.L. Deekshatulu, Priti Chandra, "Heart disease classification using nearest neighbor classifier with feature subset selection", Anale. Seria Informatica, 11, 2013.
[10] Shadab Adam Pattekari and Asma Parveen, "Prediction System for Heart Disease Using Naive Bayes", International Journal of Advanced Computer and Mathematical Sciences, ISSN 2230-9624, Vol. 3, Issue 3, 2012, pp. 290-294.
[13] Jiawei Han and Micheline Kamber, Data Mining Concepts and Techniques, Elsevier.
Animesh Hazra, Arkomita Mukherjee, Amit Gupta, "Prediction Using Machine Learning and Data Mining", July 2017, pp. 2137-2159.