
Paddy Leaf Disease Detection

A Project Report

Submitted in partial fulfillment of the requirements for

the award of the degree of

Bachelor of Technology
in

Computer Science and Engineering


by

A Sandeep 170030023

D Harish 170030263

under the supervision of

Dr. P Sai Kiran


Professor

ACADEMIC YEAR 2020-2021


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
K L University

DEPARTMENT OF COMPUTER SCIENCE AND
ENGINEERING

DECLARATION

The Project Report entitled “Paddy Leaf Disease Detection” is a record of


bonafide work of A Sandeep (170030023), D Harish (170030263) submitted in
partial fulfilment for the award of B. Tech in “Computer Science and
Engineering” in K L E F. The results embodied in this report have not been
copied from any other departments/University/Institute.

A Sandeep 170030023

D Harish 170030263

K L University
DEPARTMENT OF COMPUTER SCIENCE AND
ENGINEERING

CERTIFICATE

This is to certify that the Project Report entitled “Paddy Leaf Disease
Detection” is being submitted by A Sandeep (170030023), D Harish
(170030263) in partial fulfilment for the award of B. Tech in “Computer Science
and Engineering” to K L E F, and is a record of bonafide work carried out under
our guidance and supervision.

The results embodied in this report have not been copied from any other
departments/University/Institute.

Signature of the Supervisor Signature of the In-charge

ACKNOWLEDGEMENTS

Our sincere thanks to Dr. P Sai Kiran for his outstanding support throughout the
project for the successful completion of the work.

We express our gratitude to Mr. V. Hari Kiran, Head of the Department of
Computer Science and Engineering, for providing us with adequate facilities,
ways and means by which we were able to complete the project work.

We would like to place on record our deep sense of gratitude to the honourable
Vice Chancellor, K L University, for providing the necessary facilities to carry
out the project work.

Last but not the least, we thank all the Teaching and Non-Teaching Staff of our
department, and especially our classmates and friends, for their support in the
completion of our work.

ABSTRACT

As one of the top ten rice producing and consuming countries in the world, India
depends heavily on rice for its economy and for meeting its food needs. To
ensure healthy and proper growth of the rice plants, it is essential to detect any
disease in time and apply the appropriate treatment to the affected plants. Since
manual detection of diseases costs a large amount of time and labour, an
automated system is clearly desirable. This project presents a rice leaf disease
detection system using machine learning approaches. Three of the most common
rice plant diseases, namely Bacterial Leaf Blight, Leaf Smut and Brown Spot, are
identified in this work. Clear images of affected rice leaves against a white
background were used as the input. After basic pre-processing, the dataset was
trained with a range of machine learning algorithms, including a Convolutional
Neural Network (CNN) and MobileNet V2. We collected the diseased leaf
images from Kaggle and used Google Colaboratory to implement the project.

TABLE OF CONTENTS

S.NO   NAME OF THE CONTENT

1      INTRODUCTION

2      LITERATURE SURVEY
       2.1 PAPERS REVIEWED
       2.2 SUMMARY OF LITERATURE

3      EXISTING MODELS

4      ANALYSIS OF PADDY LEAF
       4.1 THEORETICAL ANALYSIS
       4.2 STATISTICAL ANALYSIS

5      WORKFLOW

6      METHODOLOGY
       6.1 DATASET
       6.2 PRE-PROCESSING
           6.2.1 CONVERT TO RGB IMAGE
           6.2.2 IMAGE RESIZE
           6.2.3 IMAGE AUGMENTATION
           6.2.4 NORMALIZATION
       6.3 MODEL
           6.3.1 CONVOLUTIONAL NEURAL NETWORK
           6.3.2 MOBILENET
       6.4 TRAINING AND PERFORMANCE

7      IMPLEMENTATION

8      RESULT AND ANALYSIS

9      CONCLUSION

10     REFERENCES

1. INTRODUCTION
India recorded its highest agricultural GDP, BDT 50.73 billion, in 2019. Half of
the agricultural GDP is provided by rice production, which in turn accounts for
almost half (48%) of rural employment. While playing a vital role in the
country's economy, rice also serves as a staple food for the mass of the
population and provides two-thirds of the per capita daily calorie intake. As per
the USDA's report, the total rice-growing area and the corresponding production
are projected to be 11.8 million hectares and 35.3 million metric tons respectively
for 2019-2020 (May to April). These figures clearly show that proper rice
cultivation is a high priority for India. Disease-free rice cultivation would play a
dominant role in ensuring stable economic growth and in meeting these targets.

Moreover, to keep pace with the emerging fourth industrial revolution, India
needs to work on industrial advancements involving smart systems that can take
decisions without any human intervention. To that end, we have built an
automated system using machine learning techniques, a system that will
contribute to the country's agricultural development by automatically identifying
and classifying diseases from images of rice leaves.

Twenty rice diseases were reported in India in a survey conducted during 1979-
1981, among which 13 were identified as major ones. Rice blast and brown spot
were considered the most prominent diseases then, but at present brown spot and
bacterial blight are considered the most damaging rice diseases. In this project,
we have focused on the detection of three rice leaf diseases (bacterial blight,
brown spot and leaf smut), chosen because of their prevalence in India. These
three diseases have their own characteristic patterns and shapes. The features of
the diseases are described below and illustrated in Figure 1.

• Leaf smut: small black linear lesions on the leaf blades; leaf tips may turn
grey and dry.

• Bacterial blight: elongated lesions near the leaf tips and margins that turn
from white to yellow and then grey due to fungal attack.

• Brown spot: dark brown, round to oval lesions on the rice leaves.

Diseased leaf images, Figure-1

2. LITERATURE SURVEY

2.1. PAPERS REVIEWED


Title of the Paper | Journal Name | Publication Year | Content/Outcome

1. Detection and classification of rice plant diseases | IEEE Transactions on
Information and Knowledge | 2017 | High-quality deep model for detection of
diseased leaves.

2. Detection and classification of rice plant diseases | Information Fusion on
Computer Vision | 2016 | A survey providing a thorough review of techniques
for diseased leaf images.

3. A Survey on Region Identification of Rice Diseases Using Image Processing |
International Journal of Research and Scientific Innovation (IJRSI) | 2018 |
Deep-learning-based automated detector through 3HAN for fast, accurate
detection of disease.

4. Analysis of Automatic Rice Disease Classification Using Image Processing
Techniques | International Journal of Engineering and Advanced Technology
(IJEAT) | 2019 | Recurrent neural network (RNN) that learns to classify whether
a plant is diseased or not.

5. Plant Disease Detection and its Prevention Using Image Classification |
International Journal of Pure and Applied Mathematics | 2018 | Classification of
rice plant leaf diseases.

2.2 SUMMARY OF LITERATURE:

Researchers have aimed to identify plant diseases using deep learning techniques
that help farmers detect infections quickly and easily, which in turn enables them
to take proper steps at an early stage. One group used 2,589 original images for
testing and 30,880 images for training their model with the Caffe deep learning
framework. For achieving a higher accuracy in evaluating the predictive model,
the authors used the 10-fold cross-validation technique on their dataset. The
prediction accuracy of this model is 96.77%.

Depending only on the extracted ratio of the RGB values of the affected region of
the rice leaf, a model was developed using image processing to classify the
disease. The RGB ratios were fed into a Naive Bayes classifier to finally classify
the diseases into three classes: bacterial leaf blight, rice blast and brown spot. The
accuracy of this model in classifying the diseases is more than 89%.

A higher accuracy was found in a paper where a plant disease detection model
was developed using a CNN. This model can recognise 13 different kinds of
plant diseases, and the final accuracy achieved by this model is 96.3%.

In another study, the affected parts were separated from the rice leaf surface
using K-means clustering, and the model was then trained with an SVM using
colour, texture and shape as the classification features.

Maniyath et al. used random forest, an ensemble learning method, to classify
between healthy and diseased leaves. For extracting the features of an image, the
authors used the Histogram of Oriented Gradients (HOG). Their work claimed an
accuracy of 92.33%.

Image processing and machine learning techniques were also used in [14] for the
detection and classification of rice plant diseases. The authors of that paper used
K-means clustering for segmentation of the diseased region of the rice leaves and
a Support Vector Machine (SVM) for classification. They achieved final
accuracies of 93.33% and 73.33% on the training and test datasets respectively.
The same dataset is used in our work, but our method results in higher accuracy
on both the training and test datasets.

3. EXISTING MODELS

• K-Means Clustering
• Support Vector Machine (SVM)
• Histogram of Oriented Gradients (HOG)
• Naive Bayes Classifier
• Random Forest + AdaBoost
• Random Forest + Gradient Boosting
• Convolutional Neural Networks

4. ANALYSIS OF PADDY LEAF

The main aim of this work is to create a paddy leaf disease detection model using
machine learning algorithms that can be useful for disease recognition.

4.1 THEORETICAL ANALYSIS

Brown Spot :

Brown spot symptoms at first appear as small circular to oval spots on the
primary seedling leaves. They may vary in size, shape and colour depending on
the environmental conditions, the age of the spots, and the susceptibility of the
rice variety. Small spots are dark brown to reddish brown.

Leaf with Brown Spot Disease, Figure 2

Leaf Smut :

The sign of rice with leaf smut is the presence of small dark spots on the leaves.
They are slightly raised and angular, and give the leaves the appearance of having
been sprinkled with ground pepper. Coverage by these spots is generally greatest
on the oldest leaves. The tips of the most heavily diseased leaves may die.

Leaf with Leaf Smut Disease, Figure 3

Bacterial Leaf Blight :

It causes wilting of seedlings and yellowing and drying of leaves. On seedlings,
the disease appears as small water-soaked spots at the edge of the leaves, which
develop into wavy margins and yellowing of the leaf within a few days. This kind
of disease usually occurs during the early stages of planting, from maximum
tillering to panicle initiation. Seedling wilt can be found 1-3 weeks after
transplanting.

Leaf with Bacterial Leaf Blight, Figure 4

4.2 STATISTICAL ANALYSIS

Convolutional Neural Network

Convolutional Neural Networks (CNNs) are everywhere; they are arguably the
most popular deep learning architecture. The recent surge of interest in deep
learning is largely due to the immense popularity and effectiveness of convnets.
Interest in CNNs started with AlexNet in 2012 and has grown exponentially ever
since. In just three years, researchers progressed from the 8-layer AlexNet to the
152-layer ResNet.

Connected Neural Network Architecture, Figure 5

CNNs are now the go-to model for every image-related problem. In terms of
accuracy they blow the competition out of the water. They are also effectively
applied to recommender systems, natural language processing and more. The
main advantage of a CNN compared to its predecessors is that it automatically
detects the important features without any human supervision. For example,
given many images of cats and dogs, it learns the distinguishing features of each
class by itself.

A CNN is also computationally efficient. It uses special convolution and pooling
operations and performs parameter sharing. This enables CNN models to run on
almost any device, which makes them widely appealing.

A CNN is a feedforward network; that is, information flows in only one
direction, from input to output.

Given an image matrix of dimension (h x w x d) and a filter or kernel of
dimension (fh x fw x d), a convolution outputs a volume of dimension
(h - fh + 1) x (w - fw + 1) x 1.
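As a small check of this relation (a sketch, not taken from the report: the 6x6 input and 3x3 kernel sizes are chosen only as an example), a single 'valid' convolution in Keras produces exactly the expected output size:

import numpy as np
import tensorflow as tf

# Example: h = w = 6, d = 1 and fh = fw = 3, so the output should be 4 x 4 x 1.
image = np.random.rand(1, 6, 6, 1).astype("float32")          # (batch, h, w, d)
conv = tf.keras.layers.Conv2D(filters=1, kernel_size=3, padding="valid")
print(conv(image).shape)                                       # (1, 4, 4, 1)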

Architecture:

CNN with two convolutional layers and max pooling layer, Figure 6

The input layer is connected to convolutional layers, which perform many tasks
such as padding, striding and the application of kernels. Because so much of the
network's work is carried out here, this layer is considered the building block of
a convolutional neural network.

Mobile Net

As the name implies, the MobileNet model is intended to be used in mobile
applications, and it is TensorFlow's first mobile computer-vision model.
MobileNet uses depthwise separable convolutions, which significantly reduce the
number of parameters compared to a network with regular convolutions of the
same depth. This results in lightweight deep neural networks. A depthwise
separable convolution is made from two operations:

1. Depthwise convolution.
2. Pointwise convolution.
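A minimal Keras sketch of one such depthwise separable block (the input size, channel count and strides are illustrative assumptions, not values from the report) is shown below; the depthwise step filters each input channel separately and the 1x1 pointwise step then mixes the channels:

import tensorflow as tf
from tensorflow.keras import layers

# One depthwise separable convolution block in the MobileNet style.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = layers.DepthwiseConv2D(kernel_size=3, strides=1, padding="same")(inputs)  # depthwise step
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
x = layers.Conv2D(filters=32, kernel_size=1, padding="same")(x)               # pointwise (1x1) step
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
block = tf.keras.Model(inputs, x)
block.summary()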

MobileNet is a class of CNN that was open-sourced by Google, and hence it
gives us an excellent starting point for training classifiers that are extremely
small and extremely fast.

MobileNet Architecture, Figure 7

Architecture of MobileNet

5.WORKFLOW

Figure 8

This is the flow of our model. It mainly consists of the following modules:

➢ Data Collection
➢ Data Preprocessing
➢ Convolutional Neural Network

6. METHODOLOGY

6.1 Data Set

There are a lot of datasets available in the digital world; they can be structured or
unstructured. To work with these kinds of models, we need to pick the dataset
which best describes the problem statement. We are mainly focused on field-
based leaf images taken with cameras and phones. We collected these images
from Kaggle.

Sample Images :

Bacterial Leaf Blight, Figure 9

Brown Spot, Figure 10

Leaf Smut, Figure 11

6.2 Pre-Processing

6.2.1 Convert to RGB Image

Using OpenCV, we converted the given image to the RGB colour format.

6.2.2 Image Resize

Using OpenCV, we resized the given image; the resulting image is 224 x 224
with 3 colour channels.
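A minimal OpenCV sketch of these two steps (the file name is only a placeholder) looks like this; cv2.imread loads the image as BGR, so the conversion to RGB comes first:

import cv2

image = cv2.imread("leaf.jpg")                     # placeholder file name; loaded as BGR
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)     # 6.2.1: convert to RGB
image = cv2.resize(image, (224, 224))              # 6.2.2: resize to 224 x 224 x 3
print(image.shape)                                 # (224, 224, 3)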

6.2.3 Image Augmentation

Image augmentation artificially creates additional training images by applying
different processing operations, or combinations of them, such as shifts, random
rotations, shears and flips.
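A small sketch of such an augmentation pipeline with the Albumentations library (a simplified version of the pipeline used later in the implementation) is:

import albumentations as albu

# Flips plus a combined shift/scale/rotate transform, each applied with the given probability.
aug = albu.Compose([
    albu.HorizontalFlip(p=0.5),
    albu.VerticalFlip(p=0.5),
    albu.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=30, p=0.5),
])

augmented = aug(image=image)["image"]   # 'image' is the resized array from the previous sketch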

6.2.4 Normalization

The objective of normalization is to change the values of the numeric columns in
the dataset to a common scale, without distorting differences in the ranges of
values or losing information.
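For images this simply means scaling the pixel values from the 0-255 range down to 0-1, exactly as the implementation does later (a one-line sketch, continuing from the previous snippets):

X = image.astype("float32") / 255.0     # pixel values now lie in [0, 1]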

6.3 MODEL

6.3.1 Convolutional Neural Network

CNNs are now the go-to model for every image-related problem. In terms of
accuracy they blow the competition out of the water. They are also effectively
applied to recommender systems, natural language processing and more. The
main advantage of a CNN compared to its predecessors is that it automatically
detects the important features without any human supervision. For example,
given many images of cats and dogs, it learns the distinguishing features of each
class by itself.

A CNN is also computationally efficient. It uses special convolution and pooling
operations and performs parameter sharing. This enables CNN models to run on
almost any device, which makes them widely appealing.

A CNN is a feedforward network; that is, information flows in only one
direction, from input to output.

Given an image matrix of dimension (h x w x d) and a filter or kernel of
dimension (fh x fw x d), a convolution outputs a volume of dimension
(h - fh + 1) x (w - fw + 1) x 1.

CNN Framework

Figure 12

1. Filter processing

The early processing of images was based on filters that allowed, for instance,
the edges of an object in an image to be extracted using a combination of
vertical-edge and horizontal-edge filters. Mathematically speaking, the vertical
edge filter, VEF, is defined as follows:

Figure 13

For the sake of simplicity, we consider a grayscale 6x6 image A, a 2D matrix
where the value of each element represents the amount of light in the
corresponding pixel.

2. 2D-Convolution

In order to extract the vertical edges from this image, we carry out a convolution
product (⋆), which is simply the sum of the element-wise products in each block:

Convoluting with kernel, Figure 14

Figure 15

We obtain the sum of products in each block, where x is the image and y is the
kernel. In place of a hand-designed filter, we can also use a filter learned by the
neural network.

To solve the problem of losing the pixels on the borders, we usually add padding
around the image so that those pixels are taken into account. By convention we
pad with zeros and denote by p the padding parameter, which represents the
number of elements added on each of the four sides of the image.

Figure 16
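A small NumPy sketch of this operation (the 6x6 image values and the vertical-edge kernel below are illustrative, not the ones in the figures) shows both the 'valid' result of size (h - fh + 1) x (w - fw + 1) and the effect of zero padding with p = 1:

import numpy as np

A = np.arange(36, dtype=float).reshape(6, 6)      # example 6x6 grayscale image
VEF = np.array([[1, 0, -1],
                [1, 0, -1],
                [1, 0, -1]], dtype=float)         # a common vertical-edge kernel

def conv2d(image, kernel):
    # 'Valid' 2D convolution: sum of element-wise products in each block.
    fh, fw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + fh, j:j + fw] * kernel)
    return out

print(conv2d(A, VEF).shape)                       # (4, 4)
A_padded = np.pad(A, pad_width=1)                 # zero padding with p = 1
print(conv2d(A_padded, VEF).shape)                # (6, 6): the original size is preserved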

Pooling

Pooling, Figure 17

Max Pooling

Given all the elements within the pooling window, max pooling returns the maximum value.

Max pooling, Figure 18

Global Average Pooling

Given all the elements within a channel, global average pooling returns their average value.

Global average pooling, Figure 19
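A short Keras sketch contrasting the two operations (the input size is chosen only for illustration):

import numpy as np
import tensorflow as tf

x = np.random.rand(1, 4, 4, 3).astype("float32")      # (batch, height, width, channels)

max_pool = tf.keras.layers.MaxPooling2D(pool_size=2)  # maximum in each 2x2 window
gap = tf.keras.layers.GlobalAveragePooling2D()        # average of each channel over all positions

print(max_pool(x).shape)   # (1, 2, 2, 3)
print(gap(x).shape)        # (1, 3): one average value per channel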

7. IMPLEMENTATION

import tensorflow as tf
tf.__version__

import numpy as np
import pandas as pd
import os

import cv2

import albumentations as albu


from albumentations import Compose, ShiftScaleRotate, Resize
from albumentations.pytorch import ToTensor

from sklearn.utils import shuffle


from sklearn.model_selection import train_test_split

from sklearn.metrics import confusion_matrix


import itertools
from sklearn.metrics import classification_report

import shutil

import matplotlib.pyplot as plt


%matplotlib inline

IMAGE_HEIGHT = 224
IMAGE_WIDTH = 224
IMAGE_CHANNELS = 3

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

    print(cm)

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()

import albumentations as albu

def augment_image(augmentation, image):
    """
    Uses the Albumentations library.

    Inputs:
    1. augmentation - this is the instance of type of augmentation to do
       e.g. aug_type = HorizontalFlip(p=1)
       # p=1 is the probability of the transform being executed.

    2. image - image with shape (h,w)

    Output:
    Augmented image as a numpy array.
    """
    # get the transform as a dict
    aug_image_dict = augmentation(image=image)
    # retrieve the augmented matrix of the image
    image_matrix = aug_image_dict['image']

    return image_matrix

aug_types = albu.Compose([
    albu.HorizontalFlip(),
    albu.OneOf([
        albu.HorizontalFlip(),
        albu.VerticalFlip(),
    ], p=0.8),
    albu.OneOf([
        albu.RandomContrast(),
        albu.RandomGamma(),
        albu.RandomBrightness(),
    ], p=0.3),
    albu.OneOf([
        albu.ElasticTransform(alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03),
        albu.GridDistortion(),
        albu.OpticalDistortion(distort_limit=2, shift_limit=0.5),
    ], p=0.3),
    albu.ShiftScaleRotate()
])

df_train = pd.read_csv('df_train.csv.gz')
print(df_train.head())

df_val = pd.read_csv('df_val.csv.gz')

df_combined = pd.read_csv('df_combined.csv.gz')

from google.colab.patches import cv2_imshow


def train_generator(batch_size=8):

    while True:

        # load the data in chunks (batches)
        for df in pd.read_csv('df_train.csv.gz', chunksize=batch_size):

            # get the list of images
            image_id_list = list(df['image'])

            # Create empty X matrix - 3 channels
            X_train = np.zeros((len(df), IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS), dtype=np.uint8)

            # Create X_train
            # ================

            for i in range(0, len(image_id_list)):

                # get the image and mask
                image_id = image_id_list[i]

                # set the path to the image
                path = 'sample_data/Leaf/' + image_id

                # read the image
                image = cv2.imread(path)

                cv2_imshow(image)

                # convert from BGR to RGB
                image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

                # resize the image
                image = cv2.resize(image, (IMAGE_HEIGHT, IMAGE_WIDTH))

                # Augment the image
                # ===========================
                aug_image = augment_image(aug_types, image)

                # insert the image into X_train
                X_train[i] = aug_image

            # Create y_train
            # ===============
            cols = ['target_bacterial_leaf_blight', 'target_brown_spot', 'target_leaf_smut']
            y_train = df[cols]
            y_train = np.asarray(y_train)

            # change the shape to (batch_size, 1)
            # y_train = y_train.reshape((-1, 1))  # -1 tells numpy to automatically detect the batch size

            # Normalize the images
            X_train = X_train / 255

            yield X_train, y_train

# initialize
train_gen = train_generator(batch_size=8)

# run the generator


X_train, y_train = next(train_gen)

print(X_train.shape)
print(y_train.shape)

def test_generator(batch_size=1):

    while True:

        # load the data in chunks (batches)
        for df in pd.read_csv('df_val.csv.gz', chunksize=batch_size):

            # get the list of images
            image_id_list = list(df['image'])

            # Create empty X matrix - 3 channels
            X_test = np.zeros((len(df), IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS), dtype=np.uint8)

            # Create X_test
            # ================

            for i in range(0, len(image_id_list)):

                # get the image and mask
                image_id = image_id_list[i]

                # set the path to the image
                path = 'sample_data/Leaf/' + image_id

                # read the image
                image = cv2.imread(path)

                # convert from BGR to RGB
                image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

                # resize the image
                image = cv2.resize(image, (IMAGE_HEIGHT, IMAGE_WIDTH))

                # insert the image into X_test
                X_test[i] = image

            # Normalize the images
            X_test = X_test / 255

            yield X_test

# Test the generator

# initialize
test_gen = test_generator(batch_size=1)

# run the generator


X_test = next(test_gen)

print(X_test.shape)

def val_generator(batch_size=5):

    while True:

        # load the data in chunks (batches)
        for df in pd.read_csv('df_val.csv.gz', chunksize=batch_size):

            # get the list of images
            image_id_list = list(df['image'])

            # Create empty X matrix - 3 channels
            X_val = np.zeros((len(df), IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS), dtype=np.uint8)

            # Create X_val
            # ================

            for i in range(0, len(image_id_list)):

                # get the image and mask
                image_id = image_id_list[i]

                # set the path to the image
                path = 'sample_data/Leaf/' + image_id

                # read the image
                image = cv2.imread(path)

                # convert from BGR to RGB
                image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

                # resize the image
                image = cv2.resize(image, (IMAGE_HEIGHT, IMAGE_WIDTH))

                # insert the image into X_val
                X_val[i] = image

            # Create y_val
            # ===============
            cols = ['target_bacterial_leaf_blight', 'target_brown_spot', 'target_leaf_smut']
            y_val = df[cols]
            y_val = np.asarray(y_val)

            # change the shape to (batch_size, 1)
            # y_val = y_val.reshape((-1, 1))  # -1 tells numpy to automatically detect the batch size

            # Normalize the images
            X_val = X_val / 255

            yield X_val, y_val

# Test the generator

# initialize
val_gen = val_generator(batch_size=5)

# run the generator


X_val, y_val = next(val_gen)

print(X_val.shape)
print(y_val.shape)

from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam

from tensorflow.keras.metrics import categorical_accuracy

from tensorflow.keras.callbacks import (EarlyStopping, ReduceLROnPlateau,
                                        ModelCheckpoint, CSVLogger, LearningRateScheduler)

from tensorflow.keras.applications.mobilenet import MobileNet

model = MobileNet(weights='imagenet')

# Exclude the last 2 layers of the above model.


x = model.layers[-2].output

# Create a new dense layer for predictions


# 3 corresponds to the number of classes
predictions = Dense(3, activation='softmax')(x)

# inputs=model.input selects the input layer, outputs=predictions refers to the
# dense layer we created above.

model = Model(inputs=model.input, outputs=predictions)

model.summary()

TRAIN_BATCH_SIZE = 8
VAL_BATCH_SIZE = 5

num_train_samples = len(df_train)
num_val_samples = len(df_val)
train_batch_size = TRAIN_BATCH_SIZE
val_batch_size = VAL_BATCH_SIZE

# determine numtrain steps


train_steps = np.ceil(num_train_samples / train_batch_size)
# determine num val steps
val_steps = np.ceil(num_val_samples / val_batch_size)

# Initialize the generators


train_gen = train_generator(batch_size=TRAIN_BATCH_SIZE)
val_gen = val_generator(batch_size=VAL_BATCH_SIZE)

model.compile(
    Adam(lr=0.0001),
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)

filepath = "model.h5"

# earlystopper = EarlyStopping(patience=10, verbose=1)

checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1,
                             save_best_only=True, mode='max')

# reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2,
#                               verbose=1, mode='min')

log_fname = 'training_log.csv'
csv_logger = CSVLogger(filename=log_fname,
                       separator=',',
                       append=False)

callbacks_list = [checkpoint, csv_logger]

history = model.fit_generator(train_gen, steps_per_epoch=train_steps, epochs=5,
                              validation_data=val_gen, validation_steps=val_steps,
                              verbose=1,
                              callbacks=callbacks_list)

# Display the training log

train_log = pd.read_csv('training_log.csv')

train_log.head()

model.metrics_names

model.load_weights('model.h5')

val_gen = val_generator(batch_size=1)

val_loss, val_acc = model.evaluate_generator(val_gen, steps=len(df_val))

print('val_loss:', val_loss)
print('val_acc:', val_acc)

# display the loss and accuracy curves

import matplotlib.pyplot as plt

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)

plt.plot(epochs, loss, 'bo', label='Training loss')


plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.figure()

plt.plot(epochs, acc, 'bo', label='Training acc')


plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()

plt.show()

test_gen = test_generator(batch_size=1)

preds = model.predict_generator(test_gen, steps=len(df_val), verbose=1)

# get y_pred as index values

y_pred = np.argmax(preds, axis=1)

y_pred

# get y_true as index values

cols = ['target_bacterial_leaf_blight', 'target_brown_spot', 'target_leaf_smut']

y_true = df_val[cols]
y_true = np.asarray(y_true)

y_true = np.argmax(y_true, axis=1)

y_true

# Compare y_true and y_pred

print(y_pred)
print(y_true)
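Since confusion_matrix and classification_report were imported earlier and plot_confusion_matrix was defined above, a natural next step (a sketch, assuming the class order follows the target columns) is to summarise how well y_pred agrees with y_true:

# Summarise the predictions with a confusion matrix and a per-class report.
class_names = ['bacterial_leaf_blight', 'brown_spot', 'leaf_smut']

cm = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(cm, classes=class_names)
plt.show()

print(classification_report(y_true, y_pred, target_names=class_names))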

import numpy as np
import cv2
from keras.models import load_model
from keras.preprocessing import image
from google.colab import files

IMAGE_HEIGHT = 224
IMAGE_WIDTH = 224
IMAGE_CHANNELS = 3

X_test = np.zeros((1, IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS), dtype=np.uint8)
im = "DSC_0365.JPG"
image = cv2.imread(im)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (IMAGE_HEIGHT, IMAGE_WIDTH))
X_test[0] = image
X_test = X_test / 255

saved_model = load_model('model.h5')
s = saved_model.predict(X_test, verbose=1)
ans = np.argmax(s, axis=1)

if ans == 0:
    print("Bacterial Leaf Blight")
elif ans == 1:
    print("Brown Spot")
else:
    print("Leaf Smut")

8. RESULT AND ANALYSIS

import numpy as np
import cv2
from keras.models import load_model
from keras.preprocessing import image
from google.colab import files

IMAGE_HEIGHT = 224
IMAGE_WIDTH = 224
IMAGE_CHANNELS = 3

X_test = np.zeros((1, IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS), dtype=np.uint8)
im = "DSC_0365.JPG"
image = cv2.imread(im)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (IMAGE_HEIGHT, IMAGE_WIDTH))
X_test[0] = image
X_test = X_test / 255

saved_model = load_model('model.h5')
s = saved_model.predict(X_test, verbose=1)
ans = np.argmax(s, axis=1)

if ans == 0:
    print("Bacterial Leaf Blight")
elif ans == 1:
    print("Brown Spot")
else:
    print("Leaf Smut")

Input 1 :

Output 1 : Bacterial Leaf Blight

Input 2 :

Output 2 : Brown Spot

Input 3 :

Output 3 : Brown Spot

Input 4 :

Output 4 : Leaf Smut

Input 5 :

Output 5 : Leaf Smut

9. Conclusion

We have introduced a new model which helps to detect diseased leaf images
using image processing and a CNN. A well-known convolutional neural network,
MobileNet, is used by applying pre-trained weights and biases for classifying the
rice leaf diseases. Its capacity for extracting features from images makes it stand
out from other designs. In the case of large amounts of data, this method gives
accurate and fast results for leaf disease prediction. It has also been observed that
results are better in the case of deep learning convolutional neural networks.

10. REFERENCES

[1] https://arxiv.org/abs/1809.06839

[2] A Hybrid Deep Model for Fake Image Detection, IEEE Transactions on
Information and Knowledge (2019).

[3] https://www.youtube.com/watch?v=HEQDRWMK6yY

[4] https://github.com/vbookshelf/Skin-Lesion-Analyzer

[5] https://blog.jayway.com/2020/03/06/using-ml-to-detect-fake-face-images-created-by-ai/

[6] https://arxiv.org/pdf/1909.11573.pdf

[7] https://www.kaggle.com/robikscube/kaggle-deepfake-detection-introduction

[8] https://www.kaggle.com/vbookshelf/rice-leaf-disease-analyzer-tensorflow-js-web-app

[9] https://www.researchgate.net/publication/318437440_Detection_and_classification_of_rice_plant_diseases
