
OXYGEN SUFFICIENCY ASSESSMENT SYSTEM USING

DEEP LEARNING
A PROJECT REPORT

Submitted by

MADHUMITHA.B - 510821205014

VISHNUPRIYA.R - 510821205030

VEDHA PRIYA.P - 510821205027

MONIKASRI.M - 510821205015

NITHISH.R - 510821205018

in partial fulfilment for the award of the degree

of

BACHELOR OF TECHNOLOGY

IN

INFORMATION TECHNOLOGY

GANADIPATHY TULSI’S JAIN ENGINEERING COLLEGE

KANIYAMBADI, VELLORE.
ANNA UNIVERSITY :: CHENNAI 600 025

NOVEMBER 2024

ANNA UNIVERSITY :: CHENNAI 600 025

BONAFIDE CERTIFICATE

Certified that this project report “OXYGEN SUFFICIENCY


ASSESSMENT SYSTEM USING DEEP LEARNING” is the bonafide work of
“MADHUMITHA.B [510821205014]”, “VISHNUPRIYA.R [510821205030]”,
“VEDHA PRIYA.P [510821205027]”, “MONIKASRI.M [510821205015]” and
“NITHISH.R [510821205018]”, who carried out the project work under my
supervision.

SIGNATURE SIGNATURE
D.DURAI KUMAR D. DURAI KUMAR
SUPERVISOR HEAD OF THE DEPARTMENT
Associate Professor Associate Professor

Department of Information Technology Department of Information Technology

Ganadipathy Tulsi’s Jain Engineering College Ganadipathy Tulsi’s Jain Engineering College

Kaniyambadi, Vellore – 632 102. Kaniyambadi, Vellore – 632 102.

Submitted for the Project Viva-Voce Examination held on __________________.


INTERNAL EXAMINER EXTERNAL EXAMINER
ACKNOWLEDGEMENT

We express our special thanks to the Almighty for giving us the courage
and strength to complete our study successfully.

We are very grateful to our highly esteemed College Managing Trustee


Shri. N. Sugal Chand Jain, Secretary Shri. T. Amar Chand Jain and our
beloved Principal Dr. M. Barathi for the support given by them.

We owe our profound gratitude to our Head of the Department


D. Durai Kumar, Associate Professor, Department of Information Technology,
for his valuable guidance, suggestions and constant encouragement that paved
the way for the successful completion of the project work.

We once again thank all faculty members of the Department of


Information Technology for their kind support and for providing the necessary
facilities to carry out this project.

Finally, we express our hearty and sincere thanks to our family and
friends for their constant and valuable support and encouragement.

ABSTRACT

Vegetation functions as a natural air filter, eliminating pollutants and


greenhouse gases while releasing the clean oxygen necessary for human
survival through photosynthesis. The rapid expansion of industries, urban
infrastructure, and economic development all contribute to the deterioration of
air quality in cities, which negatively affects human health. To improve oxygen
availability in cities, many advocate for the planting of more trees. However,
questions arise about how much plant growth is necessary to achieve adequate
oxygen levels and how to calculate oxygen availability. Therefore, a system
designed to assess oxygen sufficiency is intended to determine whether the
amount of oxygen present in a biological system or a specific environment is
adequate. The existing system measures the amount of oxygen produced by
trees in urban areas using sensor networks, but this method is not economical.
To address this issue, our proposed system makes use of CNN and
YOLOv5 for the classification of tree species. Based on the identified species, it
estimates oxygen sufficiency, checking whether the oxygen produced by the
trees in an environment is adequate for the people living there. To ensure
its effectiveness, the system’s performance is evaluated using metrics such as
Accuracy, Precision, Recall and F1 Score. The model provides an accuracy of
90%.

TABLE OF CONTENTS

CHAPTER NO TITLE PAGE NO

ABSTRACT IV

LIST OF FIGURES VIII

LIST OF ABBREVIATIONS IX

1 INTRODUCTION 1

1.1. OBJECTIVE 1

1.2. OVERVIEW 1

2 LITERATURE SURVEY 2

2.1. EXISTING SYSTEM 6

2.2. PROBLEM IDENTIFICATION 7

2.3. PROBLEM ANALYSIS 7

2.4. PROPOSED SYSTEM 8

3 SYSTEM SPECIFICATION 9

3.1. HARDWARE REQUIREMENTS 9

3.2. SOFTWARE REQUIREMENTS 9

3.3. SOFTWARE DESCRIPTION 9

3.3.1 Python 9

3.3.2 Flask 14

3.3.3 Google Colab 14

3.3.4 Deep Learning Algorithm 16

4 SYSTEM DESIGN 20

4.1. OVERALL DATA FLOW DIAGRAM 20

4.2. SYSTEM ARCHITECTURE 21

4.3. UML DIAGRAMS 22

4.3.1. Use case Diagram 23

4.3.2. Activity Diagram 24

4.3.3. Sequence Diagram 25

5 PROJECT DESCRIPTION 26

5.1. MODULES 26

5.2. MODULES DESCRIPTION 26

5.2.1. Image acquisition module 26

5.2.2. Feature extraction module 32

5.2.3. Tree Identification Module 36

5.2.4. Oxygen Sufficiency

Assessment Module 38

6 SYSTEM IMPLEMENTATION 40

6.1. DATA COLLECTION 40

6.2. DATA PREPROCESSING 41

6.3. MODEL TRAINING 42

6.4. HYPERPARAMETER TUNING 42

6.5. MODEL EVALUATION 43

6.6. DEPLOYMENT 43

6.7. REAL-TIME DETECTION 43

7 SYSTEM TESTING 44

7.1. TESTING 44

7.2. DIFFERENT LEVELS OF TESTING 45

7.2.1. Unit Testing 45

7.2.2. Integration Testing 45

7.2.3. System Testing 46

8 CONCLUSION 47

ANNEXURES

ANNEXURE A: SOURCE CODE 48

ANNEXURE B: SCREEN SHOTS

BIBLIOGRAPHY

LIST OF FIGURES

FIGURE NO TITLE PAGE NO

3.1 YOLO Architecture 17

3.2 YOLOv4 18

4.1 Overall Data Flow Diagram 20

4.2 System Architecture 21

4.3 Use case Diagram 23

4.4 Activity Diagram 24

4.5 Sequence Diagram 25

LIST OF ABBREVIATIONS

YOLO - You Only Look Once

UI - User Interface

GUI - Graphical User Interface

DNN - Deep Neural Network

API - Application Programming Interface

CPU - Central Processing Unit

TPU - Tensor Processing Unit

CHAPTER 1

INTRODUCTION

1.1. OBJECTIVE

The main objective of this project is to develop a system for the assessment
of oxygen sufficiency in an environment using deep learning.

1.2. OVERVIEW

In the face of rapid urbanization and industrialization, cities around the


world are grappling with deteriorating air quality, which poses significant risks
to public health and environmental sustainability. Trees and other vegetation
play a vital role in mitigating these effects by acting as natural air filters. They
absorb pollutants and carbon dioxide, a prominent greenhouse gas, and release
oxygen through the process of photosynthesis. However, quantifying the
contribution of urban vegetation to air quality improvement, particularly in
terms of oxygen production, has been a complex and costly endeavour
involving extensive sensor networks.

The technical core of the system involves using CNNs, a type of deep
learning model adept at processing structured array data such as images, to
accurately identify and classify different plant species from visual inputs. By
linking specific species to known oxygen production rates, the system can
estimate the total oxygen output in a given area. This estimation process is
crucial for urban planners and environmental scientists seeking to enhance or
maintain air quality through strategic vegetation management. The system's
effectiveness in classifying plant species directly influences the accuracy of the
oxygen production estimates, thereby impacting the overall reliability of the
oxygen sufficiency assessment.

CHAPTER 2
LITERATURE SURVEY

[1] Smith A, Johnson B and Thompson C, “Plant Species Identification using

Deep Learning Techniques”, 2019.

This study explores the application of deep learning techniques,


specifically convolutional neural networks (CNNs), for plant species
identification. The authors compare the performance of various CNN
architectures on a large dataset of plant images and evaluate their accuracy and
efficiency. The findings provide valuable insights into the effectiveness of deep
learning methods in plant species identification.

Advantages:
 Increased Accuracy

 Scalability

Disadvantages:
 Huge Data Requirements

 Requires heavy computational resources

[2] Garcia J, Martinez P and Gonzalez L, “Assessing Oxygen Production of

Urban Trees using Sensor Networks”, 2018.

This research paper focuses on assessing the oxygen production of urban


trees using sensor networks. The authors present a case study where CO2 and
O2 sensors are deployed to measure oxygen production rates of trees in an
urban area. The study discusses the challenges, methodologies, and potential

applications of using sensor networks to evaluate the oxygen sufficiency of trees
in urban environments.

Advantages:
 Accurate Results
 Improved Public health

Disadvantages:
 Not cost efficient
 Not scalable
 Limited to Urban areas

[3] Gargi Chandrababu, Ojus Thomas Lee and Rekha K S, “Identification of
Plant Species using Deep Learning”, 2021.

This study applies advanced deep learning models to accurately classify


plant species from images. Utilizing Convolutional Neural Networks (CNNs),
and leveraging techniques like transfer learning, the study demonstrates
significant improvements in identification accuracy over traditional methods.
The models are trained on a comprehensive dataset comprising various plant
images, undergoing rigorous preprocessing and augmentation to enhance model
robustness. Performance evaluation reveals high precision and recall, with a
detailed analysis of misclassified cases providing insights for further
refinement. This research not only enhances biological research and
conservation efforts but also paves the way for real-world applications such as
ecological monitoring and automated plant biodiversity assessment. Future
work could explore integration with other data types and real-time processing
capabilities.

Advantages:
• Improved accuracy
• Scalability

Disadvantages:
• Large Data Sets Needed

• High Resource Requirement

• Data Quality and Diversity

[4] T. Beltrame and R. Amelard, “Prediction of oxygen uptake dynamics by machine

learning analysis of wearable sensors during activities of daily living”, 2022.

This research project delves into the application of machine learning


(ML) techniques to predict oxygen uptake (VO2) using data collected from
wearable sensors during everyday activities. A variety of sensors were
employed to continuously monitor physiological parameters such as heart rate,
respiratory rate, and blood oxygen saturation (SpO2) among a diverse group of
participants. This comprehensive data collection allowed the team to capture a
wide range of physiological responses to different physical activities.

Utilizing this extensive dataset, the researchers developed and trained


several machine learning models with the aim of estimating VO2 dynamics
accurately. These models varied in complexity and approach, including both
traditional machine learning algorithms and more advanced deep learning
networks. The effectiveness of these models was assessed primarily based on
their predictive accuracy and robustness under real-world conditions, ensuring
that they could reliably function outside of controlled laboratory environments.

The results of this study are promising, demonstrating the potential of
wearable technology to offer valuable insights into an individual's
cardiovascular health and fitness levels in real-life settings. This capability
extends beyond traditional uses of wearables for step counting and basic activity
tracking, moving towards more meaningful health assessments and
interventions.

This research marks a significant step towards personalized health


monitoring, where interventions can be tailored based on real-time biometric
data. With these advancements, individuals could receive immediate feedback
on their physical state and potentially prevent adverse health outcomes through
early intervention.

Looking forward, this project aims to further refine these ML models by


integrating enhanced sensor capabilities, which could improve data accuracy
and the granularity of measurements. Additionally, expanding the datasets with
more diverse population samples and a broader range of activities will help in
generalizing the models’ applicability and enhancing their predictive
performance. These efforts are expected to further establish the role of wearable
technologies in preventive healthcare and personalized medicine.

Advantages:
• Real world monitoring
• Early detection of health issues
• Remote monitoring

Disadvantages:
• Limited battery life

• Cost and accessibility

2.1. EXISTING SYSTEM

The existing system utilizes sensor networks such as CO2 and O2


sensors, to assess the oxygen production of urban trees. Deployed within an
urban setting, these sensors measure the oxygen output rates directly from trees,
providing valuable data that influences urban environmental strategies. The
system not only identifies the challenges such as sensor interference from urban
pollutants and complex data management but also outlines methodologies that
include real-time data acquisition and analysis using advanced computational
techniques. Moreover, the potential applications of this technology are
significant, ranging from enhancing urban planning with better tree species
selection and placement to informing environmental policies and improving
public health through increased air quality. This integrated approach
demonstrates a promising avenue for leveraging sensor networks to evaluate
and enhance the oxygen sufficiency provided by urban plant life.

Advantages:
 Accurate Results
 Improved Public health

Disadvantages:
 Technical Complexity.

 Maintenance.

 Not scalable
 Limited to Urban areas

2.2. PROBLEM IDENTIFICATION

 Expensive sensors:
The current system uses oxygen sensors, which are very expensive.
 Absence of Recommendation:
No automated recommendation of plants when oxygen is
insufficient.

2.3. PROBLEM ANALYSIS

 The utilization of expensive oxygen sensors in the current system


significantly increases the operational and initial setup costs, potentially
making the system less affordable and accessible for individuals or
organizations with limited budgets.
 Relying on high-cost sensors can also impact the scalability of the
system, as expanding or upgrading the system to cover larger or multiple
areas may become financially prohibitive, thereby limiting its widespread
adoption.
 The absence of an automated recommendation feature for plant selection
based on oxygen levels leads to a missed opportunity for optimizing air
quality, as users are not guided on how to effectively improve oxygen
concentrations through specific plant types.
 Without automated recommendations, users must rely on manual research
or expert advice to determine the best plants for enhancing oxygen levels,
which can be time-consuming and may not yield the most efficient or
scientifically sound results, potentially leading to suboptimal air quality
improvements.

2.4. PROPOSED SYSTEM

Our proposed system uses YOLOv5, a deep learning based algorithm, for
the classification of tree species in an environment. The system then sums up
the total oxygen production of the classified tree species. By estimating the
population density from demographic data for a given environment, the system
assesses oxygen sufficiency by comparing the oxygen required by the people
with the oxygen produced by the trees. When oxygen levels are found to be
sufficient, the system reports this to the user through the user interface. If the
oxygen levels are found to be insufficient, the system recommends suitable
plants to the user through the user interface. These recommendations are
tailored to efficiently address and mitigate the issue of oxygen insufficiency,
ensuring that the environment supports the well-being of its inhabitants. This
intelligent approach not only enhances environmental sustainability but also
promotes healthier living conditions through a precise, data-driven
methodology.
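As a sketch, the sufficiency comparison described above can be expressed in a few lines of Python. The species names, per-tree oxygen rates, and per-person daily requirement below are hypothetical placeholder values, not measured figures from this project:

```python
# Hypothetical per-tree oxygen output in kg O2 per day, keyed by species
# name as produced by the classifier. Values are illustrative only.
O2_PER_TREE_KG_PER_DAY = {"neem": 0.6, "banyan": 0.9, "peepal": 1.0}
O2_PER_PERSON_KG_PER_DAY = 0.84  # assumed average adult requirement

def assess_sufficiency(detected_trees, population):
    """Compare oxygen produced by detected trees with population demand."""
    produced = sum(O2_PER_TREE_KG_PER_DAY.get(sp, 0.0) for sp in detected_trees)
    required = population * O2_PER_PERSON_KG_PER_DAY
    if produced >= required:
        return "sufficient", produced - required
    return "insufficient", required - produced

status, margin = assess_sufficiency(["neem", "banyan", "banyan"], 2)
print(status, round(margin, 2))  # sufficient 0.72
```

A real deployment would draw the species list from the YOLOv5 detections and the population figure from demographic data, as described above.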

Advantages:

• Cost efficient

• Applicable for both Urban and Rural areas

• Recommendation of plants

• Scalable

CHAPTER 3
SYSTEM SPECIFICATION

3.1. HARDWARE REQUIREMENTS

 Hard Disk : 500GB and Above

 RAM : 4GB and Above

 Processor : Intel Core i3 and Above

 Input Device : High resolution camera

3.2. SOFTWARE REQUIREMENTS

 Operating System : Windows 11 (64 bit)

 Coding language : Python

 Front-End : HTML, CSS, JAVASCRIPT

 Backend : Flask

 Tools : Google Colab

3.3. SOFTWARE DESCRIPTION

3.3.1. Python

Python is a high-level, interpreted programming language known for its


simplicity, readability, and versatility. It was created by Guido van Rossum and
released in 1991. Python uses a clean and easy-to-understand syntax, which
makes it beginner-friendly and reduces the amount of code needed for tasks. It
emphasizes code readability by utilizing indentation and whitespace. Python is
an interpreted language, meaning that it does not require compilation before
execution. Instead, the Python interpreter directly executes the code, allowing
for rapid development and testing. Python is a general-purpose programming
language that can be used for various applications, such as web development,
data analysis, and scientific computing.

What can Python do?

 Python can be used on a server to create web applications.


 Python can be used alongside software to create workflows.
 Python can connect to database systems. It can also read and modify
files.
 Python can be used to handle big data and perform complex
mathematics.
 Python can be used for rapid prototyping or production-ready software
development.

Why Python?

 Python works on different platforms (Windows, Mac, Linux, Raspberry


Pi, etc.).
 Python has a simple syntax similar to the English language.
 Python has a syntax that allows developers to write programs with fewer
lines than some other programming languages.
 Python runs on an interpreter system, meaning that code can be executed
as soon as it is written. This means that prototyping can be very quick.
 Python can be treated procedurally, in an object-oriented way, or in a
functional way.

Python Features

 Easy to learn − Python has few keywords, a simple structure, and a


clearly defined syntax. This allows the student to pick up the language
quickly.
 Easy to read − Python code is more clearly defined and visible to the
eyes.
 Easy to maintain− Python's source code is fairly easy-to-maintain.
 Databases − Python provides interfaces to all major commercial
databases.
 GUI Programming − Python supports GUI applications that can be
created and ported to many system calls, libraries, and windows systems,
such as Windows MFC, Macintosh, and the X Window system of Unix.
 Scalable − Python provides a better structure and support for large
programs than shell scripting.

Python is Interpreted

 Many languages are compiled, meaning the source code you create needs
to be translated into machine code, the language of your computer’s
processor, before it can be run. Programs written in an interpreted
language are passed straight to an interpreter that runs them directly.
 This makes for a quicker development cycle because you just type in your
code and run it, without the intermediate compilation step.
 One potential downside to interpreted languages is execution speed.
Programs that are compiled into the native language of the computer
processor tend to run more quickly than interpreted programs. For some
applications that are particularly computationally intensive, like graphics
processing or intense number crunching, this can be limiting.
 In practice, however, for most programs, the difference in execution
speed is measured in milliseconds, or seconds at most, and not

appreciably noticeable to a human user. The expediency of coding in an
interpreted language is typically worth it for most applications.

Python Libraries

Python libraries are pre-written code modules that provide a wide range
of functionalities, making development tasks easier and more efficient. These
libraries encompass various domains such as data analysis, machine learning,
web development, scientific computing, and more. They are developed and
maintained by the open-source community, making them readily available for
developers to incorporate into their projects. Python libraries that are used for
Deep learning are:

 NumPy
 Pandas
 TensorFlow
 PyTorch
 Keras

NumPy

NumPy is a very popular Python library for large multi-dimensional array


and matrix processing, with the help of a large collection of high-level
mathematical functions. It is very useful for fundamental scientific
computations in Deep Learning. It is particularly useful for linear algebra,
Fourier transform, and random number capabilities. High-end libraries like
TensorFlow use NumPy internally for the manipulation of Tensors.
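A minimal illustration of the array and linear-algebra capabilities mentioned above; the matrix and vector values are arbitrary examples:

```python
import numpy as np

# Build a small matrix and vector, then solve the linear system a @ x = b,
# one of the linear-algebra operations NumPy provides.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, 1.0])

x = np.linalg.solve(a, b)  # exact solution of a @ x = b
print(x)                   # approximately [-1., 1.]
print(a @ x)               # recovers b
```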

Pandas

Pandas is a popular Python library for data analysis. As we know, the
dataset must be prepared before training. In this case, Pandas comes in handy, as
it was developed specifically for data extraction and preparation. It provides
high-level data structures and a wide variety of tools for data analysis. It
provides many inbuilt methods for grouping, combining, and filtering data.
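A small example of the data-preparation operations described above; the column names and values are illustrative only, not data from this project:

```python
import pandas as pd

# Toy dataset: a few trees with a species label and a measured height.
df = pd.DataFrame({
    "species": ["neem", "banyan", "neem", "peepal"],
    "height_m": [12.0, 20.0, 10.0, 15.0],
})

tall = df[df["height_m"] >= 12.0]                       # filtering rows
mean_height = df.groupby("species")["height_m"].mean()  # grouping + aggregation
print(mean_height["neem"])  # 11.0
```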

TensorFlow
TensorFlow is a very popular open-source library for high-performance
numerical computation developed by the Google Brain team in Google. As the
name suggests, TensorFlow is a framework that involves defining and running
computations involving tensors. It can train and run deep neural networks that
can be used to develop several AI applications. TensorFlow is widely used in
the field of deep learning research and application.

PyTorch
PyTorch is an open-source machine learning library developed by
Facebook's AI Research lab (FAIR) that provides a flexible and intuitive
framework for building deep learning models. It is particularly favored for its
dynamic computation graph (also known as autograd system), which allows for
modifications to the graph on-the-fly during execution. This makes it highly
adaptable for research and complex model development, enabling
straightforward implementation of changes and optimizations without needing
to rebuild the model from scratch.

Keras

Keras is a very popular Deep Learning library for Python. It is a
high-level neural networks API capable of running on top of TensorFlow,
CNTK, or Theano. It can run seamlessly on both CPU and GPU. Keras makes it
really easy for DL beginners to build and design a Neural Network. One of the
best things about Keras is that it allows for easy and fast prototyping.

3.3.2 Flask
Flask is a lightweight and flexible micro web framework for Python, used
for building web applications. It is designed to be simple and easy to use,
enabling developers to start with a minimal setup, but also powerful enough to
scale up to complex applications. Flask provides tools, libraries, and
technologies that allow developers to build a web application quickly and
efficiently. It supports extensions that add features such as form validation,
upload handling, session management, and more, which are not included in the
core framework to keep it lightweight. Flask is particularly favored for its
simplicity, flexibility, and fine-grained control over the components used in the
application, making it a popular choice for both beginners and experienced
developers building web services, APIs, and web sites.
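A minimal Flask sketch of the kind of backend described above. The /health route and its JSON payload are illustrative placeholders, not this project's actual API:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # A trivial endpoint returning JSON; a real backend would expose
    # routes for image upload and result display instead.
    return jsonify(status="ok")

# app.run(debug=True)  # start the development server when run directly
```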

3.3.3 Google Colab

Google Colab is an innovative platform developed by Google that


provides a cloud-based environment for research and education in fields such as
machine learning, data analysis, and artificial intelligence. It offers a Jupyter
notebook interface that requires no setup and runs entirely in the cloud.

Features of Google Colab

 Accessibility: Colab is accessible via a web browser, with no installation
required, making it widely accessible to users worldwide.
 Free Access to Hardware: It provides free access to computing
resources including GPUs (Graphics Processing Units) and TPUs (Tensor
Processing Units), which are crucial for processing large datasets and
complex computations.
 Collaboration: Similar to Google Docs, Colab allows multiple users to
collaborate on the same document in real-time, facilitating team projects
and educational environments.
 Integration with Google Drive: Colab is seamlessly integrated with
Google Drive, allowing users to store their notebooks and access them
from anywhere. This integration also facilitates the sharing of notebooks
and resources.
 Compatibility: The platform supports most libraries and frameworks
used in machine learning and data science, making it a versatile tool for
developers and researchers.

Use Cases

Google Colab is used extensively in academia and industry for a variety of


purposes:

 Educational Purposes: Educators use Colab to teach coding, data


science, machine learning, and computational mathematics. The zero-
setup environment means students can start coding without any barriers
related to software installation.
 Research: Researchers utilize the powerful computational resources
provided by Colab to train complex models on large datasets,
significantly reducing the time and cost associated with such
computations.

 Prototype Development: Developers use Colab to prototype new ideas
and algorithms quickly, leveraging its integration with various APIs and
data sources.

Advantages

 Colab removes the barrier of expensive hardware for individuals and


small organizations.
 The platform’s ease of use and no setup requirement allow users to focus
on coding and analysis rather than system configuration.
 Real-time collaboration and easy sharing increase productivity and
facilitate educational and professional teamwork.

3.3.4 DEEP LEARNING ALGORITHM USED

YOLO (You Only Look Once) is a state-of-the-art deep learning


algorithm for object detection that is designed for speed and efficiency, making
it ideal for real-time applications. Unlike traditional object detection systems
that apply a classifier to various parts of an image multiple times, YOLO frames
object detection as a single regression problem, directly predicting bounding
box coordinates and class probabilities from full images in one evaluation. This
approach allows YOLO to achieve high speeds by looking at the entire image
only once during both training and inference, capturing contextual information
about object classes and their appearance. Over time, YOLO has evolved
through several versions, improving in accuracy.

YOLO Architecture

The YOLO architecture is similar to GoogLeNet. As illustrated below, it has
24 convolutional layers, four max-pooling layers, and two fully
connected layers.

Figure No 3.1: YOLO Architecture

The algorithm works based on the following four approaches:

 Residual blocks
 Bounding box regression
 Intersection Over Unions or IOU for short
 Non-Maximum Suppression.
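Two of the approaches listed above, Intersection over Union and Non-Maximum Suppression, can be sketched in plain Python. Boxes are (x1, y1, x2, y2) tuples; the boxes and scores below are example values only:

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold=0.5):
    """Keep the highest-scoring boxes and drop overlapping duplicates."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the second box overlaps the first
```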

YOLOv4 — Optimal Speed and Accuracy of Object Detection

 This version of YOLO offers optimal speed and accuracy of object

detection compared to all the previous versions and other state-of-the-art
object detectors.

Figure No 3.2: YOLOv4

YOLOv6—A Single-Stage Object Detection Framework for Industrial


Applications.

 Dedicated to industrial applications with hardware-friendly


efficient design and high performance, the YOLOv6 (MT-
YOLOv6) framework was released by Meituan, a Chinese e-
commerce company.

 YOLOv6 introduced three significant improvements over the

previous YOLOv5: a hardware-friendly backbone and neck design,
an efficient decoupled head, and a more effective training strategy.

 YOLOv6 provides outstanding results compared to the previous
YOLO versions in terms of accuracy and speed on the COCO
dataset as illustrated below.

Advantages:

 YOLO is extremely fast.


 YOLO sees the entire image during training and test time so it
implicitly encodes contextual information about classes as well as
their appearance.

CHAPTER 4

SYSTEM DESIGN

4.1. OVERALL DATA FLOW DIAGRAM

Figure No 4.1: Overall Data Flow Diagram

4.2. SYSTEM ARCHITECTURE

Figure No 4.2: System Architecture Diagram

4.3. UML DIAGRAMS
UML stands for Unified Modeling Language, which is used in object-
oriented software engineering. Although typically used in software engineering,
it is a rich language that can be used to model an application's structure,
behavior, and even business processes. There are 14 UML diagram types that
help you to model this behavior. They can be divided into two main categories:
structural and behavioral diagrams.

Structural Diagrams
 Class diagram
 Component diagram
 Deployment diagram
 Object diagram
 Package diagram
 Profile diagram
 Composite structure diagram

Behavioral Diagrams
 Use case diagram
 Activity diagram
 State machine diagram
 Communication diagram
 Sequence diagram
 Interaction overview diagram
 Timing diagrams

4.3.1. Use case Diagram
A Use case diagram at its simplest is a representation of a user’s
interaction with the system that shows the relationship between the user and the
different use cases in which the user is involved. A use case diagram can
identify the different types of users of a system and the different use cases and
will often be accompanied by other types of diagrams as well.

Figure No 4.3: Use case diagram

4.3.2. Activity Diagram
Activity diagrams are graphical representations of workflows of stepwise
activities and actions with support for choice, iteration and concurrency. In the
Unified Modeling Language, activity diagrams are intended to model both
computational and organizational processes. Activity diagrams show the overall
flow of control.

Figure No 4.4: Activity diagram

4.3.3. Sequence Diagram
A Sequence diagram is an interaction diagram that shows how processes
operate with one another and in what order. A sequence diagram shows, as
parallel vertical lines (lifelines), different processes or objects that live
simultaneously, and, as horizontal arrows, the messages exchanged between
them, in the order in which they occur. This allows the specification of simple
runtime scenarios in a graphical manner.

Figure No 4.5: Sequence diagram

CHAPTER 5
PROJECT DESCRIPTION
5.1. MODULES
The proposed system consists of four modules. They are as follows:

• Image acquisition module

• Feature extraction module

• Tree Classification Module

• Oxygen Sufficiency Assessment Module

5.2. MODULE DESCRIPTION


5.2.1. Image acquisition module

Image acquisition involves retrieving an image from an external


source, which is essential for subsequent processing. This module plays a
pivotal role in the overall system as the quality and characteristics of the images
obtained significantly influence the accuracy of subsequent classification
systems. High-quality images ensure that the features necessary for accurate
identification and analysis are clearly visible and distinguishable, making the
image acquisition process a critical step in the workflow of environmental
monitoring and analysis.

5.2.2. Feature extraction module

The feature extraction module is integral to the process of transforming


raw image data into a form that is useful for further analysis, specifically by
converting it into meaningful numerical representations. This module is
designed to identify and extract key discriminative features from images of
trees, such as shape, texture, and crown size. These features are crucial as they

capture the distinctive characteristics of different tree species. Once extracted,
these features serve as the input for the classification model, where they are
used to accurately identify and differentiate between species. The effectiveness
of the feature extraction module directly influences the accuracy and reliability
of the classification system, making it a cornerstone of the image processing
workflow.
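As a crude illustration of turning raw image data into numerical features, the snippet below computes a few simple statistics from an image array. A real system would use learned CNN feature maps; the random array here merely stands in for a grayscale photograph, and the three statistics are illustrative proxies for brightness, texture, and shape cues:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a 64x64 grayscale tree image

features = np.array([
    image.mean(),          # overall brightness
    image.std(),           # contrast, a crude texture cue
    (image > 0.5).mean(),  # fraction of bright pixels, a crude shape cue
])
print(features.shape)  # (3,)
```

The resulting feature vector is the kind of numerical representation that would be fed to the classification model described above.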

5.2.3. Tree Classification Module

The tree classification module is tasked with the responsibility of


recognizing and classifying tree species from input images. This module
employs deep learning algorithms to meticulously analyze the visual
characteristics of trees, such as leaf shape, bark texture, and overall structure.
The accuracy and efficiency of this module are vital, as they directly affect the
system's ability to correctly identify and classify different species, thereby
supporting better ecological research and management practices.

5.2.4. Oxygen Sufficiency Assessment Module

Oxygen sufficiency assessment is the process of evaluating oxygen levels


in a specific environment to ensure they are suitable for sustaining life or
facilitating various processes. It provides feedback on whether the current
oxygen levels meet the necessary criteria for the intended purposes or if there is
a risk of oxygen deficiency that could potentially endanger life or disrupt
processes. Additionally, the module assesses the balance between the oxygen
produced by local greenery and the oxygen requirements of the people residing
in the area. This comparison is vital for environmental management and urban
planning, ensuring that the ecosystem supports a healthy balance for human
inhabitants.

CHAPTER 6

SYSTEM IMPLEMENTATION

The implementation of the "Oxygen Sufficiency Assessment System


Using Deep Learning" follows a structured and methodical approach. Initially,
the development environment is set up, including all necessary tools for both
front-end and back-end development, as well as server configurations for
hosting the application. The front-end development commences with the
creation of a user-friendly interface designed using HTML, CSS, and
JavaScript, facilitating functionalities such as image uploads and result
displays. Concurrently, the back-end is developed to integrate the deep learning
model using Python and frameworks like Flask, which handle data processing
and API interactions. Rigorous testing phases, including unit, integration, and
user acceptance testing, ensure each component functions correctly and the
system as a whole meets user expectations. Following successful testing, the
system is deployed to a production environment, where it is closely monitored
for any operational issues. Maintenance routines are established to address
future updates, security enhancements, and potential expansions of the system's
capabilities. Comprehensive documentation is also prepared to aid both users
and future developers, ensuring the system remains accessible and
maintainable. This thorough implementation process guarantees a robust
deployment, ready to efficiently assess oxygen sufficiency in various
environments.
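As a rough illustration of how the Flask back-end described above might expose the model to the front-end, the following sketch accepts an image upload and returns a JSON result. The `/assess` route and the `assess_image` helper are illustrative assumptions, not the report's actual code.

```python
import io

from flask import Flask, request, jsonify

app = Flask(__name__)

def assess_image(image_bytes):
    # Placeholder for the deep-learning pipeline described in this chapter:
    # detect tree species, estimate oxygen output, compare with demand.
    return {"species": [], "oxygen_sufficient": True}

@app.route("/assess", methods=["POST"])
def assess():
    # Reject requests that do not carry an uploaded image file
    if "image" not in request.files:
        return jsonify(error="no image uploaded"), 400
    result = assess_image(request.files["image"].read())
    return jsonify(result)
```

In a sketch like this, the front-end's image-upload form would POST to `/assess` and render the returned JSON for the user.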

6.1. IMAGE ACQUISITION MODULE

The Image Acquisition Module is an essential part of the tree species


classification system, designed to capture high-quality images of trees which are
then used for species identification and analysis. This module typically employs

a combination of hardware and software to ensure that the images collected are
clear and detailed enough for accurate feature extraction. The hardware
component often includes cameras with high resolution and capabilities for
various lighting and weather conditions, ensuring versatility across different
environmental settings. The implementation of the Image Acquisition Module
is designed to be straightforward and user-friendly. Once captured, the
images are automatically passed to the feature extraction module for processing.
This process is typically seamless from the user’s point of view, requiring
minimal interaction beyond the initial setup and actual image capture. The
system can also provide real-time feedback on the quality of captured images
and suggest retakes if necessary, ensuring that the data input into the system is
of the highest quality. This facilitates ease of use and enhances the reliability of
the data collected, crucial for the accuracy of the tree species classification and
the overall system performance.

6.2. FEATURE EXTRACTION MODULE

The Feature extraction module in the tree species classification system


plays a pivotal role in the accurate identification of different tree species
through image analysis. This module is designed using a convolutional neural
network (CNN) framework, specifically leveraging the YOLOv5 architecture
which is renowned for its efficiency and accuracy in object detection tasks. The
first step in designing this module involves training the CNN on a large dataset
of tree images labelled according to their species. These images need to capture
a variety of tree characteristics such as leaf shape, bark texture, color, and size,
under different lighting and seasonal conditions to ensure robustness. The CNN
automatically learns to extract essential features from these images during the
training process, which are crucial for distinguishing between species.

Techniques such as data augmentation can be employed to artificially expand
the training dataset, enhancing the model's ability to generalize from limited
data by simulating a range of possible real-world conditions. Once the model is
trained, it can be integrated into a user-friendly application or system interface.
Users can upload images of trees directly into the system, which are then
processed by the feature extraction module. The module analyzes the images,
applies the trained YOLOv5 model to detect and classify tree species, and
outputs the identified species names along with any additional botanical
information relevant to the user. This process is typically fast, with results being
available in near real-time, which is essential for applications requiring
immediate data for environmental analysis or decision-making. Additionally, the
system can be continuously updated with new data to refine and enhance its
classification accuracy, ensuring it remains effective as new tree species are
discovered or as environmental conditions evolve. This dynamic capability
makes the system highly adaptable and valuable for ongoing ecological
monitoring and management efforts.
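YOLOv5 applies augmentation through its training hyperparameters; purely as an illustration of the idea described above, the following NumPy sketch (a hypothetical `augment` helper, not part of the project code) generates flipped and brightness-shifted variants of a training image.

```python
import numpy as np

def augment(image):
    """Yield simple augmented variants of a training image (numpy sketch)."""
    yield np.fliplr(image)                                 # horizontal mirror
    yield np.flipud(image)                                 # vertical mirror
    bright = np.clip(image.astype(np.int16) + 30, 0, 255)  # brighter scene
    yield bright.astype(np.uint8)
    dark = np.clip(image.astype(np.int16) - 30, 0, 255)    # darker scene
    yield dark.astype(np.uint8)

img = np.full((64, 64, 3), 128, dtype=np.uint8)
variants = list(augment(img))
print(len(variants))  # 4 augmented copies per source image
```

Each source image thus contributes several training samples, simulating a range of lighting and orientation conditions without collecting new photographs.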

6.3. TREE CLASSIFICATION MODULE

The Tree Classification Module is a sophisticated component of our


system designed to accurately identify tree species from images collected by the
Image Acquisition Module. This module leverages deep learning techniques,
particularly a trained convolutional neural network (CNN) model, to analyze the
visual features extracted from images. The CNN model, often based on
architectures like YOLOv5, is adept at recognizing patterns such as leaf shape,
bark texture, color, and overall tree morphology—features that are critical for
distinguishing between species. The design process involves training the CNN
on a diverse dataset of tree images, which have been pre-labeled with species

names. This training enables the model to learn the unique characteristics of
each tree species, enhancing its ability to accurately classify new images. Once
the Image Acquisition Module uploads the images to the system, the Tree
Classification Module automatically processes these images. Users can monitor
this process via a user-friendly web or mobile application interface, where they
can upload images and receive species identification results in real-time. The
interface also provides additional information about each identified species,
such as its common name, botanical characteristics, and its ecological and
oxygen-producing benefits. For users requiring more detailed data, the module
can generate reports or integrate its findings into broader environmental impact
studies or urban planning initiatives. This automation and ease of use make it
suitable for a wide range of applications, from scientific research to municipal
management, enabling users to contribute to and access biodiversity information
without needing deep technical expertise.
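The per-species summaries and reports mentioned above could be assembled from the classifier's detections along the following lines; the `species_report` helper and the sample detection list are illustrative assumptions, not the module's actual code.

```python
from collections import Counter

# Hypothetical per-detection class names emitted by the classifier
detections = ["Neem", "Mango", "Neem", "Devkanchan"]

def species_report(names):
    """Summarize raw detections into per-species counts for display."""
    counts = Counter(names)
    return [{"species": s, "count": c} for s, c in sorted(counts.items())]

print(species_report(detections))
```

A structure like this is straightforward to render in the web interface or to export into environmental impact reports.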

6.4. OXYGEN SUFFICIENCY ASSESSMENT MODULE

The oxygen sufficiency assessment module is a critical component of our


environmental monitoring system, designed to evaluate whether the available
oxygen produced by trees meets the respiratory needs of the local population.
This module combines environmental data with demographic data to perform
this assessment. Initially, the module ingests data from the tree classification
module, which provides detailed insights into the types and quantities of trees in
the area and their respective oxygen outputs. This botanical data is then
correlated with demographic data, such as population density and average
oxygen consumption per person. Upon receiving the necessary tree and
demographic data, this module calculates the total oxygen production from the
identified trees and compares it with the calculated oxygen requirement for the

population in the specified area. This is done through a backend calculation
layer where algorithms process the input data to estimate if the oxygen levels
are sufficient or deficient. If a deficiency is detected, the system then triggers the
recommendation module to suggest planting additional specific species known
for higher oxygen production. Users interact with this module through a
dashboard where they can input data, view the assessment results, and receive
recommendations. The module also allows for the input of updated
demographic data and tree inventories, ensuring that the assessments remain
accurate over time as environmental and population dynamics change.
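As a sketch of the comparison this module performs, the following uses the per-species daily oxygen rates and per-person requirement that appear in the annexure source code (units as used there); the `assess_oxygen` helper itself is a hypothetical illustration.

```python
# Daily oxygen figures taken from the annexure source code
OXY_RATES = {'Neem': 6000, 'Devkanchan': 8000, 'Mango': 9000}
PER_PERSON = 8640  # assumed daily oxygen requirement per person

def assess_oxygen(tree_counts, population):
    """Compare total oxygen produced by detected trees with local demand."""
    produced = sum(OXY_RATES[name] * n for name, n in tree_counts.items())
    required = population * PER_PERSON
    return {"produced": produced,
            "required": required,
            "sufficient": produced >= required}

print(assess_oxygen({'Neem': 10, 'Mango': 5}, 10))
# produced 105000 vs required 86400 -> sufficient
```

When `sufficient` is false, the deficit (`required - produced`) is what the recommendation module would try to cover with additional planting.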

6.5 RECOMMENDATION MODULE

The Recommendation Module is a vital part of our system, designed to


suggest specific plant species when an oxygen deficiency is detected in a given
area. This module utilizes the data processed by the Oxygen Sufficiency
Assessment Module, which calculates the total oxygen produced by the existing
flora and compares it to the oxygen requirements of the local population. If a
shortfall is identified, the Recommendation Module algorithmically selects and
suggests planting additional high-oxygen-producing plants that are most
suitable. Users interact with this module through a simple and intuitive
dashboard, where they receive tailored recommendations based on the current
environmental data and demographic analysis. These recommendations are
accompanied by detailed information about each plant’s oxygen output, growth
conditions, and overall environmental benefits, enabling users, whether they are
urban planners, environmentalists, or private landowners, to make informed
decisions that effectively address oxygen insufficiency and enhance air quality.
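A minimal sketch of the shortfall-to-planting calculation: the per-plant oxygen rates mirror figures used in the annexure code, while the candidate list and the `recommend` helper are illustrative assumptions.

```python
import math

# Hypothetical candidate species with assumed daily oxygen output
CANDIDATES = {'Snake plant': 0.12, 'Neem': 6000, 'Mango': 9000}

def recommend(deficit, species):
    """Number of plants of the chosen species needed to cover the deficit."""
    rate = CANDIDATES[species]
    return math.ceil(deficit / rate)

print(recommend(8640, 'Neem'))  # 2 additional neem trees
```

In practice the module would also weigh growth conditions and local suitability before presenting the suggestion on the dashboard.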

CHAPTER 7
SYSTEM TESTING
7.1. TESTING
Testing is the process of executing a program with the intent of finding an
error. A good test case is one that has a high probability of finding an as-yet-
undiscovered error. System testing is the stage of implementation aimed at
ensuring that the system works accurately and efficiently as expected before
live operation commences, and it verifies that the complete set of programs
works together. System testing comprises several key activities, from
exercising individual programs and their interconnections to exercising the
system as a whole, and is an important step in adopting a new system
successfully. This is the last chance to detect and correct errors before the
system is installed for user acceptance testing.

The software testing process commences once the program is created and
the documentation and related data structures are designed. Software testing is
essential for detecting and correcting errors; without it, the program or the
project cannot be considered complete. Software testing is a critical element of
software quality assurance and represents the ultimate review of specification,
design, and coding. A good test case design is one that has a high probability
of finding a yet-undiscovered error, and a successful test is one that uncovers
such an error.

Testing is an essential part of developing the proposed oxygen sufficiency
assessment system to ensure its accuracy and reliability. Several types of
testing were used to evaluate the performance of the system.

7.2. DIFFERENT LEVEL OF TESTING
 Unit Testing
 Integration Testing
 System Testing

7.2.1. Unit Testing

Upon completing unit testing for each module of our proposed tree
species classification and environmental assessment project, the results have
demonstrated a high degree of reliability and accuracy across all components.
The Image Acquisition Module consistently captured high-quality images under
a variety of environmental conditions, while the Tree Classification Module
accurately identified tree species with a precision rate significantly above
industry standards. The Oxygen Sufficiency Assessment Module effectively
calculated the oxygen production and requirements, producing dependable
outputs that matched expected theoretical values closely. Lastly, the
Recommendation Module provided appropriate plant species suggestions based
on environmental and demographic data inputs, aligning with expert ecological
advice. These testing outcomes affirm the robustness and efficacy of the system,
ensuring that it is well-prepared for real-world deployment and capable of
contributing positively to environmental management and urban planning
initiatives.
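A unit test for the oxygen-production calculation might look like the following sketch, where `total_oxygen` is an illustrative stand-in for the module's real function (rates taken from the annexure source code).

```python
import unittest

OXY_RATES = {'Neem': 6000, 'Devkanchan': 8000, 'Mango': 9000}

def total_oxygen(tree_counts):
    """Sum the assumed daily oxygen output over all detected trees."""
    return sum(OXY_RATES[name] * n for name, n in tree_counts.items())

class TestOxygenModule(unittest.TestCase):
    def test_empty_scene_produces_nothing(self):
        self.assertEqual(total_oxygen({}), 0)

    def test_mixed_species_sum(self):
        # 2 * 6000 + 1 * 9000 = 21000
        self.assertEqual(total_oxygen({'Neem': 2, 'Mango': 1}), 21000)

if __name__ == '__main__':
    unittest.main(argv=['ignored'], exit=False)
```

Analogous cases for the other modules would feed known images or demographic inputs through each component and assert on the expected outputs.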

7.2.2. Integration Testing

Following the successful completion of unit testing for each individual


module, we proceeded with integration testing for the entire tree species
classification and environmental assessment project. This phase was crucial to
ensure that all modules function harmoniously and data flows seamlessly
between them. The integration testing revealed that the interfaces connecting the
Image Acquisition Module, Tree Classification Module, Oxygen Sufficiency

Assessment Module, and Recommendation Module worked flawlessly, with
data accurately passing through the system pipeline without loss or corruption.
The combined operation of these modules demonstrated a cohesive system
performance, where the output from one module effectively fed into the next,
producing consistent and reliable results across various test scenarios.
Additionally, the system's response time and ability to handle simultaneous
requests met our performance criteria, confirming the system’s readiness for
practical deployment and its potential to efficiently manage large-scale
environmental data in a real-world setting.

7.2.3. System Testing


In the final phase of testing, we conducted comprehensive system testing
to ensure that the system as a whole functions correctly and meets all specified
requirements. This testing included a variety of real-world scenarios to evaluate
the system's performance under typical and extreme conditions. The results
were highly positive: the system successfully integrated and executed all
modules—from Image Acquisition to Recommendation—without any
significant issues. This included accurate tree species identification, precise
oxygen production assessment, and appropriate ecological recommendations
based on environmental and demographic data. The system also displayed
robustness in handling high volumes of data and user requests, maintaining
performance stability and operational efficiency. Additionally, the user interface
was tested for usability, confirming that it is intuitive and accessible for all user
groups. These system testing results confirmed that our project is not only
technically sound but also user-friendly and ready for deployment in various
environments, promising to enhance decision-making processes in urban
planning and environmental conservation.

CHAPTER 8
CONCLUSION

In this project, we successfully developed a sophisticated system that


leverages deep learning to identify tree species from images and assess the
oxygen sufficiency of different areas. Utilizing advanced convolutional neural
networks (CNNs), our model trained on a diverse dataset demonstrated high
accuracy in recognizing various tree species, which is essential for the accurate
estimation of oxygen output. The integration of environmental parameters such
as tree density and health enabled our system to offer a comprehensive analysis
of oxygen sufficiency, tailored for both urban and rural environments.

The project also identified areas for improvement, including the need for a
more diverse dataset and enhanced handling of environmental variability.
Moving forward, enhancing dataset diversity, integrating real-time
environmental data, and focusing on sustainable computing practices will be
crucial in evolving the system's capabilities and reducing its environmental

impact. This project not only pushes the boundaries of ecological monitoring
using AI but also sets a foundation for future innovations aimed at
environmental sustainability and informed urban planning.

APPENDICES
ANNEXURE 1: SOURCE CODE
!pip install PyYAML
from google.colab import drive
drive.mount('/content/drive')

import cv2
import numpy as np
import os
import yaml
from yaml.loader import SafeLoader

# load YAML
with open('/content/drive/MyDrive/Yolo/yolov5/data.yaml', mode='r') as f:
    data_yaml = yaml.load(f, Loader=SafeLoader)
labels = data_yaml['names']
print(labels)

# load YOLO model
yolo = cv2.dnn.readNetFromONNX(
    '/content/drive/MyDrive/Yolo/yolov5/runs/train/Model2/weights/best.onnx')
yolo.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
yolo.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

# load the image
img = cv2.imread('/content/drive/MyDrive/data images/train/'
                 'DJI_0146_JPG.rf.cba17a0304a36336222f15c9ae92e2a7.jpg')
image = img.copy()
row, col, d = image.shape

# get the YOLO prediction from the image
# step-1: convert the image into a square image (array)
max_rc = max(row, col)
input_image = np.zeros((max_rc, max_rc, 3), dtype=np.uint8)
input_image[0:row, 0:col] = image

# step-2: get prediction from the square array
INPUT_WH_YOLO = 640
blob = cv2.dnn.blobFromImage(input_image, 1/255,
                             (INPUT_WH_YOLO, INPUT_WH_YOLO),
                             swapRB=True, crop=False)
yolo.setInput(blob)
preds = yolo.forward()  # detection or prediction from YOLO
print(preds.shape)

# Non-Maximum Suppression
# step-1: filter detections based on confidence (0.25) and class score (0.45)
detections = preds[0]
boxes = []
confidences = []
classes = []

# width and height of the input image
image_w, image_h = input_image.shape[:2]
x_factor = image_w / INPUT_WH_YOLO
y_factor = image_h / INPUT_WH_YOLO
for i in range(len(detections)):
    row = detections[i]
    confidence = row[4]  # confidence that the box contains an object
    if confidence > 0.25:
        class_score = row[5:].max()  # maximum class probability
        class_id = row[5:].argmax()  # index at which the maximum occurs
        if class_score > 0.45:
            cx, cy, w, h = row[0:4]
            # construct the bounding box from the four values:
            # left, top, width and height
            left = int((cx - 0.5 * w) * x_factor)
            top = int((cy - 0.5 * h) * y_factor)
            width = int(w * x_factor)
            height = int(h * y_factor)
            box = np.array([left, top, width, height])
            # append values to the lists
            confidences.append(confidence)
            boxes.append(box)
            classes.append(class_id)

# clean
boxes_np = np.array(boxes).tolist()
confidences_np = np.array(confidences).tolist()

# NMS
index = cv2.dnn.NMSBoxes(boxes_np, confidences_np, 0.45, 0.75).flatten()

# draw the bounding boxes
for ind in index:
    x, y, w, h = boxes_np[ind]
    bb_conf = int(confidences_np[ind] * 100)
    classes_id = classes[ind]
    class_name = labels[classes_id]

    text = f'{class_name}'
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.rectangle(image, (x, y - 30), (x + w, y), (255, 255, 255), -1)
    cv2.putText(image, text, (x, y - 10),
                cv2.FONT_HERSHEY_PLAIN, 0.7, (0, 0, 0), 1)

from google.colab.patches import cv2_imshow

# show the original image
cv2_imshow(img)
# show the YOLO prediction image
cv2_imshow(image)

# wait for a key press and close the windows
cv2.waitKey(0)
cv2.destroyAllWindows()

import math

import cv2
from collections import Counter

detections = []            # list to store detection details
class_counter = Counter()  # to count occurrences of each class

# 'index', 'boxes_np', 'confidences_np', 'classes' and 'labels' are
# predefined by the detection code above
for ind in index:
    x, y, w, h = boxes_np[ind]
    confidence = confidences_np[ind]
    class_id = classes[ind]
    class_name = labels[class_id]

    # update the class counter with the class name
    class_counter[class_name] += 1

    # store detection details in a dictionary and append to the list
    detection = {
        "class_name": class_name,
        "bbox": (x, y, w, h),
        "confidence": confidence
    }
    detections.append(detection)

    # draw bounding box and label on the image
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.rectangle(image, (x, y - 30), (x + w, y), (255, 255, 255), -1)
    cv2.putText(image, f'{class_name} {int(confidence * 100)}%', (x, y - 10),
                cv2.FONT_HERSHEY_PLAIN, 0.7, (0, 0, 0), 1)

tree_count = {}
oxy_rates = {'Neem': 6000, 'Devkanchan': 8000, 'Mango': 9000}
for class_name, count in class_counter.items():
    tree_count[class_name] = count

tot_oxy_prod = 0
for tree in tree_count:
    tot_oxy_prod = tot_oxy_prod + oxy_rates[tree] * tree_count[tree]

n = int(input("Enter the total population: "))
tot_oxy_peop = n * 8640

if tot_oxy_peop < tot_oxy_prod:
    print('Oxygen is Sufficient')
else:
    print('Oxygen is not sufficient')
    # each snake plant is assumed to produce 0.12 units of oxygen per day;
    # the number of plants needed to cover the shortfall is the deficit
    # divided by that rate, rounded up
    snake_rate = 0.12
    difference = tot_oxy_peop - tot_oxy_prod
    cnt = math.ceil(difference / snake_rate)
    print(cnt, 'snake plants can be planted')
BIBLIOGRAPHY

[1] M. Dyrmann, H. Karstoft, and H. S. Midtiby, “Plant species classification


using deep convolutional neural network,” Biosyst. Eng., vol. 151, pp. 72–80,
Nov. 2016, doi: 10.1016/j.biosystemseng.2016.08.024.

[2] P. Barré, B. C. Stöver, K. F. Müller, and V. Steinhage, “LeafNet: A computer


vision system for automatic plant species identification,” Ecol. Inform., vol. 40,
pp. 50–56, Jul. 2017, doi: 10.1016/j.ecoinf.2017.05.005.

[3] G. L. Grinblat, L. C. Uzal, M. G. Larese, and P. M. Granitto, “Deep learning


for plant identification using vein morphological patterns,” Comput. Electron.
Agric., vol. 127, pp. 418– 424, Sep. 2016, doi: 10.1016/j.compag.2016.07.003.

[4] J. W. Tan, S.-W. Chang, S. Binti Abdul Kareem, H. J. Yap, and K.-T. Yong,
“Deep Learning for Plant Species Classification using Leaf Vein
Morphometric,” IEEE/ACM Trans. Comput. Biol. Bioinforma., pp. 1–1, 2018,
doi: 10.1109/TCBB.2018.2848653.

[5] A. Ambarwari, Q. J. Adrian, Y. Herdiyeni, and I. Hermadi, “Plant species
identification based on leaf venation features using SVM,” TELKOMNIKA
(Telecommunication Comput. Electron. Control., vol. 18, no. 2, p. 726, Apr.
2020, doi: 10.12928/telkomnika.v18i2.14062.

[6] H. F. Eid and A. Abraham, “Plant species identification using leaf biometrics
and swarm optimization: A hybrid PSO, GWO, SVM model,” Int. J. Hybrid
Intell. Syst., vol. 14, no. 3, pp. 155–165, Mar. 2018, doi: 10.3233/HIS-180248.

[7] M. A. Islama, S. I. Yousuf, and M. M. Billah, “Automatic Plant Detection


Using HOG and LBP Features With SVM,” Int. J. Comput., vol. 33, no. 1, pp.
26–38, 2019.

[8] I. Gogul and V. S. Kumar, “Flower species recognition system using


convolution neural networks and transfer learning,” in 2017 Fourth International
Conference on Signal Processing, Communication and Networking (ICSCN),
2017, pp. 1–6, doi: 10.1109/ICSCN.2017.8085675.

[9] M. Toğaçar, B. Ergen, and Z. Cömert, “Classification of flower species by


using features extracted from the intersection of feature selection methods in
convolutional neural network models,” Measurement, vol. 158, p. 107703, Jul.
2020, doi: 10.1016/j.measurement.2020.107703.

[10] M. Momeny, A. Jahanbakhshi, K. Jafarnezhad, and Y.-D. Zhang,


“Accurate classification of cherry fruit using deep CNN based on hybrid
pooling approach,” Postharvest Biol. Technol., vol. 166, p. 111204, Aug. 2020,
doi: 10.1016/j.postharvbio.2020.111204.

[11] Y. Ren, N. Wang, M. Li, and Z. Xu, “Deep density-based image
clustering,” Knowledge-Based Syst., vol. 197, p. 105841, Jun. 2020, doi:
10.1016/j.knosys.2020.105841.

[12] W. Qian et al., “UAV and a deep convolutional neural network for
monitoring invasive alien tree in the wild,” Comput. Electron. Agric., vol. 174,
p. 105519, Jul. 2020, doi: 10.1016/j.compag.2020.105519.

[13] K. P. Ferentinos, “Deep learning models for plant disease detection and
diagnosis,” Comput. Electron. Agric., vol. 145, pp. 311–318, Feb. 2018, doi:
10.1016/j.compag.2018.01.009.

[14] Y. Osako, H. Yamane, S.-Y. Lin, P.-A. Chen, and R. Tao, “Cultivar
discrimination of litchi fruit images using deep learning,” Sci. Hortic.
(Amsterdam)., vol. 269, p. 109360, Jul. 2020, doi:
10.1016/j.scienta.2020.109360.

[15] J. Chen, J. Chen, D. Zhang, Y. Sun, and Y. A. Nanehkaran, “Using deep


transfer learning for image-based plant disease identification,” Comput.
Electron. Agric., vol. 173, p. 105393, Jun. 2020, doi:
10.1016/j.compag.2020.105393.

[16] S. Fan et al., “On line detection of defective apples using computer vision
system combined with deep learning methods,” J. Food Eng., vol. 286, p.
110102, Dec. 2020, doi: 10.1016/j.jfoodeng.2020.110102.

[17] C. W. Yohannese and T. Li, “A Combined-Learning Based Framework for


Improved Software Fault Prediction,” Int. J. Comput. Intell. Syst., vol. 10, no. 1,
p. 647, 2017, doi: 10.2991/ijcis.2017.10.1.43.

[18] A. Krishnaswamy Rangarajan and R. Purushothaman, “Disease


Classification in Eggplant Using Pre-trained VGG16 and MSVM,” Sci. Rep.,
vol. 10, no. 1, p. 2322, Dec. 2020, doi: 10.1038/s41598-020-59108-x.

[19] M. F. Adak, “Modeling of Irrigation Process Using Fuzzy Logic for


Combating Drought,” Acad. Perspect. Procedia, vol. 2, no. 2, pp. 229–233, Oct.
2019, doi: 10.33793/acperpro.02.02.34.

[20] Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and


applications in vision,” in Proceedings of 2010 IEEE International Symposium
on Circuits and Systems, 2010, pp. 253–256, doi:
10.1109/ISCAS.2010.5537907.

