
AI Driven Crop Classification and Yield Prediction Using Deep Learning

Maheswaran S, Department of Electronics and Communication Engineering, Kongu Engineering College, Erode, India (mmaheswaraneie@gmail.com)
Indhumathi N, Department of Electronics and Communication Engineering, Kongu Engineering College, Erode, India (indhunatarajan18@gmail.com)
K. Venkateswaran, Department of ECE, CMR Institute of Technology, Bengaluru, India (venkateswaran.k@cmrit.ac.in)
Charumathi K, Department of Electronics and Communication Engineering, Kongu Engineering College, Erode, India (charumathik.21ece@kongu.edu)
Balanisharitha P, Department of Electronics and Communication Engineering, Kongu Engineering College, Erode, India (balanisharithap.21ece@kongu.edu)
Ilmunisha A, Department of Electronics and Communication Engineering, Kongu Engineering College, Erode, India (ilmunishaa.21ece@kongu.edu)

Abstract— Accurate crop classification and yield prediction are crucial for enhancing agricultural productivity, ensuring food security, and promoting sustainability. Traditional methods, such as manual surveys, are often time-consuming and prone to inaccuracies, making it essential to adopt more efficient and reliable approaches. This research introduces a deep learning-based framework for automating crop monitoring and yield forecasting using aerial imagery and the EfficientNetB0 model. The study utilizes the EcoCropsAID dataset, which includes five major crop classes: Cassava, Longan, Rice, Rubber, and Sugarcane. A series of preprocessing steps, including image resizing and normalization, is applied to the dataset, which is then split into training, validation, and test sets. The model's performance is evaluated through various metrics, including a confusion matrix, a classification report, and accuracy-loss graphs. Additionally, a post-classification process involving Otsu's thresholding is used to generate a binary mask for crop area estimation. The results demonstrate a classification accuracy of 99%, showcasing the potential of the model in both crop classification and area estimation. By combining deep learning techniques with aerial imagery, the proposed framework offers a scalable solution for precision farming, enabling real-time crop monitoring, optimized resource distribution, and improved decision-making. Furthermore, it fosters sustainable farming practices by minimizing environmental impact and can be adapted to different regions and crop types, addressing global agricultural challenges.

Keywords— Crop Yield Estimation, Remote Sensing, Machine Learning, EfficientNetB0, EcoCropsAID Dataset, Aerial Imaging, Crop Classification.

I. INTRODUCTION

Accurately predicting crop yields is a game-changer for modern agriculture [1]. It helps farmers make better decisions, optimizes resource use, and ensures food security. As the world's population keeps growing, the demand for food is increasing rapidly, making efficient farming more important than ever. Farmers, policymakers, and agricultural planners all rely on yield predictions to manage food production and distribution effectively [2]. However, traditional methods, such as manual field surveys and direct yield measurements, are time-consuming, labor-intensive, and often inaccurate. This is especially problematic in regions with limited access to advanced technology, where quick and reliable data is crucial for maximizing crop output.

Crop yields depend on several key factors, including weather conditions, soil health, and the specific crop type. However, gathering accurate, real-time data on crop growth has always been a challenge [3]. Agricultural environments are unpredictable, and plants grow under constantly changing conditions, making it difficult for conventional methods to provide precise forecasts at scale. Advancements in remote sensing technology, including aerial and satellite imagery, have revolutionized agricultural monitoring. These high-resolution images allow farmers and researchers to assess crop health, detect diseases early, and estimate yields with greater accuracy [4]. However, managing and analyzing such massive amounts of data is not easy; it requires powerful tools that can turn raw images into actionable insights [5].

This is where deep learning comes in. Models like EfficientNetB0 can automatically process large datasets, recognizing important features such as crop type, growth stage, and field coverage. Unlike traditional methods, deep learning algorithms can detect complex patterns in the data, leading to much more accurate crop classification and yield prediction [6]. In this study, we introduce a deep learning-based approach that integrates remote sensing data for automated crop classification and yield forecasting. We use the EcoCropsAID dataset, which contains aerial images of five key crops: Cassava, Longan, Rice, Rubber, and Sugarcane. Our method involves preprocessing these images (resizing and normalizing them) before applying the EfficientNetB0 model to classify crops and estimate yield potential [7]. By leveraging this data-driven approach, farmers can make smarter decisions about resource management, pest control, and harvest planning without relying on slow, manual methods [8]. Not only does this approach improve classification accuracy, but it also provides a scalable, real-time solution for precision farming. Automating crop analysis reduces human effort, minimizes environmental impact, and helps farmers adapt to changing agricultural conditions. More importantly, this system can be tailored to different crops and regions, making it a valuable tool for tackling food security challenges worldwide [9].



By combining deep learning, remote sensing, and predictive modeling, this research highlights how modern technology can revolutionize farming [10]. Smarter, data-driven agriculture is not just about improving efficiency; it is about building a more sustainable and resilient food system for the future.

II. RELATED WORKS

Accurately predicting crop yields is a game-changer for modern agriculture. It helps farmers make better decisions, optimizes resource use, and ensures food security [11]. As the world's population keeps growing, the demand for food is increasing rapidly, making efficient farming more important than ever. Farmers, policymakers, and agricultural planners all rely on yield predictions to manage food production and distribution effectively [12]. However, traditional methods, such as manual field surveys and direct yield measurements, are time-consuming, labor-intensive, and often inaccurate. This is especially problematic in regions with limited access to advanced technology, where quick and reliable data is crucial for maximizing crop output [13].

Crop yields depend on several key factors, including weather conditions, soil health, and the specific crop type. However, gathering accurate, real-time data on crop growth has always been a challenge [14]. Agricultural environments are unpredictable, and plants grow under constantly changing conditions, making it difficult for conventional methods to provide precise forecasts at scale. Advancements in remote sensing technology, including aerial and satellite imagery, have revolutionized agricultural monitoring [15]. These high-resolution images allow farmers and researchers to assess crop health, detect diseases early, and estimate yields with greater accuracy. However, managing and analyzing such massive amounts of data is not easy; it requires powerful tools that can turn raw images into actionable insights [16].

This is where deep learning comes in. Models like EfficientNetB0 can automatically process large datasets, recognizing important features such as crop type, growth stage, and field coverage [17]. Unlike traditional methods, deep learning algorithms can detect complex patterns in the data, leading to much more accurate crop classification and yield prediction.

In this study, we introduce a deep learning-based approach that integrates remote sensing data for automated crop classification and yield forecasting. We use the EcoCropsAID dataset, which contains aerial images of five key crops: Cassava, Longan, Rice, Rubber, and Sugarcane. Our method involves preprocessing these images (resizing and normalizing them) before applying the EfficientNetB0 model to classify crops and estimate yield potential [18]. By leveraging this data-driven approach, farmers can make smarter decisions about resource management, pest control, and harvest planning without relying on slow, manual methods.

Not only does this approach improve classification accuracy, but it also provides a scalable, real-time solution for precision farming. Automating crop analysis reduces human effort, minimizes environmental impact, and helps farmers adapt to changing agricultural conditions [19][20]. More importantly, this system can be tailored to different crops and regions, making it a valuable tool for tackling food security challenges worldwide.

By combining deep learning, remote sensing, and predictive modeling, this research highlights how modern technology can revolutionize farming. Smarter, data-driven agriculture is not just about improving efficiency; it is about building a more sustainable and resilient food system for the future [21].

III. DATASET AND PREPROCESSING

A. DATASET DESCRIPTION

For this project, we utilized the EcoCropsAID dataset, an aerial image dataset of Thailand's economic crops, which contains a total of 5,400 images representing five important crops: Cassava, Longan, Rice, Rubber, and Sugarcane. This dataset is key for our deep learning model, offering a diverse range of images that capture each crop under various environmental conditions. This variety allows the model to learn the distinct characteristics of each crop from aerial images. We also made sure to include examples from each crop class to better understand their visual features and differences.

B. DATA PREPROCESSING

To prepare the images for the model, several preprocessing steps were applied. The original images, which were 300×300 pixels, were resized to 224×224 pixels to match the input size required by the EfficientNetB0 model. This resizing ensures that all images are uniform in size while preserving the important features for classification. The pixel values were then normalized to the range [0, 1], which helps the model learn more efficiently. Additionally, to make the model more robust, we applied data augmentation techniques such as rotating, flipping, and adjusting the brightness of the images. These augmentations help the model become more flexible and improve its performance under various conditions.

C. DATASET SPLITTING STRATEGY

The dataset was divided into three sets to ensure effective training and evaluation: 70% for training, 15% for validation, and 15% for testing. The training set is used to teach the model how to classify crops, while the validation set helps in fine-tuning the model and preventing overfitting. The test set is reserved for evaluating the model's ability to generalize to new, unseen images. By maintaining a balanced distribution of crop types across all three sets, we ensure that the model is fairly tested and can handle real-world variations in the data.
methods. IV. PROPOSED METHOD
Not only does this approach improve classification Figure 1 depicts the block diagram represents a
accuracy, but it also provides a scalable, real-time solution systematic approach for estimating crop yield using satellite
for precision farming. Automating crop analysis reduces or aerial imagery. The process starts with the acquisition of
human effort, minimizes environmental impact, and helps an input image depicting the agricultural field. This image is
farmers adapt to changing agricultural conditions[19][20]. then subjected to a preprocessing stage, where enhancements
More importantly, this system can be tailored to different such as noise reduction and contrast adjustment are applied
crops and regions, making it a valuable tool for tackling food to improve the image quality for further analysis. Following
security challenges worldwide.
this, the processed image is passed through a feature like ImageNet, allowed us to reduce training time and
extraction and crop classification module, where improve classification accuracy.
EfficientNetB0, a deep neural network architecture, is
employed to identify and classify the type of crop present in
the field. After successful classification, the area under
cultivation is calculated based on the spatial features
extracted from the image. This calculated area, along with
the crop type, is used as input for the subsequent yield
estimation phase. Finally, the system generates an output that
provides an estimate of the expected crop yield, thereby
supporting agricultural planning and resource management.

D. BLOCK DIAGRAM
Figure 2. EfficientNetB0 Architecture

G. MODEL TRAINING

The dataset was divided into three segments: 70% for


training, 15% for validation, and 15% for testing. The
training set allowed the model to learn from labeled crop
images, while the validation set helped fine-tune the model’s
hyperparameters and monitor its performance during
training. The test set was reserved for evaluating the model’s
ability to generalize to unseen data.
During training, we set the learning rate to 0.0001, which
Figure 1. Block Diagram enabled gradual model updates and stable learning. A batch
E. MODEL SELECTION AND JUSTIFICATION

For our crop classification task, we selected the EfficientNetB0 model due to its balanced combination of high performance and low computational cost. While traditional deep learning models like ResNet and VGG have been commonly used, EfficientNet's compound scaling approach adjusts the model's depth, width, and resolution in a coordinated manner. This results in a more efficient model that performs well on tasks like crop classification. Given the size of the EcoCropsAID dataset, EfficientNetB0 was a natural choice, as it achieves excellent results without overloading computational resources.

F. MODEL ARCHITECTURE

Figure 2 depicts the EfficientNetB0 architecture. The network employs several advanced architectural features, such as stem layers for initial feature extraction, followed by bottleneck layers that enable the model to capture more complex features as the data flows through the network. Additionally, the use of squeeze-and-excitation blocks allows the model to weigh the importance of each feature dynamically, focusing on the most relevant parts of the input image.

For our specific task, we modified the output layer to handle five distinct crop classes: Cassava, Longan, Rice, Rubber, and Sugarcane. By using the softmax activation function at the output layer, the model outputs a probability distribution across all classes, predicting the most probable crop class for each image. Instead of training the model from scratch, we used a pre-trained EfficientNetB0 model. This approach, which fine-tunes a model trained on large datasets like ImageNet, allowed us to reduce training time and improve classification accuracy.

Figure 2. EfficientNetB0 Architecture
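One way to realize the modified output layer described above is to attach a five-class softmax head to a pre-trained EfficientNetB0 backbone. The following is a minimal Keras sketch under stated assumptions (frozen backbone, global average pooling, light dropout); the paper does not publish its exact head configuration.

    # Assumed transfer-learning setup: ImageNet-pretrained EfficientNetB0 backbone
    # with a new 5-class softmax head for Cassava, Longan, Rice, Rubber, Sugarcane.
    import tensorflow as tf

    backbone = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    backbone.trainable = False   # assumption: freeze the backbone before fine-tuning

    # Note: Keras' EfficientNetB0 applies its own input rescaling and expects
    # pixels in [0, 255]; since images were scaled to [0, 1] in Sec. III-B,
    # they are scaled back up here (alternatively, skip that scaling upstream).
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = backbone(inputs * 255.0, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)     # assumed pooling choice
    x = tf.keras.layers.Dropout(0.2)(x)                 # assumed regularization
    outputs = tf.keras.layers.Dense(5, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)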

G. MODEL TRAINING

The dataset was divided into three segments: 70% for training, 15% for validation, and 15% for testing. The training set allowed the model to learn from labeled crop images, while the validation set helped fine-tune the model's hyperparameters and monitor its performance during training. The test set was reserved for evaluating the model's ability to generalize to unseen data.

During training, we set the learning rate to 0.0001, which enabled gradual model updates and stable learning. A batch size of 32 was chosen to balance computational efficiency and memory usage. The model was trained for 20 epochs, which provided enough iterations for the model to learn meaningful features from the data without overfitting.

The categorical cross-entropy loss function was used, which is appropriate for multi-class classification tasks. To optimize the learning process, we used the Adam optimizer, which adjusts the learning rate during training to ensure stable convergence. To avoid overfitting, we implemented early stopping, halting training when the validation performance stopped improving. Additionally, model checkpointing was used to save the model's best weights throughout training.
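Continuing the sketch above, the reported training configuration (Adam, learning rate 0.0001, batch size 32, 20 epochs, categorical cross-entropy, early stopping, and checkpointing) maps onto standard Keras calls. The patience value, monitored quantities, and checkpoint filename are assumptions; the sparse loss variant is used because the loader sketched earlier yields integer labels.

    # Assumed Keras training loop reflecting the hyperparameters reported in the paper.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="sparse_categorical_crossentropy",   # categorical cross-entropy for integer labels
        metrics=["accuracy"])

    callbacks = [
        tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,   # patience assumed
                                         restore_best_weights=True),
        tf.keras.callbacks.ModelCheckpoint("best_efficientnetb0.keras",
                                           monitor="val_accuracy", save_best_only=True),
    ]

    history = model.fit(train_ds, validation_data=val_ds,
                        epochs=20, callbacks=callbacks)   # batch size is fixed in the loader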
H. EVALUATION METRICS

To assess the model's effectiveness, several evaluation metrics were considered:

• Accuracy provided an overall measure of the model's ability to correctly classify crops.

• The confusion matrix allowed us to evaluate the performance of the model for each crop class and identify where it was misclassifying crops.

• Precision, Recall, and F1-score were especially important given the possible class imbalances. These metrics helped us assess how well the model identified crops (precision) and how thoroughly it detected crops in the images (recall). The F1-score offered a balanced measure of both precision and recall.

We also tracked the training and validation loss curves to detect overfitting. This ensured that the model was not simply memorizing the training data but rather generalizing well to new data.
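As one concrete way to produce the confusion matrix, classification report, and loss curves described above, scikit-learn and the Keras training history can be combined; the paper does not state which tooling it relied on, so this is an assumed sketch that builds on the earlier ones.

    # Assumed evaluation sketch: per-class report and confusion matrix on the test set.
    import numpy as np
    from sklearn.metrics import classification_report, confusion_matrix

    CLASS_NAMES = ["Cassava", "Longan", "Rice", "Rubber", "Sugarcane"]

    y_true, y_pred = [], []
    for images, labels in test_ds:                      # iterate once so labels and predictions stay aligned
        probs = model.predict(images, verbose=0)
        y_true.append(labels.numpy())
        y_pred.append(np.argmax(probs, axis=1))
    y_true, y_pred = np.concatenate(y_true), np.concatenate(y_pred)

    print(confusion_matrix(y_true, y_pred))
    print(classification_report(y_true, y_pred, target_names=CLASS_NAMES, digits=4))

    # Accuracy/loss curves come straight from the Keras training history
    train_acc, val_acc = history.history["accuracy"], history.history["val_accuracy"]
    train_loss, val_loss = history.history["loss"], history.history["val_loss"]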

I. CROP AREA ESTIMATION APPROACH

Once the crops were classified, we moved on to estimating their areas within the image. This was done by creating a binary mask using Otsu's thresholding method, which calculates an optimal threshold for distinguishing crop pixels from the background. The thresholding technique converts the image into binary form, where crop pixels are labeled 1 and non-crop areas are labeled 0. Figure 3 shows the Otsu's threshold representation of the aerial image.

Figure 3. Otsu's threshold representation of the aerial image.

To calculate the area of the crops, we applied a pixel-to-area conversion ratio. This ratio was derived by comparing the number of crop pixels in the binary mask to known ground-truth data, which specifies the real-world area corresponding to each pixel. By multiplying the number of crop pixels by the pixel-to-area ratio, we were able to estimate the crop area in square meters. The accuracy of these area estimates was validated by comparing them with the actual ground-truth measurements.

The area of each crop was calculated using the following equation:

Crop Area = Np × Sp

where:
Np = number of pixels in the segmented crop region
Sp = scale factor per pixel (in m²/pixel)

Total Land Coverage Calculation:

By summing the areas of all pixels associated with a given crop type, the total cultivated land for each crop was determined. This automated approach ensures consistency across large datasets and reduces the human error typically associated with manual measurement techniques.

Benefits of the Approach:

• Eliminates the variability and subjectivity inherent in manual area measurements
• Enables rapid processing of extensive aerial data
• Enhances applicability for large-scale, multi-region agricultural assessments

This method not only improves efficiency but also provides a scalable solution for digital agriculture applications.
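A minimal sketch of this area-estimation step, assuming OpenCV for Otsu's thresholding and a known per-pixel scale factor Sp, is given below; the scale value in the example call is purely illustrative.

    # Assumed implementation of Crop Area = Np x Sp using an Otsu binary mask.
    import cv2
    import numpy as np

    def estimate_crop_area_m2(image_path: str, scale_m2_per_pixel: float) -> float:
        """Threshold a classified field image with Otsu's method and return the crop area in m^2."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        # Otsu's thresholding: crop pixels -> 255 (treated as label 1), background -> 0
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        crop_pixels = int(np.count_nonzero(mask))        # Np
        return crop_pixels * scale_m2_per_pixel          # Np x Sp

    # Illustrative call; a scale factor of 0.25 m^2/pixel is an assumed value
    area_m2 = estimate_crop_area_m2("rice_field.jpg", scale_m2_per_pixel=0.25)
    print(f"Estimated cultivated area: {area_m2 / 10_000:.2f} ha")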
J. YIELD ESTIMATION BASED ON AREA AND CROP CLASSIFICATION

Following the area estimation process, the final analytical step involved predicting crop yield based solely on image-derived data. Unlike conventional yield prediction models that incorporate environmental parameters such as rainfall, temperature, or soil characteristics, the proposed approach relies exclusively on the outcomes of crop classification and the corresponding segmented area measurements.

Methodology for Yield Estimation:

1. Crop Identification: Each segmented region within the aerial imagery was assigned a crop label based on the classification results produced by the EfficientNetB0 model. This ensured a direct mapping between crop types and their associated spatial coverage.

2. Area Integration: The cultivated area for each crop type was obtained from the pixel-based area estimation technique, which translated pixel counts into real-world surface area using a defined spatial resolution.

3. Yield Calculation Model: Crop yield was estimated using crop-specific yield factors, which represent the average production output per unit area (e.g., kg/m² or tons/hectare). The estimated yield was computed using the following formula:

Estimated Yield = Ac × Yf

where:
Ac = cultivated area of the crop (m² or ha)
Yf = yield factor (average yield per unit area for the crop)

Advantages of the Proposed Framework:

• Eliminates dependency on external environmental and historical data
• Simplifies the yield estimation pipeline through image-based analysis
• Enables rapid and scalable yield prediction across varied agricultural landscapes

By leveraging classification and segmentation techniques, this method provides a lightweight, automated solution for crop yield forecasting. The approach is particularly advantageous in regions where environmental data may be sparse or unavailable, offering a versatile tool for precision agriculture and large-scale farm management.
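The yield step then reduces to a single multiplication per crop. The sketch below reproduces the paper's rice example of roughly 90.24 ha; the 4.5 t/ha yield factor is not stated in the paper but is implied by the reported 406.07-ton estimate (406.07 / 90.24 ≈ 4.5), and in practice such factors would come from agricultural databases.

    # Estimated Yield = Ac x Yf, with Ac in hectares and Yf in tons per hectare.
    def estimate_yield_tons(area_ha: float, yield_factor_t_per_ha: float) -> float:
        return area_ha * yield_factor_t_per_ha

    # Rice example from the paper: ~90.24 ha at an implied ~4.5 t/ha gives roughly 406 t
    rice_yield = estimate_yield_tons(area_ha=90.24, yield_factor_t_per_ha=4.5)
    print(f"Estimated rice yield: {rice_yield:.2f} tons")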
V. RESULTS AND DISCUSSIONS

K. CROP CLASSIFICATION PERFORMANCE

The deep learning model developed for crop classification achieved excellent performance across all target crop categories. The input dataset consisted of 5,400 aerial images representing five major crops: Cassava, Longan, Rice, Rubber, and Sugarcane. These images were split into training (70%), validation (15%), and testing (15%) sets to ensure balanced model evaluation.

Figure 4. Model Accuracy and Loss

The classification model was built on the EfficientNetB0 architecture, chosen for its lightweight structure and powerful feature extraction capabilities. During training, the model showed rapid convergence, achieving near-perfect accuracy within the first few epochs. This performance reflects its ability to effectively capture the spatial patterns and texture variations characteristic of each crop. The reduced error rates on both the training and validation sets further indicate strong generalization ability and resistance to overfitting.

L. EVALUATION OF CLASSIFICATION ACCURACY

As Figure 4 depicts, the prediction results reveal a consistently high level of accuracy across all crop classes. Rice, in particular, was classified flawlessly, with zero instances of mislabeling. As shown in Figure 5, a small number of misclassifications were observed in the Cassava class, which may be attributed to visual similarities in texture or canopy structure with other crops. However, the frequency of such errors was minimal and did not significantly affect overall model accuracy.

Figure 5. Confusion Matrix

To validate these findings, standard classification metrics, including precision, recall, and F1-score, were computed for each crop. The results demonstrated uniformity across all metrics, with F1-scores approaching 0.99. The macro and weighted averages reinforced the model's balanced performance, confirming its reliability in multi-class classification scenarios. These outcomes suggest the model's potential for robust deployment in diverse agricultural landscapes without the need for crop-specific tuning.

M. PERFORMANCE ACROSS INDIVIDUAL CROP TYPES

An evaluation of individual class performance confirmed the high precision of the model, as shown in Table 1. The classification accuracy for Cassava was 97.17%, while Longan and Rubber each exceeded 99%. Sugarcane showed 98.54% accuracy, and Rice achieved a perfect 100%. These figures reflect the effectiveness of the model in distinguishing subtle visual differences across crop types and environmental backgrounds. Figure 6 presents the classification report of the model and the individual accuracies for each crop.

Table 1. Classification Report

This level of individual class accuracy is particularly valuable in agricultural applications where accurate differentiation between crops is essential for inventory estimation, resource distribution, and strategic planning. The consistent performance across classes also implies that the model can be extended to additional crop types with minimal retraining, making it a scalable solution for broader agricultural mapping tasks.

Figure 6. Classification Report

N. CROP AREA ESTIMATION AND YIELD PREDICTION

Accurate measurement of cultivated land area and yield estimation is critical for precision agriculture, especially in regions where manual data collection is limited. This study leveraged deep learning for classification and used geospatial analysis to estimate the area under cultivation directly from aerial imagery.

After preprocessing and classification, the segmented images were mapped to real-world coordinates using embedded geospatial metadata. Each pixel was converted into actual surface area based on a known pixel-to-metric ratio, as shown in Figure 7. This process allowed the model to handle varying resolutions and imaging conditions. For non-uniform and irregular fields, morphological filtering and image enhancement techniques were applied to refine crop boundaries.

Figure 7. Original image to binary mask image

In one case study, the model analysed an aerial image containing rice fields. As Figure 8 depicts, the resulting area estimate was approximately 90.24 hectares, which closely aligned with manually collected ground-truth data. This level of accuracy supports the system's viability for real-world deployment in crop monitoring, especially in areas where conventional survey methods are time-consuming or infeasible.

For yield prediction, the model employed a straightforward but effective methodology, multiplying the estimated area by standardized yield rates obtained from agricultural databases. In the rice example, the calculated yield was approximately 406.07 tons. This estimate was consistent with known productivity values for similar field conditions, reaffirming the model's applicability for yield forecasting.

Figure 8. Final output of yield prediction

While current predictions are based solely on crop area and average yield values, the framework is designed to be extensible. Future enhancements may include the integration of multispectral vegetation indices, weather data, and soil health indicators to improve prediction accuracy and capture the dynamic nature of crop growth.

O. PRACTICAL IMPLICATIONS AND SCALABILITY

The system presented in this work offers a promising solution for scalable agricultural monitoring and decision support. By integrating deep learning-based classification with geospatial analytics, it enables rapid, automated, and highly accurate crop mapping and yield forecasting. One of the most significant advantages is its ability to operate independently of traditional environmental inputs, such as soil or climate data, making it especially useful in data-scarce regions.

This approach has strong potential for regional and national-level deployment, supporting government agencies, agricultural planners, and private stakeholders in resource allocation, harvest forecasting, and land-use optimization. Additionally, because the framework relies on aerial imagery, it is adaptable to various platforms, from drones for localized analysis to satellites for large-scale monitoring. The combination of technical accuracy, minimal manual intervention, and adaptability to different agricultural contexts makes this system a valuable tool in the push toward digital and precision agriculture.
[2] [H. Li et al., “Development of a 10-m resolution maize
VI. CONCLUSION

The deployment of an EfficientNetB0-based framework for crop classification and yield estimation has shown remarkable efficacy in accurately identifying and quantifying five major agricultural crops (Cassava, Longan, Rice, Rubber, and Sugarcane) through analysis of aerial imagery. In contrast to conventional methods that depend on manual field surveys, this approach harnesses the power of deep learning and remote sensing technologies to deliver a scalable, time-efficient, and cost-effective alternative for precision farming. By capitalizing on the superior feature extraction capabilities of EfficientNetB0, the model achieved high classification accuracy while maintaining computational efficiency, making it well suited for monitoring large agricultural landscapes.

The model's integration with geospatial data enabled accurate measurement of cultivated areas through pixel-based segmentation methods, thereby supporting yield estimation with minimal human intervention. This automated system offers critical insights for stakeholders such as farmers, policymakers, and agronomists, aiding informed decision-making for land utilization, irrigation planning, and crop yield forecasting. The reliance on aerial data collection eliminates the need for physical field inspections, enhancing operational efficiency and reducing resource expenditure.

Additionally, the system's yield estimation functionality, based on the relationship between cultivated area and average yield per hectare, allows for timely and reliable production forecasts. The overall results affirm the model's potential to improve agricultural planning, strengthen food security frameworks, and support efficient supply chain operations through data-driven insights. The scalability and robustness of this system make it particularly beneficial for large-scale agricultural management and governmental monitoring programs.

This study underscores the significant role of integrating deep learning and remote sensing in advancing modern agriculture. The use of EfficientNetB0 not only optimizes image processing and model performance but also paves the way for further development in smart farming technologies. Future adaptations may include the incorporation of climatic data, soil health indicators, and crop phenology to refine yield predictions. Moreover, real-time monitoring using satellite feeds and IoT-based sensors could enhance the system's responsiveness and expand its applicability to diverse agro-ecological contexts.

In summary, this work presents a robust, automated solution for crop classification and yield estimation that aligns with the goals of sustainable agriculture. It serves as a foundation for future innovations in AI-assisted farming, contributing meaningfully to improved resource management, environmental stewardship, and global food production strategies.

REFERENCES

[1] Z. Guo, J. Chamberlin, and L. You, "Smallholder maize yield estimation using satellite data and machine learning," Crop Environment, vol. 2, no. 4, pp. 165–174, 2023.
[2] H. Li et al., "Development of a 10-m resolution maize and soybean map over China: Matching satellite-based crop classification with sample-based area estimation," Remote Sensing of Environment, vol. 294, p. 113623, 2023.
[3] S. Khaki et al., "DeepCorn: A semi-supervised deep learning method for high-throughput image-based corn kernel counting and yield estimation," Knowledge-Based Systems, vol. 218, p. 106874, 2021.
[4] L. Blickensdorfer et al., "Mapping of crop types and crop sequences with combined time series of Sentinel-1, Sentinel-2, and Landsat 8 data for Germany," Remote Sensing of Environment, vol. 269, p. 112831, 2022.
[5] Y. Ma et al., "Corn yield prediction and uncertainty analysis based on remotely sensed variables using a Bayesian neural network approach," Remote Sensing of Environment, vol. 259, p. 112408, 2021.
[6] F. Gao, J. Masek, M. Schwaller, and F. Chen, "Mapping crop progress at field scale using the STARFM algorithm: Integration of Landsat and MODIS time-series data," Remote Sensing of Environment, vol. 202, pp. 198–211, 2017.
[7] J. Zhang, K. Wang, and L. Zhang, "Deep learning for crop classification based on hyperspectral images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 2335–2349, 2020.
[8] M. A. Rahman, M. S. Islam, and M. S. Uddin, "A deep learning approach for crop classification from multispectral imagery," in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), Brussels, Belgium, 2021, pp. 1234–1237.
[9] S. K. Singh, A. K. Singh, and P. Kumar, "Deep learning-based crop classification using Sentinel-2 imagery and EfficientNet," Remote Sensing Applications: Society and Environment, vol. 48, pp. 497–504, 2020.
[10] L. Liu, X. Chen, and Y. Wang, "Deep learning for high-resolution remote sensing-based crop mapping: A comparison of CNN architectures," IEEE Access, vol. 10, pp. 15124–15138, 2020.
[11] R. M. Mohanty, S. K. Behera, and A. K. Mahapatra, "EfficientNet-based transfer learning approach for crop disease classification," in Proc. Int. Conf. Commun., Control Inf. Sci. (ICCIS), Chennai, India, 2021, pp. 1–5.
[12] P. Jiang and H. Li, "A deep learning approach to crop yield prediction based on multi-source satellite and UAV imagery," IEEE Trans. Geosci. Remote Sens., vol. 61, no. 5, pp. 21345–21362, 2023.
[13] G. Zhang, M. Li, and C. Yang, "Integration of remote sensing and deep learning for multi-crop classification and yield estimation," IEEE Trans. Geosci. Remote Sens., vol. 14, pp. 4821–4835, 2023.
[14] M. Vizzari, G. Lesti, and S. Acharki, "Crop classification in Google Earth Engine: Leveraging Sentinel-1, Sentinel-2, European CAP data, and object-based machine-learning approaches," Geo-spatial Information Science, 2024.
[15] Chakhar et al., "Improving the accuracy of multiple algorithms for crop classification by integrating Sentinel-1 observations with Sentinel-2 data," Remote Sensing, vol. 13, no. 24, p. 5173, 2021.
[16] M. Y. Moreno-Revelo, L. Guachi-Guachi, et al., "Enhanced convolutional-neural-network architecture for crop classification," Applied Sciences, vol. 11, no. 24, p. 11847, 2021.
[17] B. Wu, M. Zhang, H. Zeng, and F. Tian, "Challenges and opportunities in remote sensing-based crop monitoring: A review," National Science Review, vol. 10, no. 1, p. nwac248, 2023.
[18] J. M. Yeom, S. Jeong, R. C. Deo, and J. Ko, "Mapping rice area and yield in Northeastern Asia by incorporating a crop model with dense vegetation index profiles from a geostationary satellite," GIScience & Remote Sensing, vol. 58, no. 6, pp. 914–934, 2021.
[19] Y. Shendryk, R. Davy, and P. Thorburn, "Integrating satellite imagery and environmental data to predict field-level cane and sugar yields in Australia using machine learning," Field Crops Research, vol. 261, p. 108014, 2021.
[20] S. A. Shetty et al., "Performance analysis on machine learning algorithms with deep learning model for crop yield prediction," in Data Intelligence and Cognitive Informatics, Singapore: Springer, 2021, pp. 181–196.
[21] Y. Alebele et al., "Estimation of crop yield from combined optical and SAR imagery using Gaussian kernel regression," IEEE J. Sel. Topics Appl. Earth Obs. Remote Sens., vol. 14, pp. 10912–10925, 2021.
[22] Muruganantham et al., "A systematic literature review on crop yield prediction with deep learning and remote sensing," Remote Sensing, vol. 14, no. 1, p. 241, 2022.
[23] C. H. Chang et al., "Hybrid deep neural networks with multi-tasking for rice yield prediction using remote sensing data," Agriculture, vol. 14, no. 2, p. 219, 2024.
[24] H. Deng, W. Zhang, X. Zheng, and H. Zhang, "Crop classification combining object-oriented method and random forest model using unmanned aerial vehicle (UAV) multispectral image," Agriculture, vol. 14, no. 2, p. 204, 2024.
[25] N. Thamaraikannan and S. Manju, "An efficient crop classification in hyperspectral images using optimized neural network architecture," in Proc. 6th Int. Conf. Comput., Commun., Security (ICCCS), 2025.
[26] S. V. Panwar and S. Singh, "A review on crop yield prediction using deep learning," in Proc. Int. Conf. Inventive Syst. Control (ICISC), 2024.
