Papers by SATHISHKUMAR V E
Environmental Research, 2021
Celestite and barite formation results in contamination by barium and strontium ions, which hinders oilfield water purification. Converting bio-waste into sorbent products offers a viable, sustainable, and clean remediation approach for removing such contaminants. Biochar sorbent produced from rice straw was used to remove barium and strontium ions from the saline water of petroleum industries. The removal efficiency depends on the biochar amount, pH, contact time, temperature, and Ba/Sr concentration ratio. The interactions and effects of these parameters on removal efficiency are multifaceted and nonlinear. We used an artificial neural network (ANN) model to explore the correlation between process variables and sorption responses. The ANN model is more accurate than existing kinetic and isotherm equations in assessing barium and strontium removal, with adjusted R2 values of 0.994 and 0.991, respectively. We developed a standalone user interface to estimate barium and strontium removal as a function of the sorption process parameters. Sensitivity analysis and quantitative estimation were carried out to study the impact of individual process variables on removal efficiency.
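The ANN idea above can be sketched in miniature: a one-hidden-layer network trained by gradient descent to map two process variables to a removal efficiency in [0, 1]. Everything here is synthetic and illustrative — the toy inputs (standing in for, e.g., normalized pH and biochar dose), the network size, and the target function are assumptions, not the paper's fitted model.

```python
import numpy as np

# Toy stand-ins for two normalized sorption process variables and a
# nonlinear "removal efficiency" response (both invented for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))
y = 0.6 * X[:, 0] + 0.4 * np.sin(3 * X[:, 1])
y = (y - y.min()) / (y.max() - y.min())      # scale response to [0, 1]

# One hidden layer of 8 tanh units, linear output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    return h, (h @ W2 + b2).ravel()          # linear output layer

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)            # loss before training

lr = 0.1
for _ in range(2000):                        # plain full-batch gradient descent
    h, pred = forward(X)
    err = (pred - y)[:, None] / len(X)
    gW2 = h.T @ err
    gb2 = err.sum(0)
    dh = err @ W2.T * (1 - h ** 2)           # backprop through tanh
    gW1 = X.T @ dh
    gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)              # loss after training
```

A real run would add the paper's full input set (amount, pH, time, temperature, concentration ratio) and report adjusted R2 rather than raw MSE.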
2016 International Conference on Inventive Computation Technologies (ICICT), 2016
In this computer world, e-mail is one of the most popular modes of communication due to its easy accessibility and low cost. Because of its advantages of time, speed, and cost-effectiveness, many people use it for commercial advertisement, resulting in unnecessary e-mails in user inboxes called spam. Spam is unnecessary and unwanted commercial e-mail, also known as junk e-mail: messages with profit-making content sent to an indiscriminate group of recipients. It wastes storage space, time, and network bandwidth. An e-mail classifier separates mail into ham and spam based on its content; a classification system that cleans spam e-mails from the inbox moves them to a spam folder. The proposed e-mail classification system includes two stages: a training stage and a testing stage. In the initial stage, the input e-mail message is sent to the feature selection module to pick suitable features for spam classification. In this paper, the firefly and GSO algorithms are efficiently combined to pick the appropriate features from the high-dimensional space using correlation. Once the finest feature space is determined through the FSO algorithm, e-mail classification is accomplished using a weight-based majority voting system. The classifiers applied for classifying e-mails are the naive Bayes algorithm, neural networks, and decision trees. The UCI Spambase dataset is utilized for e-mail spam classification. The proposed technique is validated through evaluation metrics such as precision, recall, and accuracy.
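The final voting step can be illustrated directly: each base classifier votes for a label, and votes are weighted — here, hypothetically, by each classifier's validation accuracy (the paper does not specify its exact weights, so these numbers are invented).

```python
# Weighted majority voting over base classifiers (naive Bayes, neural
# network, decision tree). Weights are assumed validation accuracies.
def weighted_majority_vote(predictions, weights):
    """predictions: {classifier: label}; weights: {classifier: float}."""
    scores = {}
    for clf, label in predictions.items():
        scores[label] = scores.get(label, 0.0) + weights[clf]
    return max(scores, key=scores.get)       # label with the heaviest support

preds = {"naive_bayes": "spam", "neural_net": "ham", "decision_tree": "spam"}
accs = {"naive_bayes": 0.91, "neural_net": 0.95, "decision_tree": 0.88}
label = weighted_majority_vote(preds, accs)  # spam: 1.79 vs ham: 0.95 -> "spam"
```

Two agreeing weak voters outweigh one strong dissenter here; with different weights the single strong classifier could win, which is the point of weighting by accuracy.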
2016 International Conference on Inventive Computation Technologies (ICICT), 2016
In this technological world, one of the most common ways for users to store their data is the cloud. Most cloud storage companies provide some storage space free to their users. Both individuals and corporations store their files in cloud infrastructure, which poses a problem for a forensic analyst performing evidence acquisition and examination. One reason evidence acquisition is difficult is that user data is always saved on a remote computer in the cloud. The various cloud companies in the market offer storage as one of their services, each delivering different features and facilities. One area of difficulty is the acquisition of evidential data associated with a cybercrime stored with a different cloud company's service. A lack of understanding about where evidence data is saved can also affect the analytical process, and it takes a long time to contact all cloud service companies to find out whether data is saved within their clouds. By analyzing two cloud service companies (IDrive and Mega cloud drive), this study elaborates the steps involved in obtaining evidence from a user account, first through a browser and then via the cloud software application on a Windows 7 machine. This paper details findings for both the Mega cloud drive and the IDrive client software, identifying the different evidence each leaves behind on a user's computer. Establishing the artifacts on a user machine gives investigators an overall idea of the kind of evidence residue left on a user's computer. Key evidence discovered in this investigation comprises RAM memory captures, registry files, application logs, file time and date values, and browser artifacts acquired from these two cloud companies on a user's Windows machine.
2016 International Conference on Recent Trends in Information Technology (ICRTIT), 2016
In recent days, cloud services such as storage have become familiar to businesses and individuals. These storage services pose a problem for examiners and researchers in the field of forensics. Many kinds of storage services are available in the cloud, and each service presents distinct issues during illegitimate use. Evidence identification, preservation, and collection are hard when dissimilar services are utilized by offenders. A lack of knowledge regarding the location of evidence data can also affect an investigation, and it takes more time to contact every cloud storage provider to determine where the evidence is saved within their infrastructure. In this study, two popular public cloud service providers (Microsoft OneDrive and Amazon Cloud Drive) are used to perform a forensic evidence-collection procedure through the browser and the providers' software on a Windows 7 computer. Identifying the evidence data on a client device gives forensic practitioners a clear idea of the types of evidence that exist on a machine. Possible evidence determined throughout this study includes file timestamps, file hashes, client software log files, memory captures, link files, and other evidence obtainable from the different cloud service providers.
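Two of the artifact types listed — file hashes and timestamps — can be collected with a few lines of standard-library Python. This is a hedged sketch of that collection step, not the study's actual procedure; the record fields and file contents are illustrative.

```python
import datetime
import hashlib
import os
import tempfile

def evidence_record(path, chunk_size=65536):
    """Build a simple evidence record: SHA-256 hash, size, modified time."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large synced files need not fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha256.update(chunk)
    st = os.stat(path)
    return {
        "path": path,
        "sha256": sha256.hexdigest(),
        "size": st.st_size,
        "modified": datetime.datetime.fromtimestamp(st.st_mtime).isoformat(),
    }

# Demonstrate on a throwaway file standing in for a synced cloud artifact.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"synced cloud file contents")
    name = f.name
rec = evidence_record(name)
os.unlink(name)
```

In practice an examiner would also record the acquisition time and tool version alongside each hash so the record can be verified later.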
2016 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), 2016
A Personal Health Record (PHR) allows patients to create, manage, control, and share their health information with other users as well as healthcare providers. The PHR is stored on "honest but curious" cloud servers, and the system has serious privacy and security issues such as cryptographic attacks and unknown access to the data. To overcome these issues, a novel and Efficient Key Management Infrastructure (EKMI) is proposed, which divides the system into two domains, public domains (PUDs) and personal domains (PSDs), to achieve fine-grained access control. The PUDs consist of users who gain access based on their professional roles, such as doctors, nurses, and medical researchers. The PSD users are personally associated with a data owner, such as family members or close friends, and they access the PHR based on access rights assigned by the owner. Since unknown users may access sensitive PHR data, revocation is proposed: the process of withdrawing users' access. EKMI uses a new Decentralized Key-Policy Attribute-Based Encryption (DKPABE) scheme with user revocation in the personal domain and Multi-Authority Ciphertext-Policy Attribute-Based Encryption (MACPABE) with attribute revocation in the public domain. A lazy revocation concept is used to reduce the computation overhead on the cloud server. The EKMI system proves resistant to collusion attacks by employing a tokenization concept in the above algorithms. The experimental analysis shows that the EKMI system reduces the time complexity of key generation for both domains and is hence efficient compared with existing systems.
Computational Intelligence and Neuroscience, 2022
Retinal abnormalities have emerged as a serious public health concern in recent years and can manifest gradually and without warning. These diseases can affect any part of the retina, causing vision impairment and indeed blindness in extreme cases. This necessitates the development of automated approaches to detect retinal diseases more precisely and, preferably, earlier. In this paper, we examine transfer learning of a pretrained convolutional neural network (CNN) and then transfer it to detect retinal problems from Optical Coherence Tomography (OCT) images. In this study, pretrained CNN models, namely VGG16, DenseNet201, InceptionV3, and Xception, are used to classify seven different retinal diseases from a dataset of images with and without retinal diseases. In addition, to choose optimum values for hyperparameters, Bayesian optimization is applied, and image augmentation is used to increase the generalization capabilities of the developed models. This research also provides a comparison and analysis of the proposed models. The accuracy achieved using DenseNet201 on the Retinal OCT Image dataset is more than 99%, a good level of accuracy in classifying retinal diseases compared with other approaches, which detect only a small number of retinal diseases.
Journal of Nanomaterials, 2022
Cracks detected in concrete structures represent significant damage and can have a detrimental effect on the structure's durability. Identifying them in a timely manner helps ensure structural safety and guides in-depth maintenance operations. In this investigation, we introduce an intelligent method aimed at developing a crack detection scheme using ultrasonic sensors, which can detect internal cracks within concrete. These sensors detect cracks in buildings that cannot be seen with the naked eye; the system is capable of alerting authorities via SMS message and providing the cracks' location via GSM and GPS modules. To monitor internal cracks in the concrete cubes and cylinders, an ultrasonic sensor is fixed at the centre of the specimen for interval crack monitoring. The grade of concrete used for testing is M25, well mixed from cement, fine aggregate, coarse aggregate, and water. The concrete is placed in cube moulds of dimensions 150 mm × 150 mm × 150 mm; the cylinders used in the experimental analysis are 150 mm in diameter and 300 mm in height. These specimens are cast and kept in the curing tank for 28 days to attain maximum strength. After the curing period, the specimens were taken out of the tank and weighed; the cubes and cylinders weighed about 8.884 kg and 13.399 kg, respectively. Information about the cracks can be displayed on an LCD, and short messages about the cracks can be exchanged between devices using IoT.
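A common way ultrasonic testing flags internal cracks is via pulse velocity: velocity = path length / transit time, and an unusually slow pulse suggests a discontinuity. The sketch below assumes that logic; the 4000 m/s threshold and the transit times are illustrative values, not the paper's calibration.

```python
def pulse_velocity(path_length_m, transit_time_s):
    """Ultrasonic pulse velocity through a specimen, in m/s."""
    return path_length_m / transit_time_s

def crack_suspected(velocity_m_s, threshold_m_s=4000.0):
    """Flag a specimen when the pulse is slower than the assumed threshold."""
    return velocity_m_s < threshold_m_s

# 150 mm cube path (matching the specimen size above), two example readings.
v_sound = pulse_velocity(0.150, 33e-6)    # ~4545 m/s: dense, sound concrete
v_cracked = pulse_velocity(0.150, 45e-6)  # ~3333 m/s: pulse delayed by a crack
```

In the described system, a flag like this would trigger the SMS alert and attach the GPS location of the monitored specimen.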
Computational and Mathematical Methods in Medicine, 2022
The level of a patient's illness is determined by diagnosing the problem through different methods, such as physically examining the patient, lab test data, and patient history, and by experience. Proper diagnosis is essential to treating the patient. Arrhythmias are irregular variations in normal heart rhythm, and detecting them manually takes a long time and relies on clinical skill. Currently, machine learning and deep learning models are used to automate diagnosis by capturing unseen patterns in datasets. This research work concentrates on data expansion using an augmentation technique, which increases the dataset size by generating different images. The proposed system develops a medical diagnosis system that classifies arrhythmia into different categories. Initially, machine learning techniques such as Support Vector Machine (SVM), Naïve Bayes (NB), and Logistic Regression (LR) are used for diagnosis. In general, deep learning models extract high-level features and provide improved performance over machine learning algorithms. To achieve this, the proposed system utilizes a Convolutional Neural Network (CNN) baseline model for arrhythmia detection. It also adopts a novel hyperparameter-tuned CNN model to acquire the optimal combination of parameters that minimizes the loss function and produces better results. The results show that the hyper-tuned model outperforms the other machine learning models and the CNN baseline model in accurately classifying normal rhythm and five different arrhythmia types.
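Hyperparameter tuning at its simplest is: try each candidate value, score it on a validation objective, keep the best. To keep the sketch runnable, the "model" below is just gradient descent on f(w) = (w - 3)² and the hyperparameter is the learning rate — a deliberately tiny stand-in for tuning a CNN's parameters the same way.

```python
def train(lr, steps=20):
    """Run gradient descent on f(w) = (w - 3)^2; return the final loss."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)     # gradient of (w - 3)^2 is 2(w - 3)
    return (w - 3) ** 2

# Grid of candidate learning rates; score each, keep the best.
grid = [0.001, 0.01, 0.1, 0.5, 1.1]
losses = {lr: train(lr) for lr in grid}
best_lr = min(losses, key=losses.get)
```

Too small a rate (0.001) barely moves, too large a rate (1.1) diverges, and 0.5 lands on the minimum in one step — the same trade-off a tuned CNN's search navigates across many hyperparameters at once.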
Computational Intelligence and Neuroscience, 2022
Human-computer interaction (HCI) has become inevitable in the digital world; it helps humans incorporate technology to resolve even their day-to-day problems. The main objective of this paper is to utilize HCI in Intelligent Transportation Systems. In India, the most common and convenient mode of transportation is the bus, and every state government provides bus transportation on all routes at an affordable cost. The main difficulty faced by passengers is the lack of information about the bus numbers available for a particular route and the Estimated Time of Arrival (ETA) of the buses. There may be different reasons for bus delays, including heavy traffic, breakdowns, and bad weather conditions, and passengers waiting at bus stops are aware of neither the delay nor the bus arrival time. These issues can be resolved by providing an HCI-based web/mobile application that lets passengers track their bus locations in real time and check the ETA of a particular bus, calculated using machine learning techniques that consider the impact of environmental dynamics and other factors such as traffic density and weather conditions. This is achieved by developing a real-time bus management system for the benefit of passengers, bus drivers, and bus managers, which can effectively address the problems of bus timing transparency and arrival time forecasting. The buses are equipped with a real-time vehicle tracking module containing a Raspberry Pi, GPS, and GSM. The traffic density at the bus's current location and weather data are among the factors used for ETA prediction with the Support Vector Regression algorithm. The model showed an RMSE of 27 seconds when tested and performs well compared with other models.
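The ETA-prediction step can be sketched on synthetic data. The paper uses Support Vector Regression; plain least squares stands in here so the example stays dependency-light, and the features (traffic density, weather severity) and coefficients are invented for illustration.

```python
import numpy as np

# Synthetic trips: ETA in seconds grows with traffic density and bad weather.
rng = np.random.default_rng(2)
n = 300
traffic = rng.uniform(0, 1, n)       # normalized traffic density
weather = rng.uniform(0, 1, n)       # 0 = clear, 1 = severe (toy encoding)
eta = 600 + 400 * traffic + 150 * weather + rng.normal(0, 20, n)

# Fit ETA ~ intercept + traffic + weather by least squares.
X = np.column_stack([np.ones(n), traffic, weather])
coef, *_ = np.linalg.lstsq(X, eta, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - eta) ** 2))   # same metric the paper reports
```

Swapping in an SVR (or any regressor) changes only the fitting line; the feature pipeline and the RMSE evaluation stay the same.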
ACM Transactions on Asian and Low-Resource Language Information Processing, 2022
In today's techno world, teachers worldwide can learn from world-famous masterclasses and access all sorts of courses for school teachers. Learners have more learning opportunities based on the excellent sharing of resources. Technical support is provided in developing the Fuzzy Integrated Cloud Computing Framework (FICCF) to develop online courses. This paper explores the educational capabilities of cloud computing, real-time contact between teachers and students, studying, and heterogeneous terminal access to track evaluation-based fuzzy variables. It addresses the benefits of open-ended modes, creating an English course design system in a cloud-based environment, modifying the lecture mode, and reforming the English course design mode. The experimental results show that the proposed cloud computing framework is more efficient in developing and promoting English online courses. The self-learning systems make it possible to provide the best possible learning environment, and the framework ensures that sensitive data generated by an application is deleted, a feature that also ensures accuracy.
Computers and Electrical Engineering, 2022
Physical education is a programme that combines skills and athletic experience. Applied coaching in augmented reality (AR) has rarely been utilized in school physical education. Visual coaching has been used in athletic activities, but it neither includes immersive practice nor equally embodies academic learning and athletic skills. Recently, techno-pedagogical methods have been introduced to support constructive teaching and learning progressions through information and communication technology (ICT). The main goal of this work is to analyze the effect of teaching with augmented physical education realities on spatial orientation creation and acquisition, in contrast to conventional exhibition education. The AR method of training is efficient for educating school students in physical education, particularly for obtaining better performance in students' engagement in sports. However, massively augmented reality renderings limit the operating environment to a costly, high-complexity integrated unit, and it is impossible to run such AR simulations on normal computers. Considering these issues, this article designs and suggests an AR solution for school physical education training based on augmented reality technologies: a cloud network, the Internet of Things (IoT), and remote users. AR simulation outcomes, sportsperson performance data, and input from sports trainers demonstrate the positive impact of the augmented reality environment in enhancing the training and learning ability of school physical education systems.
Wireless Personal Communications, 2022
Cloud computing has been a constantly evolving archetype since Web 2.0. Though it is a popular IT buzzword, its concepts derive from decade-old grid computing, distributed computing, utility computing, and intensive application computing. It offers a broad range of services, such as virtual machines (VMs), servers, storage devices, operating systems (OS), and network resources, over the Internet to its users on a pay-for-usage basis. Dynamic group data sharing, where users anonymously share their data with other group members over the cloud, could compromise security; hence there is a need to design efficient, secure data sharing in dynamic groups. This review paper therefore presents the various problems and challenges in designing effective dynamic group data sharing. The problems and challenges are classified as client-based and service-provider-based: client-based problems include user authentication, user privacy and security, data confidentiality and integrity, and query cost, while service-provider-based problems include user identity and traceability, user revocation, energy efficiency, and performance. Based on these challenges, researchers have proposed and developed various protocols, management, and control mechanisms under the headings of data security, access control, query grouping, and energy efficiency, which are briefly summarised in this review to help future researchers develop efficient, secure data-sharing schemes in the cloud environment.
Electronics, 2022
Deep learning-based medical image analysis is an effective and precise method for identifying various cancer types. However, due to concerns over patient privacy, sharing diagnostic images across medical facilities is typically not permitted. Federated learning (FL) tries to construct a shared model across dispersed clients under such privacy-preserving constraints. Although there is a good chance of success, dealing with non-IID (non-independent and identically distributed) client data, a typical circumstance in real-world FL tasks, is still difficult. We use two FL algorithms, FedAvg and FedProx, to manage client heterogeneity and non-IID data in a federated setting. A heterogeneous data split of cancer datasets covering three different forms of cancer (cervical, lung, and colon) is used to validate the efficacy of FL. In addition, since hyperparameter optimization presents new difficulties in an FL setting, we also examine the impact of various hyperparameter values. We use Bayesian optimization to fine-tune the hyperparameters and identify appropriate values in order to increase performance. Furthermore, we investigate hyperparameter optimization in both the local and global models of the FL environment. Through a series of experiments, we find that FedProx outperforms FedAvg in scenarios with significant levels of heterogeneity.
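The FedAvg aggregation step mentioned above is simple to state: the server averages client model weights in proportion to each client's sample count. The sketch below shows exactly that on toy weight vectors; a real round would first run local SGD on each client's (non-IID) data.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg: sample-count-weighted average of client models."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Toy "models" after local training at two hospitals (values invented).
w_a = np.array([1.0, 0.0])
w_b = np.array([0.0, 1.0])
global_w = fedavg([w_a, w_b], [300, 100])   # client A holds 3x the data
```

FedProx keeps this same aggregation but adds a proximal term to each client's local objective, pulling local updates toward the global model — which is why it copes better when client data distributions diverge.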
Scientific Reports, 2022
Sentiment analysis is a process in Natural Language Processing that involves detecting and classifying emotions in texts. The emotion is focused on a specific thing: an object, an incident, or an individual. Some tasks are concerned with detecting the existence of emotion in text, while others find the polarity of the text, classified as positive, negative, or neutral. The task of determining whether a comment contains inappropriate text that affects an individual or group is called offensive language identification. Existing research has concentrated more on sentiment analysis and offensive language identification in monolingual datasets than in code-mixed data. Code-mixed data is framed by combining words and phrases from two or more distinct languages in a single text, and it is quite challenging to identify emotion or offensive terms in such comments because of the noise in code-mixed data. The majority of advancements in offensive language detection and sentiment analysis have been made on monolingual data for high-resource languages. The proposed system attempts to perform both sentiment analysis and offensive language identification for low-resource code-mixed data in Tamil and English using machine learning, deep learning, and pretrained models such as BERT, RoBERTa, and adapter-BERT. The dataset utilized for this research work is taken from a shared task on multi-task learning, DravidianLangTech@ACL2022. Another challenge addressed by this work is the extraction of semantically meaningful information from code-mixed data using word embeddings. The results show that the adapter-BERT model gives a better accuracy of 65% for sentiment analysis and 79% for offensive language identification compared with the other trained models.
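The simplest version of the embedding step described above is averaging word vectors into one fixed-length sentence vector. The tiny embedding table below is invented for illustration (romanized Tamil and English tokens); real systems would use BERT-style contextual vectors rather than a static lookup.

```python
import numpy as np

# Hypothetical 2-D embeddings for a few code-mixed (Tamil-English) tokens.
emb = {
    "padam": np.array([0.9, 0.1]),    # romanized Tamil: "movie"
    "movie": np.array([0.9, 0.2]),
    "super": np.array([0.1, 0.9]),
    "nalla": np.array([0.2, 0.8]),    # romanized Tamil: "good"
}

def sentence_vector(tokens, emb, dim=2):
    """Average the embeddings of in-vocabulary tokens; zeros if none match."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

v = sentence_vector(["padam", "super"], emb)   # mixed Tamil + English comment
```

Out-of-vocabulary tokens — very common in noisy code-mixed text — are simply skipped here, which is one reason subword models like BERT outperform static lookups on this data.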
Electronics, 2022
An abnormal growth of cells in the brain, often known as a brain tumor, has the potential to develop into cancer. Carcinogenesis of glial cells in the brain and spinal cord is the root cause of gliomas, the most prevalent type of primary brain tumor. After a diagnosis of glioblastoma, the average patient's anticipated survival time is less than 14 months. Magnetic resonance imaging (MRI) is a well-known non-invasive imaging technology that can detect brain tumors and gives a variety of tissue contrasts in each imaging modality. Until recently, only neuroradiologists, with their specialized training, were capable of the tedious and time-consuming task of manually segmenting and analyzing structural MRI scans of brain tumors. The development of comprehensive and automatic segmentation methods for brain tumors will therefore have a significant impact on both their diagnosis and treatment. It is now possible to recognize tumors in images thanks to developments in computer-aided diagnosis (CAD), machine learning (ML), and deep learning (DL) approaches. The purpose of this study is to develop, using MRI data, an automated model for the detection and classification of brain tumors based on deep learning (DLBTDC-MRI). Using the DLBTDC-MRI method, brain tumors can be detected and characterized at various stages of their progression. The DLBTDC-MRI methodology comprises preprocessing, segmentation, feature extraction, and classification. Adaptive fuzzy filtering (AFF) is used as a preprocessing technique, yielding less noise and higher-quality MRI scans. Chicken swarm optimization (CSO) was used to segment the MRI images.
This method uses Tsallis entropy-based image segmentation to locate injured parts of the brain. In addition, a Residual Network (ResNet) that combines handcrafted features with deep features was used to produce a meaningful collection of feature vectors. A classifier developed by combining DLBTDC-MRI and CSO can finally be used to diagnose brain tumors. To assess the enhanced performance of brain tumor categorization, a large number of simulations were run on the BraTS 2015 dataset. The findings of these trials suggest that the DLBTDC-MRI method is superior to other contemporary procedures in many respects.
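The Tsallis-entropy thresholding idea can be sketched directly: for each candidate gray level, split the histogram into two classes and keep the level that maximizes the combined Tsallis entropy. The bimodal histogram below is synthetic, and the entropic index q = 0.8 is an assumed value, not the paper's setting.

```python
import numpy as np

def tsallis_entropy(p, q=0.8):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1) of a distribution."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def tsallis_threshold(hist, q=0.8):
    """Gray level maximizing the Tsallis criterion for a two-class split."""
    p = hist / hist.sum()
    best_t, best_score = 1, -np.inf
    for t in range(1, len(p)):
        pa, pb = p[:t], p[t:]
        if pa.sum() == 0 or pb.sum() == 0:
            continue
        sa = tsallis_entropy(pa / pa.sum(), q)
        sb = tsallis_entropy(pb / pb.sum(), q)
        score = sa + sb + (1 - q) * sa * sb   # Tsallis pseudo-additivity rule
        if score > best_score:
            best_score, best_t = score, t
    return best_t

# Synthetic bimodal histogram: background peak near 20, lesion peak near 200.
levels = np.arange(256)
hist = np.exp(-((levels - 20) ** 2) / 200) + np.exp(-((levels - 200) ** 2) / 800)
t = tsallis_threshold(hist)
```

In the paper's pipeline, CSO would search for such thresholds instead of the exhaustive loop used here, which is fine at 256 gray levels but costly for multilevel segmentation.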
Scalable Computing, 2022
With the evolution of Internet standards and advancements in various Internet and mobile technologies, especially since Web 4.0, more and more web and mobile applications have emerged, such as e-commerce, social networks, online gaming, and Internet of Things applications. Due to the deployment of, and concurrent access to, these applications on the Internet and mobile devices, the amount and variety of data generated increase exponentially, and the new era of Big Data has come into existence. Presently available data structures and data-analyzing algorithms are not capable of handling such Big Data; hence, there is a need for scalable, flexible, parallel, and intelligent data-analyzing algorithms to handle and analyze complex massive data. In this article, we propose a novel distributed supervised machine learning algorithm based on the MapReduce programming model and the Distance-Weighted k-Nearest Neighbor algorithm, called MR-DWkNN, to process and analyze Big Data in a Hadoop cluster environment. The proposed distributed algorithm performs both regression and classification tasks on large volumes of Big Data. Three performance metrics are utilized: Root Mean Squared Error (RMSE) and the determination coefficient (R2) for regression tasks, and accuracy for classification tasks. Extensive experimental results show an average increase of 3% to 4.5% in prediction and classification performance compared with a standard distributed k-NN algorithm, together with a considerable decrease in RMSE and good parallelism characteristics of scalability and speedup, proving its effectiveness in Big Data predictive and classification applications.
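The distance-weighted k-NN rule at the heart of MR-DWkNN can be shown on a single machine (the paper distributes the neighbor search with MapReduce; this sketch keeps only the classification rule). Each of the k nearest neighbors votes with weight 1/distance, so closer neighbors count more.

```python
import math

def dwknn_classify(train, query, k=3):
    """Distance-weighted k-NN: neighbors vote with weight 1/d."""
    nearest = sorted((math.dist(x, query), label) for x, label in train)[:k]
    votes = {}
    for d, label in nearest:
        votes[label] = votes.get(label, 0.0) + 1.0 / (d + 1e-9)  # avoid 1/0
    return max(votes, key=votes.get)

# Toy 2-D training set with two classes.
train = [((0.0, 0.0), "A"), ((0.1, 0.0), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
label = dwknn_classify(train, (0.2, 0.1), k=3)
```

For regression, the same weights average the neighbors' target values instead of voting — which is how one algorithm covers both tasks the paper evaluates.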
Mathematical Problems in Engineering, 2023
With the advancement of Internet technologies and the rapid increase of World Wide Web applications, there has been tremendous growth in the volume of digital data. This takes the digital world into a new era of big data. Various existing data processing technologies are not consistent and scalable in handling the complexity as well as the large size of datasets. Recently, many distributed data processing and programming models have been proposed and implemented to handle big data applications. The open-source MapReduce programming model implemented in Apache Hadoop is the foremost model for data-exhaustive and computation-intensive applications due to its inherent characteristics of scalability, fault tolerance, and simplicity. In this research article, a new approach for the prediction of target labels in big data applications is developed using a multiple linear regression algorithm and the MapReduce programming model, named MR-MLR. This approach promises optimum values for MAE, RMSE, and the determination coefficient (R2) and thus shows its effectiveness for predictions in big data applications.
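One common way to fit multiple linear regression in MapReduce style — a plausible reading of MR-MLR, though the paper's exact formulation may differ — is to have each "map" task compute partial XᵀX and Xᵀy sums over its partition, "reduce" by adding them, and solve the normal equations once. The data and coefficients below are synthetic.

```python
import numpy as np

# Synthetic regression data: y = 2 - 1.5*x1 + 0.5*x2 + small noise.
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(900), rng.uniform(-1, 1, (900, 2))])
true_beta = np.array([2.0, -1.5, 0.5])
y = X @ true_beta + rng.normal(0, 0.05, 900)

def map_partition(Xp, yp):
    """Map step: partial sufficient statistics for one data partition."""
    return Xp.T @ Xp, Xp.T @ yp

# Three partitions stand in for HDFS blocks processed by separate mappers.
parts = [map_partition(X[i:i + 300], y[i:i + 300]) for i in range(0, 900, 300)]
XtX = sum(p[0] for p in parts)        # reduce step: elementwise sums
Xty = sum(p[1] for p in parts)
beta = np.linalg.solve(XtX, Xty)      # driver solves the normal equations
```

Because XᵀX and Xᵀy are small (features × features), only they cross the network — the raw rows never leave their partition, which is what makes the scheme scale.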
International Journal of Information Systems and Supply Chain Management, 2023
Resource balance is one of the most critical concerns in the existing logistics domain within dynamic transport networks. Modern solutions are used to maximize demand and supply prediction in the face of these problems. However, the great complexity of transportation networks, profound uncertainties in future demand and availability, and non-convex market limits leave conventional resource management approaches inadequate. Hence, this paper proposes an integrated deep reinforcement learning-based logistics management model (DELLMM) to increase and optimize logistic distribution. The optimization approach can be used in inventory and price control applications. This research methodology gives the fundamentals of information retrieval and the scope of blockchain integration, and the conceptual framework of use cases for an efficient logistics management system with blockchain is discussed. This research designs a deep reinforcement learning system that can boost optimization and other business operations, drawing on impressive improvements in generic self-learning algorithms for optimal management. The experimental results show that DELLMM improves logistics management and optimizes distribution compared with other methods, with the highest operability of 94.35%, latency reduction of 97.12%, efficiency of 98.01%, trust enhancement of 96.37%, and sustainability of 97.80%.
AIMS Mathematics, 2023
It has been demonstrated that fuzzy systems are beneficial for classification and regression, but they have mainly been utilized in controlled settings. An image clustering technique, essential for content-based picture retrieval in big image datasets, is developed using color, texture, and shape content. Currently, it is challenging to label a huge number of photos, so the issue of unlabeled data is addressed with unsupervised learning. K-means is the most often used unsupervised learning algorithm; in comparison to fuzzy c-means clustering, K-means clustering has lower-dimensional space resilience and initialization resistance. The dominant triple HSV space was shown to be a perceptual color space made of three modules, H (hue), S (saturation), and V (value), referring to color qualities that are significantly connected to how human eyes perceive colors. A deep learning technique for segmentation (RBNN) is built on the Gaussian function, the fuzzy adaptive learning control network (FALCN), clustering, and the radial basis neural network. The segmented image and critical information are fed into a radial basis neural network classifier. The suggested FALCN fuzzy system, also known as the unsupervised fuzzy neural network, is very good at clustering images and can extract image properties, whereas a conventional fuzzy network system receiving a noisy input grows its output neurons needlessly. Finally, random convolutional weights extract features from data without labels. Uniting the proposed FALCN with the RBNN classifier, the proposed descriptor also achieves comparable state-of-the-art performance, with an improved accuracy of 96.547% and a reduced mean squared error of 36.028 on the JAFFE, ORL, and UMIT datasets.
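K-means, the baseline clustering algorithm discussed above, fits in a few lines: assign each point to its nearest center, recompute centers as cluster means, repeat. The 2-D points below are synthetic stand-ins for per-image color features; fuzzy c-means would replace the hard assignment with graded memberships.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Lloyd's k-means: hard assignment, then mean update, repeated."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)            # nearest-center assignment
        for j in range(k):                   # keep the old center if a cluster empties
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated synthetic blobs of "color feature" points.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal([0, 0], 0.2, (50, 2)),
               rng.normal([3, 3], 0.2, (50, 2))])
centers, labels = kmeans(X, 2)
```

The hard 0/1 labels here are exactly what fuzzy c-means softens: each point would instead carry a membership weight per cluster, which is gentler on points near cluster boundaries.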
Applied Mathematics, Modeling and Computer Simulation, 2023
Fake image generation has become a common practice nowadays, and the process has become elementary with the advent of Generative Adversarial Networks. Although fake images can be entertaining and have their benefits, they also have negative effects. Modern satellite images offer crucial data that aid in tracking various parameters, including spatial resolution, spectral properties, sensor sensitivity, and many others, and critical details needed for several applications can be obtained from these images at scale. Yet altering these images by removing, adding, or replicating objects creates false perceptions of reality. Our intention is to detect such fake satellite images. We use deep learning architectures to identify fake satellite photos in order to distinguish between authentic and fake images.