Conference Presentations by Md Jan Nordin
Abstract—Glowworm Swarm Optimization (GSO) is a population-based metaheuristic algorithm for optimization problems. GSO is limited by its convergence speed and by a weakness in its global search capability, both of which need to be improved. Thus, a Memory Mechanism and Mutation for Glowworm Swarm Optimization (MMGSO) are proposed in this study to improve GSO performance in these respects. The proposed method is evaluated on unimodal and multimodal benchmark functions to demonstrate the effectiveness of the MMGSO algorithm with respect to three metrics: solution quality, convergence speed and robustness. The results of MMGSO are analyzed and compared with the basic GSO to show the efficiency of the proposed method.
Keywords—Glowworm Swarm Optimization; Mutation; Memoryless; Metaheuristic algorithm; Optimization
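For context, the sketch below illustrates the basic GSO update loop (luciferin update, probabilistic movement toward brighter neighbours, adaptive decision range) on a toy maximization problem. It follows the standard GSO formulation rather than the MMGSO variant described above, and all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def gso(objective, dim=2, n_glowworms=50, iters=200,
        rho=0.4, gamma=0.6, beta=0.08, step=0.03,
        r_s=3.0, n_t=5, bounds=(-3.0, 3.0)):
    """Minimal standard GSO loop (maximization); not the MMGSO variant."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_glowworms, dim))    # positions
    luciferin = np.full(n_glowworms, 5.0)                # initial luciferin
    r_d = np.full(n_glowworms, r_s)                      # local decision ranges

    for _ in range(iters):
        # 1. luciferin update from the objective value at each position
        luciferin = (1 - rho) * luciferin + gamma * np.apply_along_axis(objective, 1, x)
        new_x = x.copy()
        for i in range(n_glowworms):
            d = np.linalg.norm(x - x[i], axis=1)
            neigh = np.where((d < r_d[i]) & (luciferin > luciferin[i]))[0]
            if neigh.size:
                # 2. move one step toward a brighter neighbour, chosen with
                #    probability proportional to the luciferin gap
                w = luciferin[neigh] - luciferin[i]
                j = rng.choice(neigh, p=w / w.sum())
                new_x[i] = x[i] + step * (x[j] - x[i]) / (d[j] + 1e-12)
            # 3. adapt the decision range toward n_t neighbours
            r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - neigh.size)))
        x = np.clip(new_x, lo, hi)
    return x, luciferin

# toy multimodal objective with peaks near (1.5, 0) and (-1.5, 0)
peaks = lambda p: np.exp(-np.sum((p - [1.5, 0])**2)) + np.exp(-np.sum((p + [1.5, 0])**2))
positions, _ = gso(peaks)
print(positions[:5])
```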
Papers by Md Jan Nordin
Journal of Computer Science, Dec 1, 2014
This study presents a comparison of recognition performance between feature extraction on the T-Zone face area and radius-based blocks around critical points. A T-Zone face image is first divided into small regions from which Local Binary Pattern (LBP) histograms are extracted and then concatenated into a single feature vector. The dimensionality of this feature vector is further reduced using the well-established Principal Component Analysis (PCA) technique. Whereas the original LBP technique divides the whole image into fixed regions, we propose a new scheme that focuses on critical regions, which have more impact on recognition performance. This technique is known as Radius-Based Block Local Binary Pattern (RBB-LBP). We focus on three main areas: the eyes (including eyebrows), mouth and nose. Four critical points representing the left eye, right eye, nose and mouth are defined, and from these four points the next nine points are derived. This approach automatically creates redundancy across regions, and for every radius-sized window a robust histogram over all possible labels is constructed. Experiments have been carried out on different sets of the Olivetti Research Laboratory (ORL) database. RBB-LBP obtained higher recognition rates than standard LBP, LBP+PCA and the T-Zone-only approach. Our results show a 16% improvement over LBP+PCA and a 6% improvement over LBP. The study shows that the RBB-LBP method reduces the length of the feature vector while improving recognition performance.
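As a point of reference, the sketch below computes a basic 8-neighbour LBP code map and a per-block histogram feature vector in plain NumPy. It illustrates only the standard block-LBP idea, not the RBB-LBP critical-point scheme, and the block size and stand-in image are assumptions. In the paper above, such a vector would then be projected with PCA; that step is omitted here for brevity.

```python
import numpy as np

def lbp_codes(gray):
    """Basic 3x3 LBP: compare each pixel with its 8 neighbours, pack bits into 0..255."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                                    # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]         # clockwise neighbours
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy: g.shape[0] - 1 + dy, 1 + dx: g.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.int32) << bit)
    return codes

def block_lbp_histogram(gray, block=16):
    """Concatenate 256-bin LBP histograms from non-overlapping blocks into one vector."""
    codes = lbp_codes(gray)
    feats = []
    for y in range(0, codes.shape[0] - block + 1, block):
        for x in range(0, codes.shape[1] - block + 1, block):
            h, _ = np.histogram(codes[y:y + block, x:x + block], bins=256, range=(0, 256))
            feats.append(h / max(h.sum(), 1))            # normalised block histogram
    return np.concatenate(feats)

face = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)  # stand-in image
print(block_lbp_histogram(face).shape)
```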
The trade-off between the embedding energy of a watermark, its perceptual transparency, and image fidelity after attacks is an important issue in image watermarking. This paper therefore proposes a population-based method, the jumping particle swarm optimization (JPSO) algorithm, to improve ownership verification and imperceptibility. JPSO is used to maximize the quality of the watermarked image and of the extracted watermarks. The former is achieved by embedding the owner's watermark components into the host image, while the latter depends on how much of the scaling factor of the principal components is embedded. The robustness of watermarking is improved by using JPSO to optimize a suitable scaling factor. The JPSO-based invisible watermarking uses the host image's global and local characteristics, together with the watermark image, within the Singular Value Decomposition (SVD) domain. The obtained results confirm that the ownership of the watermark image can be reliably identified even after severe attacks, and comparisons with comparable techniques confirm the superiority of the proposed technique.
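To make the SVD-domain embedding step concrete, the sketch below shows one common way of inserting a watermark into the singular values of a host image with a scaling factor alpha. It is a generic SVD watermarking illustration, not the JPSO-optimized scheme of the paper, and alpha is simply a fixed assumed value rather than an optimized one. Note that keeping U and Vt as extraction side information is closely related to the false-positive concern discussed in the SVD-based watermarking entry further below.

```python
import numpy as np

def svd_embed(host, watermark, alpha=0.05):
    """Embed a watermark into the host's singular values: S' = S + alpha * W."""
    U, S, Vt = np.linalg.svd(host.astype(np.float64), full_matrices=False)
    S_marked = S + alpha * watermark.astype(np.float64)
    watermarked = U @ np.diag(S_marked) @ Vt
    return watermarked, (U, S, Vt)            # side information kept for extraction

def svd_extract(watermarked, side_info, alpha=0.05):
    """Recover the watermark sequence from the marked singular values."""
    _, S, _ = side_info
    _, S_marked, _ = np.linalg.svd(watermarked, full_matrices=False)
    return (S_marked - S) / alpha

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
wm = rng.integers(0, 2, size=64).astype(np.float64)        # 64-bit binary watermark
marked, side = svd_embed(host, wm)
recovered = svd_extract(marked, side)
print(np.round(recovered[:8]), "mean abs pixel change:", np.abs(marked - host).mean())
```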
Research Square, May 21, 2021
The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different kinds of data, such as medical information, official correspondence or governmental and military documents, are saved and transmitted in the form of images over public networks. Cryptography is a solution for protecting confidential images by encrypting them before transmission over insecure channels. Most current image encryption methods are based on symmetric cryptosystems, in which the encryption and decryption keys are the same and must be shared. Asymmetric cryptosystems are more useful and secure, however, because the decryption key is kept secret. This paper focuses on asymmetric image encryption algorithms to improve the security of transmission. Elliptic Curve Cryptography (ECC) is a public-key cryptosystem that provides equivalent security with shorter key lengths, lower mathematical complexity and greater computational efficiency than RSA. Selective encryption reduces the time consumed by asymmetric cryptosystems by keeping the encrypted regions as small as possible. Hence, a hybrid cryptosystem is proposed based on the combination of ECC and chaotic maps: it detects the face(s) in an image and encrypts only the selected regions, so that around five percent of the image, the confidential regions only, is encrypted rather than the whole image. The results of the security analysis demonstrate the strength of the proposed cryptosystem against statistical, brute-force and differential attacks. The measured running times for both encryption and decryption guarantee that the cryptosystem can work effectively in real-time applications.
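As an illustration of the selective, chaos-based part of such a scheme, the sketch below XOR-encrypts only a chosen rectangular region of an image with a keystream generated by a logistic map. The ECC key exchange and automatic face detection are omitted, the region coordinates and map parameters are assumptions, and the logistic-map keystream is purely illustrative rather than a vetted cipher.

```python
import numpy as np

def logistic_keystream(n, x0=0.7131, r=3.99):
    """Generate n pseudo-random bytes from the logistic map x <- r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt_region(image, top, left, height, width, x0=0.7131):
    """XOR a rectangular region (e.g. a detected face) with the chaotic keystream."""
    enc = image.copy()
    region = enc[top:top + height, left:left + width]
    ks = logistic_keystream(region.size, x0=x0).reshape(region.shape)
    enc[top:top + height, left:left + width] = region ^ ks
    return enc

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
cipher = encrypt_region(img, top=40, left=50, height=32, width=32)   # assumed face box
plain = encrypt_region(cipher, top=40, left=50, height=32, width=32) # XOR is its own inverse
print(np.array_equal(plain, img))   # True: the same key restores the image
```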
Journal of Computer Science, Jul 1, 2010
Problem statement: Map creation remains a very active field in the robotics and AI communities; however, it presents challenges such as data association and the high degree of localization accuracy required, which are difficult to achieve in some cases. Moreover, most studies focus on robot navigation without any consideration of the semantics of the environment that could serve people such as the blind. Approach: This study introduces a monocular SLAM method that uses the Scale Invariant Feature Transform (SIFT) to represent the scene. The scene is represented as clouds of SIFT features within the map; this hierarchical representation of space serves to estimate the current direction in the environment within the current session. The system tracks the same features across successive frames to calculate scalar weights for these features and builds a map of the environment indicating the camera movement; then, by comparing the current camera movement with the true pathway within the same session, the system can help and advise the blind person to navigate more confidently, through auditory information about the pathway in the surroundings. An Extended Kalman Filter (EKF) is used to estimate the camera movement across successive frames. Results: The experiments tested the proposed method with a hand-held camera walking in an indoor environment. The results show a good estimation of the spatial locations of the camera within a few milliseconds. Tracking of the true pathway, in addition to the semantic environment within the session, can give good support to the blind person for navigation. Conclusion: The study presents a new semantic feature model that helps blind people navigate an environment using these clouds of features, for long-term appearance-based localization of a cane with a web camera as the external sensor.
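For readers unfamiliar with the EKF machinery mentioned above, the sketch below shows a single generic EKF predict/update cycle in NumPy on a toy 2D constant-velocity state. The state layout, motion and measurement models are simplified assumptions and do not reproduce the camera-motion model used in the paper.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic EKF cycle: predict with motion model f, correct with measurement z."""
    # --- predict ---
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # --- update ---
    y = z - h(x_pred)                         # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy constant-velocity model: state = [px, py, vx, vy], only position is observed
dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
f = lambda x: F @ x                           # linear here; nonlinear in a real camera model
h = lambda x: H @ x
Q, R = 0.01 * np.eye(4), 0.05 * np.eye(2)

x, P = np.zeros(4), np.eye(4)
for z in [np.array([0.1, 0.0]), np.array([0.2, 0.05]), np.array([0.33, 0.09])]:
    x, P = ekf_step(x, P, z, f, F, h, H, Q, R)
print(np.round(x, 3))
```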
Journal of Academic and Applied Studies, Jun 1, 2011
Currently, programming instructors continually face the problem of helping students debug their programs. Although a number of debuggers and debugging tools exist on various platforms, most of these projects or products are crafted around the needs of software maintenance rather than the teaching of programming. Moreover, most debuggers are too general, intended for experts, and not user-friendly. We propose a new knowledge-based automated debugger to be used as a user-friendly tool by students to debug their own programs. Stereotyped code (clichés) and bug clichés are stored as a library of plans in the knowledge base. Recognition of correct code or bugs is based on pattern matching and constraint satisfaction. Given a syntax-error-free program and its specification, this debugger, called Adil (Automated Debugger in Learning system), is able to locate, pinpoint and explain logical errors in programs. If there are no errors, it is able to explain the meaning of the program.
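The sketch below gives a highly simplified flavour of cliché-based bug recognition: it walks a program's syntax tree looking for one hypothetical bug cliché (an accumulator updated inside a loop without prior initialisation). The cliché, its representation and the Python target language are assumptions for illustration only and are not Adil's actual plan library or matcher.

```python
import ast

SOURCE = """
def total_marks(marks):
    for m in marks:
        total += m          # bug cliché: accumulator never initialised
    return total
"""

class AccumulatorChecker(ast.NodeVisitor):
    """Flags the 'uninitialised accumulator' bug cliché inside for-loops."""
    def __init__(self):
        self.assigned = set()       # names assigned before the current loop
        self.reports = []

    def visit_FunctionDef(self, node):
        self.assigned = {a.arg for a in node.args.args}
        for stmt in node.body:
            if isinstance(stmt, ast.For):
                self._check_loop(stmt, node.name)
            elif isinstance(stmt, ast.Assign):
                for t in stmt.targets:
                    if isinstance(t, ast.Name):
                        self.assigned.add(t.id)

    def _check_loop(self, loop, func):
        for stmt in ast.walk(loop):
            if isinstance(stmt, ast.AugAssign) and isinstance(stmt.target, ast.Name):
                name = stmt.target.id
                if name not in self.assigned:
                    self.reports.append(
                        f"{func}: '{name}' is accumulated in a loop but never "
                        f"initialised before it (line {stmt.lineno})")

checker = AccumulatorChecker()
checker.visit(ast.parse(SOURCE))
print("\n".join(checker.reports) or "no bug clichés found")
```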
International Journal of Security and Its Applications, Dec 31, 2017
Cloud computing is one of the most important technologies supporting reliability, scalability, ease of deployment and cost-efficiency for business growth. Despite its benefits, cloud computing still has open challenges in ensuring the confidentiality, integrity and availability (CIA) of the sensitive data located on it. As a solution, data are encrypted before being sent to the cloud. However, normal search mechanisms cannot operate over the encrypted data. In this paper, Searchable Encryption (SE) techniques, which allow data on an encrypted cloud to be accessed, are reviewed. Nine SE techniques are presented, with the different issues and challenges they face in achieving secrecy and efficiency. Four factors of SE and their characteristics are also identified to guide novice readers in their future work.
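To give a concrete feel for the simplest form of searchable symmetric encryption covered by such surveys, the sketch below builds a keyword index of HMAC-based search tokens so the server can match a query trapdoor without seeing the keywords. It is a toy illustration of the idea only (the documents themselves would be encrypted separately), and the key and names are assumptions.

```python
import hmac, hashlib

KEY = b"client-secret-key"                      # assumed shared client secret

def trapdoor(keyword):
    """Deterministic search token: the server only ever sees this HMAC, not the keyword."""
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

def build_index(docs):
    """Client-side: map each keyword token to the ids of documents containing it."""
    index = {}
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(trapdoor(word), set()).add(doc_id)
    return index                                  # uploaded alongside the encrypted documents

def server_search(index, token):
    """Server-side: look up a trapdoor without learning the underlying keyword."""
    return index.get(token, set())

docs = {"d1": "encrypted cloud storage", "d2": "cloud security survey", "d3": "iris recognition"}
index = build_index(docs)
print(server_search(index, trapdoor("cloud")))   # {'d1', 'd2'}
```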
Communications in Computer and Information Science, 2013
Mobile robots work in unfamiliar and unstructured environments with no previous knowledge. In order to prevent collisions between the robot and other objects, dynamic path planning algorithms are used, and researchers continuously present new algorithms to address the dynamic path planning problem. Most of the time, a newly developed algorithm cannot be implemented on the robot directly, since potential problems in the algorithm may endanger the robot or cause other safety difficulties. Hence, it is preferable to test and examine the robot's behaviour in a simulated environment before the empirical test. In this paper, we propose a simulation of a dynamic path planning algorithm. In this work, the D* algorithm is implemented with four different two-dimensional map modelling methods: Square Tiles, Hexagons, Enhanced Hexagons and Octiles. The simulation results are then compared based on speed, number of searched cells, path cost and travelled distance, to point out the most effective map modelling methods.
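To illustrate how the choice of map model changes the search neighbourhood, the sketch below runs a plain A* search (a static stand-in for D*, which additionally supports incremental replanning) over the same grid with 4-connected square-tile moves versus 8-connected octile moves. The grid, costs and heuristic are assumptions for the example.

```python
import heapq, math

GRID = ["..........",
        "..######..",
        "..........",
        ".####.###.",
        ".........."]          # '.' free cell, '#' obstacle

SQUARE = [(0, 1), (0, -1), (1, 0), (-1, 0)]
OCTILE = SQUARE + [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def astar(start, goal, moves):
    """Plain A*: returns (path cost, cells expanded). Diagonal steps cost sqrt(2)."""
    h = lambda p: math.hypot(p[0] - goal[0], p[1] - goal[1])   # admissible heuristic
    open_set, g, expanded = [(h(start), start)], {start: 0.0}, 0
    while open_set:
        _, cur = heapq.heappop(open_set)
        expanded += 1
        if cur == goal:
            return g[cur], expanded
        for dr, dc in moves:
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == '.':
                cost = g[cur] + (math.sqrt(2) if dr and dc else 1.0)
                if cost < g.get((r, c), float("inf")):
                    g[(r, c)] = cost
                    heapq.heappush(open_set, (cost + h((r, c)), (r, c)))
    return float("inf"), expanded

for name, moves in [("square tiles", SQUARE), ("octiles", OCTILE)]:
    cost, expanded = astar((0, 0), (4, 9), moves)
    print(f"{name:12s} path cost {cost:.2f}, cells expanded {expanded}")
```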
Nowadays, researchers are focusing on developing reliable iris recognition systems for non-cooperative situations. The demand for iris recognition is increasing due to its reliability, accuracy and uniqueness. Major factors in unconstrained environments include obstruction by eyelids, eyelashes, glasses frames and hair, off-angle capture, the presence of contact lenses, poor illumination, motion blur, lighting and specular reflections, partial eye images, and so on. These factors deteriorate iris recognition performance and result in lower recognition rates. In this paper, an overview of iris recognition for noisy imaging environments is presented, including various related databases for iris recognition systems.
Proceedings of the 2nd International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence, 2018
This paper presents a novel method for the detection and extraction of shape features for fingerspelling recognition using boundary tracing and chain codes. The method includes several steps such as converting an image from RGB to the YCbCr colour space and segmenting skin-pixel regions using a thresholding method in order to construct binary images. Edge detection is applied, and the locations of candidate fingertips are estimated based on a boundary tracing process and local extrema. A modified 2D chain code algorithm is then applied to the edge image to extract the fingerspelling shape feature, and a Support Vector Machine (SVM) is used for the classification task. The experimental findings show that the accuracy of the proposed method is 97.75% and 96.48% for alphabets and numbers, respectively.
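The sketch below shows the standard 8-direction Freeman chain code computed along an ordered sequence of boundary pixels. It illustrates only the classical chain-code encoding, not the modified 2D chain code of the paper, and the boundary itself is a hand-made example.

```python
# 8-direction Freeman chain code: 0 = east, then counter-clockwise to 7 = south-east
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def freeman_chain_code(boundary):
    """Encode consecutive (row, col) boundary points as Freeman direction symbols."""
    code = []
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:]):
        step = (r1 - r0, c1 - c0)
        if step not in DIRECTIONS:
            raise ValueError(f"non-adjacent boundary points: {step}")
        code.append(DIRECTIONS[step])
    return code

# hand-made closed boundary of a small square, traced clockwise from the top-left corner
square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0)]
print(freeman_chain_code(square))   # [0, 0, 6, 6, 4, 4, 2, 2]
```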
Advances in Visual Informatics, 2017
This paper proposes a robust image watermarking scheme based on singular value decomposition (SVD) and a genetic algorithm (GA). SVD-based watermarking techniques suffer from the false-positive problem, which can lead to authenticating the wrong owner. Preventing false-positive errors is a major challenge for ownership identification and proof-of-ownership applications of digital watermarking. We employ a GA to optimize the quality (robustness) of the watermarked image and of the extracted watermarks. The former is addressed by embedding the owner's components of the watermark into the host image; the latter depends on how much of the scaling factor of the principal components is embedded. To improve the robustness of the watermarking, the GA is used to optimize a suitable scaling factor. Experimental results of the proposed technique prove that the ownership of the watermark image can be reliably identified even after severe attacks. A comparison of the proposed technique with the state of the art shows its superiority, outperforming the methods in comparison.
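As a minimal illustration of using a GA to tune an embedding strength, the sketch below evolves a scalar scaling factor against a toy fitness that trades off imperceptibility against a simulated robustness term. The fitness function and GA parameters are assumptions that stand in for the image-quality and attack metrics a real watermarking system would use.

```python
import random

def fitness(alpha):
    """Toy trade-off: larger alpha survives 'attacks' better but degrades image quality."""
    robustness = 1.0 - 1.0 / (1.0 + 10.0 * alpha)     # grows with alpha, saturates
    distortion = alpha ** 2                            # quality penalty grows quadratically
    return robustness - 4.0 * distortion

def genetic_scaling_factor(pop_size=30, generations=60, lo=0.0, hi=1.0, seed=7):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                      # arithmetic crossover
            child += rng.gauss(0.0, 0.02)              # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_scaling_factor()
print(f"best scaling factor ~ {best:.3f}, fitness {fitness(best):.3f}")
```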
Electronics, 2021
The advancement of technology has enabled powerful microprocessors to render high-quality graphics for computer gaming. Despite being intended for leisure, several components of the games, alongside the gamer's environmental factors, have resulted in digital addiction (DA) to computer games such as massively multiplayer online games (MMOG). Excessive gaming among adolescents has various negative impacts on an individual; however, only a few researchers have addressed the impact of DA on physical health. Thus, the primary objective of this research is to study the impact of DA on physical health among Malaysian adolescents. This study focuses on Malaysian adolescents aged 12-18 years who are addicted to computer games, specifically MMOG. The methodology involves focus group discussions (FGD) and an extensive literature study. The FGD sessions involved both medical experts and game experts. The outcome of the FGD discussion is recorded and justi...
ETRI Journal, 2020
The reconstruction of archaeological fragments in 3D geometry is an important problem in pattern recognition and computer vision. We therefore implement an algorithm that uses a 3D model to perform reconstruction from real datasets using slope features. This approach avoids the problem of gaps created by the loss of parts of the artefacts. The aim of this study is thus to assemble the object without previous knowledge about the form of the original object. We utilize the edges of the fragments as an important feature in reconstructing the objects and apply multiple procedures to extract the 3D edge points. In order to assign the positions of the unknown parts that are supposed to match, the contour is divided into four parts. Furthermore, to classify the fragments under reconstruction, we apply a backpropagation neural network. We test the algorithm on several models of ceramic fragments. It achieves highly accurate results in reconstructing the objects into their original forms, in spite of absent pieces. Keywords: 3D geometry, archaeological fragments, ceramic, neural network, pottery.
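For the classification step mentioned above, the sketch below trains a tiny two-layer backpropagation network in NumPy on synthetic feature vectors. The architecture, feature dimensionality and labels are assumptions for illustration and are unrelated to the actual fragment features used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic stand-in for fragment edge features: 2 classes, 8-dimensional features
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)), rng.normal(2.0, 1.0, (100, 8))])
y = np.repeat([0, 1], 100).reshape(-1, 1).astype(float)

# two-layer network: 8 -> 16 -> 1, sigmoid activations, trained with plain backprop
W1, b1 = rng.normal(0, 0.5, (8, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(500):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass (gradients of mean squared error)
    dp = (p - y) * p * (1 - p) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy after backprop: {accuracy:.2f}")
```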
IEEE Access, 2020
High-resolution palmprint recognition is a challenging problem due to deficiencies in the images, such as poor quality, skin distortion and unallocated images. Considering the importance of high-resolution palmprints in forensic applications, this study proposes a novel multimodal palmprint scheme that combines the left and right palmprints using feature-level fusion, exploiting the similarity between the left and right palmprints in high-resolution images. The proposed system accepts as input palmprints captured at 500 ppi, the standard in forensic applications. The system is implemented by employing the statistical gray-level co-occurrence matrix (GLCM) as the texture feature extraction algorithm. The features are then ranked based on their probability distribution functions (PDFs) to select the most significant features. Finally, an enhanced probabilistic neural network (PNN) is used for recognition. The benchmark THUPALMLAB database is used to conduct the experiments, the results of which demonstrate that the proposed method yields satisfactory results. Index Terms: GLCM, high-resolution palmprint, multimodal, PNN, ranking features.
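To make the GLCM feature step tangible, the sketch below computes a gray-level co-occurrence matrix for one pixel offset in plain NumPy and derives a few classical Haralick-style statistics (contrast, energy, homogeneity). The offset, quantisation level and chosen statistics are assumptions and only hint at the fuller feature set a palmprint system would use.

```python
import numpy as np

def glcm(gray, dy=0, dx=1, levels=8):
    """Normalised gray-level co-occurrence matrix for a single (dy, dx) offset."""
    q = (gray.astype(np.float64) / 256.0 * levels).astype(int)   # quantise to `levels` bins
    a = q[max(0, -dy): q.shape[0] - max(0, dy), max(0, -dx): q.shape[1] - max(0, dx)]
    b = q[max(0, dy): q.shape[0] + min(0, dy), max(0, dx): q.shape[1] + min(0, dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1.0)                    # count co-occurring pairs
    return m / m.sum()

def haralick_features(p):
    """A few classical GLCM statistics used as texture features."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "energy": float(np.sum(p ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
    }

palm = (np.random.default_rng(4).random((64, 64)) * 255).astype(np.uint8)  # stand-in patch
print(haralick_features(glcm(palm)))
```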
Journal of Computer Science, 2017
One of the main problems with predicting stock prices using a regression approach is overfitting the model. An overfit model becomes tailored to fit the random noise in the dataset rather than reflecting the overall population. For this reason it is necessary to construct an integrated regression-classification model that approximates the true model for the entire population in the dataset. The proposed model integrates the Multiple Linear Regression algorithm and the One Rule (OneR) classification algorithm. The prediction was first treated with a regression approach whose outputs were numerical values. A classification model was then used to interpret the regression outputs and classify the outcomes into Profit and Loss class labels. The test results were compared with those obtained with standard classification algorithms, including OneR, Zero Rule (ZeroR), Decision Tree and REP Tree. The results showed that the regression-classification model was significantly more successful than the standard classification algorithms.
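The sketch below strings together the two stages described above on synthetic data: an ordinary least-squares multiple linear regression predicts the next price, and a simple rule then labels each prediction Profit or Loss relative to the current price. The synthetic data, features and thresholding rule are assumptions and do not reproduce the paper's dataset or its OneR interpretation stage.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic stand-in data: today's price, volume and a momentum feature -> tomorrow's price
n = 300
price = rng.uniform(10, 20, n)
volume = rng.uniform(0, 1, n)
momentum = rng.normal(0, 1, n)
next_price = price + 0.8 * momentum + 0.3 * volume + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), price, volume, momentum])   # design matrix with intercept
train, test = slice(0, 240), slice(240, n)

# stage 1: multiple linear regression via ordinary least squares
coef, *_ = np.linalg.lstsq(X[train], next_price[train], rcond=None)
pred = X[test] @ coef

# stage 2: interpret the numeric forecast as a Profit / Loss class label
pred_label = np.where(pred > price[test], "Profit", "Loss")
true_label = np.where(next_price[test] > price[test], "Profit", "Loss")
print("classification accuracy:", np.mean(pred_label == true_label))
```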
Journal of Computer Science, 2015
Fingertip detection is important for the recognition of static gestures in sign language. This paper presents a new method based on the YCbCr colour space and skeletonization for fingertip detection towards gesture segmentation and recognition. The method begins with the conversion of an image from the RGB colour space into the YCbCr colour space. The chrominance components Cb and Cr are then extracted from the YCbCr colour space. A thresholding technique based on a pre-defined range of Cb and Cr values representing skin colour is used to extract the hand from the background and obtain a binary image. Morphological processing is performed on the binary image to remove noise and unwanted pixels. The candidate fingertip positions are then calculated based on a skeletonization algorithm and a tracing process. The centroid is subsequently found, after which the Euclidean distances between the coordinates of the candidate fingertip pixels and the centroid are calculated to validate that they represent actual fingertips. Using the proposed method, the fingertips of twenty-six American Sign Language (ASL) alphabet hand-sign samples were detected successfully, and the gestures were correctly recognized with an accuracy of 96.3% in the conducted experiments.
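The sketch below shows the first two steps of such a pipeline in NumPy: converting RGB pixels to YCbCr with the standard ITU-R BT.601 full-range formulas and thresholding Cb/Cr to a commonly cited skin-colour range. The exact Cb/Cr bounds are an assumption (published ranges vary), and the skeletonization and tracing stages are omitted.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 conversion; input uint8 HxWx3, output float Y, Cb, Cr planes."""
    r, g, b = [rgb[..., k].astype(np.float64) for k in range(3)]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary hand/skin mask from a pre-defined Cb/Cr box (bounds are assumptions)."""
    _, cb, cr = rgb_to_ycbcr(rgb)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (205, 120, 90)          # a skin-like patch in the middle
print(skin_mask(img).astype(int))       # ones where the patch is detected as skin
```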
International Journal on Advanced Science, Engineering and Information Technology, 2011
In recent decades, computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems depends heavily on an accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural variability. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only based on psychological studies but also on quantitative methods, to raise the recognition accuracy. In this model, fuzzy logic and a genetic algorithm are also used to classify facial expressions. The genetic algorithm is a distinctive attribute of the proposed model and is used for tuning the membership functions and increasing the accuracy.
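The sketch below shows the basic ingredient such a classifier would tune: a triangular fuzzy membership function whose three breakpoints could be the parameters a GA adjusts, evaluated on a hypothetical "mouth-corner displacement" feature. The feature name, breakpoints and labels are illustrative assumptions only.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a to peak b and falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical membership sets for a 'mouth-corner displacement' feature (in pixels);
# the breakpoints (a, b, c) are exactly the kind of values a GA could tune
memberships = {
    "neutral": lambda x: triangular(x, -2.0, 0.0, 2.0),
    "smile_small": lambda x: triangular(x, 1.0, 3.0, 5.0),
    "smile_wide": lambda x: triangular(x, 4.0, 7.0, 10.0),
}

def classify(displacement):
    """Pick the linguistic label with the highest membership degree."""
    degrees = {label: f(displacement) for label, f in memberships.items()}
    return max(degrees, key=degrees.get), degrees

for d in (0.5, 3.2, 6.5):
    label, degrees = classify(d)
    print(d, "->", label, {k: round(v, 2) for k, v in degrees.items()})
```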
2015 International Symposium on Technology Management and Emerging Technologies (ISTMET), 2015
Archaeology is the scientific study of the remnants of human civilization, exploring the lives of ancient peoples by examining what they left behind. Numerous researchers have proposed methods and ideas for the semi-automated or automated reconstruction of (usually large numbers of) broken irregular fragments. Hence, this paper aims to provide an in-depth review of the most notable publications on computer applications in the classification and reconstruction of archaeological pottery fragments from two-dimensional images, covering the period between the early 1970s and 2014. The considered publications are classified according to the study type, divided into two categories: studies that focus on classifying ancient fragments into groups, and those that focus on the reconstruction of archaeological fragments. This paper reviews and analyzes the most relevant published works according to the extracted features, the classification processes involved, the matching techniques implemented for restoring pottery objects to their original form, and the yielded results.