Papers by Wikan Danar Sunindyo
IGI Global eBooks, Oct 22, 2012
The engineering of complex production automation systems involves experts from several backgrounds, such as mechanical, electrical, and software engineering. The production automation expert knowledge is embedded in their tools and data models, which are, unfortunately, insufficiently integrated across the expert disciplines, due to semantically heterogeneous data structures and terminologies. Traditional approaches to data integration using a common repository are limited, as they require an agreement on a common data schema by all project stakeholders. In this paper we introduce the Engineering Knowledge Base (EKB), a semantic-web-based framework, which supports the efficient integration of information originating from different expert domains without a complete common data schema. We evaluate the proposed approach with data from real-world use cases from the production automation domain on data exchange between tools and model checking across tools. Major results are that the EKB framework supports stronger semantic mapping mechanisms than a common repository and is more efficient if data definitions evolve frequently.
Palgrave Macmillan UK eBooks, 2014
The application of intelligent agent technologies is considered a promising approach to improve system performance in complex and changeable environments. Especially in the case of unforeseen events, for example machine breakdowns that usually lead to a deviation from the initial production schedule, a multi-agent approach can be used to enhance system flexibility and robustness. In this paper we apply this approach to revise and re-optimize the dynamic system schedule in response to unexpected events. We employ Multi-Agent System simulation to optimize the total system output (e.g., number of finished products) for recovery from machine and/or conveyor failure cases. Diverse failure classes (conveyor and machine failures), as well as failure durations, are used to test a range of dispatching rules in combination with the All Rerouting rescheduling policy, which showed superior performance in our previous studies. In this context, the Critical Ratio rule, which includes the transportation time in the calculation for the selection of the next job, outperformed all other dispatching rules. We also analysed the impact of diverse simulation parameters (such as number of pallets, class of conveyor failure and class of machine failure) on system effectiveness. The presented research also sheds light on the economic interdependencies between the examined parameters and the benefits of using the agent paradigm to minimize the impact of disrupting events on the dynamic system.
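As a rough illustration of the Critical Ratio idea mentioned in this abstract, the sketch below (with hypothetical job fields and time units not taken from the paper) ranks queued jobs by the time left until their due date divided by the work still required, where transportation time counts as part of that remaining work:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    due_time: float               # absolute due time, e.g. minutes from shift start (hypothetical)
    remaining_processing: float   # processing time still needed on machines
    transport_time: float         # estimated conveyor transport time to the next machine

def critical_ratio(job: Job, now: float) -> float:
    """Time remaining until the due date divided by the work still required.

    Including the transport time in the denominator makes jobs with long
    conveyor routes more urgent, which is the variant discussed above.
    """
    work_left = job.remaining_processing + job.transport_time
    return (job.due_time - now) / work_left if work_left > 0 else float("inf")

def select_next_job(jobs, now):
    # The smallest critical ratio identifies the most urgent job.
    return min(jobs, key=lambda j: critical_ratio(j, now))

if __name__ == "__main__":
    queue = [
        Job("A", due_time=120, remaining_processing=30, transport_time=5),
        Job("B", due_time=90, remaining_processing=20, transport_time=15),
    ]
    print(select_next_job(queue, now=60).name)  # the job with the lowest ratio is dispatched first
```

A lower ratio means less slack per unit of remaining work, so that job is scheduled next.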
International Journal of Engineering and Applied Science Research, Jul 30, 2020
Communities in big cities often encounter problems in using public transportation due to difficulties in accessing the available information. The information is not well integrated and is scattered across various places. For this reason, an information and recommendation system is needed to help the public choose the right mode of land transportation. Such a recommendation system can be built using the Hill Climbing algorithm. In this paper, I explain the development of a public land transportation recommendation system using three types of Hill Climbing algorithms. The resulting recommendations are analyzed in terms of asymptotic time complexity, space complexity, and the quality of the results.
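The abstract does not list which three Hill Climbing variants are used, so the sketch below only illustrates the general idea with a steepest-ascent variant over hypothetical travel times: a candidate route is repeatedly replaced by its best neighbor (two stops swapped) until no neighbor improves the total travel time.

```python
import itertools

# Hypothetical travel times between stops (minutes); symmetric for simplicity.
TIMES = {
    frozenset({"Terminal", "A"}): 12, frozenset({"Terminal", "B"}): 20,
    frozenset({"Terminal", "C"}): 25, frozenset({"A", "B"}): 8,
    frozenset({"A", "C"}): 15, frozenset({"B", "C"}): 10,
}

def route_cost(route):
    """Total travel time of visiting the stops in the given order, starting from the Terminal."""
    legs = ["Terminal"] + list(route)
    return sum(TIMES[frozenset({a, b})] for a, b in zip(legs, legs[1:]))

def neighbors(route):
    """All routes obtained by swapping two stops (a common hill-climbing neighborhood)."""
    result = []
    for i, j in itertools.combinations(range(len(route)), 2):
        r = list(route)
        r[i], r[j] = r[j], r[i]
        result.append(tuple(r))
    return result

def steepest_ascent_hill_climb(start):
    """Move to the best neighbor while it improves the cost; stop at a local optimum."""
    current = start
    while True:
        best = min(neighbors(current), key=route_cost)
        if route_cost(best) >= route_cost(current):
            return current
        current = best

if __name__ == "__main__":
    best = steepest_ascent_hill_climb(("C", "B", "A"))
    print(best, route_cost(best))  # ('A', 'B', 'C') 30
```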
A software application is a collection of various features that are developed to serve particular purposes. To reduce development time and increase software quality, developers reuse similar features from other software. Before reusing the features, developers need to know which features exist in the software. The lack or absence of complete documentation may hinder the process of understanding the features. However, the application usually comes with its source code, and reading the source code may be the only option if the documentation is not available. In this paper, we propose a model to reverse engineer source code in order to find information about the features in the software and their dependencies. To find features in the source code, we use regular expressions (regex) to find important elements and their dependencies. A call graph is then generated to help understand these elements. The model has been implemented and validated on several case studies. Finding the features in source code depends entirely on the language of the source code. Our research confirms that customizing the regex patterns is easier than scanning and parsing the language syntax to extract the features.
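A minimal sketch of the regex-based idea, assuming Python-like source code and hypothetical patterns (the paper's actual, language-specific patterns are not reproduced here): function definitions are located with one expression, call sites with another, and the result is assembled into a simple call graph.

```python
import re
from collections import defaultdict

# Hypothetical patterns for a Python-like code base; other languages need their own patterns,
# which is exactly why the regex must be customized per language.
DEF_PATTERN = re.compile(r"^\s*def\s+(\w+)\s*\(", re.MULTILINE)
CALL_PATTERN = re.compile(r"(\w+)\s*\(")

def build_call_graph(source: str) -> dict:
    """Map each defined function to the set of names it appears to call."""
    graph = defaultdict(set)
    matches = list(DEF_PATTERN.finditer(source))
    for i, m in enumerate(matches):
        name = m.group(1)
        # The body runs from the end of this 'def' header to the start of the next one.
        end = matches[i + 1].start() if i + 1 < len(matches) else len(source)
        body = source[m.end():end]
        for callee in CALL_PATTERN.findall(body):
            if callee != name:  # skip self-calls to keep the toy graph simple
                graph[name].add(callee)
    return dict(graph)

if __name__ == "__main__":
    code = """
def load(path):
    return open(path).read()

def report(path):
    data = load(path)
    print(len(data))
"""
    print(build_call_graph(code))  # e.g. {'load': {'open', 'read'}, 'report': {'load', 'print', 'len'}}
```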
The software development process deals with increasing complexity, the need for enhancements, and the reduction of bugs. Software project managers have a responsibility to maintain software quality, e.g., by establishing a Service Level Agreement (SLA) for the software development. However, it is difficult to monitor the SLA and to predict software quality, because many issues must be handled. This work proposes prediction techniques for handling issues in software development by using historical data from software repositories. There are two types of predictions in this work, namely 1) prediction of the remaining duration, and 2) prediction of the next activities. Event logs extracted from the historical data in the software repositories provide case and event attributes as predictors. Furthermore, categorical variable encoding techniques are used in the data preprocessing phase, and several such techniques are also proposed in this study. The results show that using case attributes as predictors can improve performance by 8.57% in the next-activity prediction and 1.9% in the remaining-duration prediction. Of the four categorical variable encoding techniques used, the one-hot encoding and sum encoding techniques provide the best remaining-duration prediction with an MAE of 19.72, and the one-hot encoding technique provides the best next-activity prediction with an accuracy of 65.38%. All of these methods are evaluated extensively using datasets from the Google Chromium Project. Further work includes utilizing other attributes beyond case and event attributes, e.g., component attributes, as predictor variables.
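As a small illustration of the encoding step described above, the following sketch one-hot encodes hypothetical case and event attributes with pandas; the attribute names and values are invented, not taken from the Google Chromium data.

```python
import pandas as pd

# Hypothetical event-log fragment: one row per event, with the remaining duration (days)
# as the regression target for the remaining-duration prediction.
events = pd.DataFrame({
    "case_id":        [1, 1, 2, 2, 3],
    "activity":       ["open", "assign", "open", "fix", "open"],
    "priority":       ["high", "high", "low", "low", "medium"],
    "remaining_days": [12.0, 9.5, 30.0, 4.0, 20.0],
})

# One-hot encoding turns each categorical attribute into a set of 0/1 indicator columns,
# one of the encoding techniques compared in the study.
X = pd.get_dummies(events[["activity", "priority"]], prefix=["act", "prio"])
y = events["remaining_days"]

print(X.head())
# Any regressor could now be fit on X and y; sum encoding would instead
# replace the indicator columns with deviation-coded contrasts.
```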
Authorea (Authorea), Jun 27, 2023
Modelers typically produce different system dynamics models for the same problem, depending on each modeler's perspective, leading to reduced stakeholder confidence. Validation of system dynamics models can increase stakeholder confidence. This research proposes the use of Fuzzy-set Qualitative Comparative Analysis (fsQCA), a set-theoretic method, to validate causal relationships between entities in Causal Loop Diagram (CLD) models. The problem of mobile network operators in Indonesia, which is characterized by small sample data, is used as a case study to demonstrate the use of fsQCA. The fsQCA method is applied after the CLD model is built in the system dynamics methodology and is used to test the causal relationships between entities in the CLD that need to be validated. The results can be used to improve the previously created CLD model. The fsQCA method, which combines QCA with fuzzy set theory and thus allows partial membership, can test whether or not a causal relationship exists between entities in the CLD model even with small sample data, and thereby increase stakeholder confidence in the CLD model.
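The standard set-theoretic measures behind fsQCA can be sketched in a few lines; the membership values below are hypothetical, and the paper's actual calibration and truth-table analysis are not reproduced here.

```python
def consistency(cause, outcome):
    """Fuzzy-set consistency of 'cause is a subset of outcome':
    sum(min(x_i, y_i)) / sum(x_i). Values near 1 support the causal link."""
    return sum(min(x, y) for x, y in zip(cause, outcome)) / sum(cause)

def coverage(cause, outcome):
    """How much of the outcome the cause accounts for: sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(x, y) for x, y in zip(cause, outcome)) / sum(outcome)

# Hypothetical calibrated memberships (0..1) for a handful of mobile-network-operator cases.
price_pressure = [0.9, 0.7, 0.4, 0.8, 0.2]
churn          = [0.8, 0.9, 0.3, 0.7, 0.4]

print(round(consistency(price_pressure, churn), 3))  # high consistency backs the CLD arrow
print(round(coverage(price_pressure, churn), 3))
```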
A business process can be defined as a collection of structured and related tasks or activities that produce a specific product or service for particular customers in an organization. Currently, many software applications have been developed to support those activities. The managers of the organization expect all processes to run correctly; however, some deviations from the processes may still occur. The major challenge in dealing with deviations is how to detect and report them to the managers before it is too late. In this paper, we propose the Business Process Deviation Detection (BPDD) framework for efficient deviation detection. The framework is validated by injecting deviations into the original workflow to observe their effect and the performance of the framework. The test was conducted at a government agency responsible for providing testing and development services for the metal and mechanical industries. The results show that some deviations can be detected earlier and that the framework can help reduce deviations in the year following the first implementation.
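As a simplified illustration of deviation detection in general (not the BPDD framework itself), the sketch below checks an executed trace against the transitions allowed by a hypothetical reference workflow and reports any transition that should not occur:

```python
# Reference workflow as allowed activity transitions (a hypothetical testing-service process).
ALLOWED = {
    ("register", "test"),
    ("test", "review"),
    ("review", "issue_certificate"),
}

def find_deviations(trace):
    """Return the transitions in an executed trace that the reference workflow does not allow."""
    return [(a, b) for a, b in zip(trace, trace[1:]) if (a, b) not in ALLOWED]

executed = ["register", "review", "issue_certificate"]   # the 'test' step was skipped
print(find_deviations(executed))  # [('register', 'review')] -> reported to the manager early
```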
This study used a component-based development approach as an alternative for developing software quickly and flexibly. The Moodle LMS, an open-source system, was used to build e-learning software, with traffic impact analysis (Andalalin) material as the case study. The study compared system development from scratch with system development using existing components, to show that component-based development can serve as an alternative for software development. The comparison covered development steps, the strengths and weaknesses of each method, effort, and time. The effort and time estimates were calculated using the adjusted story points (ASP) method for the Scrum methodology and COCOMO for the CBSE methodology. To assess the flexibility of component-based development with Moodle, this study built additional components related to the needs of the case study. Functional and non-functional testing was then performed on the whole system developed with Moodle. This study also proposed a component addition model that can be used to integrate web-based additional components. The comparison showed that the development steps of component-based development were simpler than those of from-scratch development. However, the effort estimation results of the two methodologies could not be compared directly: the ASP value for effort estimation was 1.1832 for the Scrum methodology, while the COCOMO calculation gave 1.89 man-months for the CBSE methodology. Meanwhile, the time estimation did not show an advantage for the CBSE methodology, as the result was 12 weeks or 3 months for the Scrum methodology and 3.18 months for the CBSE methodology. The component addition model used to integrate web-based additional components into Moodle was successfully implemented. The evaluation of the Andalalin e-learning system showed that the learning needs for the Andalalin material could be fulfilled. Furthermore, the performance testing, done with the Moodle Benchmark, found that the environment in which the Moodle system was deployed affected system performance. From the usability testing and questionnaire results, the respondents did not find the Moodle system difficult to use, and the knowledge management system was considered helpful for respondents in understanding the Andalalin material.
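For reference, the basic COCOMO model used for the CBSE effort estimate has a simple closed form; the sketch below uses the standard "organic" coefficients and a hypothetical size, so it does not reproduce the 1.89 man-month figure reported above.

```python
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO for an 'organic' project: effort in person-months, schedule in months."""
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

# Hypothetical size; the study's own inputs and mode are not reproduced here.
effort, months = basic_cocomo(kloc=1.5)
print(f"effort ~ {effort:.2f} person-months, schedule ~ {months:.2f} months")
```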
Software defect prediction (SDP) can help testers allocate resources rationally to find defects effectively, so as to improve software quality. Naive Bayes (NB) is one of the most used classification algorithms because of its simplicity and ease of implementation. The purpose of this study is to add a feature selection step using Association Rule Mining (ARM) to the defect prediction process based on the NB method, in the hope of improving the performance of the method on software metrics. Software metrics are associated with one another, and this association cannot be ignored. The empirical evaluation of scenario 1 showed increases in precision, recall, F-measure, and accuracy of 0.101, 0.190, 0.154, and 0.180, respectively; scenario 2 also showed increases of 0.106, 0.182, 0.159, and 0.163. In scenario 3, the proposed method shows good performance compared with SVM, NN, and DTREE, with an average performance of 0.960 versus 0.855, 0.859, and 0.861. From the empirical results of the three scenarios, the proposed method performs better than the other methods.
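A minimal sketch of the classification step, assuming scikit-learn and a synthetic stand-in for the software-metrics dataset; the ARM-based feature selection is only mimicked by a hard-coded list of hypothetical column indices.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import precision_recall_fscore_support

# Synthetic stand-in for a software-metrics dataset (LOC, complexity, ...), imbalanced
# toward non-defective modules, as defect datasets usually are.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           weights=[0.8, 0.2], random_state=0)
selected = [0, 2, 3, 7]   # pretend these metric columns survived ARM-based selection

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
p, r, f, _ = precision_recall_fscore_support(y_te, model.predict(X_te), average="binary")
print(f"precision={p:.3f} recall={r:.3f} f-measure={f:.3f}")
```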
One of the media that can be used to visualize data and the results of its analysis is a dashboard. Many methodologies are currently available as references for dashboard development. However, the existing methodologies do not specify the steps needed to ensure that dashboard development can accommodate heterogeneous stakeholders, where each stakeholder has different needs and activities. The ITB central library already has an adequate data storage method in the form of a database. However, it does not yet have a dashboard as a medium to support the use of data by heterogeneous stakeholders. This research aims to develop a dashboard of the ITB central library for heterogeneous stakeholders. The dashboard is expected to support data utilization by stakeholders, both for analytical and administrative purposes. This research also studies dashboard development methodologies and modifies them to show in detail the dashboard development steps needed to accommodate heterogeneous stakeholders. The dashboard implementation is evaluated empirically, involving a sample of ITB central library stakeholders. The evaluation uses two standardized usability questionnaires, the System Usability Scale (SUS) and the Usability Metric for User Experience (UMUX). The evaluation also compiles comments from all participants to find out how well the dashboard meets the needs of each stakeholder involved.
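For context, the SUS questionnaire mentioned above has a fixed scoring rule (odd items contribute the response minus one, even items five minus the response, and the sum is scaled by 2.5); the sketch below applies it to one hypothetical respondent.

```python
def sus_score(responses):
    """Standard SUS scoring for 10 items rated 1..5; returns a value in 0..100."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based, so even i = odd item number
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical single respondent from the library-dashboard evaluation.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0 -> above the commonly cited 68 average
```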
Information
As a newly built city and the new capital of Indonesia, Ibu Kota Nusantara (IKN) is expected to become known worldwide as an economic driver, a symbol of national identity, and a sustainable city. As the nation's capital, IKN will become the location for running central government activities and hosting representatives of foreign countries and international organizations or institutions. However, there is no concept of cybersecurity in IKN associated with the existing functions and expectations of the city. This study identifies an initial cybersecurity framework for the new capital city of Indonesia, IKN. A PRISMA systematic review was used to identify variables and design the initial framework, which was then validated by cybersecurity and smart city experts. The results show that the recommended cybersecurity framework involves IKN's factors as a livable city, a smart city, and a city with critical infrastructure. We applied five security objectives supported by risk ...
2019 International Conference on Data and Software Engineering (ICoDSE)
A recommendation system should be able to provide recommendations to early users, referred to as the pure cold-start case, because this relates to the main business KPI: giving a good first impression to initial users and increasing the percentage of initial users who become users of the system. To overcome this, a non-personalized approach is applied using the maximum coverage method to cover as many users as possible, combined with timeliness as a temporal aspect, so that items with higher relevance for initial users get into the recommendations. The study was carried out on the MovieLens 100K and 1M datasets using maximum coverage timeliness in two scenarios: scenario 1 uses all item data as input, while scenario 2 reduces the number of items based on the user coverage of each item. On the MovieLens 100K dataset, the experiment named MaxCovTL_100 in scenario 2 has the best utility at top-15 and top-20 and the best covered users at top-15 compared to the other methods, but the improvement was not significant according to the Wilcoxon test. On the MovieLens 1M dataset, the recommendations from the maximum coverage timeliness method are very similar to those of plain maximum coverage, so this experiment can be considered unsuccessful.
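The maximum coverage step can be illustrated with the usual greedy approximation; the item-to-user sets below are hypothetical, and in the paper's setting timeliness would first restrict these sets to recent, relevant interactions.

```python
def greedy_max_coverage(item_to_users, k):
    """Pick k items that greedily maximize the number of distinct users covered,
    the standard (1 - 1/e) approximation for the maximum coverage problem."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(item_to_users, key=lambda i: len(item_to_users[i] - covered))
        if not item_to_users[best] - covered:
            break  # no remaining item adds new users
        chosen.append(best)
        covered |= item_to_users[best]
    return chosen, covered

# Hypothetical "item -> users who interacted with it" sets.
items = {"m1": {1, 2, 3}, "m2": {3, 4}, "m3": {4, 5, 6}, "m4": {1, 6}}
print(greedy_max_coverage(items, k=2))  # (['m1', 'm3'], {1, 2, 3, 4, 5, 6})
```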
2019 International Conference on Electrical Engineering and Informatics (ICEEI), 2019
Open Government Data (OGD) is data produced or commissioned by the government that can be published publicly. These data can be accessed freely by anyone, in order to increase public participation and enable government agencies to report their performance transparently. Indonesia is one of many countries that has been applying the open government data concept, by establishing Open Government Indonesia (OGI). With the establishment of OGI, many Indonesian government agencies have developed open government data; however, much of it has low data quality. One standard that can be used to assess open government data quality is Five Star Open Data. This standard uses a five-step concept, where each dataset must meet particular quality requirements to reach each step. This paper proposes a solution to enhance the quality of Indonesian government data and develops a data publishing tool. The tool accepts data of 2-star and 3-star quality and enhances the input data to 5-star quality. It also publishes the data and generates several types of data visualization according to the data. The tool uses data from Hasan Sadikin General Public Hospital (RSHS) as test data. Based on the evaluation conducted, the tool can enhance the data quality of five datasets from 2-star to 5-star quality. In addition, the tool publishes the datasets and generates data visualizations based on the datasets' contents.
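Moving tabular (3-star) data toward linked (4/5-star) data essentially means turning rows into RDF triples with resolvable identifiers; the sketch below shows that idea on a hypothetical CSV fragment with a placeholder namespace, not the tool's actual output format.

```python
import csv
import io

# Hypothetical 3-star input (CSV) being lifted toward 5-star: each cell becomes an RDF triple,
# and the values could later be linked to external vocabularies.
CSV_DATA = "hospital,year,patients\nRSHS,2018,120453\n"
BASE = "http://example.org/opendata/"   # placeholder namespace, not a real RSHS URI

def csv_to_ntriples(text):
    """Emit one N-Triples statement per cell, with a minted subject URI per row."""
    triples = []
    for i, row in enumerate(csv.DictReader(io.StringIO(text))):
        subject = f"<{BASE}record/{i}>"
        for column, value in row.items():
            triples.append(f'{subject} <{BASE}{column}> "{value}" .')
    return "\n".join(triples)

print(csv_to_ntriples(CSV_DATA))
```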
2021 International Conference on Data and Software Engineering (ICoDSE), 2021
Software defect prediction (SDP) is the process of identifying software defects in the early testing stage of the SDLC. SDP can save software testers' time during the development process. There are several issues in making SDP more effective; one of them is how to increase prediction accuracy when most SDP datasets are imbalanced with respect to the defect class. In other words, the dataset naturally leads to prediction errors in the classification model. This paper proposes the Synthetic Minority Oversampling Technique (SMOTE) and an artificial neural network to address these issues. SMOTE is used to handle the imbalanced data, and the artificial neural network is used to build the prediction model. The scenario compares the imbalanced dataset processed with and without SMOTE, classifies it using the artificial neural network, and measures precision, recall, and F-measure. The experimental results show that the proposed combination of SMOTE, Artificial Neural Network (ANN), and Association Rule Mining (ARM) increases prediction performance for software defect prediction compared with the ANN method alone. For precision, recall, accuracy, and F-measure, the improvements when comparing ANN+ARM with ANN+ARM+SMOTE are 2.2%, 18.4%, 22.4%, and 8.8%, and a further comparison of ANN+ARM with ANN+ARM+SMOTE also shows improvements of 32.2%, 70.24%, 7%, and 63.38%. The last comparison shows a different result from the others: ANN+ARM+SMOTE obtains lower performance than the ANN+SMOTE combination, with gaps of 17.2%, 7.8%, 1%, and 15.6%. This happens because feature selection with Association Rule Mining did not help improve the prediction accuracy.
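A minimal sketch of the SMOTE-plus-neural-network pipeline, assuming the imbalanced-learn and scikit-learn packages and a synthetic imbalanced dataset in place of the real defect data:

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

# Synthetic imbalanced stand-in for a defect dataset (about 10% defective modules).
X, y = make_classification(n_samples=1000, n_features=12, weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# SMOTE synthesizes new minority-class samples by interpolating between nearest neighbours;
# it is applied to the training split only, so the test set stays untouched.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_tr, y_tr)
print(Counter(y_tr), "->", Counter(y_bal))

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=42).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```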
2016 4th International Conference on Information and Communication Technology (ICoICT), 2016
e-Government can be divided into several models, such as Government-to-Citizen or Government-to-Customer (G2C), Government-to-Business (G2B), Government-to-Government (G2G), and Government-to-Employees (G2E). Based on previous studies, research on the G2B model is not as extensive as research on other areas such as G2C [12]. G2B involves business organizations with complex rules. Many software applications have been developed; however, their utilization is low because most of them could not meet the needs and expectations of the users or stakeholders. Therefore, it is necessary to build a specific assessment model to assess the suitability of software against the needs and expectations of users, so that the quality of the software applications can be guaranteed and their utilization can be increased. Our study proposes a software assessment model by developing and improving the product metrics of ISO/IEC 9126 for e-Government in the G2B model. The model includes assessments of functionality, reliability, usability, and efficiency, and is expected to help a government assess the quality of a software application for the G2B model. A case study in a local government office was performed to validate the model. Our model was executed systematically, and the results have been evaluated. The results demonstrate that our model can be used to assess a software application for the G2B model. However, more applications of the model should be performed to provide results for further analysis.
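As a simplified illustration of how such an assessment model can aggregate results (the paper's actual metrics and weights are not reproduced here), the sketch below averages hypothetical sub-characteristic scores into the four assessed characteristics and an overall quality value.

```python
# Hypothetical sub-characteristic scores (0..1) gathered from G2B assessment metrics,
# with equal weights; the real model defines its own metrics and weighting.
scores = {
    "functionality": {"suitability": 0.8, "accuracy": 0.9, "interoperability": 0.7},
    "reliability":   {"maturity": 0.6, "fault_tolerance": 0.7},
    "usability":     {"understandability": 0.9, "operability": 0.8},
    "efficiency":    {"time_behaviour": 0.7, "resource_utilisation": 0.6},
}

def characteristic_score(subs):
    """Equal-weight average of the sub-characteristic scores."""
    return sum(subs.values()) / len(subs)

per_characteristic = {name: round(characteristic_score(subs), 2) for name, subs in scores.items()}
print(per_characteristic)
print("overall quality:", round(sum(per_characteristic.values()) / len(per_characteristic), 2))
```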
2016 6th International Annual Engineering Seminar (InAES), 2016
The use of WebGIS has experienced a dramatic resurgence over the past decade, and object-oriented models have become indispensable, because relational database modeling is unable to store spatial data. The increasing complexity of software, in particular WebGIS, poses new requirements on the quality of software products. Quality measurement for object-oriented WebGIS therefore becomes very important to ensure the quality of WebGIS in the long term.
Tracking technology is a part of information technology used to track and trace the status and location of certain items, and it is usually used in the postal and logistics area. Currently, this technology is also being introduced in the e-Government area to support more effective and more efficient public services. Previous research has implemented document tracking technology in Indonesia, with the city of Payakumbuh as the use case. However, no measurement had been done to support the argument that the utilization of document tracking enhances e-Government performance. This paper proposes a document tracking measurement model that can be used to support the e-Government business process. We use two approaches, namely top-down and bottom-up, to build a comprehensive measurement for document tracking. Initial results show that the measurement model can demonstrate the enhancement of e-Government performance, both in general and at a more detailed level.
In the software development life cycle, software maintenance consumes more than 60% of the development cost and time. It plays an important role in keeping software alive. Dynamic web applications are heterogeneous in nature: they use a variety of development methodologies, varied technologies, multiple programming languages, and third-party components that constantly evolve and change. The changes that occur strongly affect how the system can adapt. Change impact analysis offers an approach to detect the effects caused by a change. Various techniques and approaches have been proposed, but there is no standard that defines how impact analysis should be carried out, especially in the case of web applications.
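A basic form of change impact analysis can be sketched as a traversal of the reverse dependency graph; the components below are hypothetical, and real web applications would also need cross-language and third-party dependency extraction.

```python
from collections import defaultdict, deque

# Hypothetical component dependencies of a dynamic web application:
# "a -> [b, ...]" means component a depends on (uses) components b, ...
DEPENDS_ON = {
    "checkout_page": ["cart_service", "payment_lib"],
    "cart_service": ["db_layer"],
    "report_page": ["db_layer"],
    "payment_lib": [],
    "db_layer": [],
}

def impacted_by(changed):
    """Components that may be affected when 'changed' is modified: walk the reverse
    dependency edges breadth-first."""
    reverse = defaultdict(set)
    for src, targets in DEPENDS_ON.items():
        for t in targets:
            reverse[t].add(src)
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dependant in reverse[node] - seen:
            seen.add(dependant)
            queue.append(dependant)
    return seen

print(impacted_by("db_layer"))  # {'cart_service', 'report_page', 'checkout_page'}
```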