People Critical Success Factors of IT/IS Implementation: Malaysian Perspectives

Implementing Information Technology/Information Systems (IT/IS) is critical for every industry, and its potential benefits have motivated many industries, including the Malaysian construction industry, to invest in it. Successfully implementing IT/IS has become a major concern for every organisation. Identifying the critical success factors (CSFs) has become the main agenda for researchers, academicians and practitioners due to the large number of failures reported. This research paper seeks to identify the CSFs that influence the successful implementation of IT/IS in the construction industry in Malaysia. A limited set of factors relating to people issues is highlighted here, as people issues are one of the major contributing factors to failure. Three (3) organisations participated in this study. Semi-structured interviews were employed as they offer sufficient flexibility to ensure that all relevant factors are covered. Several key issues contributing to successful implementations of IT/IS are identified. The results of this study reveal that top management support, communication, user involvement, IT staff roles and responsibility, training/skills, leader/IT leader, organisation culture, knowledge/experience, motivation, awareness, focus and ambition, satisfaction, teamwork/collaboration, willingness to change, attitude, commitment, management style, interest in IT, employee behaviour towards a collaborative environment, trust, interpersonal relationships, personal characteristics and competencies are significantly associated with the successful implementation of IT/IS. It is anticipated that this study will create awareness and contribute to a better understanding amongst construction industry players and will assist them in successfully implementing IT/IS.

Regional Medical Imaging System

The purpose of this article is to introduce an advanced system for the support of medical image processing, and the terminology related to this system, which can be an important element in a faster transition to a fully digitalized hospital. The core of the system is a set of DICOM-compliant applications running over a dedicated computer network. The whole integrated system creates a collaborative platform supporting daily routines in the radiology community, developing communication channels, supporting the exchange of information and special consultations among various medical institutions, as well as supporting medical training for practicing radiologists and medical students. It gives users outside hospitals the tools to work in almost the same conditions as in radiology departments.

Transient Stability Assessment Using Fuzzy SVM and Modified Preventive Control

Transient stability is an important issue in power system planning, operation and extension. The objective of transient stability analysis is not satisfied by mere detection or evaluation of transient instability; it is most important to complement it with fast and efficient control measures in order to ensure system security. This paper presents a new Fuzzy Support Vector Machine (FSVM) approach to investigate the stability status of power systems, and a modified generation rescheduling scheme to bring the identified unstable cases back to a more economical and stable operating point. FSVM improves the traditional SVM (Support Vector Machine) by adding a fuzzy membership to each training sample to indicate the degree to which that sample belongs to different classes. The preventive control, based on economic generator rescheduling, avoids instability of the power system with minimum change in operating cost under disturbed conditions. Numerical results on the New England 39-bus test system show the effectiveness of the proposed method.
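
The core idea, attaching a fuzzy membership to each training sample, can be approximated with off-the-shelf tools by passing per-sample weights to a standard SVM. The sketch below is only illustrative: the membership rule (distance to the class centre), the features and the labels are assumptions, not the authors' formulation.

```python
# Illustrative sketch: weight each training sample by a fuzzy membership before
# fitting an SVM for stable/unstable classification. The membership rule is an
# assumption (distance to the sample's own class centre), not the paper's exact one.
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y):
    """Assign each sample a membership in (0, 1] based on its distance
    to its own class centre: outlying samples receive smaller weights."""
    m = np.empty(len(y), dtype=float)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centre = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        m[idx] = 1.0 - d / (d.max() + 1e-9)   # farthest sample -> smallest weight
        m[idx] = np.clip(m[idx], 0.1, 1.0)
    return m

# X: operating-point features, y: 1 = transiently stable, 0 = unstable (synthetic here)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X, y, sample_weight=fuzzy_memberships(X, y))
print("training accuracy:", clf.score(X, y))
```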

Improving Worm Detection with Artificial Neural Networks through Feature Selection and Temporal Analysis Techniques

Computer worm detection is commonly performed by antivirus software tools that rely on prior explicit knowledge of the worm's code (detection based on code signatures). We present an approach for detecting the presence of computer worms based on Artificial Neural Networks (ANN) using the computer's behavioral measures. Identification of significant features, which describe the activity of a worm within a host, is commonly acquired from security experts. We suggest acquiring these features by applying feature selection methods. We compare three different feature selection techniques for dimensionality reduction and identification of the most prominent features in order to capture the computer's behavior efficiently in the context of worm activity. Additionally, we explore three different temporal representation techniques for the most prominent features. In order to evaluate the different techniques, several computers were infected with five different worms and 323 different features of the infected computers were measured. We evaluated each technique by preprocessing the dataset accordingly and training the ANN model with the preprocessed data. We then evaluated the ability of the model to detect the presence of a new computer worm, in particular during heavy user activity on the infected computers.
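
A minimal sketch of the pipeline described above, assuming scikit-learn: a filter-type feature selector ranks the behavioural measures and an ANN is trained on the selected subset. The selector, network size and synthetic data are placeholders, not the paper's configuration.

```python
# Illustrative sketch: rank host-activity features with a filter method and
# train an ANN on the selected subset. Feature counts, the chosen selector and
# the network size are assumptions for illustration only.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 323))      # 323 behavioural measures per time window (synthetic)
y = rng.integers(0, 2, size=600)     # 1 = worm activity present, 0 = clean

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=20),  # keep the 20 most informative features
    MLPClassifier(hidden_layer_sizes=(30,), max_iter=500, random_state=1),
)
model.fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))
```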

An Ensemble of Weighted Support Vector Machines for Ordinal Regression

Instead of traditional (nominal) classification we investigate the subject of ordinal classification, or ranking. An enhanced method based on an ensemble of Support Vector Machines (SVMs) is proposed. Each binary classifier is trained with specific weights for each object in the training data set. Experiments on benchmark datasets and synthetic data indicate that the performance of our approach is comparable to state-of-the-art kernel methods for ordinal regression. The ensemble method, which is straightforward to implement, provides a very good sensitivity-specificity trade-off for the highest and lowest ranks.
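
The following sketch illustrates one common way such an ensemble can be organised: one weighted binary SVM per threshold between consecutive ranks, with the outputs combined into rank probabilities. The per-object weighting shown is an assumed scheme, not the authors' weights.

```python
# Illustrative sketch of an ordinal decomposition: classifier k learns whether an
# object's rank exceeds k. The weighting rule (down-weighting objects whose rank
# is far from the threshold) is an assumption, not the paper's exact weights.
import numpy as np
from sklearn.svm import SVC

def fit_ordinal_ensemble(X, y, ranks):
    """Train one weighted binary SVM per threshold between consecutive ranks."""
    classifiers = []
    for k in ranks[:-1]:
        target = (y > k).astype(int)
        weights = 1.0 / (1.0 + np.abs(y - k))   # assumed per-object weights
        clf = SVC(kernel="rbf", probability=True).fit(X, target, sample_weight=weights)
        classifiers.append(clf)
    return classifiers

def predict_rank(classifiers, X, ranks):
    # P(y > k) for each threshold, combined into per-rank probabilities
    p_greater = np.column_stack([c.predict_proba(X)[:, 1] for c in classifiers])
    p_rank = np.empty((len(X), len(ranks)))
    p_rank[:, 0] = 1.0 - p_greater[:, 0]
    for i in range(1, len(ranks) - 1):
        p_rank[:, i] = p_greater[:, i - 1] - p_greater[:, i]
    p_rank[:, -1] = p_greater[:, -1]
    return np.array(ranks)[p_rank.argmax(axis=1)]

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = np.digitize(X[:, 0] + 0.3 * rng.normal(size=300), [-1.0, 0.0, 1.0]) + 1  # ranks 1..4
ranks = [1, 2, 3, 4]
ensemble = fit_ordinal_ensemble(X, y, ranks)
print("predicted ranks:", predict_rank(ensemble, X[:5], ranks))
```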

A New Self-Adaptive EP Approach for ANN Weights Training

Evolutionary Programming (EP) represents a methodology of Evolutionary Algorithms (EA) in which mutation is considered the main reproduction operator. This paper presents a novel EP approach for Artificial Neural Network (ANN) learning. The proposed strategy consists of two components: the self-adaptive component, which contains phenotype information, and the dynamic component, which is described by the genotype. Self-adaptation is achieved by the addition of a value, called the network weight, which depends on the total number of hidden layers and the average number of neurons in the hidden layers. The dynamic component changes its value depending on the fitness of the chromosome exposed to mutation. Thus, the mutation step size is controlled by two components, encapsulated in the algorithm, which adjust it according to the characteristics of a predefined ANN architecture and the fitness of a particular chromosome. A comparative analysis of the proposed approach and classical EP (Gaussian mutation) showed that a significant acceleration of the evolution process is achieved by using both phenotype and genotype information in the mutation strategy.
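
A toy sketch of the mutation idea, under assumed scaling rules: the Gaussian step size is the product of a static, architecture-dependent factor and a dynamic, fitness-dependent factor.

```python
# Illustrative sketch only: scale the EP Gaussian mutation step by a static
# architecture-dependent factor and a dynamic fitness-dependent factor.
# The scaling formulas below are assumptions, not the paper's equations.
import numpy as np

rng = np.random.default_rng(3)

def network_factor(n_hidden_layers, avg_neurons):
    """Static, phenotype-based component: larger networks get smaller steps (assumed rule)."""
    return 1.0 / np.sqrt(n_hidden_layers * avg_neurons)

def dynamic_factor(fitness, best_fitness):
    """Fitness-based component: poorer chromosomes mutate more strongly (assumed rule)."""
    return 1.0 + (fitness - best_fitness) / (abs(best_fitness) + 1e-9)

def mutate(weights, fitness, best_fitness, n_hidden_layers=2, avg_neurons=10):
    sigma = network_factor(n_hidden_layers, avg_neurons) * dynamic_factor(fitness, best_fitness)
    return weights + rng.normal(0.0, sigma, size=weights.shape)

# Example: mutate a flattened ANN weight vector; fitness here is an error to minimise
parent = rng.normal(size=50)
child = mutate(parent, fitness=0.8, best_fitness=0.5)
print("mean |step|:", np.abs(child - parent).mean())
```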

Software Model for a Computer Based Training for an HVDC Control Desk Simulator

With major technological advances, and to reduce the cost of training apprentices for real-time critical systems, the development of Intelligent Tutoring Systems for training apprentices in these systems became necessary. These systems, in general, have interactive features so that learning is actually more efficient, making the learner more familiar with the mechanism in question. At the initial stage of learning, tests are performed to assess the student's performance. The aim of this paper is to present a framework to model an Intelligent Tutoring System using the UML language. The various steps of the analysis consider the diagrams required to build a general model, whose purpose is to present the different perspectives of its development.

Effective Online Staff Training: Is This Possible?

The purpose of this paper is to consider the introduction of online courses to replace the current classroom-based staff training. The current training is practical, and must be completed before access to the financial computer system is authorized. The long term objective is to measure the efficacy, effectiveness and efficiency of the training, and to establish whether a transfer of knowledge back to the workplace has occurred. This paper begins with an overview explaining the importance of staff training in an evolving, competitive business environment and defines the problem facing this particular organization. A summary of the literature review is followed by a brief discussion of the research methodology and objective. The implementation of the alpha version of the online course is then described. This paper may be of interest to those seeking insights into, or new theory regarding, practical interventions of online learning in the real world.

Information and Communication Technologies vs. Education and Training: Contribution to Understanding the Millennials’ Generational Effect

Information and Communication Technologies (ICT) are increasing in importance every day, especially since the 90s (the last decade of birth for the Millennial generation). While social interactions involving the Millennial generation have been studied, a lack of investigation remains regarding the use of ICT by this generation, as well as its impact on outcomes in education and professional training. Observing and interviewing students preparing an MSc, we aimed at characterizing the interaction between students and ICT during courses. We found that up to 50% of the students (mainly female) could use ICT during courses, at a rate of 0.84 occurrences/minute for some of them, and they thought this involvement did not disturb learning and was even helpful. As recent research shows that multitasking leads people to think they are much better at it than they actually are, further observations with assessments are needed to conclude whether or not the use of ICT by students during courses is a real strength.

ML Detection with Symbol Estimation for Nonlinear Distortion of OFDM Signal

In this paper, a new signal detection technique is proposed for detecting the orthogonal frequency-division multiplexing (OFDM) signal in the presence of nonlinear distortion. OFDM communication systems have several advantages; however, one remaining problem is the nonlinear distortion generated by the high-power amplifier at the transmitter end due to the large dynamic range of an OFDM signal. The proposed method is maximum likelihood detection with symbol estimation. When training data are available, a neural network is used to learn the characteristics of the received signal and to estimate the new positions of the transmitted symbols, which are provided to the maximum likelihood detector. To assess system performance, the nonlinear distortion of a traveling wave tube amplifier applied to the OFDM signal is considered in this paper. Simulation results of the bit-error-rate performance are obtained with 16-QAM OFDM systems.
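
The detection step can be sketched as follows: given the estimated (distorted) symbol positions, the maximum likelihood detector assigns each received sample to the nearest estimated symbol. The toy distortion below merely stands in for the neural-network estimates and the travelling wave tube model.

```python
# Illustrative sketch of ML detection against estimated symbol positions for a
# 16-QAM constellation. The distortion model is a toy assumption, not a TWTA model,
# and the "estimated" positions stand in for the trained neural network's output.
import numpy as np

rng = np.random.default_rng(4)

# ideal 16-QAM constellation (unit average power)
levels = np.array([-3, -1, 1, 3], dtype=float)
ideal = np.array([complex(i, q) for i in levels for q in levels]) / np.sqrt(10)

def toy_distortion(s):
    """Toy AM/AM compression and AM/PM rotation (placeholder for the amplifier)."""
    r = np.abs(s)
    return (r / (1 + 0.2 * r**2)) * np.exp(1j * (np.angle(s) + 0.1 * r**2))

estimated = toy_distortion(ideal)   # placeholder for NN-estimated symbol positions

# transmit random symbols through the same distortion plus noise
tx_idx = rng.integers(0, 16, size=1000)
rx = toy_distortion(ideal[tx_idx]) + 0.03 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))

# ML detection: nearest estimated symbol (minimum Euclidean distance)
detected = np.abs(rx[:, None] - estimated[None, :]).argmin(axis=1)
print("symbol error rate:", np.mean(detected != tx_idx))
```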

Ensemble Learning with Decision Tree for Remote Sensing Classification

In recent years, a number of works proposing the combination of multiple classifiers to produce a single classification have been reported in the remote sensing literature. The resulting classifier, referred to as an ensemble classifier, is generally found to be more accurate than any of the individual classifiers making up the ensemble. As accuracy is the primary concern, much of the research in the field of land cover classification is focused on improving classification accuracy. This study compares the performance of four ensemble approaches (boosting, bagging, DECORATE and random subspace) with a univariate decision tree as the base classifier. Two training datasets, one without any noise and the other with 20 percent noise, were used to judge the performance of the different ensemble approaches. Results with the noise-free dataset suggest an improvement of about 4% in classification accuracy with all ensemble approaches in comparison to the results provided by the univariate decision tree classifier. The highest classification accuracy of 87.43% was achieved by the boosted decision tree. A comparison of results with the noisy dataset suggests that the bagging, DECORATE and random subspace approaches work well with these data, whereas the performance of the boosted decision tree degrades and a classification accuracy of 79.7% is achieved, which is even lower than the 80.02% achieved by the unboosted decision tree classifier.
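
For readers who want to reproduce a comparison of this kind on their own data, the sketch below trains three of the four ensembles with a decision tree base classifier using scikit-learn (DECORATE has no scikit-learn implementation and is omitted); the dataset is a synthetic stand-in for the land-cover training sets.

```python
# Illustrative sketch comparing boosting, bagging and random subspace with a
# decision tree base classifier. Synthetic data replace the land-cover datasets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_classes=5,
                           n_informative=10, random_state=5)
base = DecisionTreeClassifier(random_state=5)

# note: `estimator=` in scikit-learn >= 1.2; older releases use `base_estimator=`
ensembles = {
    "boosting": AdaBoostClassifier(estimator=base, n_estimators=50, random_state=5),
    "bagging": BaggingClassifier(estimator=base, n_estimators=50, random_state=5),
    "random subspace": BaggingClassifier(estimator=base, n_estimators=50,
                                         bootstrap=False, max_features=0.5,
                                         random_state=5),
}
for name, model in ensembles.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} mean accuracy")
```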

The Use of Dynamically Optimised High Frequency Moving Average Strategies for Intraday Trading

This paper is motivated by the aspect of uncertainty in financial decision making, and by how artificial intelligence and soft computing, with their uncertainty-reducing aspects, can be used for algorithmic trading applications that trade at high frequency. This paper presents an optimized high frequency trading system that has been combined with various moving averages to produce a hybrid system that outperforms trading systems relying solely on moving averages. The paper optimizes an adaptive neuro-fuzzy inference system that takes both the price and its moving average as input, learns to predict price movements from training data consisting of intraday data, dynamically switches between the best performing moving averages, and decides when to buy or sell a certain currency at high frequency.
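
The dynamic-switching component can be illustrated in isolation: evaluate several moving-average lengths on a recent training window and keep the one whose signal performed best. The selection rule and data below are simplified assumptions and omit the ANFIS predictor entirely.

```python
# Illustrative sketch of dynamic moving-average selection on toy intraday prices.
# The evaluation rule (P&L of a naive above/below-MA rule) is an assumption.
import numpy as np

rng = np.random.default_rng(6)
prices = 1.10 + 0.001 * rng.normal(size=2000).cumsum()   # toy intraday FX prices

def moving_average(x, n):
    return np.convolve(x, np.ones(n) / n, mode="valid")

def signal_pnl(prices, n):
    """P&L of a naive rule: long when price is above its n-period moving average."""
    ma = moving_average(prices, n)
    p = prices[n - 1:]
    position = np.sign(p[:-1] - ma[:-1])   # +1 long, -1 short
    return np.sum(position * np.diff(p))

window = prices[-500:]                     # recent training window
candidates = [5, 10, 20, 50]
best = max(candidates, key=lambda n: signal_pnl(window, n))
print("selected moving-average length:", best)
```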

Bayesian Network Model for Students' Laboratory Work Performance Assessment: An Empirical Investigation of the Optimal Construction Approach

There are three approaches to complete Bayesian Network (BN) model construction: total expert-centred, total data-centred, and semi data-centred. These three approaches constitute the basis of the empirical investigation undertaken and reported in this paper. The objective is to determine which of these three approaches is optimal for the construction of a BN-based model for the performance assessment of students' laboratory work in a virtual electronic laboratory environment. BN models were constructed using all three approaches, with respect to the focus domain, and compared using a set of optimality criteria. In addition, the impact of the size and source of the training data on the performance of total data-centred and semi data-centred models was investigated. The results of the investigation provide additional insight for BN model constructors and contribute to the literature by providing supportive evidence for the conceptual feasibility and efficiency of structure and parameter learning from data. In addition, the results highlight other interesting themes.
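
The three construction routes can be sketched with a Bayesian-network library such as pgmpy (an assumption; the variable names and tiny dataset are placeholders for the virtual-laboratory domain). Newer pgmpy releases expose BayesianNetwork; older ones call the same class BayesianModel.

```python
# Illustrative sketch of the three construction routes using pgmpy (assumed library).
import pandas as pd
from pgmpy.estimators import BicScore, HillClimbSearch, MaximumLikelihoodEstimator
from pgmpy.models import BayesianNetwork

data = pd.DataFrame({
    "preparation": ["good", "poor", "good", "good", "poor", "poor"],
    "procedure":   ["ok",   "bad",  "ok",   "bad",  "bad",  "ok"],
    "result":      ["pass", "fail", "pass", "fail", "fail", "pass"],
})

# 1) Total expert-centred: structure and parameters both specified by experts (no learning).
# 2) Semi data-centred: expert-given structure, parameters estimated from data.
semi = BayesianNetwork([("preparation", "result"), ("procedure", "result")])
semi.fit(data, estimator=MaximumLikelihoodEstimator)

# 3) Total data-centred: structure searched from data, then parameters estimated.
learned_dag = HillClimbSearch(data).estimate(scoring_method=BicScore(data))
data_centred = BayesianNetwork(learned_dag.edges())
data_centred.add_nodes_from(data.columns)   # keep isolated variables in the model
data_centred.fit(data, estimator=MaximumLikelihoodEstimator)

print("expert structure:", list(semi.edges()))
print("learned structure:", list(data_centred.edges()))
```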

Examination of Self and Decision Making Levels of Students Receiving Education in Schools of Physical Education and Sports

The purpose of this study is to examine the self and decision making levels of students receiving education in schools of physical education and sports. The population of the study consisted of 258 students, of whom 152 were male and 106 female (mean age = 19.3713 ± 1.6968), receiving education in the schools of physical education and sports of Selcuk University, Inonu University, Gazi University and Karamanoglu Mehmetbey University. In order to achieve the purpose of the study, the Melbourne Decision Making Questionnaire developed by Mann et al. (1998) [1] and adapted to Turkish by Deniz (2004) [2], and the Self-Esteem Scale developed by Aricak (1999) [3], were utilized. For analyzing and interpreting the data, the Kolmogorov-Smirnov test, t-test and one-way ANOVA test were used, while for determining the difference between the groups the Tukey test and multiple linear regression were employed, and significance was accepted at P

Distributed e-Learning System with Client-Server and P2P Hybrid Architecture

We have developed a distributed asynchronous Web-based training system. In order to improve the scalability and robustness of this system, all contents and functions are realized on mobile agents. These agents are distributed to computers, and they can use a peer-to-peer network based on a modified Content-Addressable Network (CAN). In this system, all computers offer the functions and exercises by themselves. However, a system in which all computers behave identically is not realistic. In this paper, as a solution to this issue, we present an e-Learning system that is composed of computers with different participation types. Enabling computers with different participation types improves the convenience of the system.
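
The content-lookup idea behind the Content-Addressable Network can be illustrated with a minimal sketch: content keys are hashed to a point in a coordinate space, and the peer owning that zone serves the content. The zones, node names and hash below are simplified assumptions, not the modified CAN of this system.

```python
# Minimal, assumed sketch of CAN-style key-to-zone mapping (not the paper's protocol).
import hashlib

# four peers, each owning a quarter of the unit square: (x range, y range)
zones = {
    "node-A": ((0.0, 0.5), (0.0, 0.5)),
    "node-B": ((0.5, 1.0), (0.0, 0.5)),
    "node-C": ((0.0, 0.5), (0.5, 1.0)),
    "node-D": ((0.5, 1.0), (0.5, 1.0)),
}

def key_to_point(key: str):
    """Hash a content key to a point in the unit square."""
    digest = hashlib.sha1(key.encode()).digest()
    return digest[0] / 255.0, digest[1] / 255.0

def responsible_node(key: str):
    x, y = key_to_point(key)
    for node, ((x0, x1), (y0, y1)) in zones.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return node

print(responsible_node("exercise-042"))   # the peer that stores this exercise content
```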

Improved Back Propagation Algorithm to Avoid Local Minima in Multiplicative Neuron Model

The back propagation algorithm calculates the weight changes of artificial neural networks, and a common approach is to use a training algorithm consisting of a learning rate and a momentum factor. The major drawbacks of the above learning algorithm are the problems of local minima and slow convergence speed. The addition of an extra term, called a proportional factor, reduces the convergence time of the back propagation algorithm. We have applied the three-term back propagation to multiplicative neural network learning. The algorithm is tested on the XOR and parity problems and compared with the standard back propagation training algorithm.
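
The three-term update can be sketched on a single sigmoid neuron solving AND, purely to show the rule; the paper applies the same idea to the multiplicative neuron model on XOR and parity, and the hyperparameters below are assumptions.

```python
# Illustrative sketch of a three-term weight update: gradient term, momentum term,
# and an extra proportional term driven directly by the output error.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 0., 0., 1.])                     # AND targets

rng = np.random.default_rng(7)
w, b = rng.normal(size=2), 0.0
dw_prev, db_prev = np.zeros(2), 0.0
alpha, beta, gamma = 0.5, 0.3, 0.05                # learning rate, momentum, proportional factor

for epoch in range(2000):
    for x, t in zip(X, T):
        y = sigmoid(w @ x + b)
        err = y - t
        grad = err * y * (1.0 - y)                 # standard BP gradient signal
        # three terms: gradient step + momentum + proportional (error-driven) step
        dw = -alpha * grad * x + beta * dw_prev - gamma * err * x
        db = -alpha * grad + beta * db_prev - gamma * err
        w, b = w + dw, b + db
        dw_prev, db_prev = dw, db

print(np.round([sigmoid(w @ x + b) for x in X], 2))   # approaches [0, 0, 0, 1]
```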

Artificial Neural Network with Steepest Descent Backpropagation Training Algorithm for Modeling Inverse Kinematics of Manipulator

Inverse kinematics analysis plays an important role in developing a robot manipulator. However, it is not easy to derive the inverse kinematic equations of a robot manipulator, especially a manipulator with numerous degrees of freedom. This paper describes an application of Artificial Neural Networks for modeling the inverse kinematics equations of a robot manipulator. In this case, the robot has three degrees of freedom and was implemented for drilling a printed circuit board. The artificial neural network architecture used for modeling is a multilayer perceptron network with a steepest descent backpropagation training algorithm. The designed artificial neural network has 2 inputs, 2 outputs and a varying number of hidden layers. Experiments were done with variations in the number of hidden layers and the learning rate. Experimental results show that the best artificial neural network architecture for modeling the inverse kinematics is a multilayer perceptron with 1 hidden layer of 38 neurons. This network resulted in an RMSE value of 0.01474.
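
A sketch of the same modelling recipe on a simplified 2-link planar arm (link lengths and training ranges assumed), using a one-hidden-layer perceptron of 38 neurons trained by gradient descent:

```python
# Illustrative sketch: approximate the inverse kinematics of a 2-link planar arm
# (a simplification of the paper's 3-DOF drilling robot) with an MLP trained by SGD.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
L1, L2 = 0.30, 0.25                                  # assumed link lengths (m)

# sample joint angles, compute tool position with forward kinematics
theta = rng.uniform([0.0, 0.0], [np.pi / 2, np.pi / 2], size=(5000, 2))
x = L1 * np.cos(theta[:, 0]) + L2 * np.cos(theta[:, 0] + theta[:, 1])
y = L1 * np.sin(theta[:, 0]) + L2 * np.sin(theta[:, 0] + theta[:, 1])
positions = np.column_stack([x, y])

# 2 inputs (x, y) -> 2 outputs (joint angles), 1 hidden layer with 38 neurons
net = MLPRegressor(hidden_layer_sizes=(38,), solver="sgd", learning_rate_init=0.01,
                   max_iter=2000, random_state=8)
net.fit(positions, theta)

pred = net.predict(positions)
rmse = np.sqrt(np.mean((pred - theta) ** 2))
print("training RMSE (rad):", round(rmse, 4))
```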

Use of Bayesian Network in Information Extraction from Unstructured Data Sources

This paper applies Bayesian Networks to support information extraction from unstructured, ungrammatical, and incoherent data sources for semantic annotation. A tool has been developed that combines ontologies, machine learning, information extraction and probabilistic reasoning techniques to support the extraction process. Data acquisition is performed with the aid of knowledge specified in the form of an ontology. Due to the variable amount of information available on different data sources, it is often the case that the extracted data contain missing values for certain variables of interest. It is desirable in such situations to predict the missing values. The methodology presented in this paper first learns a Bayesian network from the training data and then uses it to predict missing data and to resolve conflicts. Experiments have been conducted to analyze the performance of the presented methodology. The results look promising, as the methodology achieves a high degree of precision and recall for information extraction and reasonably good accuracy for predicting missing values.
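
The prediction of missing values can be sketched with pgmpy (an assumed library; the variables and records below are placeholders for the extracted fields): a network is fitted to annotated training records and then infers the most likely value of an attribute absent from a new record.

```python
# Illustrative sketch with pgmpy (assumed library): learn a Bayesian network from
# training records, then fill in a missing attribute of a newly extracted record.
import pandas as pd
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.models import BayesianNetwork

train = pd.DataFrame({
    "category":  ["laptop", "laptop", "phone", "phone", "laptop", "phone"],
    "brand":     ["acme",   "acme",   "bcorp", "bcorp", "acme",   "acme"],
    "has_price": ["yes",    "yes",    "no",    "yes",   "yes",    "no"],
})

# structure assumed known here; it could equally be learned from the data
model = BayesianNetwork([("category", "brand"), ("category", "has_price")])
model.fit(train, estimator=MaximumLikelihoodEstimator)

# record with a missing 'has_price' value: predict() infers the most likely state
incomplete = pd.DataFrame({"category": ["laptop"], "brand": ["acme"]})
print(model.predict(incomplete))
```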

Investigation of Artificial Neural Networks Performance to Predict Net Heating Value of Crude Oil by Its Properties

The aim of this research is to use artificial neural network computing technology for estimating the net heating value (NHV) of crude oil from its properties. The approach is based on training a neural network simulator, which uses back-propagation as the learning algorithm, on a predefined range of analytically generated well test responses. A network with 8 neurons in one hidden layer was selected, and the predictions of this network are in good agreement with experimental data.
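
A minimal sketch of such a network, assuming scikit-learn and synthetic stand-in data (the property names and the generating formula are illustrative only):

```python
# Illustrative sketch: a back-propagation network with one hidden layer of 8 neurons
# mapping assumed crude-oil properties to a synthetic stand-in for measured NHV.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
# assumed inputs: specific gravity, sulphur content (wt%), water content (wt%)
X = np.column_stack([
    rng.uniform(0.80, 0.97, 500),
    rng.uniform(0.1, 3.5, 500),
    rng.uniform(0.0, 1.0, 500),
])
# synthetic target in MJ/kg; real training data would come from laboratory analyses
nhv = 55.5 - 14.4 * X[:, 0] - 0.32 * X[:, 1] - 0.02 * X[:, 2] + rng.normal(0, 0.1, 500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=9),
)
model.fit(X, nhv)
print("R^2 on training data:", round(model.score(X, nhv), 3))
```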

Meta Model Based EA for Complex Optimization

Evolutionary Algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems often require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. Use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta models, or approximations of the actual fitness functions, for evaluation. These meta models are orders of magnitude cheaper to evaluate compared to the actual function evaluation. Many regression and interpolation tools are available to build such meta models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks which involve the use of meta models for fitness function evaluation. The first framework, namely the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by the controlled use of meta-models (in this case approximate models generated by Support Vector Machine regression) to partially replace the actual function evaluation with approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model, which does not take into account uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks using several benchmark functions demonstrate their efficiency.
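
The surrogate idea behind DAFHEA can be sketched as follows, with an SVR meta-model scoring most candidates and only a controlled fraction receiving the true, expensive evaluation; the population sizes, control rule and test function are assumptions, not the authors' setup.

```python
# Illustrative sketch of a surrogate-assisted EA: an SVR meta-model, fitted to
# previously evaluated points, scores offspring; only the most promising quarter
# is re-evaluated with the expensive true fitness (control rule is an assumption).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(10)

def expensive_fitness(x):                      # stand-in for a costly simulation (sphere function)
    return np.sum(x ** 2, axis=-1)

dim, pop_size, generations = 5, 40, 30
pop = rng.uniform(-5, 5, size=(pop_size, dim))
archive_X = pop.copy()
archive_y = expensive_fitness(pop)             # initial generation fully evaluated
pop_scores = archive_y.copy()

for gen in range(generations):
    surrogate = SVR(kernel="rbf", C=100.0).fit(archive_X, archive_y)
    children = pop + rng.normal(0, 0.5, size=pop.shape)         # simple Gaussian mutation
    child_scores = surrogate.predict(children)                  # cheap approximate fitness
    n_true = max(1, pop_size // 4)
    promising = np.argsort(child_scores)[:n_true]
    child_scores[promising] = expensive_fitness(children[promising])
    archive_X = np.vstack([archive_X, children[promising]])
    archive_y = np.concatenate([archive_y, child_scores[promising]])
    # (mu + lambda) survivor selection on mixed true/approximate scores
    combined = np.vstack([pop, children])
    combined_scores = np.concatenate([pop_scores, child_scores])
    keep = np.argsort(combined_scores)[:pop_size]
    pop, pop_scores = combined[keep], combined_scores[keep]

print("best true fitness found:", archive_y.min())
```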