Hybridizing Genetic Algorithm with Biased Chance Local Search

This paper explores the university course timetabling problem. Several characteristics make scheduling and timetabling problems particularly difficult to solve: they have huge search spaces, they are often highly constrained, they require sophisticated solution representation schemes, and they usually require very time-consuming fitness evaluation routines. Standard evolutionary algorithms therefore lack the efficiency to deal with them. In this paper we propose a memetic algorithm that incorporates problem-specific knowledge so that most of the chromosomes generated are decoded into feasible solutions. Generating a large number of feasible chromosomes allows the search process to progress in a time-efficient manner. Experimental results exhibit the advantages of the developed Hybrid Genetic Algorithm over the standard Genetic Algorithm.
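
The memetic loop itself is not given in the abstract; the following is a minimal sketch of a generic genetic algorithm hybridized with a local search step, using a toy bit-string objective in place of the paper's timetabling-specific encoding and feasibility-preserving decoder (all parameters are illustrative assumptions).

```python
import random

def fitness(chrom):
    # Toy objective: count of ones (stands in for a timetable fitness evaluation).
    return sum(chrom)

def local_search(chrom):
    # Greedy bit-flip improvement: the "local search" half of the memetic algorithm.
    best = chrom[:]
    for i in range(len(best)):
        trial = best[:]
        trial[i] ^= 1
        if fitness(trial) > fitness(best):
            best = trial
    return best

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.02):
    return [g ^ 1 if random.random() < rate else g for g in chrom]

def memetic_ga(n_bits=40, pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection, then crossover, mutation and local refinement.
        new_pop = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            child = local_search(mutate(crossover(p1, p2)))
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = memetic_ga()
    print("best fitness:", fitness(best))
```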

Cloud Computing Initiative using Modified Ant Colony Framework

Scheduling of diversified service requests in distributed computing is a critical design issue. A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers; it comprises not only clusters and grids but also next-generation data centers. This paper proposes an initial heuristic algorithm that applies a modified ant colony optimization approach to the allocation and scheduling of diversified service requests in the cloud paradigm. The proposed optimization method aims to minimize the scheduling time required to service all the diversified requests across the different resource allocators available in the cloud computing environment.
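
The modified ant colony algorithm is not detailed in the abstract; the sketch below shows a generic ant colony optimization loop that assigns a set of service requests to virtual machines so as to minimize the schedule length. The request lengths, VM speeds and ACO parameters are hypothetical placeholders, not values from the paper.

```python
import random

# Hypothetical workload: request lengths (MI) and VM speeds (MIPS).
TASKS = [400, 250, 700, 120, 530, 310, 660, 90]
VM_SPEED = [100, 150, 200]
N_ANTS, N_ITER, RHO, ALPHA, BETA = 10, 50, 0.3, 1.0, 2.0

def makespan(assign):
    load = [0.0] * len(VM_SPEED)
    for t, vm in zip(TASKS, assign):
        load[vm] += t / VM_SPEED[vm]
    return max(load)

def build(pher):
    # One ant builds a complete request-to-VM assignment probabilistically.
    assign, load = [], [0.0] * len(VM_SPEED)
    for i, t in enumerate(TASKS):
        weights = []
        for vm in range(len(VM_SPEED)):
            heuristic = 1.0 / (load[vm] + t / VM_SPEED[vm])
            weights.append((pher[i][vm] ** ALPHA) * (heuristic ** BETA))
        vm = random.choices(range(len(VM_SPEED)), weights=weights)[0]
        assign.append(vm)
        load[vm] += t / VM_SPEED[vm]
    return assign

pher = [[1.0] * len(VM_SPEED) for _ in TASKS]
best, best_cost = None, float("inf")
for _ in range(N_ITER):
    for s in (build(pher) for _ in range(N_ANTS)):
        c = makespan(s)
        if c < best_cost:
            best, best_cost = s, c
    # Evaporate, then reinforce the assignments of the best-so-far schedule.
    pher = [[(1 - RHO) * p for p in row] for row in pher]
    for i, vm in enumerate(best):
        pher[i][vm] += 1.0 / best_cost

print("best makespan:", round(best_cost, 2), "assignment:", best)
```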

A Novel Digital Watermarking Technique Based on ISB (Intermediate Significant Bit)

The Least Significant Bit (LSB) technique is the earliest developed technique in watermarking, and it is also the simplest, most direct and most common one. It essentially involves embedding the watermark by replacing the least significant bit of the image data with a bit of the watermark data. The disadvantage of LSB is that it is not robust against attacks. In this study the intermediate significant bit (ISB) is used in order to improve the robustness of the watermarking system. The aim of this model is to replace the watermarked image pixels with new pixels that can protect the watermark data against attacks while keeping the new pixels very close to the original pixels, in order to preserve the quality of the watermarked image. The technique is based on testing the value of the watermark pixel according to the range of each bit-plane.
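
For reference, the following is a minimal sketch of bit-plane embedding and extraction on a grayscale image array; plane index 0 gives classic LSB, while the paper's ISB scheme targets an intermediate plane and additionally adjusts the new pixel to stay close to the original, which is not reproduced here.

```python
import numpy as np

def embed_bitplane(cover, watermark_bits, plane=0):
    """Replace the given bit-plane of the cover pixels with watermark bits.
    plane=0 is the classic LSB case; an intermediate bit uses a higher index."""
    flat = cover.flatten().astype(np.uint8)
    bits = np.asarray(watermark_bits, dtype=np.uint8)
    n = len(bits)
    flat[:n] = (flat[:n] & ~np.uint8(1 << plane)) | (bits << plane)
    return flat.reshape(cover.shape)

def extract_bitplane(stego, n_bits, plane=0):
    flat = stego.flatten().astype(np.uint8)
    return (flat[:n_bits] >> plane) & 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    wm = rng.integers(0, 2, size=16)
    stego = embed_bitplane(cover, wm, plane=0)
    assert np.array_equal(extract_bitplane(stego, 16, plane=0), wm)
    print("max pixel change:", int(np.max(np.abs(stego.astype(int) - cover.astype(int)))))
```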

Neural Networks Learning Improvement using the K-Means Clustering Algorithm to Detect Network Intrusions

In the present work, we propose a new technique to enhance the learning capabilities and reduce the computational load of a competitive learning multi-layered neural network using the K-means clustering algorithm. The proposed model uses a multi-layered network architecture with a back-propagation learning mechanism. The K-means algorithm is first applied to the training dataset to reduce the number of samples presented to the neural network, by automatically selecting an optimal set of samples. The obtained results demonstrate that the proposed technique performs exceptionally well in terms of both accuracy and computation time when applied to the KDD99 dataset, compared to a standard learning scheme that uses the full dataset.
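
The exact sample-selection step is not specified in the abstract; the sketch below illustrates the general idea by replacing the full training set with per-class K-means centroids before training a back-propagation network, using scikit-learn on synthetic data as a stand-in for KDD99 (the number of centroids per class is an assumed design choice).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the intrusion-detection data (KDD99 is not bundled here).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: compress the training set to k centroids per class (assumed design choice).
k_per_class = 50
X_red, y_red = [], []
for label in np.unique(y_tr):
    members = X_tr[y_tr == label]
    km = KMeans(n_clusters=k_per_class, n_init=10, random_state=0).fit(members)
    X_red.append(km.cluster_centers_)
    y_red.append(np.full(k_per_class, label))
X_red, y_red = np.vstack(X_red), np.concatenate(y_red)

# Step 2: train the back-propagation (MLP) classifier on the reduced set only.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X_red, y_red)
print("test accuracy with reduced training set:", round(mlp.score(X_te, y_te), 3))
```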

Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment

Decrease in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program over multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput. Substantive research using queuing analysis, and assuming job arrivals that follow a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved either by transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or by distributing the load evenly/fairly among the hosts. The algorithms, known as load balancing algorithms, help to achieve the above goals. These algorithms fall into two basic categories: static and dynamic. Whereas static load balancing (SLB) algorithms make decisions regarding the assignment of tasks to processors at compile time, based on average estimated values of process execution times and communication delays, dynamic load balancing (DLB) algorithms are adaptive to changing situations and make decisions at run time. The objective of this paper is to identify qualitative parameters for the comparison of the above algorithms. In future, this work can be extended to develop an experimental environment to study these load balancing algorithms quantitatively based on the comparative parameters.

AGV Guidance System: An Application of Simple Active Contour for Visual Tracking

In this paper, a simple active contour based visual tracking algorithm is presented for an outdoor AGV application currently under development at the USM robotic research group (URRG) lab. The presented algorithm is computationally inexpensive, able to track road boundaries in an image sequence, and can easily be implemented on available low-cost hardware. The proposed algorithm uses active shape modeling with a B-spline deformable template and a recursive curve-fitting method to track the current orientation of the road.
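
As a rough illustration of the B-spline template idea, the sketch below fits a smoothing cubic B-spline to hypothetical noisy road-boundary points with SciPy; in a tracker this fit would be recursively updated on each new frame. The points and smoothing factor are illustrative only.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical noisy road-boundary points extracted from one image frame.
t = np.linspace(0, 1, 40)
x = 100 + 60 * t + 5 * np.random.randn(40)
y = 400 - 350 * t + 5 * np.random.randn(40)

# Fit a smoothing cubic B-spline to the boundary points (the deformable template).
tck, _ = splprep([x, y], s=len(x) * 4.0)

# Evaluate the fitted contour densely; a tracker would re-fit this curve on
# every new frame and read the road orientation from it.
xs, ys = splev(np.linspace(0, 1, 200), tck)
print("fitted contour points:", len(xs))
```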

Kernel’s Parameter Selection for Support Vector Domain Description

Support Vector Domain Description (SVDD) is one of the best-known one-class support vector learning methods, in which balls defined in the feature space are used to distinguish a set of normal data from all other possible abnormal objects. As with all kernel-based learning algorithms, its performance depends heavily on the proper choice of the kernel parameter. This paper proposes a new approach to selecting the kernel parameter, based on maximizing the distance between the gravity centers of the normal and abnormal classes while minimizing the variance within each class. The performance of the proposed algorithm is evaluated on several benchmarks. The experimental results demonstrate the feasibility and effectiveness of the presented method.
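
A sketch of one way to score a Gaussian kernel width by such a criterion follows: the squared distance between class centers in feature space and the within-class scatter are both computed from kernel matrices. The exact objective and search procedure in the paper may differ; the data and candidate widths below are assumptions.

```python
import numpy as np

def rbf(X, Y, sigma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def criterion(X_pos, X_neg, sigma):
    """Between-class centre distance over within-class scatter, both in feature space."""
    Kpp = rbf(X_pos, X_pos, sigma)
    Knn = rbf(X_neg, X_neg, sigma)
    Kpn = rbf(X_pos, X_neg, sigma)
    dist2 = Kpp.mean() - 2.0 * Kpn.mean() + Knn.mean()   # ||m+ - m-||^2
    var_pos = 1.0 - Kpp.mean()                           # k(x, x) = 1 for the RBF kernel
    var_neg = 1.0 - Knn.mean()
    return dist2 / (var_pos + var_neg + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, size=(100, 2))
    abnormal = rng.normal(3.0, 1.0, size=(40, 2))
    sigmas = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]
    best = max(sigmas, key=lambda s: criterion(normal, abnormal, s))
    print("selected sigma:", best)
```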

Heuristic Set-Covering-Based Postprocessing for Improving the Quine-McCluskey Method

Finding minimal forms of logical functions has important applications in the design of logical circuits. This task is solved by many different methods but, frequently, they are not suitable for computer implementation. We briefly summarise the well-known Quine-McCluskey method, which gives a unique computational procedure and thus can be implemented simply but, even for simple examples, does not guarantee an optimal solution. Since the Petrick extension of the Quine-McCluskey method does not give a generally usable way of finding an optimum for logical functions with a high number of values, we focus on the interpretation of the result of the Quine-McCluskey method and show that it represents a set covering problem which, unfortunately, is an NP-hard combinatorial problem. It must therefore be solved by heuristic or approximation methods. We propose an approach based on genetic algorithms and show suitable parameter settings.
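
The selection of prime implicants can be posed as a set covering problem; the sketch below shows a small genetic algorithm for generic set covering, encoding one bit per candidate implicant and penalizing uncovered minterms. The instance, penalty weight and GA parameters are illustrative, not the settings studied in the paper.

```python
import random

# Toy instance: minterms to cover and the minterms covered by each prime implicant.
MINTERMS = set(range(8))
IMPLICANTS = [{0, 1}, {1, 3}, {2, 3, 6}, {4, 5}, {5, 7}, {0, 4}, {6, 7}, {2}]
PENALTY = 10  # cost charged per uncovered minterm

def cost(bits):
    covered = set()
    for b, imp in zip(bits, IMPLICANTS):
        if b:
            covered |= imp
    return sum(bits) + PENALTY * len(MINTERMS - covered)

def ga(pop_size=40, generations=200, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in IMPLICANTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]           # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            cut = random.randint(1, len(IMPLICANTS) - 1)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            children.append(child)
        pop = elite + children
    best = min(pop, key=cost)
    return best, cost(best)

if __name__ == "__main__":
    best, c = ga()
    print("selected implicants:", [i for i, b in enumerate(best) if b], "cost:", c)
```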

Low Dimensional Representation of Dorsal Hand Vein Features Using Principal Component Analysis (PCA)

The quest to provide a more secure identification system has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric which has lately attracted the attention of many researchers. Different approaches have been used to extract vein patterns and match them. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, a low-dimensional representation of vein pattern features. Low-cost CCD cameras were used to obtain the vein images. The vein pattern was extracted by applying morphological operations, and noise reduction filters were applied to enhance the vein patterns. The system has been successfully tested on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
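
A minimal sketch of the PCA projection step follows, computing eigenveins from vectorized images via SVD of the mean-centred data matrix and matching by cosine similarity against a threshold; the image data, component count and similarity measure here are assumptions that only mimic the setup described above.

```python
import numpy as np

def pca_train(images, n_components=20):
    """images: (n_samples, n_pixels) matrix of vectorized, preprocessed vein images."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Rows of Vt are the principal directions ("eigenveins").
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenveins = Vt[:n_components]
    projections = centered @ eigenveins.T
    return mean, eigenveins, projections

def match(query, mean, eigenveins, projections, threshold=0.9):
    q = (query - mean) @ eigenveins.T
    # Cosine similarity against every enrolled template; accept the best if above threshold.
    sims = (projections @ q) / (np.linalg.norm(projections, axis=1) * np.linalg.norm(q) + 1e-12)
    best = int(np.argmax(sims))
    return (best, float(sims[best])) if sims[best] >= threshold else (None, float(sims[best]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gallery = rng.random((50, 64 * 64))          # stand-in for 50 enrolled vein images
    mean, eig, proj = pca_train(gallery)
    probe = gallery[7] + 0.01 * rng.random(64 * 64)
    print(match(probe, mean, eig, proj))
```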

Centralized Resource Management for Network Infrastructure Including IP Telephony by Integrating a Mediator Between the Heterogeneous Data Sources

Over the past decade, mobile communications have experienced a revolution that will ultimately change the way we communicate. All these technologies share a common denominator, the exploitation of computer information systems, but their operation can be tedious because of problems with heterogeneous data sources. To overcome these problems, we propose to add an extra layer interfacing the management or supervision applications with the different data sources. This layer is materialized by the implementation of a mediator between the different host applications and the information systems, which are frequently organized in hierarchical and relational form, so that the heterogeneity is completely transparent to the VoIP platform.

Analysis of the Root Causes of Transformer Bushing Failures

This paper presents the results of a comprehensive investigation of five blackouts that occurred between 28 August and 8 September 2011 due to bushing failures of the 132/33 kV, 125 MVA transformers at JBB Ali Grid station. The investigation aims to explore the root causes of the bushing failures and to come up with recommendations that help rectify the problem and avoid the recurrence of similar incidents. The incident reports on the failed bushings and the SCADA reports at this grid station were examined and analyzed. Moreover, comprehensive power quality field measurements at ten 33/11 kV substations (S/Ss) in the JBB Ali area were conducted, and frequency scans were performed to check for harmonic resonance frequencies due to power factor correction capacitors. Furthermore, the daily operations of the on-load tap changers (OLTCs) of both the 125 MVA and 20 MVA transformers at JBB Ali Grid station were analyzed. The investigation showed that the five bushing failures were due to a local problem, i.e. internal degradation of the bushing insulation. This was confirmed by analyzing the time interval between successive OLTC operations of the faulty grid transformers. It was also found that monitoring the number of OLTC operations can help in predicting bushing failures.

Ottoman Script Recognition Using Hidden Markov Model

In this study, an OCR system for the segmentation, feature extraction and recognition of Ottoman scripts has been developed using handwritten characters. Recognition of handwritten characters is a difficult process. The segmentation and feature extraction stages are based on geometrical feature analysis, followed by a chain code transformation of the main strokes of each character. The output of segmentation is a set of well-defined segments that can be fed into any classification approach. The classes of main strokes are identified through a left-right Hidden Markov Model (HMM).
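
The chain code transformation mentioned above can be illustrated compactly: the sketch below computes an 8-directional Freeman chain code from an ordered list of stroke pixel coordinates (the stroke extraction and HMM classification stages are not reproduced, and the example stroke is hypothetical).

```python
# 8-directional Freeman chain code: map (dx, dy) steps between consecutive
# stroke pixels to direction symbols 0..7 (0 = east, counted counter-clockwise,
# with y decreasing upward in image coordinates).
DIRECTIONS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
              (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code(points):
    """points: ordered (x, y) pixel coordinates along an 8-connected stroke."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return code

if __name__ == "__main__":
    # A hypothetical short stroke going right, then diagonally up-right, then up.
    stroke = [(0, 5), (1, 5), (2, 5), (3, 4), (4, 3), (4, 2), (4, 1)]
    print(chain_code(stroke))  # [0, 0, 1, 1, 2, 2]
```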

Bioprocess Intelligent Control: A Case Study

Bioprocesses are acknowledged to be difficult to control because their dynamic behavior is highly nonlinear and time varying, in particular when they operate in fed-batch mode. The research objective of this study was to develop an appropriate control method for a complex bioprocess and to implement it on a laboratory plant. Hence, an intelligent control structure has been designed in order to produce biomass and to maximize the specific growth rate.

An AK-Chart for Non-Normal Data

Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold, or may be difficult to verify, because in practice not all measurements from manufacturing processes are normally distributed. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism based on integrating a one-class classification method with an adaptive technique. The adaptive technique is used to improve the sensitivity of one-class classification in statistical process control to small shifts. In addition, this design provides an easy way to allocate the type I error, so it is easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control chart.
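
The AK-chart itself is not specified in the abstract; as a rough illustration of one-class classification used as a multivariate monitoring statistic, the sketch below trains a one-class SVM on in-control, non-normal data and sets a control limit from a chosen type I error rate. The adaptive component of the proposed chart is not reproduced, and the data and parameters are placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Phase I: in-control data drawn from a skewed (non-normal) bivariate distribution.
in_control = rng.lognormal(mean=0.0, sigma=0.4, size=(500, 2))

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.01).fit(in_control)
scores = model.decision_function(in_control)

# Control limit chosen so that roughly alpha of in-control points signal (type I error).
alpha = 0.01
control_limit = np.quantile(scores, alpha)

# Phase II: monitor new observations; a score below the limit signals out-of-control.
shifted = rng.lognormal(mean=0.6, sigma=0.4, size=(50, 2))
new_scores = model.decision_function(shifted)
print("signals:", int(np.sum(new_scores < control_limit)), "of", len(shifted))
```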

Modelling Multiagent Systems

We propose a formal framework for the specification of the behavior of a system of agents, as well as those of the constituting agents. This framework allows us to model each agent's effectoric capability, including its interactions with the other agents. We also provide an algorithm based on Milner's "observation equivalence" to derive an agent's perception of its task domain situations from its effectoric capability, and use "system computations" to model the coordinated efforts of the agents in the system. Formal definitions of the concept of "behavior equivalence" of two agents and that of system computations equivalence for an agent are also provided.

Fault Detection and Isolation using RBF Networks for Polymer Electrolyte Membrane Fuel Cell

This paper presents a new method of fault detection and isolation (FDI) for polymer electrolyte membrane (PEM) fuel cell (FC) dynamic systems under an open-loop scheme. The method uses a radial basis function (RBF) neural network to perform fault identification, classification and isolation. The novelty is that an RBF model in independent mode is used to predict the future outputs of the FC stack. One actuator fault, one component fault and three sensor faults, with fault sizes ranging from -7% to +10%, have been introduced to the PEMFC system in real-time operation. To validate the results, a benchmark model developed at the University of Michigan is used in the simulation to investigate the effect of these five faults. The developed independent RBF model is tested in the MATLAB R2009a/Simulink environment. The simulation results confirm the effectiveness of the proposed method for FDI under an open-loop condition. Using this method, the RBF networks are able to detect and isolate all five faults accurately.
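
A minimal sketch of the residual-generation idea follows: an RBF network is trained to model the healthy stack response, and a prediction residual above a threshold flags a fault. The data, network size, fault magnitude and threshold are synthetic placeholders, not the benchmark model or fault set used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_design(X, centers, width):
    # Gaussian radial basis activations for every (sample, centre) pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def stack_output(X):
    # Synthetic stand-in for the healthy fuel-cell stack response.
    return np.sin(2 * X[:, 0]) + 0.5 * X[:, 1]

# Train the independent RBF model on healthy operating data.
X_train = rng.uniform(-1, 1, size=(400, 2))
y_train = stack_output(X_train) + 0.01 * rng.standard_normal(400)
centers = X_train[rng.choice(len(X_train), size=30, replace=False)]
width = 0.5
Phi = rbf_design(X_train, centers, width)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)
threshold = 3.0 * np.std(y_train - Phi @ w)   # assumed residual threshold

# Monitor new data: a residual above the threshold flags a fault.
X_new = rng.uniform(-1, 1, size=(100, 2))
healthy = stack_output(X_new) + 0.01 * rng.standard_normal(100)
faulty = healthy + 0.2                        # hypothetical additive sensor bias
pred = rbf_design(X_new, centers, width) @ w
print("false alarms:", int(np.sum(np.abs(healthy - pred) > threshold)), "/ 100")
print("detections:  ", int(np.sum(np.abs(faulty - pred) > threshold)), "/ 100")
```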

Elliptical Features Extraction Using Eigenvalues of Covariance Matrices, Hough Transform and Raster Scan Algorithms

In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of the eigenvalues of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithm. The approach uses the fact that the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT). A sparse matrix technique is used to perform the CHT; since sparse matrices store only the nonzero elements, they reduce both matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm which exploits geometrical symmetry. This method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noise. The proposed method has the advantages of small storage, high speed and accuracy in identifying the features. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, and comparisons with the Hough transform, its variants and other tangent-based methods, are reported.
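
The link between covariance eigenvalues and axial lengths can be checked directly: for points sampled uniformly along an ellipse contour with semi-axes a and b, the covariance eigenvalues are a²/2 and b²/2, so each semi-axis is recovered as sqrt(2·λ). The sketch below verifies this on a synthetic contour; the Hough transform and raster scan stages are not reproduced.

```python
import numpy as np

# Synthetic contour: a rotated, translated ellipse with semi-axes a=50, b=20.
a, b, angle = 50.0, 20.0, np.deg2rad(30)
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
pts = np.column_stack([a * np.cos(theta), b * np.sin(theta)])
rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
pts = pts @ rot.T + np.array([120.0, 80.0])

# Eigen-decomposition of the covariance matrix of the contour points.
cov = np.cov(pts, rowvar=False)
eigvals, _ = np.linalg.eigh(cov)                  # ascending order
semi_minor = np.sqrt(2.0 * eigvals[0])
semi_major = np.sqrt(2.0 * eigvals[1])
centre = pts.mean(axis=0)
print("estimated centre:", np.round(centre, 1))
print("estimated semi-axes:", round(semi_major, 1), round(semi_minor, 1))  # ~50 and ~20
```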

A Reconfigurable Distributed Multiagent System Optimized for Scalability

This paper proposes a novel solution for optimizing the size and communication overhead of a distributed multiagent system without compromising performance. The proposed approach addresses the challenges of scalability, especially when the multiagent system is large. A modified spectral clustering technique is used to partition a large network into logically related clusters. Agents are assigned to monitor dedicated clusters rather than each device or node. The proposed scalable multiagent system is implemented using JADE (Java Agent DEvelopment Framework) for a large power system. The performance of the proposed topology-independent decentralized multiagent system and the scalable multiagent system is compared by comprehensively simulating different fault scenarios. The time taken for reconfiguration, the overall computational complexity, and the communication overhead incurred are computed. The results of these simulations show that the proposed scalable multiagent system uses fewer agents efficiently, makes faster decisions to reconfigure when a fault occurs, and incurs significantly less communication overhead.
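
As a rough illustration of the cluster-then-assign idea, the sketch below partitions a small hypothetical network graph with scikit-learn's spectral clustering (precomputed adjacency) and assigns one monitoring agent per cluster; the modified spectral clustering and the JADE implementation from the paper are not reproduced.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical adjacency matrix of a small power-system communication graph:
# two densely connected groups of nodes joined by a single weak link.
A = np.zeros((8, 8))
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),        # group of nodes 0-3
         (4, 5), (4, 6), (5, 6), (5, 7), (6, 7),        # group of nodes 4-7
         (3, 4)]                                         # weak inter-group tie
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            assign_labels="kmeans", random_state=0).fit_predict(A)

# One monitoring agent per cluster instead of one agent per device or node.
agents = {c: [n for n in range(len(A)) if labels[n] == c] for c in set(labels)}
for agent_id, nodes in agents.items():
    print(f"agent {agent_id} monitors nodes {nodes}")
```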

Business Rules for Data Warehouse

Business rules and data warehouses are concepts and technologies that impact a wide variety of organizational tasks. In general, each area has evolved independently, impacting application development and decision-making. Generating knowledge from a data warehouse is a complex process. This paper outlines an approach to ease the extraction of information and knowledge from a data warehouse star schema through an inference class of business rules. The paper uses the Oracle database to illustrate the working of the concepts. The star schema structure and the business rules are stored within a relational database. The approach is explained through a prototype in Oracle's PL/SQL Server Pages.

A Monte Carlo Method for Data Stream Analysis

Data stream analysis is the process of computing various summaries and derived values from large amounts of data that are continuously generated at a rapid rate. The nature of a stream does not allow revisiting each data element. Furthermore, data processing must be fast enough to produce timely analysis results. These requirements impose constraints on the design of the algorithms, which must balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. They can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem. The data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
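
The EMR sampling method is specific to the paper; the sketch below shows only the generic Monte Carlo ingredient of stream analysis, maintaining a fixed-size uniform sample in one pass with reservoir sampling and using it to approximate a summary statistic of the full stream.

```python
import random

def reservoir_sample(stream, k):
    """One-pass uniform sample of size k from a stream of unknown length (Algorithm R)."""
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = random.randint(0, i)
            if j < k:
                sample[j] = x
    return sample

if __name__ == "__main__":
    random.seed(0)
    # Simulated stream: one million values arriving one at a time.
    stream = (random.gauss(5.0, 2.0) for _ in range(1_000_000))
    sample = reservoir_sample(stream, k=1000)
    estimate = sum(sample) / len(sample)     # Monte Carlo estimate of the stream mean
    print("approximate stream mean:", round(estimate, 3))  # close to the true mean of 5.0
```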