Semi-automatic Construction of Ontology-based CBR System for Knowledge Integration

Ontology-based case-based reasoning (CBR) has become an active research topic as a means of integrating knowledge across heterogeneous CBR systems. To address the problems such systems face, for example nonstandard architectures, poor reuse of knowledge in legacy CBR systems, and the difficulty of ontology construction, we propose a novel approach for semi-automatically constructing an ontology-based CBR system whose architecture rests on a two-layer ontology. Domain knowledge implicit in legacy case bases is mapped automatically from relational database schemas and knowledge items to the relevant OWL local ontologies by a mapping algorithm with low time complexity. Through concept clustering based on formal concept analysis, and by computing concept equivalence and concept inclusion measures, suggestions for enriching or amending the concept hierarchy of the OWL local ontologies are generated automatically, aiding designers in the semi-automatic construction of the OWL domain ontology. The approach is validated with an application example.
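
As an illustration of the kind of schema-to-ontology mapping described, the following Python sketch (using the rdflib library) maps one relational table to an OWL class with datatype properties; the table and column names are hypothetical, and the real mapping algorithm would of course cover keys, relationships, and knowledge items as well.

```python
from rdflib import Graph, Literal, Namespace, OWL, RDF, RDFS

# Hypothetical legacy schema: one table and its columns.
table = "Case"
columns = ["symptom", "diagnosis", "solution"]

EX = Namespace("http://example.org/local-ontology#")
g = Graph()
g.bind("ex", EX)

# Map the table to an OWL class.
g.add((EX[table], RDF.type, OWL.Class))
g.add((EX[table], RDFS.label, Literal(table)))

# Map each column to an OWL datatype property with the class as domain.
for col in columns:
    prop = EX[f"{table}_{col}"]
    g.add((prop, RDF.type, OWL.DatatypeProperty))
    g.add((prop, RDFS.domain, EX[table]))
    g.add((prop, RDFS.label, Literal(col)))

print(g.serialize(format="turtle"))
```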

EZW Coding System with Artificial Neural Networks

Image compression plays a vital role in today's communication. Limited bandwidth slows communication, so to increase the rate of transmission within the available bandwidth, image data must be compressed before transmission. There are two basic types of compression: (1) lossy and (2) lossless. Although lossy compression achieves higher compression ratios than lossless compression, the retrieved image is less accurate. The JPEG image compression standard uses Huffman coding, while the JPEG 2000 coding system uses the wavelet transform, which decomposes the image into different levels in which the coefficients of each subband are uncorrelated with those of the other subbands. Embedded zerotree wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance than existing wavelet coders. Other coding methods have recently been suggested to further improve compression applications; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. We analyze the performance of an EZW coding system combined with the error backpropagation algorithm on different images. The implementation and analysis show approximately 30% higher accuracy in the retrieved image compared to the existing EZW coding system.
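
To make the wavelet decomposition step concrete, here is a minimal Python sketch using PyWavelets; the wavelet choice, decomposition depth, and toy image are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt

# Toy 8-bit grayscale image (a real system would load an actual image).
image = np.random.randint(0, 256, size=(256, 256)).astype(float)

# Multi-level 2-D wavelet decomposition, as used by EZW-style coders:
# coeffs[0] is the coarsest approximation; each later entry holds the
# (horizontal, vertical, diagonal) detail subbands of one level.
coeffs = pywt.wavedec2(image, wavelet="haar", level=3)

# EZW exploits the fact that most detail coefficients are near zero.
details = np.concatenate([np.abs(band).ravel()
                          for level in coeffs[1:] for band in level])
print("fraction of detail coefficients below 1.0:",
      np.mean(details < 1.0))
```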

Improved Feature Processing for Iris Biometric Authentication System

Iris-based biometric authentication is gaining importance in recent times. Iris biometric processing, however, is a complex and computationally very expensive process. In the overall processing of an iris-based biometric authentication system, feature processing is an important task: iris features are extracted and ultimately used in matching. Since there are a large number of iris features and computational time grows with the number of features, it is a challenge to develop an iris processing system that uses as few features as possible without compromising correctness. In this paper, we address this issue and present an approach to the feature extraction and feature matching process. We apply the Daubechies D4 wavelet with 4 levels to extract features from iris images. These features are encoded with 2 bits each by quantizing them into 4 quantization levels. With our proposed approach it is possible to represent an iris template with only 304 bits, whereas existing approaches require as many as 1024 bits. In addition, we assign different weights to different iris regions when comparing two iris templates, which significantly increases accuracy, and we match the templates using a weighted similarity measure. Experimental results on several iris databases substantiate the efficacy of our approach.
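
A minimal sketch of the feature-encoding idea follows, using PyWavelets ('db2' is the 4-tap Daubechies wavelet, commonly written D4). The quantization thresholds, region weights, and random stand-in data are illustrative assumptions, not the paper's exact values.

```python
import numpy as np
import pywt

def iris_code(strip):
    """Encode a normalized iris strip into 2-bit symbols per coefficient."""
    # 4-level decomposition with the 4-tap Daubechies wavelet (D4 = 'db2').
    coeffs = pywt.wavedec2(strip, wavelet="db2", level=4)
    approx = coeffs[0].ravel()
    # Quantize into 4 levels (2 bits each); quantile thresholds are illustrative.
    edges = np.quantile(approx, [0.25, 0.5, 0.75])
    return np.digitize(approx, edges)            # values in {0, 1, 2, 3}

def weighted_similarity(code_a, code_b, weights):
    """Weighted fraction of matching symbols between two templates."""
    match = (code_a == code_b).astype(float)
    return np.sum(weights * match) / np.sum(weights)

a = np.random.rand(64, 512)    # stand-ins for two normalized iris strips
b = np.random.rand(64, 512)
ca, cb = iris_code(a), iris_code(b)
weights = np.ones_like(ca, dtype=float)   # e.g. weight inner regions higher
print("similarity:", weighted_similarity(ca, cb, weights))
```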

Fundamental Equation of Complete Factor Synergetics of Complex Systems with Normalization of Dimension

Motivated by the unified measurement of varieties of resources and the unified processing of their disposal, this paper puts forth three closely related new basic models: the resources assembled node, the disposition integrated node, and the intelligent organizing node. Three closely related quantities of integrative analytical mechanics are defined, namely the disposal intensity, the disposal-weighted intensity, and the resource charge; the resources assembled space, the disposition integrated space, and the intelligent organizing space are then put forth. The system of fundamental equations and the model of complete factor synergetics are preliminarily developed for the general situation, forming the analytical basis of complete factor synergetics. As the essential variables constituting this system of equations, we set twenty variables relating, respectively, to the essential dynamical effect, the external synergetic action, and the internal synergetic action of the system.

Persian Printed Numerals Classification Using Extended Moment Invariants

Classification of Persian printed numeral characters is considered and a proposed system is introduced. In the representation stage, for the first time in Persian optical character recognition, extended moment invariants are utilized as descriptors of the character images. In the classification stage, four different classifiers are used: minimum mean distance, the nearest neighbor rule, the multilayer perceptron, and the fuzzy min-max neural network. The first and second are traditional nonparametric statistical classifiers, the third is a well-known neural network, and the fourth is a fuzzy neural network based on hyperbox fuzzy sets. A set of experiments was conducted and a variety of results are presented. The results show that extended moment invariants are well qualified as features for classifying Persian printed numeral characters.
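
As a sketch of the moment-based representation and classification stages, the following Python code computes the classical seven Hu moment invariants via OpenCV (a stand-in for the paper's extended moment invariants) and applies a minimum-mean-distance classifier; the log-scaling step and the classifier are standard practice, not the paper's exact setup.

```python
import numpy as np
import cv2

def moment_features(binary_digit_image):
    """Seven Hu moment invariants of a binary character image."""
    hu = cv2.HuMoments(cv2.moments(binary_digit_image)).ravel()
    # Log-scale the moments, which otherwise span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def minimum_mean_distance(train_feats, train_labels, x):
    """Classify x by its distance to each class mean (nonparametric)."""
    classes = np.unique(train_labels)
    means = np.array([train_feats[train_labels == c].mean(axis=0)
                      for c in classes])
    return classes[np.argmin(np.linalg.norm(means - x, axis=1))]
```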

Applications of Rough Set Decompositions in Information Retrieval

This paper proposes rough set models with three different levels of knowledge granules in an incomplete information system under a tolerance relation, based on the similarity between objects according to their attribute values. By introducing a dominance relation on the universe of discourse to decompose similarity classes into three subclasses (a little-better subclass, a little-worse subclass, and a vague subclass), it dismantles the lower and upper approximations into three components. Using these components, information retrieval can effectively find natural hierarchical expansions of queries and construct answers to elaborative queries. We illustrate the approach by applying the rough set models to the design of an information retrieval system that accesses documents expanded at different granularities. The proposed method enhances the applicability of rough set models by adding flexibility to query expansion and elaborative queries in information retrieval.
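
A minimal Python sketch of tolerance-based lower and upper approximations in an incomplete information system (None marks a missing attribute value; the toy document table is invented for illustration):

```python
# Incomplete information table: None marks a missing value.
objects = {
    "d1": ("sports", "news"),
    "d2": ("sports", None),
    "d3": ("finance", "news"),
}

def tolerant(x, y):
    """Objects are similar if every attribute agrees or is missing."""
    return all(a is None or b is None or a == b for a, b in zip(x, y))

def tolerance_class(name):
    return {m for m, v in objects.items() if tolerant(objects[name], v)}

def lower_upper(target):
    """Lower/upper approximations of a target set of objects."""
    lower = {x for x in objects if tolerance_class(x) <= target}
    upper = {x for x in objects if tolerance_class(x) & target}
    return lower, upper

print(lower_upper({"d1", "d2"}))   # certain vs. possible members
```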

An Efficient Cache Replacement Strategy for the Hybrid Cache Consistency Approach

Caching has been suggested as a solution for reducing bandwidth utilization and minimizing query latency in mobile environments. Over the years, different caching approaches have been proposed: some rely on the server to periodically broadcast reports informing clients of updated data, while others allow the clients to request data whenever needed. Recently, a hybrid cache consistency scheme, the Scalable Asynchronous Cache Consistency Scheme (SACCS), was proposed; it combines the benefits of the two approaches and has proved more efficient and scalable. Nevertheless, caching has its limitations, owing to the limited cache size and bandwidth, which makes the cache replacement strategy an important aspect of improving cache consistency algorithms. In this paper, we propose a new cache replacement strategy, the Least Unified Value (LUV) strategy, to replace the Least Recently Used (LRU) strategy on which SACCS was based. We study the advantages and drawbacks of the proposed strategy, comparing it with different categories of cache replacement strategies.
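
To illustrate value-based replacement in contrast to plain LRU, here is a minimal Python sketch of a cache that evicts the entry with the lowest value score. The scoring function below (access frequency over recency) is a hypothetical stand-in; the paper's LUV formula may weigh different factors.

```python
import time

class ValueCache:
    """Cache that evicts the entry with the lowest value score.

    The score below (hits / age) is a hypothetical stand-in for a
    unified-value function; the paper's LUV formula may differ.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                  # key -> (value, hits, last_access)

    def _score(self, key):
        _, hits, last = self.store[key]
        age = time.monotonic() - last + 1e-9
        return hits / age                # frequent, recent items score high

    def get(self, key):
        if key in self.store:
            value, hits, _ = self.store[key]
            self.store[key] = (value, hits + 1, time.monotonic())
            return value
        return None

    def put(self, key, value):
        if len(self.store) >= self.capacity and key not in self.store:
            victim = min(self.store, key=self._score)   # lowest value
            del self.store[victim]
        self.store[key] = (value, 1, time.monotonic())
```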

Performance Analysis of List Scheduling in Heterogeneous Computing Systems

Given a parallel program to be executed on a heterogeneous computing system, the overall execution time of the program is determined by a schedule. In this paper, we analyze the worst-case performance of the list scheduling algorithm for scheduling tasks of a parallel program in a mixed-machine heterogeneous computing system such that the total execution time of the program is minimized. We prove tight lower and upper bounds for the worst-case performance ratio of the list scheduling algorithm. We also examine the average-case performance of the list scheduling algorithm. Our experimental data reveal that the average-case performance of the list scheduling algorithm is much better than the worst-case performance and is very close to optimal, except for large systems with large heterogeneity. Thus, the list scheduling algorithm is very useful in real applications.
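
A minimal sketch of the list scheduling idea on a heterogeneous system: tasks are taken in list (priority) order and each is placed on the machine that would finish it earliest. For simplicity the tasks here are independent, and the ETC matrix of execution times is invented; a real parallel program would also carry precedence constraints.

```python
# etc[i][j] = estimated time to compute task i on machine j (invented data).
etc = [
    [3.0, 5.0, 9.0],
    [4.0, 2.0, 8.0],
    [6.0, 7.0, 1.5],
    [2.0, 4.0, 3.0],
]

def list_schedule(etc):
    machines = len(etc[0])
    ready = [0.0] * machines             # time each machine becomes free
    schedule = []
    for task, times in enumerate(etc):   # tasks in list (priority) order
        # Pick the machine on which this task would finish earliest.
        best = min(range(machines), key=lambda m: ready[m] + times[m])
        ready[best] += times[best]
        schedule.append((task, best))
    return schedule, max(ready)          # assignment and overall makespan

print(list_schedule(etc))
```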

The Overall Aspects of E-Learning: Issues, Developments, Opportunities and Challenges

Rapid advances in the field of Information and Communication Technology (ICT) have facilitated the development of teaching and learning methods and prepared them to serve the needs of diverse educational institutions. In other words, the information age has redefined the fundamentals and transformed institutions and methods of service delivery forever. The vision articulates a desire for the transformation of teaching and learning to proceed through e-learning. E-learning is commonly understood as the use of networked information and communications technology in teaching and learning practice. This paper deals with the general aspects of e-learning, along with the issues, developments, opportunities, and challenges that higher-education institutions face.

Artificial Intelligence for Software Quality Improvement

This paper presents a software quality support tool: a Java source code evaluator and code profiler based on computational intelligence techniques. It is a Java prototype developed by the AI Group [1] of the Research Laboratories at Universidad de Palermo: an Intelligent Java Analyzer (in Spanish: Analizador Java Inteligente, AJI). It represents a new approach to evaluating and identifying inaccurate source code usage and, transitively, the software product itself. The aim of this project is to provide the software development industry with a new tool to increase software quality by extending the value of source code metrics through computational intelligence.

Complexity Analysis of Some Known Graph Coloring Instances

Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs whose optimal solutions are already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (the Mycielski and Leighton graphs) theoretically designed to be difficult to solve. The graphs ranged in size from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was based on the square of the adjacency (i.e., correlation) matrix. The most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size, and the number of block-diagonal zeros are calculated for each graph. The results show that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
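
To illustrate the matrix-based measures involved, the following sketch computes the density of a small graph and the square of its adjacency matrix, whose (i, j) entry counts the length-2 paths from i to j; the graph itself is invented, and the zero count shown is only loosely related to the paper's block-diagonal-zeros measure.

```python
import numpy as np

# Adjacency matrix of a small invented graph (5 vertices).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
])

n = A.shape[0]
edges = A.sum() // 2
density = edges / (n * (n - 1) / 2)       # fraction of possible edges
A2 = A @ A                                # (i, j): number of length-2 paths

print("density:", density)
print("zeros in A^2:", np.sum(A2 == 0))
```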

Generational Pipelined Genetic Algorithm (PLGA) Using Stochastic Selection

In this paper, a pipelined version of the genetic algorithm, called PLGA, and a corresponding hardware platform are described. The basic operations of the conventional GA (CGA) are pipelined using an appropriate selection scheme. The selection operator used here is stochastic in nature and is called SA-selection; this helps maintain the basic generational nature of the proposed pipelined GA. A number of benchmark problems, including unimodal and multimodal functions with dimensionality varying from very small to very large, are used to compare conventional roulette-wheel selection and SA-selection. The SA-selection scheme gives performance comparable to classical roulette-wheel selection on all instances with respect to both solution quality and rate of convergence. The speedups obtained by PLGA on the different benchmarks are significant. It is shown that a complete hardware pipeline can be developed using the proposed scheme if parallel evaluation of the fitness expression is possible; in this connection, a low-cost but very fast hardware evaluation unit is described. Simulation experiments show that in a pipelined hardware environment, PLGA is much faster than CGA. In terms of efficiency, PLGA is also found to outperform a parallel GA (PGA).
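
The paper's exact SA-selection rule is not reproduced here, but a simulated-annealing-style selection generally follows the acceptance form sketched below: better candidates are always kept, and worse ones are kept with a temperature-controlled probability. This is a generic illustration under that assumption.

```python
import math
import random

def sa_select(current_fitness, candidate_fitness, temperature):
    """Accept a candidate stochastically, simulated-annealing style.

    Better candidates are always accepted; worse ones are accepted with
    a probability that shrinks as the temperature is lowered. (The
    paper's exact SA-selection may differ; this is the generic form.)
    """
    if candidate_fitness >= current_fitness:     # maximization assumed
        return True
    delta = current_fitness - candidate_fitness
    return random.random() < math.exp(-delta / temperature)

# One selection step in a generational loop (toy fitness values):
print(sa_select(current_fitness=0.8, candidate_fitness=0.7, temperature=1.0))
```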

Transmission Lines Loading Enhancement Using ADPSO Approach

Discrete particle swarm optimization (DPSO) is a powerful stochastic evolutionary algorithm used to solve large-scale, discrete, nonlinear optimization problems. However, the standard DPSO algorithm has been observed to converge prematurely when solving a complex optimization problem such as transmission expansion planning (TEP). To resolve this problem, an advanced discrete particle swarm optimization (ADPSO) is proposed in this paper. Simulation results show that ADPSO optimizes line loading in transmission expansion planning more precisely than DPSO.
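
For reference, here is a minimal sketch of the standard binary DPSO update on which such advanced variants build; the swarm size, coefficients, and toy bit-counting objective are illustrative, not the TEP formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_bits = 20, 10
w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients

def fitness(x):                     # toy objective: maximize the bit count
    return x.sum(axis=1)

pos = rng.integers(0, 2, size=(n_particles, n_bits))
vel = np.zeros((n_particles, n_bits))
pbest = pos.copy()
gbest = pos[np.argmax(fitness(pos))].copy()

for _ in range(50):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    vel = np.clip(vel, -6.0, 6.0)   # keep flip probabilities well-behaved
    # Sigmoid maps velocity to a bit-set probability (binary PSO rule).
    pos = (rng.random(pos.shape) < 1 / (1 + np.exp(-vel))).astype(int)
    better = fitness(pos) > fitness(pbest)
    pbest[better] = pos[better]
    gbest = pbest[np.argmax(fitness(pbest))].copy()

print("best solution:", gbest, "fitness:", fitness(gbest[None])[0])
```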

Proactive Detection of DDoS Attacks Utilizing k-NN Classifier in an Anti-DDoS Framework

Distributed denial-of-service (DDoS) attacks pose a serious threat to network security. Many methodologies and tools have been devised to detect DDoS attacks and reduce the damage they cause. Still, most of these methods cannot simultaneously achieve (1) efficient detection with a small number of false alarms and (2) real-time transfer of packets. Here, we introduce a method for proactive detection of DDoS attacks by classifying the network status, to be utilized in the detection stage of the proposed anti-DDoS framework. Initially, we analyse the DDoS architecture and obtain details of its phases. Then, we investigate the procedures of DDoS attacks and select variables based on these features. Finally, we apply the k-nearest neighbour (k-NN) method to classify the network status into the phases of a DDoS attack. Simulation results show that each phase of the attack scenario is classified well and that DDoS attacks can be detected at an early stage.
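
A minimal sketch of the classification step using scikit-learn's k-NN; the feature set and toy samples below are invented stand-ins for the traffic variables the paper selects.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Invented traffic features per time window:
# [packets/sec, distinct source IPs, SYN ratio].
X_train = np.array([
    [120,  15, 0.10],   # normal
    [110,  12, 0.12],   # normal
    [900, 400, 0.85],   # flooding phase
    [870, 350, 0.80],   # flooding phase
    [300, 200, 0.30],   # scanning/recruitment phase
    [280, 180, 0.35],   # scanning/recruitment phase
])
y_train = ["normal", "normal", "flood", "flood", "scan", "scan"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.predict([[850, 380, 0.82]]))    # -> likely 'flood'
```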

Dual Construction of Stern-based Signature Scheme

In this paper, we propose a dual version of the first threshold ring signature scheme based on error-correcting codes, proposed by Aguilar et al. in [1]. Our scheme uses an improvement of Véron's zero-knowledge identification scheme, which provides smaller public and private key sizes and better computational complexity than Stern's. The scheme is secure in the random oracle model.

An Application for Web Mining Systems with Service-Oriented Architecture

Although the World Wide Web is considered the largest source of information in existence today, its inherently dynamic character can make the task of finding useful, qualified information a very frustrating experience. This study examines information mining systems on the Web and proposes implementing such systems from components built with Web services technology. The components can thus exploit the features of a service-oriented architecture (SOA), and individual components may be reused by other tools, independent of platform or programming language. Hence, the main objective of this work is to provide an architecture for Web mining systems, divided into stages, where each stage is a component incorporating the characteristics of SOA. The separation into stages was designed based upon the existing literature. Interesting results were obtained and are shown here.
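
A minimal sketch of the staged architecture: each stage implements a common interface and, in an SOA deployment, would be wrapped as a Web service invoked remotely. The stage names follow the usual Web-mining pipeline (collection, preprocessing, mining), and the bodies are placeholders, not the paper's components.

```python
from abc import ABC, abstractmethod

class Stage(ABC):
    """One pipeline step; in an SOA deployment each would be a service."""
    @abstractmethod
    def run(self, data):
        ...

class Collect(Stage):          # fetch raw pages (placeholder body)
    def run(self, data):
        return {"pages": ["<html>...</html>"]}

class Preprocess(Stage):       # clean and tokenize the raw pages
    def run(self, data):
        return {"tokens": [p.strip() for p in data["pages"]]}

class Mine(Stage):             # extract patterns from the prepared data
    def run(self, data):
        return {"patterns": data["tokens"][:1]}

def pipeline(stages, data=None):
    for stage in stages:       # each call could be a remote SOAP/REST call
        data = stage.run(data)
    return data

print(pipeline([Collect(), Preprocess(), Mine()]))
```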

Surge Protection of Power Supply used for Automation Devices in Power Distribution System

The intent of this paper is to evaluate the effectiveness of a surge suppressor, consisting of an MOV and a T-type low-pass filter, for power supplies used by automation devices in power distribution systems. Books, journal articles, and electronic sources related to surge protection of such power supplies were consulted, and the useful information was organized, analyzed, and developed into five parts: characteristics of the surge wave, protection against the surge wave, impedance characteristics of the target, Matlab simulation of the circuit response to a 5 kV, 1.2/50 µs surge wave, and suggestions for surge protection. The results indicate that the type of load has a great impact on the effectiveness of the surge protective device. Therefore, the type and parameters of the surge protective device must be carefully selected, and load matching must also be considered.
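
For illustration, the Python sketch below (standing in for the Matlab simulation) applies the standard double-exponential approximation of the 1.2/50 µs impulse to a first-order low-pass stage; the RC values are illustrative, and a first-order stage is a simplification of the T-type filter.

```python
import numpy as np
from scipy import signal

# Double-exponential approximation of the 1.2/50 us lightning impulse.
t = np.linspace(0, 100e-6, 5000)
A, tau1, tau2 = 1.037 * 5e3, 68.2e-6, 0.405e-6   # peak scaled to ~5 kV
surge = A * (np.exp(-t / tau1) - np.exp(-t / tau2))

# First-order low-pass stage as a simple stand-in for the T-type filter.
R, C = 10.0, 1e-6                                # illustrative values
lowpass = signal.TransferFunction([1.0], [R * C, 1.0])
_, v_out, _ = signal.lsim(lowpass, U=surge, T=t)

print("input peak:  %.0f V" % surge.max())
print("output peak: %.0f V" % v_out.max())
```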

A Proposal for Federation Technology for Authenticated Information between Terminals

Recently, various services such as television and the Internet have come to be received through a variety of terminals. It would be convenient to receive such services through a cellular phone terminal while out, and then continue receiving the same services through a large-screen digital television after coming home. At present, however, it is necessary to go through the same authentication process again when switching to the TV at home. In this study, we have developed an authentication method that enables users to switch terminals in environments where the user receives a service from a server through a terminal. Specifically, the method simplifies server-side authentication when switching from one terminal to another by reusing previously authenticated information.
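
A minimal sketch of the handoff idea under stated assumptions (the token format, expiry, and server API below are hypothetical, not the paper's protocol): the authenticated first terminal asks the server for a one-time transfer token, and the second terminal redeems it instead of re-authenticating from scratch.

```python
import secrets
import time

sessions = {"sess-42": {"user": "alice"}}    # already-authenticated session
transfer_tokens = {}                         # token -> (session_id, expiry)

def issue_transfer_token(session_id, ttl=60):
    """Called by the first (authenticated) terminal before switching."""
    token = secrets.token_urlsafe(16)
    transfer_tokens[token] = (session_id, time.time() + ttl)
    return token

def redeem_transfer_token(token):
    """Called by the second terminal; one-time use, time-limited."""
    entry = transfer_tokens.pop(token, None)
    if entry is None:
        return None
    session_id, expiry = entry
    return sessions[session_id] if time.time() < expiry else None

t = issue_transfer_token("sess-42")          # on the cellular phone
print(redeem_transfer_token(t))              # on the TV -> alice's session
```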

GPU-Based Volume Rendering for Medical Imagery

We present a method for fast volume rendering using graphics hardware (GPU). To our knowledge, it is the first GPU implementation of the Shear-Warp algorithm. Our GPU-based method provides real-time frame rates and outperforms the CPU-based implementation. When the number of slices is not sufficient, we add in-between slices computed by interpolation, which improves the quality of the rendered images. We have also implemented the ray marching algorithm on the GPU. The results generated by the three algorithms (CPU-based and GPU-based Shear-Warp, GPU-based ray marching) for two test models show that the ray marching algorithm outperforms the Shear-Warp methods in terms of speedup and image quality.
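
A CPU reference sketch of the ray marching accumulation loop in Python (a GPU version would map the per-ray loop to a fragment shader or kernel); the toy volume, transfer function, and sampling are illustrative.

```python
import numpy as np

volume = np.random.rand(64, 64, 64)      # toy density volume

def march_ray(origin, direction, n_steps=128, step=0.9):
    """Front-to-back compositing along one ray through the volume."""
    color, alpha = 0.0, 0.0
    pos = np.array(origin, dtype=float)
    for _ in range(n_steps):
        i, j, k = pos.astype(int)
        if not (0 <= i < 64 and 0 <= j < 64 and 0 <= k < 64):
            break                        # ray has left the volume
        density = volume[i, j, k]
        a = density * 0.05               # toy transfer function
        color += (1.0 - alpha) * a * density
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination
            break
        pos += step * np.array(direction)
    return color

print(march_ray(origin=(0, 32, 32), direction=(1, 0, 0)))
```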

On-line Testing of Software Components for Diagnosis of Embedded Systems

This paper studies the dependability of component-based applications, especially embedded ones, from the diagnosis point of view. The principle of the diagnosis technique is to implement inter-component tests in order to detect and locate faulty components without redundancy. The proposed approach consists of two main aspects. The first, which is the subject of this paper, concerns the execution of the inter-component tests, which requires integrating test functionality within a component. The second is the diagnosis process itself, which analyzes the inter-component test results to determine the fault state of the whole system. The advantages of this diagnosis method compared with classical redundancy-based fault-tolerant techniques are application autonomy, cost-effectiveness, and better usage of system resources. These advantages are very important for many systems, especially embedded ones.
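
A minimal sketch of integrating test functionality within a component: each component exposes a self-test entry point that a neighbouring component or a diagnoser can invoke, and the diagnoser aggregates the results. The interfaces and the pass/fail check are hypothetical illustrations, not the paper's design.

```python
class Component:
    """Application component extended with a built-in test interface."""
    def __init__(self, name):
        self.name = name

    def service(self, x):                # normal application functionality
        return x * 2

    def run_test(self):
        """Inter-component test entry point: exercise the service with a
        known input and report pass/fail instead of relying on redundancy."""
        return self.service(21) == 42

def diagnose(components):
    """Collect inter-component test results to locate faulty components."""
    return {c.name: ("ok" if c.run_test() else "faulty")
            for c in components}

print(diagnose([Component("sensor"), Component("controller")]))
```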