Neuro-Fuzzy System for Equalization of Channel Distortion

In this paper the application of a neuro-fuzzy system to the equalization of channel distortion is considered. The structure and operation algorithm of the neuro-fuzzy equalizer are described. Using a neuro-fuzzy equalizer in digital signal transmission reduces both the parameter training time and the complexity of the network. A simulation of the neuro-fuzzy equalizer is performed, and the obtained results confirm the efficiency of applying neuro-fuzzy technology to channel equalization.
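
For illustration only, the following sketch shows the kind of rule-based inference a neuro-fuzzy equalizer typically applies to received samples; the Gaussian membership parameters and rule consequents are arbitrary placeholders, not values from the paper.

```python
# Minimal sketch of a zero-order TSK fuzzy inference over received samples,
# the kind of rule structure a neuro-fuzzy equalizer tunes during training.
# The membership centres/widths and rule consequents below are arbitrary
# placeholders, not parameters from the paper.
import numpy as np

centres = np.array([-1.0, 0.0, 1.0])      # Gaussian membership centres per rule
widths = np.array([0.5, 0.5, 0.5])
consequents = np.array([-1.0, 0.0, 1.0])  # crisp rule outputs (trainable in practice)

def fuzzy_equalize(received_sample):
    firing = np.exp(-((received_sample - centres) ** 2) / (2 * widths ** 2))
    return np.dot(firing, consequents) / firing.sum()   # weighted-average defuzzification

distorted = [0.9, -1.2, 0.1, 1.4, -0.8]
print([round(fuzzy_equalize(r), 3) for r in distorted])
```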

A Parallel Implementation of the Reverse Converter for the Moduli Set {2^n, 2^n - 1, 2^(n-1) - 1}

In this paper, a new reverse converter for the moduli set {2^n, 2^n - 1, 2^(n-1) - 1} is presented. We improve a previously introduced conversion algorithm to derive an efficient hardware design for the reverse converter. The hardware architecture of the proposed converter is based on carry-save adders and regular binary adders, without requiring modular adders. The presented design is faster than the most recently introduced reverse converter for the moduli set {2^n, 2^n - 1, 2^(n-1) - 1}, and it also performs better than the reverse converters for the recently introduced moduli set {2^(n+1) - 1, 2^n, 2^n - 1}.
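
As a software reference for this moduli set (not the paper's carry-save-adder architecture), a residue-to-binary conversion can be checked with a straightforward Chinese Remainder Theorem model; the function and parameter names below are illustrative.

```python
# Behavioural reference model of residue-to-binary conversion for the
# moduli set {2^n, 2^n - 1, 2^(n-1) - 1} via the Chinese Remainder Theorem.
# Useful only for checking a converter's correctness; it does not model
# the adder-level hardware architecture described in the paper.

def reverse_convert(residues, n):
    moduli = [2**n, 2**n - 1, 2**(n - 1) - 1]   # pairwise coprime moduli
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)            # modular inverse of Mi mod m
    return x % M

n = 4
moduli = [2**n, 2**n - 1, 2**(n - 1) - 1]
X = 1234                                        # value inside the dynamic range
residues = [X % m for m in moduli]
assert reverse_convert(residues, n) == X
print("recovered:", reverse_convert(residues, n))
```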

A Novel Web Metric for the Evaluation of Internet Trends

Web 2.0 (social networking, blogging and online forums) can serve as a data source for social science research because it contains a vast amount of information from many different users. The volume of that information has been growing at a very high rate, and it is becoming a network of heterogeneous data in which things are difficult to find and therefore of limited use. We have proposed a novel theoretical model for gathering and processing data from Web 2.0 that better reflects the semantic content of web pages. This article deals with the analysis part of the model and its use for content analysis of blogs. The introductory part of the article describes a methodology for gathering and processing data from blogs. The next part focuses on the evaluation and content analysis of blogs that write about a specific trend.

Training Radial Basis Function Networks with Differential Evolution

In this paper, the Differential Evolution (DE) algorithm, a promising evolutionary algorithm, is proposed to train Radial Basis Function (RBF) networks with automatic configuration of the network architecture. Classification tasks on the Iris, Wine, New-thyroid, and Glass data sets are conducted to measure the performance of the neural networks. Compared with a standard RBF training algorithm in the Matlab neural network toolbox, DE achieves a more rational architecture for RBF networks, and the resulting networks therefore obtain strong generalization abilities.
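
The following is a minimal sketch of the DE/rand/1/bin scheme tuning the centres and a shared width of a Gaussian RBF network on a toy regression problem; the encoding, population settings and benchmarks are assumptions for illustration and do not reproduce the paper's experiments.

```python
# Minimal sketch of Differential Evolution (DE/rand/1/bin) tuning the centres
# and a common width of a Gaussian RBF network on a toy regression task.
# Hypothetical illustration only; the paper's encoding of the architecture
# and its benchmark settings may differ.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0])                                  # toy target

K = 5                                                # number of RBF centres
DIM = K + 1                                          # K centres + 1 shared width

def rbf_design(params):
    centres, width = params[:K], abs(params[K]) + 1e-3
    return np.exp(-(X - centres[None, :])**2 / (2 * width**2))

def fitness(params):
    Phi = rbf_design(params)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # output weights by least squares
    return np.mean((Phi @ w - y)**2)

NP, F, CR = 20, 0.5, 0.9
pop = rng.uniform(-3, 3, size=(NP, DIM))
cost = np.array([fitness(p) for p in pop])

for _ in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(DIM) < CR
        cross[rng.integers(DIM)] = True              # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if (fc := fitness(trial)) < cost[i]:
            pop[i], cost[i] = trial, fc

print("best MSE:", cost.min())
```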

A Task-Based Design Approach for Augmented Reality Systems

User interaction components of Augmented Reality (AR) systems have to be tested with users in order to find and fix usability problems as early as possible. In this paper we report on a user-centered design approach for AR systems, drawing on the experience acquired during the design and evaluation of a software prototype for an AR-based educational platform. In this respect we focus on the re-design of user tasks based on the results of a formative usability evaluation. The basic idea of our approach is to describe task scenarios in a tabular format, to develop a task model in a task modeling environment, and then to simulate its execution.

A Utilitarian Approach to Modeling Information Flows in Social Networks

We propose a multi-agent-based utilitarian approach to model and understand information flows in social networks that lead to Pareto optimal informational exchanges. We model the individual expected utility functions of the agents to reflect the net value of information received. We show how this model, adapted from a theorem by Karl Borch on actuarial risk exchange in the insurance industry, can be used for social network analysis. We develop a utilitarian framework that allows us to interpret Pareto optimal exchanges of value as potential information flows while maximizing the sum of the expected utilities of information across the group of agents. We examine some interesting conditions on the utility function under which the flows are optimal, and we illustrate the promise of this new approach to attaching economic value to information in networks with a synthetic example.
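
As a toy numerical illustration of the Borch-style characterization invoked here (state-independent ratios of marginal utilities at a Pareto optimum), the sketch below uses two agents with exponential utilities and a linear sharing rule; it does not model the paper's information-flow framework itself.

```python
# Toy check of Borch's characterization of Pareto optimal exchanges:
# at an optimum the ratios of the agents' marginal utilities are the same
# in every state. Two agents with exponential utilities share a random
# total payoff through a linear sharing rule proportional to their risk
# tolerances. Illustration of the underlying theorem only.
import numpy as np

rng = np.random.default_rng(1)
tau = np.array([1.0, 3.0])                 # risk tolerances of the two agents
total = rng.normal(10.0, 2.0, size=1000)   # total "value" to share, per state

shares = np.outer(total, tau / tau.sum())  # linear Pareto optimal sharing rule

def marginal_utility(x, t):                # u(x) = -exp(-x/t)  =>  u'(x) = exp(-x/t)/t
    return np.exp(-x / t) / t

mu = marginal_utility(shares, tau)         # marginal utilities, per state and agent
ratios = mu[:, 0] / mu[:, 1]
print("ratio of marginal utilities is state-independent:",
      np.allclose(ratios, ratios[0]))
```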

T-DOF PID Controller Design using Characteristic Ratio Assignment Method for Quadruple Tank Process

Control system design with Characteristic Ratio Assignment (CRA) has proven effective for SISO systems, but there is no concrete CRA design procedure for MIMO systems. This paper presents a CRA-based control system design method for the quadruple-tank process. Using a decentralized approach, controllers are designed for both the minimum-phase and the non-minimum-phase configurations. MATLAB results for the PI and PID controllers designed via CRA illustrate the validity of our approach.
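
For reference, the sketch below computes the characteristic ratios and generalized time constant as they are commonly defined in the CRA literature (alpha_i = a_i^2 / (a_{i-1} a_{i+1}), tau = a_1 / a_0); the paper's specific tuning procedure for the quadruple-tank controllers is not reproduced.

```python
# Characteristic ratios and generalized time constant of a polynomial
# p(s) = a_n s^n + ... + a_1 s + a_0, as commonly defined in the CRA
# literature. A generic helper only; the paper's actual tuning rules
# for the quadruple-tank controllers are not reproduced here.

def characteristic_ratios(coeffs_low_to_high):
    a = coeffs_low_to_high                       # [a_0, a_1, ..., a_n]
    alphas = [a[i]**2 / (a[i - 1] * a[i + 1]) for i in range(1, len(a) - 1)]
    tau = a[1] / a[0]                            # generalized time constant
    return alphas, tau

# Example: p(s) = s^3 + 5 s^2 + 9 s + 5, coefficients listed a_0..a_3
alphas, tau = characteristic_ratios([5.0, 9.0, 5.0, 1.0])
print("alpha_1..alpha_{n-1}:", alphas, " tau:", tau)
```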

A P2P File Sharing Technique by Indexed-Priority Metric

Recent improvements in computer processing performance and in high-speed optical-fiber communication have greatly increased the amount of data processed by computers and carried over networks. In a client-server system, however, the server must receive and process all data from the clients over the network, so the load on the server keeps increasing, which requires servers with high processing ability and lines with high bandwidth. In this paper, for P2P networks that relieve the load on a specific server, a criterion called the Indexed-Priority Metric is proposed and its performance is evaluated. The proposed metric is used to allocate files to the nodes so that the load on a specific server is distributed evenly among them. A P2P file sharing system using the proposed metric is implemented, and simulation results show that the metric can distribute the files held by the specific server across the nodes.

A Method to Annotate Programs with High-Level Knowledge of Computation

When programming in languages such as C or Java, it is difficult to reconstruct the programmer's ideas from the program code alone. This occurs mainly because much of the programmer's thinking behind the implementation is not recorded in the code during implementation. For example, physical aspects of computation such as spatial structures, activities, and the meaning of variables are not required as instructions to the computer and are often omitted, which makes the future reconstruction of the original ideas difficult. AIDA, a multimedia programming language based on the cyberFilm model, can address these problems by allowing the ideas behind programs to be described using advanced annotation methods as a natural extension of programming. In this paper, a development environment that implements the AIDA language is presented with a focus on the annotation methods. In particular, an actual scientific numerical computation code is created and the effects of the annotation methods are analyzed.

Using Automatic Ontology Learning Methods in Human Plausible Reasoning Based Systems

Knowledge discovery from text and ontology learning are relatively new fields. However, they are used in many areas such as Information Retrieval (IR) and its related domains. IR systems based on Human Plausible Reasoning (HPR), for example, need an underlying knowledge base, which is currently built by hand. In this paper we propose an architecture based on ontology learning methods to automatically generate the knowledge base needed for HPR.

Real-Time Vision-based Korean Finger Spelling Recognition System

Finger spelling is the art of communicating by signs made with the fingers, and it has been introduced into sign language to serve as a bridge between sign language and verbal language. Previous approaches to finger spelling recognition fall into two categories: glove-based and vision-based. The glove-based approach recognizes hand postures more simply and accurately than the vision-based one, yet its interfaces require the user to wear a cumbersome glove and carry a load of cables connecting the device to a computer. In contrast, vision-based approaches provide an attractive alternative to this cumbersome interface and promise more natural and unobtrusive human-computer interaction. Vision-based approaches generally consist of two steps, hand extraction and recognition, which are usually processed independently. This paper proposes a real-time vision-based Korean finger spelling recognition system that integrates hand extraction with recognition. First, we tentatively detect the hand region using the CAMShift algorithm. Then the fill factor and aspect ratio, computed from the width and height estimated by CAMShift, are used to choose candidates from the database, which reduces the number of matches in the recognition step. To recognize the finger spelling we use dynamic time warping (DTW) based on modified chain codes, which is robust to scale and orientation variations. In this procedure, since accurate hand regions, free of holes and noise, must be extracted to improve precision, we use the graph-cuts algorithm, which globally minimizes an energy function expressed as a Markov random field (MRF). In the experiments the computation time is less than 130 ms and is independent of the number of finger spelling templates in the database, since candidate templates are selected in the extraction step.
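
To make the recognition step concrete, the sketch below implements plain dynamic time warping over 8-direction chain codes with a circular distance; the modified, scale- and orientation-robust chain code used in the paper is not reproduced.

```python
# Minimal dynamic time warping (DTW) between two chain-code sequences,
# as used conceptually in the recognition step. Uses plain 8-direction
# chain codes with a circular direction distance; the paper's modified
# chain code is not reproduced here.

def chain_code_distance(a, b):
    d = abs(a - b) % 8                     # 8-direction circular difference
    return min(d, 8 - d)

def dtw(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = chain_code_distance(seq_a[i - 1], seq_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

template = [0, 1, 1, 2, 3, 3, 4]
observed = [0, 1, 2, 2, 3, 4, 4, 5]
print("DTW distance:", dtw(template, observed))
```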

A Hybrid Distributed Vision System for Robot Localization

Localization is one of the critical issues in the field of robot navigation. With an accurate estimate of the robot pose, robots are capable of navigating in the environment autonomously and efficiently. In this paper, a hybrid Distributed Vision System (DVS) for robot localization is presented. The presented approach integrates odometry data from the robot with images captured by overhead cameras installed in the environment, reducing the possibility of localization failures caused by illumination effects, accumulated encoder errors, and low-quality range data. An odometry-based motion model is applied to predict the robot poses, and the robot images captured by the overhead cameras are then used to update the pose estimates with an HSV histogram-based measurement model. Experimental results show that the presented approach can localize robots in a global world coordinate system with localization errors within 100 mm.
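
The sketch below illustrates one predict/update cycle of a particle-style localizer combining an odometry motion model with an HSV-histogram measurement, in the spirit of the approach described; the noise levels, histogram size, and per-pose expected histogram are placeholders rather than the paper's calibrated models.

```python
# Sketch of one predict/update cycle of a particle-style localizer combining an
# odometry motion model with an HSV-histogram measurement model. Noise levels,
# histogram size and the per-pose expected histogram are placeholders.
import numpy as np

rng = np.random.default_rng(2)
N = 500
particles = rng.uniform([0, 0, -np.pi], [5000, 5000, np.pi], size=(N, 3))  # x, y (mm), theta (rad)
weights = np.full(N, 1.0 / N)

def predict(particles, d_trans, d_rot, sigma_trans=20.0, sigma_rot=0.02):
    """Odometry motion model: translate by d_trans, then rotate by d_rot, with noise."""
    t = d_trans + rng.normal(0, sigma_trans, len(particles))
    r = d_rot + rng.normal(0, sigma_rot, len(particles))
    particles[:, 0] += t * np.cos(particles[:, 2])
    particles[:, 1] += t * np.sin(particles[:, 2])
    particles[:, 2] += r
    return particles

def bhattacharyya(h1, h2):
    return np.sum(np.sqrt(h1 * h2))          # similarity of two normalized histograms

def update(particles, weights, observed_hist, expected_hist_at):
    sims = np.array([bhattacharyya(observed_hist, expected_hist_at(p)) for p in particles])
    weights = weights * (sims + 1e-12)
    return weights / weights.sum()

particles = predict(particles, d_trans=100.0, d_rot=0.1)
observed = np.full(16, 1.0 / 16)             # dummy HSV histogram of the tracked robot blob
weights = update(particles, weights, observed, lambda p: np.full(16, 1.0 / 16))
print("effective sample size:", 1.0 / np.sum(weights ** 2))
```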

Folksonomy-based Recommender Systems with User's Recent Preferences

Social bookmarking is an environment in which the user's interests gradually change over time, so the tag data associated with the current temporal period is usually more important than tag data temporally far from the current period. This implies that in a social tagging system, items newly tagged by the user are more relevant than older items. This study proposes a novel recommender system that considers the user's recent tag preferences. The proposed system includes the following stages: grouping similar users into clusters using an E-M clustering algorithm, finding similar resources based on the user's bookmarks, and recommending the top-N items to the target user. The study examines the system's information retrieval performance using a dataset from del.icio.us, a well-known social bookmarking web site. Experimental results show that the proposed system is more effective than traditional approaches.
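
As an illustration of recency-weighted tag preferences, the sketch below decays each bookmark's tags exponentially with age and ranks candidate resources against the resulting profile; the decay form and matching score are assumptions, and the paper's EM clustering and similarity stages are not reproduced.

```python
# Sketch of recency-weighted tag preferences: each of a user's bookmarks
# contributes its tags with an exponentially decaying weight, and candidate
# resources are ranked by how well their tags match the weighted profile.
from collections import defaultdict
from math import exp

def tag_profile(bookmarks, now, half_life_days=30.0):
    """bookmarks: list of (timestamp_days, [tags]) for one user."""
    profile = defaultdict(float)
    for t, tags in bookmarks:
        w = exp(-(now - t) / half_life_days)          # newer bookmarks weigh more
        for tag in tags:
            profile[tag] += w
    return profile

def rank_resources(profile, resources, top_n=5):
    """resources: dict resource_id -> set of tags."""
    scores = {rid: sum(profile.get(t, 0.0) for t in tags)
              for rid, tags in resources.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

bookmarks = [(0, ["python", "ml"]), (90, ["cooking"]), (118, ["python", "web"])]
resources = {"r1": {"python", "flask"}, "r2": {"cooking"}, "r3": {"ml", "python"}}
print(rank_resources(tag_profile(bookmarks, now=120), resources, top_n=2))
```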

On Preprocessing of Speech Signals

Preprocessing of speech signals is considered a crucial step in the development of a robust and efficient speech or speaker recognition system. In this paper, we present some popular statistical outlier-detection strategies to segregate the silence/unvoiced part of the speech signal from the voiced portion. The proposed methods are based on the 3σ edit rule and the Hampel identifier, and they are compared with the conventional techniques: (i) short-time energy (STE) based methods, and (ii) distribution based methods. The results obtained after applying the proposed strategies to some test voice signals are encouraging.
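
The two outlier-detection rules can be sketched directly on the short-time energy of a signal; the frame length, thresholds, and the synthetic test signal below are illustrative assumptions.

```python
# Sketch of the two outlier-detection rules applied to short-time energy:
# the 3-sigma edit rule (mean/std based) and the Hampel identifier
# (median/MAD based). Frames whose energy is flagged as an outlier relative
# to the silence-dominated signal are treated as voiced here; frame length
# and thresholds are illustrative choices.
import numpy as np

def short_time_energy(x, frame_len=256):
    n_frames = len(x) // frame_len
    frames = x[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sum(frames**2, axis=1)

def three_sigma_voiced(energy):
    return np.abs(energy - energy.mean()) > 3 * energy.std()

def hampel_voiced(energy, t=3.0):
    med = np.median(energy)
    mad = 1.4826 * np.median(np.abs(energy - med))    # robust scale estimate
    return np.abs(energy - med) > t * mad

rng = np.random.default_rng(3)
x = rng.normal(0, 0.01, 16000)                        # mostly "silence"
x[4000:8000] += np.sin(2 * np.pi * 200 * np.arange(4000) / 16000)  # voiced burst
e = short_time_energy(x)
print("frames flagged as voiced (3-sigma):", np.where(three_sigma_voiced(e))[0])
print("frames flagged as voiced (Hampel) :", np.where(hampel_voiced(e))[0])
```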

Improvement in Power Transformer Intelligent Dissolved Gas Analysis Method

Non-destructive evaluation of the condition of in-service power transformers is necessary to avoid catastrophic failures, and Dissolved Gas Analysis (DGA) is one of the important methods. Traditional, statistical, and intelligent DGA approaches have been adopted for accurate classification of incipient fault sources. Unfortunately, there are often not enough faulty patterns for sufficient training of intelligent systems. Bootstrapping is expected to alleviate this shortcoming and to yield algorithms with better classification success rates. In this paper the performance of artificial neural network (ANN), K-Nearest Neighbour, and support vector machine methods using bootstrapped data is detailed, and it is shown that while the success rate of the ANN algorithms improves remarkably, the other methods do not benefit as much from the enlarged data space. For assessment, two databases are employed: IEC TC10 and a dataset collected from data reported in the literature. The high average test success rate demonstrates the remarkable outcome.
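
Bootstrapping the scarce faulty patterns amounts to resampling the training records with replacement, as in the generic sketch below; the feature layout and class labels are placeholders, not the IEC TC10 data.

```python
# Generic sketch of bootstrapping a small set of faulty DGA patterns: each
# bootstrap replicate is a resample (with replacement) of the original
# training records, on which a classifier can then be trained. The feature
# layout (gas concentrations) and fault classes are placeholders.
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((30, 5))                   # 30 faulty patterns, 5 dissolved-gas features
y = rng.integers(0, 3, 30)                # 3 hypothetical fault classes

def bootstrap_replicates(X, y, n_replicates=10):
    replicates = []
    for _ in range(n_replicates):
        idx = rng.integers(0, len(X), len(X))     # sample indices with replacement
        replicates.append((X[idx], y[idx]))
    return replicates

for Xb, yb in bootstrap_replicates(X, y, n_replicates=3):
    print("replicate class counts:", np.bincount(yb, minlength=3))
```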

Performance Analysis of MT Evaluation Measures and Test Suites

Many measures have been proposed for machine translation evaluation (MTE), while little research has been done on the performance of MTE methods themselves. This paper is an effort toward MTE performance analysis. A general framework is proposed for describing the MTE measure and the test suite, covering whether the automatic measure is consistent with human evaluation, whether results from different measures or test suites are consistent, whether the content of the test suite is suitable for performance evaluation, the degree of difficulty of the test suite and its influence on the MTE, and the relationship between the significance of MTE results and the size of the test suite. To clarify the framework, several experimental results are analyzed, relating human evaluation, BLEU evaluation, and typological MTE, and a visualization method is introduced for better presentation of the results. The study aims to aid test suite construction and method selection in MTE practice.
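
For concreteness, the sketch below computes a standard sentence-level BLEU score (clipped n-gram precisions with a brevity penalty); it follows the usual BLEU definition rather than any particular toolkit, and the example sentences are invented.

```python
# Minimal sentence-level BLEU (uniform 4-gram weights, brevity penalty),
# following the standard BLEU definition; the paper's test suites and
# human scores are not reproduced.
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        c_ngrams, r_ngrams = ngrams(cand, n), ngrams(ref, n)
        clipped = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        log_precisions.append(log(max(clipped, 1e-9) / total))   # crude zero smoothing
    bp = min(1.0, exp(1 - len(ref) / max(len(cand), 1)))         # brevity penalty
    return bp * exp(sum(log_precisions) / max_n)

print(sentence_bleu("the cat sat on the mat", "the cat sat on a mat"))
```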

LFSR Counter Implementation in CMOS VLSI

As chip manufacturing technology stands on the threshold of a major evolution that shrinks chips in size while improving performance, an LFSR (Linear Feedback Shift Register) is implemented at the layout level to develop a low-power chip using recent sub-micrometer CMOS layout tools. The LFSR counter can thus be a new trend-setter in cryptography and is also beneficial compared with GRAY and BINARY counters, in addition to a variety of other applications. This paper compares the three architectures in terms of hardware implementation, CMOS layout, and power consumption, using the Microwind CMOS layout tool, and thereby provides a low-power architecture for implementing an LFSR in CMOS VLSI.
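
A behavioural model of the counter is easy to state in software, for example a 4-bit maximal-length Fibonacci LFSR; the taps below are a standard choice, and the paper's CMOS layout and power results obviously cannot be reproduced here.

```python
# Behavioural model of a 4-bit Fibonacci LFSR counter (taps at bits 4 and 3,
# i.e. polynomial x^4 + x^3 + 1, giving a maximal-length sequence of 15
# states). This only illustrates the counting sequence, not the CMOS layout.

def lfsr_sequence(seed=0b0001, width=4, taps=(4, 3)):
    state, seen = seed, []
    while True:
        seen.append(state)
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1        # XOR of tapped bits
        state = ((state << 1) | feedback) & ((1 << width) - 1)
        if state == seed:
            return seen

states = lfsr_sequence()
print(len(states), "states:", [format(s, "04b") for s in states])
```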

Enhanced Shell Sorting Algorithm

Many algorithms are available for sorting unordered elements; the most important of them are Bubble sort, Heap sort, Insertion sort, and Shell sort, each with its own pros and cons. Shell sort, an enhanced version of insertion sort, reduces the number of swaps of the elements being sorted to minimize complexity and time compared with insertion sort; it improves the efficiency of insertion sort by quickly shifting values toward their destinations. The average sort time is O(n^1.25), while the worst-case time is O(n^1.5). The algorithm performs several iterations, and in each iteration it swaps some elements of the array in such a way that in the last iteration, when the value of h is one, the number of swaps is reduced. Donald L. Shell invented a formula to calculate the value of 'h'. This work focuses on identifying an improvement in the conventional Shell sort algorithm: the 'Enhanced Shell Sort algorithm' improves the way the value of 'h' is calculated. It has been observed that by applying this algorithm, the number of swaps can be reduced by up to 60 percent compared with the existing algorithm, and in some other cases the enhancement was found to be faster than the existing algorithms.
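
As a baseline for the gap ('h') calculation the enhancement targets, the sketch below is conventional Shell sort with Shell's original halving gap sequence and a swap counter; the enhanced formula for 'h' proposed in the paper is not reproduced.

```python
# Conventional Shell sort with Shell's original gap sequence h = n/2, n/4, ..., 1,
# with a counter for the element shifts ("swaps") that the enhancement aims to
# reduce. The paper's enhanced formula for h is not reproduced here.

def shell_sort(a):
    n = len(a)
    h = n // 2
    swaps = 0
    while h > 0:
        for i in range(h, n):                 # gapped insertion sort for this h
            temp, j = a[i], i
            while j >= h and a[j - h] > temp:
                a[j] = a[j - h]
                j -= h
                swaps += 1
            a[j] = temp
        h //= 2
    return a, swaps

data = [35, 33, 42, 10, 14, 19, 27, 44, 26, 31]
print(shell_sort(data))
```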

Powerful Tool to Expand Business Intelligence: Text Mining

With the extensive inclusion of documents, especially text, in business systems, data mining does not cover the full scope of Business Intelligence (BI). Data mining cannot deliver its impact when it comes to extracting useful details from large collections of unstructured and semi-structured natural-language material, so the most pressing issue is how to draw potential business intelligence from text. In order to gain competitive advantages for the business, it is necessary to develop a powerful new tool, text mining, to expand the scope of business intelligence. In this paper, we work out the strong points of text mining in extracting business intelligence from the huge amount of textual information sources within business systems, and we apply text mining to each stage of Business Intelligence systems to show that text mining is a powerful tool for expanding the scope of BI. After reviewing basic definitions and some related technologies, we discuss their relationship to, and benefits for, text mining. Some examples and applications of text mining are also given. The underlying motivation is to develop a new approach to effective and efficient textual information analysis, and thereby to expand the scope of Business Intelligence with the powerful tool of text mining.

Determining the Minimum Threshold for the Functional Relatedness of Inner-Outer Class

An inner class is a specialized class defined within a regular outer class. It is used in some programming languages, such as Java, to carry out tasks related to its outer class. The functional relatedness between an inner class and its outer class is always the main concern when defining an inner class. However, excessive use of inner classes can sabotage class cohesiveness, and it also makes software maintenance and comprehension more difficult. Our research aims at determining the minimum threshold for the functional relatedness of inner and outer classes. Such a minimum threshold serves as a guideline for removing or relocating excessive inner classes. Our research provides a feasible way for software developers to define inner classes that are functionally related to the outer class.