Enhancing Capabilities of Texture Extraction for Color Image Retrieval

Content-Based Image Retrieval has been a major area of research in recent years. Efficient image retrieval with high precision requires an approach that combines both the color and texture features of an image. In this paper we propose a method for enhancing the capabilities of texture-based feature extraction and further demonstrate the use of these enhanced texture features in texture-based color image retrieval.

Volterra Filter for Color Image Segmentation

Color image segmentation plays an important role in computer vision and image processing. In this paper, the features of the Volterra filter are utilized for color image segmentation. The discrete Volterra filter exhibits both linear and nonlinear characteristics. The linear part smooths the image features in uniform gray zones and is used to obtain a gross representation of the objects of interest. The nonlinear term compensates for the blurring caused by the linear term and preserves the edges that are mainly used to distinguish the various objects. Truncated quadratic Volterra filters are used mainly for edge preservation together with Gaussian noise cancellation. In our approach, segmentation is based on the K-means clustering algorithm in HSI space. Both the hue and intensity components are fully utilized. For hue clustering, the special cyclic property of the hue component is taken into consideration. The experimental results show that the proposed technique segments the color image while preserving significant features and removing noise effects.
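
As a minimal illustration of the cyclic-hue treatment (not the authors' implementation), a K-means-on-hue sketch in Python with a circular distance and a circular mean might look like:

    import numpy as np

    def hue_distance(h1, h2):
        # circular distance between hue angles in degrees (0-360)
        d = np.abs(h1 - h2) % 360.0
        return np.minimum(d, 360.0 - d)

    def circular_mean(hues):
        # mean of hue angles, respecting the wrap-around at 360 degrees
        rad = np.deg2rad(hues)
        return np.rad2deg(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())) % 360.0

    def kmeans_hue(hues, k=3, iters=20, seed=0):
        hues = np.asarray(hues, dtype=float)
        rng = np.random.default_rng(seed)
        centers = rng.choice(hues, size=k, replace=False)
        for _ in range(iters):
            # assign each pixel's hue to the nearest center on the hue circle
            labels = np.argmin(hue_distance(hues[:, None], centers[None, :]), axis=1)
            centers = np.array([circular_mean(hues[labels == j])
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        return labels, centers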

On Pattern-Based Programming towards the Discovery of Frequent Patterns

The problem of frequent pattern discovery is defined as the process of searching for patterns, such as sets of features or items, that appear frequently in data. Finding such frequent patterns has become an important data mining task because it reveals associations, correlations, and many other interesting relationships hidden in a database. Most of the proposed frequent pattern mining algorithms have been implemented with imperative programming languages. This paradigm is inefficient when the set of patterns is large and the frequent patterns are long. We suggest applying a high-level declarative style of programming to the problem of frequent pattern discovery. We consider two languages: Haskell and Prolog. Our intuition is that the problem of finding frequent patterns should be efficiently and concisely implementable in a declarative paradigm, since pattern matching is a fundamental feature supported by most functional languages and by Prolog. Our frequent pattern mining implementations in Haskell and Prolog confirm our hypothesis about the conciseness of the program. Comparative performance studies of declarative versus imperative programming in terms of lines of code, speed, and memory usage are reported in the paper.
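
For reference, the underlying task can be sketched imperatively in a few lines of Python (an Apriori-style level-wise search; this only illustrates the problem being solved, not the paper's Haskell or Prolog code, and min_support is a hypothetical parameter):

    from itertools import combinations
    from collections import Counter

    def frequent_itemsets(transactions, min_support):
        # level-wise search for itemsets whose support meets min_support
        freq, k = {}, 1
        candidates = {frozenset([i]) for t in transactions for i in t}
        while candidates:
            counts = Counter()
            for t in transactions:
                t = frozenset(t)
                for c in candidates:
                    if c <= t:
                        counts[c] += 1
            level = {c: n for c, n in counts.items() if n >= min_support}
            freq.update(level)
            # join step: build (k+1)-itemset candidates from frequent k-itemsets
            prev = list(level)
            candidates = {a | b for a, b in combinations(prev, 2) if len(a | b) == k + 1}
            k += 1
        return freq

    # example: frequent_itemsets([{'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}], min_support=2)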

Advanced Robust PDC Fuzzy Control of Nonlinear Systems

This paper introduces a new method called ARPDC (Advanced Robust Parallel Distributed Compensation) for the automatic control of nonlinear systems. The method improves the quality of robust control by interpolating between a robust and an optimal controller. The weight of each controller is determined by an original criterion function for model validity and disturbance assessment. The ARPDC method is based on nonlinear Takagi-Sugeno (T-S) fuzzy systems and the Parallel Distributed Compensation (PDC) control scheme. Relaxed stability conditions for ARPDC control of the nominal system are derived. The advantages of the presented method are demonstrated on the inverted pendulum benchmark problem. A comparison of three different controllers (robust, optimal, and ARPDC) shows that ARPDC control is nearly optimal, with robustness close to that of the robust controller. The results indicate that the ARPDC algorithm can be a good alternative not only to robust control but, in some cases, also to adaptive control of nonlinear systems.
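
For orientation, the standard T-S fuzzy model and PDC control law on which the scheme builds can be written as (textbook forms, not the paper's specific ARPDC interpolation):

\[ \dot{x}(t) = \sum_{i=1}^{r} h_i(z(t))\,\big(A_i x(t) + B_i u(t)\big), \qquad u(t) = -\sum_{i=1}^{r} h_i(z(t))\,F_i x(t), \]

where \(h_i(z) \ge 0\) and \(\sum_i h_i(z) = 1\) are the normalized membership functions of the fuzzy rules and \(F_i\) are the local state-feedback gains.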

Automating the Testing of Object Behaviour: A Statechart-Driven Approach

The evolution of current modeling specifications gives rise to the problem of generating automated test cases from a variety of application tools. Past endeavours in the behavioural testing of UML statecharts have not systematically leveraged the potential of existing graph theory for the testing of objects. There is therefore a need for a simple, tool-independent, and effective method for automatic test generation. An architecture, codenamed ACUTE-J (Automated stateChart Unit Testing Engine for Java), for automating the unit test generation process is presented. A sequential approach for converting UML statechart diagrams to JUnit test classes is described, with the application of existing graph theory. Research byproducts such as a universal XML Schema and an API for statechart-driven testing are also proposed. Results from a Java implementation of ACUTE-J are briefly discussed. The Chinese Postman algorithm is used as an illustration for a run-through of the ACUTE-J architecture.
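
As a rough sketch of how graph traversal yields test sequences from a statechart (hypothetical transition triples, and a simple per-transition breadth-first search rather than the Chinese Postman tour mentioned above):

    from collections import defaultdict, deque

    def transition_tests(transitions, initial):
        # for each transition, build an event sequence that drives the
        # statechart from the initial state to the transition's source and
        # then fires it; a Chinese Postman tour would merge these walks
        graph = defaultdict(list)
        for src, event, dst in transitions:
            graph[src].append((event, dst))

        def events_to(target):
            # BFS over states, remembering the event path taken so far
            seen, queue = {initial}, deque([(initial, [])])
            while queue:
                state, path = queue.popleft()
                if state == target:
                    return path
                for event, dst in graph[state]:
                    if dst not in seen:
                        seen.add(dst)
                        queue.append((dst, path + [event]))
            return None

        tests = []
        for src, event, dst in transitions:
            prefix = events_to(src)
            if prefix is not None:
                tests.append(prefix + [event])
        return tests

    # each returned event sequence could be emitted as one JUnit test method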

A Distinguishing Attack on the COSvd Cipher

The COSvd cipher was proposed by Filiol et al. (2004). It is a strengthened version of the COS stream cipher family, denoted COSvd, which has been adopted for at least one commercial standard. We propose a distinguishing attack on this version and prove that its output is distinguishable from a random stream. The COSvd cipher uses one 10×8 S-box in the final part of the cipher. We focus on this S-box and exploit its weakness for the distinguishing attack. In addition, we found a leak in the HNLL: the sub-S-boxes are not selected uniformly. We use this property for an improved distinguishing attack.
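
A generic frequency-bias distinguisher, shown here only to illustrate the idea of telling keystream from random (this is not the specific attack on the COSvd S-box):

    import math
    from collections import Counter

    def chi_square_uniform(bytestream):
        # chi-square statistic of byte frequencies against the uniform
        # distribution; a persistently large value over many samples
        # indicates the stream is distinguishable from random
        n = len(bytestream)
        expected = n / 256.0
        counts = Counter(bytestream)
        return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))

    # compare chi_square_uniform(cipher_output) against the chi-square
    # threshold for 255 degrees of freedom to decide "biased" vs "random"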

Multi-Agent Systems for Intelligent Clustering

Intelligent systems are required in order to quickly and accurately analyze the enormous quantities of data in the Internet environment. In intelligent systems, information extraction processes can be divided into supervised learning and unsupervised learning. This paper investigates intelligent clustering by unsupervised learning. Intelligent clustering is a clustering system that determines the clustering model for data analysis and evaluates the results by itself. Such a system can build a clustering model more rapidly, objectively, and accurately than a human analyst. The methodology for the automatic clustering intelligent system is a multi-agent system comprising a clustering agent and a cluster performance evaluation agent. The agents exchange information about clusters, and the system determines the optimal number of clusters from this information. Experiments using data sets from the UCI Machine Learning Repository are performed in order to demonstrate the validity of the system.
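
One common way an evaluation component can pick the number of clusters is by maximizing an internal validity index; the following Python sketch uses scikit-learn's K-means and silhouette score as an illustration (the paper's evaluation agent may use a different criterion):

    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def pick_k(data, k_range=range(2, 11)):
        # return the cluster count whose K-means partition maximizes the
        # silhouette score over the candidate range
        best_k, best_score = None, -1.0
        for k in k_range:
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
            score = silhouette_score(data, labels)
            if score > best_score:
                best_k, best_score = k, score
        return best_k, best_score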

Optimized Data Fusion in an Intelligent Integrated GPS/INS System Using Genetic Algorithm

Most integrated inertial navigation system (INS) and global positioning system (GPS) implementations have relied on the Kalman filtering technique, with its drawbacks related to the need for a predefined INS error model and the observability of at least four satellites. Recently, a method using a hybrid adaptive network-based fuzzy inference system (ANFIS) has been proposed which is trained while the GPS signal is available to map the error between the GPS and the INS; it is then used to predict the error of the INS position components during GPS signal blockage. This paper introduces a genetic optimization algorithm that updates the ANFIS parameters with the INS/GPS error function used as the objective function to be minimized. The results demonstrate the advantages of the genetically optimized ANFIS for INS/GPS integration in comparison with the conventional ANFIS, especially in the case of satellite outages. Coping with this problem plays an important role in the assessment of the fusion approach for land navigation.
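
A minimal real-coded genetic optimization loop of the kind used to tune parameters against an error objective is sketched below; error_fn, the population size, and the operators are illustrative assumptions, not the paper's ANFIS-specific implementation:

    import numpy as np

    def genetic_minimize(error_fn, dim, pop_size=40, generations=100,
                         mutation_sigma=0.1, seed=0):
        # real-coded GA sketch: random parent selection, blend crossover,
        # Gaussian mutation, and elitism; error_fn maps a parameter vector
        # to the navigation error to be minimized
        rng = np.random.default_rng(seed)
        pop = rng.normal(size=(pop_size, dim))
        for _ in range(generations):
            fitness = np.array([error_fn(ind) for ind in pop])
            new_pop = [pop[fitness.argmin()].copy()]            # keep the best (elitism)
            while len(new_pop) < pop_size:
                a, b = pop[rng.integers(pop_size, size=2)]      # two random parents
                alpha = rng.random()
                child = alpha * a + (1 - alpha) * b             # blend crossover
                child += rng.normal(scale=mutation_sigma, size=dim)  # mutation
                new_pop.append(child)
            pop = np.array(new_pop)
        fitness = np.array([error_fn(ind) for ind in pop])
        return pop[fitness.argmin()]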

Secure Secret Recovery by using Weighted Personal Entropy

Authentication plays a vital role in many secure systems. Most of these systems require the user to log in with his or her secret password or passphrase before entering the system. This ensures that all valuable information is kept confidential while also guaranteeing its integrity and availability. However, to achieve this goal, users are required to memorize high-entropy passwords or passphrases. Unfortunately, users often find it difficult to remember such meaningless strings of data. This paper presents a new scheme which assigns a weight to each personal question posed to the user when revealing the encrypted secret or password. The scheme concentrates on offering fault tolerance to users by allowing them to forget the answers to a subset of questions and still recover the secret and achieve successful authentication. A comparison of the security levels of the weighted and unweighted secret recovery schemes is also discussed. The paper concludes with a few areas that require further investigation in this research.
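
The weighting idea can be sketched as follows; the question weights, the threshold, and the salted-hash comparison are illustrative assumptions, and the actual secret reconstruction of the proposed scheme is not shown:

    import hashlib

    def weighted_recovery_allowed(answers, stored, weights, threshold):
        # each question carries a weight; recovery proceeds when the total
        # weight of correctly answered questions reaches the threshold.
        # 'stored' maps question id -> (salt, sha256 hex digest of answer)
        score = 0.0
        for qid, answer in answers.items():
            salt, digest = stored[qid]
            h = hashlib.sha256(salt + answer.strip().lower().encode()).hexdigest()
            if h == digest:
                score += weights[qid]
        return score >= threshold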

Categorical Clustering By Converting Associated Information

The lack of an inherent "natural" dissimilarity measure between objects in a categorical dataset presents special difficulties for clustering analysis. However, each categorical attribute of a given dataset provides a natural probability distribution and, therefore, information in the sense of Shannon. In this paper, we propose a novel method which heuristically converts categorical attributes to numerical values by exploiting this associated information. We conduct an experimental study with real-life categorical datasets. The experiments demonstrate the effectiveness of our approach.
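
One plausible reading of such a conversion is to replace each category by its Shannon self-information estimated from the attribute's empirical frequencies; the sketch below illustrates this idea and is not necessarily the paper's exact mapping:

    import math
    from collections import Counter

    def information_encode(column):
        # replace each categorical value by -log2 p(value), where p is the
        # empirical frequency of the value within the attribute column
        n = len(column)
        freq = Counter(column)
        code = {v: -math.log2(c / n) for v, c in freq.items()}
        return [code[v] for v in column], code

    # example: information_encode(['red', 'red', 'blue', 'green'])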

Objective Performance of Compressed Image Quality Assessments

Measurement of the quality of image compression is important for image processing applications. In this paper, we propose an objective image quality assessment for grayscale compressed images which correlates well with the subjective quality measurement (MOS) and takes the least time to compute. The new objective image quality measurement is developed from a few fundamental objective measurements to evaluate the quality of images compressed with JPEG and JPEG2000. The reliability between each fundamental objective measurement and the subjective measurement (MOS) is determined. From the experimental results, we found that the Maximum Difference measurement (MD) and a newly proposed measurement, the Structural Content Laplacian Mean Square Error (SCLMSE), are the measurements best suited to evaluating the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, giving a compressed image quality rating from 1 to 5 (unacceptable to excellent quality).
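
For reference, the Maximum Difference measure and the classical Structural Content and Laplacian MSE components can be computed as below; the combined SCLMSE measure is the paper's own contribution and its exact formula is not reproduced here:

    import numpy as np

    def max_difference(orig, dist):
        # MD: largest absolute pixel difference between the two images
        return np.max(np.abs(orig.astype(float) - dist.astype(float)))

    def laplacian(img):
        # 4-neighbour discrete Laplacian on the interior of the image
        x = img.astype(float)
        return (x[:-2, 1:-1] + x[2:, 1:-1] + x[1:-1, :-2] + x[1:-1, 2:]
                - 4 * x[1:-1, 1:-1])

    def structural_content(orig, dist):
        # SC: ratio of the original image's energy to the distorted image's energy
        o, d = orig.astype(float), dist.astype(float)
        return np.sum(o ** 2) / np.sum(d ** 2)

    def laplacian_mse(orig, dist):
        # LMSE: normalized squared error between the images' Laplacians
        lo, ld = laplacian(orig), laplacian(dist)
        return np.sum((lo - ld) ** 2) / np.sum(lo ** 2)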

Genetic Programming Approach to Hierarchical Production Rule Discovery

Automated discovery of hierarchical structures in large data sets has been an active research area in the recent past. This paper focuses on the issue of mining generalized rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses flat rules as the initial individuals of GP and discovers a hierarchical structure. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy. Experimental results are presented to demonstrate the performance of the proposed algorithm.

New Approach for Manipulation of Stratified Programs

Negation is useful in the majority of real-world applications. However, its introduction leads to semantic and canonical problems. In this paper we propose an approach based on stratification to deal with negation problems. The approach is based on an extension of predicate nets and is characterized by two main contributions. The first concerns the management of the whole class of stratified programs. The second is related to the optimization of the usual operations on stratified programs (maximal stratification, incremental updates, ...).

Image Modeling Using Gibbs-Markov Random Field and Support Vector Machines Algorithm

This paper introduces a novel approach to estimating the clique potentials of Gibbs-Markov random field (GMRF) models using the Support Vector Machines (SVM) algorithm and Mean Field (MF) theory. The proposed approach is based on modeling the potential function associated with each clique shape of the GMRF model as a Gaussian-shaped kernel. In turn, the energy function of the GMRF takes the form of a weighted sum of Gaussian kernels. This formulation of the GMRF model motivates the use of the SVM, with Mean Field theory applied to its learning, for estimating the energy function. The approach has been tested on synthetic texture images and is shown to provide satisfactory results in retrieving the synthesizing parameters.
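
In symbols, the Gibbs form and the kernel-sum energy described above can be written as (the notation, in particular the kernel centers and the shared width, is ours rather than the paper's):

\[ P(\mathbf{x}) = \frac{1}{Z}\exp\{-E(\mathbf{x})\}, \qquad E(\mathbf{x}) = \sum_{c \in \mathcal{C}} V_c(\mathbf{x}_c) \approx \sum_{c \in \mathcal{C}} w_c \exp\!\left(-\frac{\|\mathbf{x}_c - \boldsymbol{\mu}_c\|^2}{2\sigma^2}\right), \]

where the cliques \(c\) have potentials \(V_c\) modeled as Gaussian-shaped kernels and the weights \(w_c\) are estimated by the SVM under the Mean Field approximation.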

NEAR: Visualizing Information Relations in Multimedia Repository A•VI•RE

This paper describes the NEAR (Navigating Exhibitions, Annotations and Resources) panel, a novel interactive visualization technique designed to help people navigate and interpret groups of resources, exhibitions and annotations by revealing hidden relations such as similarities and references. NEAR is implemented on A•VI•RE, an extended online information repository. A•VI•RE supports a semi-structured collection of exhibitions containing various resources and annotations. Users are encouraged to contribute, share, annotate and interpret resources in the system by building their own exhibitions and annotations. However, it is hard to navigate smoothly and efficiently in A•VI•RE because of its high capacity and complexity. We present a visual panel that implements new navigation and communication approaches that support discovery of implied relations. By quickly scanning and interacting with NEAR, users can see not only implied relations but also potential connections among different data elements. NEAR was tested by several users in the A•VI•RE system and shown to be a supportive navigation tool. In the paper, we further analyze the design, report the evaluation and consider its usage in other applications.

Approximate Frequent Pattern Discovery Over Data Stream

Frequent pattern discovery over a data stream is a hard problem because the continuously generated nature of a stream does not allow revisiting each data element. Furthermore, the pattern discovery process must be fast enough to produce timely results. Based on these requirements, we propose an approximate approach to tackle the problem of discovering frequent patterns over a continuous stream. Our approximation algorithm is intended to be applied to process the stream prior to the pattern discovery process. The results of approximate frequent pattern discovery are reported in the paper.
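
A classic example of such an approximation is Lossy Counting (Manku and Motwani), sketched below for single items; it bounds the frequency error by epsilon and is given only as an illustration, not as the algorithm proposed in the paper:

    import math

    class LossyCounter:
        # approximate item frequencies over a stream with error bound epsilon
        def __init__(self, epsilon):
            self.epsilon = epsilon
            self.width = math.ceil(1.0 / epsilon)   # bucket width
            self.n = 0
            self.entries = {}                       # item -> (count, max_error)

        def add(self, item):
            self.n += 1
            bucket = math.ceil(self.n / self.width)
            count, err = self.entries.get(item, (0, bucket - 1))
            self.entries[item] = (count + 1, err)
            if self.n % self.width == 0:            # prune at bucket boundary
                self.entries = {i: (c, e) for i, (c, e) in self.entries.items()
                                if c + e > bucket}

        def frequent(self, support):
            # items whose true frequency may be at least support * n
            return [i for i, (c, e) in self.entries.items()
                    if c >= (support - self.epsilon) * self.n]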

Neural-Symbolic Machine-Learning for Knowledge Discovery and Adaptive Information Retrieval

In this paper, a model for an information retrieval system is proposed which takes into account that knowledge about documents and the information needs of users are dynamic. Two methods are combined, one qualitative or symbolic and the other quantitative or numeric, which are deemed suitable for many clustering contexts, data analysis, concept exploration, and knowledge discovery. These two methods may be classified as inductive learning techniques. In this model, they are introduced to build "long term" knowledge about past queries and concepts in a collection of documents. This "long term" knowledge can guide and assist the user in formulating an initial query and can be exploited in the process of retrieving relevant information. The different kinds of knowledge are organized into different points of view. This may be considered an enrichment of the exploration level which is coherent with the concept of document/query structure.

Moment Invariants in Image Analysis

This paper presents a survey of object recognition and classification methods based on image moments. We review various types of moments (geometric moments, complex moments) and moment-based invariants with respect to various image degradations and distortions (rotation, scaling, affine transformation, image blurring, etc.) which can be used as shape descriptors for classification. We explain a general theory of how to construct these invariants and show a few of them in explicit form. We also review efficient numerical algorithms that can be used for moment computation and demonstrate practical examples of using moment invariants in real applications.
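
For concreteness, the basic quantities are the geometric moments, central moments, and normalized central moments, from which the Hu invariants are built:

\[ m_{pq} = \sum_x \sum_y x^p y^q f(x,y), \qquad \mu_{pq} = \sum_x \sum_y (x - \bar{x})^p (y - \bar{y})^q f(x,y), \qquad \eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{(p+q)/2 + 1}}, \]

with \(\bar{x} = m_{10}/m_{00}\), \(\bar{y} = m_{01}/m_{00}\), and, for example, the first two rotation invariants \(\phi_1 = \eta_{20} + \eta_{02}\) and \(\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2\).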

Learning Classifier Systems Approach for Automated Discovery of Crisp and Fuzzy Hierarchical Production Rules

This research presents a system for the post-processing of data that takes mined flat rules as input and discovers crisp as well as fuzzy hierarchical structures using the Learning Classifier System approach. A Learning Classifier System (LCS) is a machine learning technique that combines evolutionary computing, reinforcement learning, supervised or unsupervised learning, and heuristics to produce adaptive systems. An LCS learns by interacting with an environment from which it receives feedback in the form of a numerical reward; learning is achieved by trying to maximize the amount of reward received. A crisp description of a concept usually cannot represent human knowledge completely and practically. In the proposed Learning Classifier System, the initial population is constructed as a random collection of HPR-trees (related production rules) and crisp/fuzzy hierarchies are evolved. A fuzzy subsumption relation is suggested for the proposed system and, based on the Subsumption Matrix (SM), a suitable fitness function is proposed. Suitable genetic operators are proposed for the chosen chromosome representation. For implementing reinforcement, a suitable reward and punishment scheme is also proposed. Experimental results are presented to demonstrate the performance of the proposed system.

Pronominal Anaphora Processing

Discourse pronominal anaphora resolution must be part of any efficient information processing system, since the reference of a pronoun depends on an antecedent located in the discourse. Contrary to knowledge-poor approaches, this paper shows that syntax-semantic relations are basic to pronominal anaphora resolution. The identification of quantified expressions to which pronouns can be anaphorically related provides further evidence that pronominal anaphora is based on domains of interpretation where asymmetric agreement holds.