Combining Similarity and Dissimilarity Measurements for the Development of QSAR Models Applied to the Prediction of Antiobesity Activity of Drugs

In this paper we study different similarity-based approaches for the development of QSAR models devoted to predicting the activity of antiobesity drugs. Classical similarity approaches are compared with dissimilarity models based on the Euclidean distances between the nonisomorphic fragments extracted during the matching process. Combining the classical similarity and dissimilarity approaches into a new similarity measure, the Approximate Similarity, was also studied and yielded better results. The application of the proposed method to the development of quantitative structure-activity relationships (QSAR) has provided reliable tools for predicting the inhibitory activity of drugs. Acceptable results were obtained for the models presented here.
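
As a hedged illustration of how such a combined measure might be computed (the function name, descriptor vectors, and the weight alpha below are assumptions for the sketch, not the paper's actual formulation), consider blending a classical similarity score with a penalty derived from the Euclidean distance between the descriptors of the unmatched fragments:

```python
import numpy as np

def approximate_similarity(sim, desc_a, desc_b, alpha=0.5):
    """Blend a classical similarity score with a dissimilarity penalty.

    sim     : classical similarity between two molecules (e.g., Tanimoto), in [0, 1]
    desc_a,
    desc_b  : descriptor vectors of the nonisomorphic (unmatched) fragments
    alpha   : hypothetical weight balancing similarity against dissimilarity
    """
    # Euclidean distance between the descriptors of the unmatched fragments
    dist = np.linalg.norm(np.asarray(desc_a) - np.asarray(desc_b))
    # Map the unbounded distance to a [0, 1] dissimilarity penalty
    penalty = dist / (1.0 + dist)
    return alpha * sim + (1.0 - alpha) * (1.0 - penalty)

# Example: high classical similarity, but the unmatched fragments differ strongly
print(approximate_similarity(0.9, [1.2, 0.3], [4.0, 2.1]))
```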

Ant Colony Optimization for Feature Subset Selection

Ant Colony Optimization (ACO) is a metaheuristic inspired by the behavior of real ants in their search for the shortest paths to food sources. It has recently attracted considerable attention and has been applied successfully to a number of different optimization problems. Given the importance of the feature selection problem and the potential of ACO, this paper presents a novel method that uses the ACO algorithm to implement a feature subset search procedure. Initial results obtained on the classification of speech segments are very promising.
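
A minimal sketch of ACO-driven feature subset selection, under simplifying assumptions (synthetic data, a k-NN wrapper as the fitness function, pheromone on individual features; all parameter values are illustrative, not the paper's):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)

n_feat, n_ants, n_iter, rho = X.shape[1], 10, 20, 0.1
tau = np.ones(n_feat)                      # pheromone level on each feature

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier()
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

best_mask, best_fit = None, -1.0
for _ in range(n_iter):
    for _ in range(n_ants):
        # each ant includes a feature with probability proportional to its pheromone
        p = tau / tau.sum()
        mask = rng.random(n_feat) < p * n_feat * 0.3   # ~30% expected subset size
        f = fitness(mask)
        if f > best_fit:
            best_mask, best_fit = mask.copy(), f
    tau = (1 - rho) * tau                  # pheromone evaporation
    tau[best_mask] += best_fit             # reinforce the best subset found so far

print(best_fit, np.flatnonzero(best_mask))
```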

Real-Time Specific Weed Recognition System Using Histogram Analysis

Information on weed distribution within a field is necessary to implement spatially variable herbicide application. Since hand labor is costly, an automated weed control system would be feasible. This paper deals with the development of an algorithm for a real-time specific weed recognition system based on histogram analysis of images, which is used for weed classification. The algorithm is specifically developed to classify images into broad and narrow classes for real-time selective herbicide application. The developed system has been tested on weeds in the laboratory, and the tests show that the system is very effective in weed identification. Furthermore, the results show very reliable performance on images of weeds taken under varying field conditions. The analysis of the results shows over 95 percent classification accuracy on 140 sample images (broad and narrow), with 70 samples from each category of weeds.
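
A toy sketch of histogram-based broad/narrow discrimination; the vegetation index, the per-row coverage histogram, and both thresholds are illustrative assumptions rather than the paper's actual feature set:

```python
import numpy as np

def classify_weed(image, green_thresh=0.1, broad_thresh=0.15):
    """Toy histogram-based broad/narrow classifier (illustrative thresholds).

    image : H x W x 3 float RGB array with values in [0, 1]
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    excess_green = 2 * g - r - b          # a common vegetation index
    veg = excess_green > green_thresh     # binary vegetation mask
    # Histogram of per-row vegetation coverage: broad leaves yield rows with
    # dense coverage, narrow leaves yield many sparsely covered rows.
    coverage = veg.mean(axis=1)
    hist, _ = np.histogram(coverage, bins=10, range=(0, 1))
    dense_fraction = hist[5:].sum() / max(hist.sum(), 1)
    return "broad" if dense_fraction > broad_thresh else "narrow"

# Example with a synthetic image containing one wide green blob
img = np.zeros((100, 100, 3)); img[40:60, 20:80, 1] = 1.0
print(classify_weed(img))   # -> "broad"
```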

Ionanofluids as Novel Fluids for Advanced Heat Transfer Applications

Ionanofluids are a new and innovative class of heat transfer fluids which exhibit fascinating thermophysical properties compared to their base ionic liquids. This paper reports measurements of the thermal conductivity and specific heat capacity of ionanofluids as a function of temperature and nanotube concentration. Simulation results using ionanofluids as coolants in a heat exchanger are also used to assess their feasibility and performance in heat transfer devices. Results on the thermal conductivity and heat capacity of ionanofluids, together with the estimation of heat transfer areas for ionanofluids and ionic liquids in a model shell-and-tube heat exchanger, reveal that ionanofluids possess superior thermal conductivity and heat capacity and require considerably smaller heat transfer areas than their base ionic liquids. This novel class of fluids shows great potential for advanced heat transfer applications.
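
The heat-transfer-area comparison follows from the standard sizing relation Q = U·A·ΔT_lm: a fluid with better thermophysical properties raises the overall heat transfer coefficient U and therefore lowers the required area A. A sketch with purely illustrative numbers (not the paper's measured values):

```python
from math import log

def required_area(Q, U, dT1, dT2):
    """Heat exchanger sizing from Q = U * A * LMTD, with dT1 and dT2 the
    temperature differences at the two ends of a counterflow exchanger."""
    lmtd = (dT1 - dT2) / log(dT1 / dT2)
    return Q / (U * lmtd)

# Purely illustrative numbers: same duty and terminal temperature differences,
# but a higher overall coefficient U assumed for the ionanofluid
Q = 50e3   # heat duty, W
print(required_area(Q, U=250.0, dT1=40.0, dT2=15.0))   # base ionic liquid -> larger area
print(required_area(Q, U=400.0, dT1=40.0, dT2=15.0))   # ionanofluid -> smaller area
```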

Software Architectural Design Ontology

Software architecture plays a key role in software development, but the absence of a formal description of software architecture impedes development in several ways. To cope with these difficulties, an ontology has been used as an artifact. This paper proposes an ontology for software architectural design based on the IEEE model for architecture description and the Kruchten 4+1 model for viewpoint classification. For the categorization of styles and views, ISO/IEC 42010 has been used. A corpus method has been used to evaluate the ontology. The main aim of the proposed ontology is to classify and locate software architectural design information.

Bayesian Networks for Earthquake Magnitude Classification in an Early Warning System

During recent decades, researchers worldwide have dedicated efforts to developing machine-based seismic Early Warning systems, aiming at reducing human losses and economic damage. The processing time of seismic waveforms must be reduced in order to increase the time interval available for the activation of safety measures. This paper proposes a Data Mining model able to estimate, correctly and quickly, the dangerousness of the ongoing seismic event. Several thousand seismic recordings of Japanese and Italian earthquakes were analyzed, and a model was obtained by means of a Bayesian Network (BN); the model was tested on only the earliest recordings of seismic events in order to reduce the decision time, and the test results were very satisfactory. The model was integrated within an Early Warning System prototype able to collect and process data from a seismic sensor network, estimate the dangerousness of the ongoing earthquake, and promptly decide whether to activate the warning.
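
A minimal sketch of a Bayesian Network for this kind of dangerousness estimate, using pgmpy with two hypothetical early-waveform features and made-up conditional probabilities (the paper's actual network structure and parameters are not reproduced here):

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical network: two early P-wave features influence "Dangerous"
model = BayesianNetwork([("PeakDisp", "Dangerous"), ("PredPeriod", "Dangerous")])
cpd_pd = TabularCPD("PeakDisp", 2, [[0.7], [0.3]])       # states: low / high
cpd_pp = TabularCPD("PredPeriod", 2, [[0.6], [0.4]])     # states: short / long
cpd_d = TabularCPD("Dangerous", 2,
                   [[0.95, 0.6, 0.5, 0.1],               # P(not dangerous | parents)
                    [0.05, 0.4, 0.5, 0.9]],              # P(dangerous | parents)
                   evidence=["PeakDisp", "PredPeriod"], evidence_card=[2, 2])
model.add_cpds(cpd_pd, cpd_pp, cpd_d)
assert model.check_model()

# Query the network as soon as the first features of a running event arrive
infer = VariableElimination(model)
print(infer.query(["Dangerous"], evidence={"PeakDisp": 1, "PredPeriod": 1}))
```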

Towards an Effective Reputation Assessment Process in Peer-to-Peer Systems

The need for reputation assessment is particularly strong in peer-to-peer (P2P) systems because the peers' site autonomy is amplified by the inherent technological decentralization of the environment. However, this decentralization makes designing a P2P reputation assessment substantially harder than in centralized settings. Existing reputation systems tackle the reputation assessment process in an ad hoc manner; there is no systematic and coherent way to derive measures and analyze current reputation systems. In this paper, we propose a reputation assessment process and use it to classify the existing reputation systems. Simulation experiments are conducted that focus on the different methods of selecting recommendation sources and retrieving the recommendations. These two phases can contribute significantly to overall performance because of their communication cost and coverage.
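
A sketch of the two phases highlighted above, selecting recommendation sources and then combining their recommendations; the peer names, ratings, and the credibility-weighted average are all illustrative assumptions:

```python
def assess_reputation(recommendations, credibility, top_k=3):
    """Aggregate recommendations about a target peer, weighted by the
    credibility of each recommender (names and weights illustrative).

    recommendations : dict recommender -> rating of the target, in [0, 1]
    credibility     : dict recommender -> trust in that recommender, in [0, 1]
    """
    # Phase 1: select recommendation sources (here: the top-k most credible peers)
    sources = sorted(recommendations, key=lambda p: credibility.get(p, 0.0),
                     reverse=True)[:top_k]
    # Phase 2: retrieve and combine their recommendations
    total = sum(credibility[p] for p in sources)
    if total == 0:
        return 0.5                      # no usable information: neutral prior
    return sum(credibility[p] * recommendations[p] for p in sources) / total

recs = {"peerA": 0.9, "peerB": 0.2, "peerC": 0.8, "peerD": 0.4}
cred = {"peerA": 0.9, "peerB": 0.1, "peerC": 0.7, "peerD": 0.6}
print(assess_reputation(recs, cred))
```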

Discrimination of Seismic Signals Using Artificial Neural Networks

The automatic discrimination of seismic signals is an important practical goal for earth-science observatories because of the large amount of information they receive continuously. An essential discrimination task is to allocate the incoming signal to a group associated with the kind of physical phenomena producing it. In this paper, two classes of seismic signals recorded routinely in the geophysical laboratory of the National Center for Scientific and Technical Research in Morocco are considered; they correspond to signals associated with local earthquakes and with chemical explosions. The approach adopted for the development of an automatic discrimination system is a modular system composed of three blocks: 1) representation, 2) dimensionality reduction, and 3) classification. The originality of our work lies in the use of a new wavelet, called the "modified Mexican hat wavelet," in the representation stage. For dimensionality reduction, we propose a new algorithm based on random projection and principal component analysis.
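
The two-stage dimensionality reduction can be sketched directly with scikit-learn; the input matrix below is a random stand-in for wavelet coefficients of seismic signals, and the component counts are arbitrary assumptions:

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2048))   # stand-in for wavelet coefficients of 300 signals

# Stage 1: random projection cheaply reduces 2048 -> 128 dimensions
rp = GaussianRandomProjection(n_components=128, random_state=0)
X_rp = rp.fit_transform(X)

# Stage 2: PCA then extracts the leading principal components
pca = PCA(n_components=10)
X_red = pca.fit_transform(X_rp)
print(X_red.shape)   # (300, 10)
```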

Classic and Heuristic Approaches in Robot Motion Planning: A Chronological Review

This paper reviews the major contributions to the Motion Planning (MP) field over a 35-year period, from classic approaches to heuristic algorithms. Due to the NP-hardness of the MP problem, heuristic methods have outperformed classic approaches and have gained wide popularity. After surveying around 1400 papers in the field, the number of existing works for each method is identified and classified. In particular, the history and applications of numerous heuristic methods in MP are investigated. The paper concludes with comparative tables and graphs showing the frequency of each MP method's application, and can therefore be used as a guideline for MP researchers.

An Efficient Hamiltonian for Discrete Fractional Fourier Transform

The Fractional Fourier Transform, a generalization of the classical Fourier Transform, is a powerful tool for the analysis of transient signals. Discrete Fractional Fourier Transform Hamiltonians have been proposed in the past with varying degrees of correlation between their eigenvectors and the Hermite-Gaussian functions. In this paper, we propose a new Hamiltonian for the discrete Fractional Fourier Transform and show that the eigenvectors of the proposed matrix have a higher degree of correlation with the Hermite-Gaussian functions. The proposed matrix is also shown to give better fractional Fourier responses for various transform orders and different signals.
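
For orientation, a baseline sketch of the commuting-matrix construction (using the classical Dickinson-Steiglitz matrix, not the Hamiltonian proposed in this paper): eigendecompose a Hamiltonian-like matrix, order its eigenvectors, and obtain any fractional order of the transform through its eigenvalues:

```python
import numpy as np

def dfrft_matrix(N, a):
    """Discrete fractional Fourier transform of order a, built from the
    classical Dickinson-Steiglitz commuting matrix (a baseline sketch,
    not the Hamiltonian proposed in the paper)."""
    n = np.arange(N)
    S = np.diag(2 * np.cos(2 * np.pi * n / N) - 4)
    S += np.eye(N, k=1) + np.eye(N, k=-1)
    S[0, -1] = S[-1, 0] = 1                      # circulant neighbour terms
    w, V = np.linalg.eigh(S)
    V = V[:, np.argsort(-w)]                     # descending order ~ Hermite index
    k = np.arange(N)
    return V @ np.diag(np.exp(-1j * np.pi / 2 * a * k)) @ V.T

# Order additivity F^1 F^1 = F^2 holds by construction
F1 = dfrft_matrix(8, 1.0)
x = np.random.default_rng(0).normal(size=8)
print(np.allclose(F1 @ (F1 @ x), dfrft_matrix(8, 2.0) @ x))   # True
```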

Improving Academic Performance Prediction using Voting Technique in Data Mining

In this paper we compare the accuracy of data mining methods for classifying students in order to predict a student's class grade. These predictions are useful for identifying weak students and assisting management in taking remedial measures at an early stage, so as to produce graduates who finish with at least a second class upper degree. First, we examine the accuracy of single classifiers on our data set and choose the best one; we then ensemble it with a weak classifier to produce a simple voting method. The results show that combining different classifiers outperforms single classifiers for predicting student performance.
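
A minimal sketch of the best-single-classifier-plus-weak-classifier voting scheme using scikit-learn; the synthetic data set, the choice of a decision tree as the "best" single classifier, and naive Bayes as the weak classifier are assumptions for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for student records (features -> grade class)
X, y = make_classification(n_samples=300, n_features=8, n_classes=2, random_state=1)

strong = DecisionTreeClassifier(random_state=1)   # assumed winner of the single-classifier comparison
weak = GaussianNB()                               # assumed weak classifier
vote = VotingClassifier([("tree", strong), ("nb", weak)], voting="soft")

for name, clf in [("tree", strong), ("nb", weak), ("vote", vote)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```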

Customer Segmentation in Foreign Trade based on Clustering Algorithms (Case Study: Trade Promotion Organization of Iran)

The goal of this paper is to segment countries based on the value of exports from Iran during the 14 years ending in 2005. To measure the dissimilarity between the export baskets of different countries, we define the Dissimilarity Export Basket (DEB) function and use this distance function in the K-means algorithm. The DEB function is defined based on the concepts of association rules and the value of export group-commodities. In this paper, a clustering quality function and the clusters' intraclass inertia are defined in order, respectively, to calculate the optimum number of clusters and to compare the performance of DEB against the Euclidean distance. We also study the effect of importance weights in the DEB function on improving clustering quality. Finally, once segmentation is complete, a designated RFM model is used to analyze the relative profitability of each cluster.
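
Because K-means centroids presuppose a Euclidean mean, a custom distance such as DEB is often plugged into a k-medoids-style loop instead; the sketch below does exactly that, with `deb_placeholder` standing in for the paper's association-rule-based DEB function (not reproduced here):

```python
import numpy as np

def deb_placeholder(a, b, w=None):
    """Placeholder for the paper's DEB dissimilarity between two export
    baskets (vectors of export values per commodity group); the real DEB
    uses association-rule concepts not reproduced here."""
    w = np.ones_like(a) if w is None else w            # optional importance weights
    pa, pb = a / a.sum(), b / b.sum()                  # compare basket compositions
    return np.sqrt((w * (pa - pb) ** 2).sum())

def k_medoids(X, k, dist, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(X), k, replace=False)
    for _ in range(n_iter):
        D = np.array([[dist(x, X[m]) for m in medoids] for x in X])
        labels = D.argmin(axis=1)
        for j in range(k):                             # re-pick each medoid to minimise cost
            idx = np.flatnonzero(labels == j)
            if len(idx):
                costs = [sum(dist(X[i], X[m]) for i in idx) for m in idx]
                medoids[j] = idx[int(np.argmin(costs))]
    return labels, medoids

X = np.random.default_rng(1).uniform(1, 100, size=(30, 6))   # 30 countries, 6 commodity groups
labels, medoids = k_medoids(X, k=3, dist=deb_placeholder)
print(labels)
```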

Induced Graphoidal Covers in a Graph

An induced graphoidal cover of a graph G is a collection ψ of (not necessarily open) paths in G such that every path in ψ has at least two vertices, every vertex of G is an internal vertex of at most one path in ψ, every edge of G is in exactly one path in ψ, and every member of ψ is an induced cycle or an induced path. The minimum cardinality of an induced graphoidal cover of G is called the induced graphoidal covering number of G and is denoted by ηi(G) or ηi. Here we determine induced graphoidal covers for some classes of graphs.
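
For a simple worked instance (not from the paper): take the path graph P_n on n ≥ 2 vertices. The collection ψ = {P_n} itself satisfies every condition of the definition, since P_n has at least two vertices, is an induced path, covers each edge exactly once, and leaves each vertex internal to at most one member, so

\[
  \eta_i(P_n) = 1 .
\]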

Integrating Computational Intelligence Techniques and Assessment Agents in E-Learning Environments

In this contribution an innovative platform is presented that integrates intelligent agents and evolutionary computation techniques into legacy e-learning environments. It introduces the design and development of a scalable and interoperable integration platform supporting: I) various assessment agents for e-learning environments, II) a specific resource retrieval agent for the provision of additional information from Internet sources matching the needs and profile of the specific user, and III) a genetic algorithm designed to extract efficient information (classifying rules) from the students' answering input data. The agents are implemented to provide intelligent assessment services based on computational intelligence techniques such as Bayesian Networks and Genetic Algorithms. The idea of using a Genetic Algorithm (GA) for this difficult task comes from the fact that GAs have been widely used in applications involving the classification of unknown data. The utilization of new and emerging technologies such as web services allows the provided services to be integrated into any web-based legacy e-learning environment.
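
A minimal sketch of a GA extracting a classifying rule from answer data; the rule encoding (two thresholds), the toy records, and all GA parameters are illustrative assumptions, not the paper's design:

```python
import random
random.seed(0)

# Toy answer records: (fraction of correct answers, normalised response time) -> passed
records = [(random.random(), random.random()) for _ in range(100)]
labeled = [((c, t), c > 0.6 and t < 0.7) for c, t in records]

def accuracy(rule):
    """A rule is (a, b): IF correct >= a AND time <= b THEN pass."""
    a, b = rule
    return sum(((c >= a and t <= b) == label) for (c, t), label in labeled) / len(labeled)

# Minimal GA: tournament selection, blend crossover, Gaussian mutation
pop = [(random.random(), random.random()) for _ in range(30)]
for _ in range(40):
    nxt = []
    for _ in range(len(pop)):
        p1 = max(random.sample(pop, 3), key=accuracy)
        p2 = max(random.sample(pop, 3), key=accuracy)
        child = tuple((x + y) / 2 + random.gauss(0, 0.05) for x, y in zip(p1, p2))
        nxt.append(child)
    pop = nxt

best = max(pop, key=accuracy)
print(best, accuracy(best))   # thresholds should approach (0.6, 0.7)
```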

A Modified Genetic Based Technique for Solving the Power System State Estimation Problem

Power system state estimation is the process of calculating a reliable estimate of the power system state vector, composed of bus voltage angles and magnitudes, from telemetered measurements on the system. This estimate of the state vector provides the description of the system necessary for operation and security monitoring. Many methods are described in the literature for solving the state estimation problem, the most important of which are the classical weighted least squares method and the nondeterministic genetic-based method; however, both show drawbacks. In this paper a modified version of the genetic algorithm for power system state estimation is introduced, the sensitivity of the proposed algorithm to the genetic operators is discussed, the algorithm is applied to case studies, and finally it is compared with the classical weighted least squares formulation.
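
A sketch of the genetic approach on a simplified, linearised measurement model (the true measurement function h(x) is nonlinear in the angles and magnitudes; the toy network, population size, and mutation scale are all assumptions):

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy linearised measurement model z = H x + noise (the real h(x) is nonlinear)
H = rng.normal(size=(8, 3))            # 8 telemetered measurements, 3 state variables
x_true = np.array([1.0, 0.98, -0.05])  # e.g., two magnitudes and one angle
sigma = 0.02
z = H @ x_true + rng.normal(0, sigma, 8)

def J(x):
    """Weighted least squares objective that the GA minimises."""
    r = (z - H @ x) / sigma
    return float(r @ r)

pop = rng.uniform(-1.5, 1.5, size=(50, 3))      # candidate state vectors
for _ in range(200):
    order = np.argsort([J(x) for x in pop])
    parents = pop[order[:25]]                   # truncation selection
    children = parents[rng.integers(0, 25, 50)] + rng.normal(0, 0.05, size=(50, 3))
    children[0] = pop[order[0]]                 # elitism: keep the best unchanged
    pop = children

print(min(pop, key=J), x_true)
```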

A Comparison of SVM-based Criteria in Evolutionary Method for Gene Selection and Classification of Microarray Data

An evolutionary method whose selection and recombination operations are based on generalization error-bounds of the support vector machine (SVM) can very efficiently select a subset of potentially informative genes for an SVM classifier [7]. In this paper, we use the derivative of the error-bound (first-order criterion) to select and recombine gene features in the evolutionary process, and compare its performance with that of the error-bound itself (zero-order criterion). We also investigate several error-bounds and their derivatives in order to compare their performance and find the best criterion for gene selection and classification. We use 7 cancer-related human gene expression datasets to evaluate the performance of the zero-order and first-order criteria of the error-bounds. Although both criteria follow the same strategy in theory, the experimental results identify the better criterion for microarray gene expression data.
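
A hedged sketch of a zero-order criterion in the spirit of the radius-margin bound T = R²·||w||², evaluated on a gene subset; the linear kernel, the input-space radius estimate, and the synthetic data are simplifying assumptions, not the paper's exact bounds:

```python
import numpy as np
from sklearn.svm import SVC

def radius_margin_bound(X, y, genes):
    """Zero-order criterion T = R^2 * ||w||^2 on a gene subset (simplified:
    linear kernel and an input-space radius; the paper's bounds are more refined)."""
    Xs = X[:, genes]
    clf = SVC(kernel="linear", C=1.0).fit(Xs, y)
    w2 = float(np.sum(clf.coef_ ** 2))                        # ||w||^2 = 1 / margin^2
    R = np.max(np.linalg.norm(Xs - Xs.mean(axis=0), axis=1))  # enclosing-sphere radius
    return R ** 2 * w2

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 100))              # 60 samples, 100 "genes"
y = (X[:, 3] + X[:, 7] > 0).astype(int)     # two informative genes by construction

# A lower bound value suggests lower expected generalization error
print(radius_margin_bound(X, y, [3, 7]))    # informative subset
print(radius_margin_bound(X, y, [10, 11]))  # uninformative subset
```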

A Laplace Transform Dual-Reciprocity Boundary Element Method for Axisymmetric Elastodynamic Problems

A dual-reciprocity boundary element method is presented for the numerical solution of a class of axisymmetric elastodynamic problems. The domain integrals that arise in the integrodifferential formulation are converted to line integrals by using the dual-reciprocity method together with suitably constructed interpolating functions. The second-order time derivatives of the displacement in the governing partial differential equations are suppressed by Laplace transformation. In the Laplace transform domain, the problem under consideration is eventually reduced to solving a system of linear algebraic equations. Once the linear algebraic equations are solved, the displacement and stress fields in the physical domain can be recovered by using a numerical technique for inverting Laplace transforms.
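
The final inversion step can be carried out, for example, with the Gaver-Stehfest algorithm, one common numerical Laplace-inversion technique (the abstract does not specify which technique the paper uses, so this choice is an assumption):

```python
from math import factorial, log

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at time t
    (N must be even)."""
    ln2t = log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j ** (N // 2) * factorial(2 * j)
                   / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                      * factorial(k - j) * factorial(2 * j - k)))
        total += (-1) ** (k + N // 2) * Vk * F(k * ln2t)
    return ln2t * total

# Sanity check on a known transform pair: L{exp(-t)} = 1/(s+1)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.5))   # ~= exp(-1.5) = 0.2231
```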

Detection of Bias in GPS Satellites' Measurements for Enhanced Measurement Integrity

In this paper, the detection of a fault in Global Positioning System (GPS) measurements is addressed. The class of faults considered is a bias in the GPS pseudorange measurements, modeled as an unknown constant. The fault could result from a receiver fault or a signal fault such as multipath error. A bias bank is constructed based on a set of possible fault hypotheses. Initially, every bias in the bank is assigned an equal probability of occurrence. Subsequently, as the measurements are processed, the probability of occurrence of each bias is sequentially updated. The bias whose probability approaches unity is declared the current fault in the GPS measurement. The residual formed from the GPS and Inertial Measurement Unit (IMU) measurements is used to update the probability of each fault. Results are presented to show the performance of the proposed algorithm.
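
A sketch of the sequential probability update over the bias bank; the bias values, noise level, and Gaussian residual likelihood are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

# Bank of hypothesised pseudorange biases (metres); probabilities start equal
bias_bank = np.array([0.0, 5.0, 10.0, 20.0])
prob = np.full(len(bias_bank), 1.0 / len(bias_bank))

sigma = 2.0          # residual noise standard deviation (illustrative)
true_bias = 10.0     # the fault actually present in this simulation
rng = np.random.default_rng(0)

for _ in range(25):
    # residual between the GPS pseudorange and the IMU-predicted range
    residual = true_bias + rng.normal(0, sigma)
    # Bayesian update: weight each hypothesis by the likelihood of the residual
    like = norm.pdf(residual, loc=bias_bank, scale=sigma)
    prob = prob * like
    prob /= prob.sum()

print(dict(zip(bias_bank, prob.round(3))))   # mass should concentrate at 10 m
```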