Practical Issues for Real-Time Video Tracking

In this paper we present an algorithm that enables near-real-time object tracking in Full HD video. The frame rate (FR) of a video stream is considered to be between 5 and 30 frames per second, so real-time track building is achieved if the algorithm can process 5 or more frames per second. The principal idea is to use fast algorithms during preprocessing to obtain key points and then track them. Because the point-matching procedure during assignment depends heavily on the number of points, we have to limit the number of points to the most informative ones.
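
As a minimal sketch of the point-budget idea, assuming OpenCV's ORB detector (the paper does not name a specific detector, and MAX_POINTS is a hypothetical budget), the strongest keypoints can be capped at detection time so the assignment stage stays within the real-time budget:

    import cv2

    # Cap the keypoint count so the matching/assignment step, whose cost
    # grows quickly with the number of points, fits the per-frame budget
    # needed for >= 5 fps on Full HD input.
    MAX_POINTS = 500
    detector = cv2.ORB_create(nfeatures=MAX_POINTS)

    def detect_limited(gray_frame):
        """Detect at most MAX_POINTS of the most informative keypoints."""
        keypoints, descriptors = detector.detectAndCompute(gray_frame, None)
        return keypoints, descriptors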

Lattice Boltzmann Simulation of Binary Mixture Diffusion Using Modern Graphics Processors

A highly optimized implementation of binary mixture diffusion with no initial bulk velocity on graphics processors is presented. The lattice Boltzmann model is employed for simulating the binary diffusion of oxygen and nitrogen into each other with different initial concentration distributions. Simulations have been performed using the latest proposed lattice Boltzmann model that satisfies both the indifferentiability principle and the H-theorem for multi-component gas mixtures. Contemporary numerical optimization techniques such as memory alignment and increasing the multiprocessor occupancy are exploited along with some novel optimization strategies to enhance the computational performance on graphics processors using the C for CUDA programming language. Speedup of more than two orders of magnitude over single-core processors is achieved on a variety of Graphics Processing Unit (GPU) devices ranging from conventional graphics cards to advanced, high-end GPUs, while the numerical results are in excellent agreement with the available analytical and numerical data in the literature.
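
For orientation, a toy single-species D2Q9 stream-and-collide loop in NumPy is sketched below; the paper's actual scheme is a multi-component model with the H-theorem property implemented in C for CUDA, so this only illustrates the basic structure being optimized, with illustrative grid size and relaxation time:

    import numpy as np

    # Toy D2Q9 diffusion of a passive scalar with zero bulk velocity.
    # Lattice weights and velocities; diffusivity D = (TAU - 0.5) / 3.
    W = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
    CX = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
    CY = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
    TAU = 1.0
    NX, NY = 128, 128

    C = np.zeros((NX, NY)); C[:NX // 2, :] = 1.0   # step concentration profile
    f = W[:, None, None] * C[None, :, :]           # start at equilibrium

    for step in range(1000):
        C = f.sum(axis=0)                          # macroscopic concentration
        feq = W[:, None, None] * C[None, :, :]     # zero-velocity equilibrium
        f += (feq - f) / TAU                       # BGK collision
        for i in range(9):                         # streaming, periodic walls
            f[i] = np.roll(np.roll(f[i], CX[i], axis=0), CY[i], axis=1)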

Study of Forging Process in 7075 Aluminum Alloy Professional Bicycle Pedal using Taguchi Method

Professional bicycle pedals are currently manufactured mostly by casting, forging and die-casting, so this paper uses 7075 aluminum alloy, the alloy most commonly used to produce bicycle parts, and employs the rigid-plastic finite element (FE) DEFORM™ 3D software to simulate and analyze the professional bicycle pedal design. First we use the SolidWorks 2010 3D graphics software to design the mold and appearance of the professional bicycle pedal, then import the geometry into the finite element (FE) DEFORM™ 3D software for analysis. The paper uses rigid-plastic model analytical methods and assumes the die to be a rigid body. A series of simulation analyses, in which the variables are the forging billet temperature, friction factors, forging speed and mold temperature, reveals the effective stress, effective strain, damage and die radial load distribution for the forged bicycle pedal. The analysis results are intended to serve as a mold-design reference for professional bicycle pedal forming and to verify, against the finite element results, the suitability of the aluminum alloy for high-strength design.

Multiple Job-Shop Scheduling using a Hybrid Heuristic Algorithm

In this paper, multi-processor job-shop scheduling problems are solved by a heuristic algorithm based on a hybrid of priority dispatching rules guided by an ant colony optimization algorithm. The objective function is to minimize the makespan, i.e. the total completion time, and the simultaneous presence of various kinds of pheromones is allowed. By using a suitable hybrid of priority dispatching rules, the process of finding the best solution is improved. The ant colony optimization algorithm not only improves the ability of the proposed algorithm, but also decreases the total working time by reducing setup times and modifying the production line, so that similar jobs share the same production lines. Another advantage of this algorithm is that similar (not identical) machines can be considered, so these machines are able to process a job with different processing and setup times. To evaluate this capability, a number of test problems are solved and the associated results are analyzed. The results show a significant decrease in throughput time. They also show that the algorithm is able to recognize the bottleneck machine and to schedule jobs efficiently.
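
A hedged sketch of the pheromone-guided rule selection: the rule names, update constants, and the assumed makespan evaluation are illustrative placeholders, not the paper's exact formulation.

    import random

    RULES = ["SPT", "LPT", "FIFO", "MWKR"]        # candidate dispatching rules
    pheromone = {r: 1.0 for r in RULES}
    RHO, Q = 0.1, 100.0                           # evaporation rate, deposit constant

    def pick_rule():
        """Choose a dispatching rule with probability proportional to pheromone."""
        total = sum(pheromone.values())
        return random.choices(RULES, weights=[pheromone[r] / total for r in RULES])[0]

    def update(rule, makespan):
        """Evaporate all trails, then reward the rule that built the schedule."""
        for r in RULES:
            pheromone[r] *= (1.0 - RHO)
        pheromone[rule] += Q / makespan           # shorter schedules deposit more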

Unsupervised Feature Selection Using Feature Density Functions

Since dealing with high-dimensional data is computationally complex and sometimes even intractable, several feature reduction methods have recently been developed to reduce the dimensionality of the data and simplify the analysis in applications such as text categorization, signal processing, image retrieval and gene expression analysis. Among feature reduction techniques, feature selection is one of the most popular methods because it preserves the original features. In this paper, we propose a new unsupervised feature selection method that removes redundant features from the original feature space by using the probability density functions of the various features. To show the effectiveness of the proposed method, popular feature selection methods have been implemented and compared. Experimental results on several datasets from the UCI repository illustrate the effectiveness of our proposed method, in comparison with the other methods, in terms of both classification accuracy and the number of selected features.
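
A rough sketch of the density-based redundancy idea, assuming per-feature kernel density estimates and an illustrative overlap criterion with a hypothetical threshold; the paper's exact redundancy measure may differ.

    import numpy as np
    from scipy.stats import gaussian_kde

    def select_features(X, threshold=0.95):
        """Keep a feature only if its density does not overlap an already kept one."""
        n_features = X.shape[1]
        grid = np.linspace(X.min(), X.max(), 200)
        densities = [gaussian_kde(X[:, j])(grid) for j in range(n_features)]
        keep = []
        for j in range(n_features):
            redundant = any(
                np.minimum(densities[j], densities[k]).sum()
                / np.maximum(densities[j], densities[k]).sum() > threshold
                for k in keep
            )
            if not redundant:
                keep.append(j)
        return keep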

Decision Support System for Flood Crisis Management using Artificial Neural Network

This paper presents an alternative approach that uses an artificial neural network to simulate flood level dynamics in a river basin. The algorithm was developed in a decision support system environment to enable users to process the data. The decision support system is found to be useful due to its interactive nature, flexibility of approach and evolving graphical features, and it can be adopted for any similar situation to predict flood levels. The main data processing includes gauging station selection, input generation, lead-time selection/generation, and length of prediction. The program enables users to process the flood level data, to train/test the model using various inputs and to visualize the results. The program code consists of a set of files, which can be modified to match other purposes. The results indicate that the decision support system applied to the flood level reaches encouraging results for the river basin under examination. The comparison of the model predictions with the observed data was satisfactory: the model is able to forecast the flood level up to 5 hours in advance with reasonable prediction accuracy. Finally, the program may also serve as a tool for real-time flood monitoring and process control.
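
A minimal sketch of such a lead-time forecaster, assuming a scikit-learn multilayer perceptron trained on lagged hourly gauge readings; the file name, lag count, and network size are illustrative assumptions, while the 5-hour lead time follows the abstract.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def make_dataset(levels, lags=6, lead=5):
        """Pair the last `lags` readings with the level `lead` hours ahead."""
        X = [levels[t - lags:t] for t in range(lags, len(levels) - lead)]
        y = [levels[t + lead] for t in range(lags, len(levels) - lead)]
        return np.array(X), np.array(y)

    levels = np.loadtxt("gauge_station.csv")      # hypothetical hourly series
    X, y = make_dataset(levels)
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
    forecast = model.predict(X[-1:])              # flood level 5 hours ahead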

On the Use of Image Processing Techniques for the Estimation of the Porosity of Textile Fabrics

This paper presents a novel approach to assessing textile porosity by applying image analysis techniques. Images of different types of sample fabrics, taken through a microscope while the fabric is placed over a constant light source, transfer the problem into the image analysis domain. Porosity can thus be expressed in terms of a brightness percentage index calculated on the digital microscope image. Furthermore, it is meaningful to compare the brightness percentage index with the air permeability and tightness indices of each fabric type. We have experimentally shown that there is an approximately linear relation between the brightness percentage and air permeability indices.
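
A minimal sketch of such a brightness percentage index, computed as the fraction of pixels brighter than a threshold in the back-lit microscope image; the threshold value is an assumption, since the paper does not state one.

    import cv2
    import numpy as np

    def brightness_percentage(path, threshold=200):
        """Percentage of bright (light-transmitting) pixels in the image."""
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return 100.0 * np.count_nonzero(gray > threshold) / gray.size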

Recursive Path-finding in a Dynamic Maze with Modified Tremaux's Algorithm

Number Link is a Japanese logic puzzle in which pairs of identical numbers are connected by lines. Number Link can be regarded as a dynamic maze with multiple travelers and multiple entries and exits, where the walls and passages change dynamically as the travelers move. In this paper, we apply Tremaux's algorithm to solve Number Link puzzles of size 8x8, 10x10 and 15x20. The algorithm works well and produces solutions for the 8x8 and 10x10 puzzles. However, solving a 15x20 puzzle requires substantial processing power and is time consuming.
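
A hedged sketch of the Tremaux-style recursion for connecting a single number pair on a grid, marking visited cells so no passage is entered twice; a full Number Link solver must route all pairs simultaneously without crossings, which is what makes 15x20 boards expensive.

    def connect(grid, r, c, goal, visited):
        """Recursively extend a path from (r, c) to goal over free cells (0)."""
        if (r, c) == goal:
            return True
        visited.add((r, c))                       # Tremaux mark
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and (nr, nc) not in visited
                    and ((nr, nc) == goal or grid[nr][nc] == 0)):
                if connect(grid, nr, nc, goal, visited):
                    return True
        visited.discard((r, c))                   # backtrack: unmark dead end
        return False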

Increase of Organization in Complex Systems

Measures of complexity and entropy have not converged to a single quantitative description of the levels of organization of complex systems, yet such a measure is increasingly needed in all disciplines studying complex systems. To address this problem, starting from the most fundamental principle in physics, a new measure of the quantity of organization and the rate of self-organization in complex systems, based on the principle of least (stationary) action, is applied to a model system: the central processing unit (CPU) of computers. The quantity of organization for several generations of CPUs shows a double exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of increase of organization through quantity accumulation and the minimization of constraint and curvature, with an attractor: the least average sum of the actions of all elements over all motions. This approach can help describe, quantify, measure, manage, design and predict the future behavior of complex systems, to achieve the highest rates of self-organization and to improve their quality. It can be applied to other complex systems in physics, chemistry, biology, ecology, economics, cities, network theory and other fields where complex systems are present.

Optic Disc Detection by Earth Mover's Distance Template Matching

This paper presents a method for the detection of the optic disc (OD) in the retina that takes advantage of powerful preprocessing techniques such as contrast enhancement, the Gabor wavelet transform for vessel segmentation, and mathematical morphology, with the Earth Mover's distance (EMD) as the matching process. The OD detection algorithm is based on matching the expected directional pattern of the retinal blood vessels. The vessel segmentation method produces segmentations by classifying each image pixel as vessel or non-vessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and 2D Gabor wavelet transform responses taken at multiple scales. A simple matched filter is proposed to roughly match the direction of the vessels in the OD vicinity using the EMD. The minimum distance provides an estimate of the OD center coordinates. The method's performance is evaluated on the publicly available DRIVE and STARE databases. On the DRIVE database the OD center was detected correctly in all 40 images (100%), and on the STARE database the OD was detected correctly in 76 of the 81 images, even in rather difficult pathological situations.
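
A hedged sketch of the EMD matching stage: a window slides over a per-pixel vessel-direction map, and each position is scored by the 1D Earth Mover's distance between the window's direction histogram and the expected OD template histogram; the window size, stride and binning are illustrative assumptions.

    import numpy as np
    from scipy.stats import wasserstein_distance

    def match_od(direction_map, template_hist, win=64, bins=18):
        """Return the window center with minimum EMD to the template."""
        h, w = direction_map.shape
        best, best_pos = np.inf, (0, 0)
        for r in range(0, h - win, 8):
            for c in range(0, w - win, 8):
                hist, _ = np.histogram(direction_map[r:r + win, c:c + win],
                                       bins=bins, range=(0, np.pi), density=True)
                d = wasserstein_distance(np.arange(bins), np.arange(bins),
                                         hist, template_hist)
                if d < best:
                    best, best_pos = d, (r + win // 2, c + win // 2)
        return best_pos                           # estimated OD center (row, col)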

A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and then each one is tested separately using a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of faster neural networks. In contrast to using only faster neural processors, the speed-up ratio increases with the size of the input image when faster neural processors and image decomposition are used together. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain, and the overall speed-up ratio of the detection process increases because the normalization of weights is done offline.
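
A minimal sketch of the core operation, cross correlation computed in the frequency domain for a single sub-image tile; splitting the image into such tiles and correlating them in parallel is what yields the reported speed-up.

    import numpy as np

    def freq_domain_correlation(sub_image, weights):
        """Correlate one sub-image with a neuron's weight matrix via the FFT."""
        shape = sub_image.shape
        F = np.fft.fft2(sub_image)
        W = np.fft.fft2(weights, s=shape)         # zero-pad weights to tile size
        return np.real(np.fft.ifft2(F * np.conj(W)))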

Reactive Neural Control for Phototaxis and Obstacle Avoidance Behavior of Walking Machines

This paper describes reactive neural control used to generate phototaxis and obstacle avoidance behavior of walking machines. It utilizes discrete-time neurodynamics and consists of two main neural modules: neural preprocessing and modular neural control. The neural preprocessing network acts as a sensory fusion unit. It filters sensory noise and shapes sensory data to drive the corresponding reactive behavior. On the other hand, modular neural control based on a central pattern generator is applied for locomotion of walking machines. It coordinates leg movements and can generate omnidirectional walking. As a result, through a sensorimotor loop this reactive neural controller enables the machines to explore a dynamic environment by avoiding obstacles, turning toward a light source, and then stopping near it.
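
For flavor, a hedged sketch of a discrete-time two-neuron oscillator of the kind often used as a central pattern generator; the SO(2)-type weight parameterization and constants are assumptions, not the controller's actual parameters.

    import math

    ALPHA, PHI = 1.5, 0.2 * math.pi               # gain > 1 sustains oscillation
    w11 =  ALPHA * math.cos(PHI); w12 = ALPHA * math.sin(PHI)
    w21 = -ALPHA * math.sin(PHI); w22 = ALPHA * math.cos(PHI)

    o1, o2 = 0.1, 0.0                             # small kick to start oscillation
    for t in range(100):
        o1, o2 = (math.tanh(w11 * o1 + w12 * o2),
                  math.tanh(w21 * o1 + w22 * o2))
        # o1, o2 would drive the rhythmic leg-joint commands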

Model Transformation with a Visual Control Flow Language

Graph rewriting-based visual model processing is a widely used technique for model transformation. Visual model transformations often need to follow an algorithm that requires strict control over the execution sequence of the transformation steps. Therefore, in Visual Model Processors (VMPs) the execution order of the transformation steps is crucial. This paper presents the visual control flow support of the Visual Modeling and Transformation System (VMTS), which facilitates composing complex model transformations from simple transformation steps and executing them. The VMTS Visual Control Flow Language (VCFL) uses stereotyped activity diagrams to specify control flow structures and OCL constraints to choose between different control flow branches. This paper introduces VCFL, discusses its termination properties and provides an algorithm to support the termination analysis of VCFL transformations.

Evaluating Spectral Relationships between Signals by Removing the Contribution of a Common, Periodic Source: A Partial Coherence-based Approach

Partial coherence between two signals, removing the contribution of a periodic, deterministic signal, is proposed for evaluating the interrelationships in multivariate systems. The estimator expression was derived and shown to be independent of such a periodic signal. Simulations were used to obtain its critical values, which were found to be the same as those for Gaussian signals, as well as to evaluate the technique. An illustration with electroencephalographic (EEG) signals during photic stimulation is also provided. The application of the proposed technique to both simulated and real EEG data indicates that it is very specific in removing the contribution of periodic sources. The estimator's independence of the periodic signal may widen the application of partial coherence to signal analysis, since it could be used together with simple coherence to test for contamination of signals by a common, periodic noise source.
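
For reference, a sketch of partial coherence between x and y after removing a reference signal z (here, the periodic photic-stimulation source), using the textbook spectral-matrix formula rather than necessarily the exact estimator derived in the paper.

    import numpy as np
    from scipy.signal import csd

    def partial_coherence(x, y, z, fs, nperseg=256):
        """Coherence of x and y with the linear contribution of z removed."""
        f, Sxy = csd(x, y, fs, nperseg=nperseg)
        _, Sxz = csd(x, z, fs, nperseg=nperseg)
        _, Szy = csd(z, y, fs, nperseg=nperseg)
        _, Sxx = csd(x, x, fs, nperseg=nperseg)
        _, Syy = csd(y, y, fs, nperseg=nperseg)
        _, Szz = csd(z, z, fs, nperseg=nperseg)
        num = np.abs(Sxy - Sxz * Szy / Szz) ** 2
        den = ((Sxx - np.abs(Sxz) ** 2 / Szz).real
               * (Syy - np.abs(Szy) ** 2 / Szz).real)
        return f, num / den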

Optimal Data Compression and Filtering: The Case of Infinite Signal Sets

We present a theory for optimal filtering of infinite sets of random signals. There are several new distinctive features of the proposed approach. First, we provide a single optimal filter for processing any signal from a given infinite signal set. Second, the filter is presented in the special form of a sum with p terms, where each term is represented as a combination of three operations. Each operation is a special stage of the filtering aimed at facilitating the associated numerical work. Third, an iterative scheme is built into the filter structure to provide an improvement in the filter performance at each step of the scheme. The final step concerns signal compression and decompression. This step is based on the solution of a new rank-constrained matrix approximation problem, whose solution is described in this paper. A rigorous error analysis is given for the new filter.
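
For context, the classical instance of rank-constrained matrix approximation is the Eckart-Young rank-r approximation via the truncated SVD, sketched below; the paper solves a new, more general problem.

    import numpy as np

    def best_rank_r(A, r):
        """Best rank-r approximation of A in the Frobenius norm (Eckart-Young)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]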

Fish Marketing: A Panacea towards Sustainable Agriculture in Ogun State, Nigeria

This study assessed fish marketing as a panacea for sustainable agriculture in Ogun State, Nigeria. A multi-stage sampling technique was used in the selection of 150 fish marketers for this study. Descriptive statistics were used for the objectives, while the Pearson Product Moment Correlation was used to test the hypothesis. The findings revealed that the mean age of the respondents was 38.60 years. The majority (93.33%) of the respondents had acceptable levels of formal education. Many (44.00%) of the respondents had spent 1-5 years in fish marketing. The average quantity of fish sold in a day was 94.10 kg. However, efficient fish marketing was hindered by inadequate processing equipment, storage rooms and ice holding facilities (86.67%). There was a significant relationship between socio-economic characteristics and the profit realized from fish marketing (p < 0.05). It was recommended that storage and warehousing facilities be provided to the fish marketers in the study area.

Classifier Combination Approach in Motor Imagery Signal Processing for Brain Computer Interface

In this study we focus on improving the performance of a cue-based motor imagery Brain Computer Interface (BCI). For this purpose, a data fusion approach is applied to the results of different classifiers to make the best decision. As a first step, the Distinction Sensitive Learning Vector Quantization method is used as a feature selection method to determine the most informative frequencies in the recorded signals, and its performance is evaluated by a frequency search method. Then informative features are extracted by the wavelet packet transform. In the next step, 5 different types of classification methods are applied. The methodologies are tested on BCI Competition II dataset III; the best obtained accuracy is 85% and the best kappa value is 0.8. In the final step, the ordered weighted averaging (OWA) method is used to properly aggregate the classifiers' outputs. Using OWA enhances the system accuracy to 95% and the kappa value to 0.9, while applying OWA requires only 50 milliseconds of computation.
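
A minimal sketch of OWA aggregation: classifier scores are sorted in descending order before the weighted sum, so the weights attach to ranks rather than to particular classifiers; the weight vector shown is an illustrative choice, not the paper's.

    import numpy as np

    def owa(scores, weights=(0.4, 0.25, 0.15, 0.12, 0.08)):
        """Ordered weighted average of the classifier scores."""
        s = np.sort(np.asarray(scores))[::-1]     # descending order
        return float(np.dot(s, weights))

    decision = owa([0.9, 0.6, 0.8, 0.7, 0.55])    # aggregate five classifiers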

Vector Control of Multimotor Drive

Three-phase induction machines are today a standard for industrial electrical drives. Cost, reliability, robustness and maintenance-free operation are among the reasons these machines are replacing DC drive systems. The development of power electronics and signal processing systems has eliminated one of the greatest disadvantages of such AC systems: the issue of control. With modern techniques of field-oriented vector control, variable speed control of induction machines is no longer a disadvantage. The need to increase system performance, particularly when facing limits on the power ratings of power supplies and semiconductors, motivates the use of phase numbers other than three. In this paper a novel scheme is presented that connects two three-phase induction motors in parallel, fed by two inverters (a VSI and a CSI), together with their vector control.
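
As background, a sketch of the Clarke and Park transforms that underlie field-oriented vector control, mapping three-phase stator currents into the rotating d-q frame; the paper's two-motor, two-inverter scheme builds on this standard machinery.

    import math

    def clarke(ia, ib, ic):
        """Amplitude-invariant Clarke transform: abc -> stationary alpha-beta."""
        alpha = (2 * ia - ib - ic) / 3.0
        beta = (ib - ic) / math.sqrt(3.0)
        return alpha, beta

    def park(alpha, beta, theta):
        """Park transform: alpha-beta -> rotating d-q frame at rotor flux angle theta."""
        d = alpha * math.cos(theta) + beta * math.sin(theta)
        q = -alpha * math.sin(theta) + beta * math.cos(theta)
        return d, q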

On Generalizing Rough Set Theory via a Filter

The theory of rough sets is generalized by using a filter. The filter is induced by binary relations, and it is used to generalize the basic rough set concepts. Knowledge representation and the processing of binary relations in the style of rough set theory are investigated.
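
For reference, the classical lower and upper approximations that the paper generalizes, sketched with a toy relation; R maps each element to its related set under the binary relation, and the filter-based generalization itself is not shown.

    def lower_approximation(X, universe, R):
        return {x for x in universe if R[x] <= X}          # R(x) is a subset of X

    def upper_approximation(X, universe, R):
        return {x for x in universe if R[x] & X}           # R(x) meets X

    universe = {1, 2, 3, 4}
    R = {1: {1, 2}, 2: {1, 2}, 3: {3}, 4: {3, 4}}          # toy relation classes
    X = {1, 2, 3}
    print(lower_approximation(X, universe, R))             # {1, 2, 3}
    print(upper_approximation(X, universe, R))             # {1, 2, 3, 4}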

Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Automated rule discovery is, due to its wide applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality <generality information> Specificity <specificity information>. HPR systems are capable of handling the taxonomic structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (QHPRs) are generated from the discovered hierarchy using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
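
For reference, a sketch of Dempster's rule of combination, the Dempster-Shafer operation used to quantify rules; masses are over frozensets of hypotheses, and the toy frame and mass values are illustrative only.

    from itertools import product

    def combine(m1, m2):
        """Dempster's rule: combine two mass functions, normalizing out conflict."""
        combined, conflict = {}, 0.0
        for (A, p), (B, q) in product(m1.items(), m2.items()):
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
        return {A: v / (1.0 - conflict) for A, v in combined.items()}

    m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
    m2 = {frozenset({"a"}): 0.7, frozenset({"b"}): 0.3}
    print(combine(m1, m2))          # {'a'}: ~0.85, {'b'}: ~0.15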