Computing the Loop Bound in Iterative Data Flow Graphs Using Natural Token Flow

Signal processing applications that are iterative in nature are best represented by data flow graphs (DFGs). In these applications, the maximum sampling frequency depends on the topology of the DFG, and on its cyclic dependencies in particular. Determining the iteration bound, the reciprocal of the maximum sampling frequency, is a critical step in the hardware implementation of signal processing applications. In this paper, a novel technique to compute the iteration bound is proposed. It differs from all previously proposed techniques in that it is based on the natural flow of tokens through the DFG rather than on the topology of the graph. The proposed algorithm has lower run-time complexity than all known algorithms. Its performance is demonstrated through an analytical time-complexity analysis as well as through simulation of benchmark problems.
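
For reference, the quantity being computed is the iteration bound T∞ = max over loops l of t(l)/w(l), where t(l) is the total computation time of loop l and w(l) its delay count. Below is a brute-force sketch of that definition (which the proposed algorithm computes without enumerating cycles); the graph, node times, and delays are hypothetical.

```python
import networkx as nx

def iteration_bound(g):
    """Brute-force iteration bound by enumerating all simple cycles."""
    bound = 0.0
    for cycle in nx.simple_cycles(g):
        t = sum(g.nodes[v]["time"] for v in cycle)       # loop computation time
        edges = list(zip(cycle, cycle[1:] + cycle[:1]))  # close the cycle
        w = sum(g.edges[e]["delay"] for e in edges)      # loop delay count
        if w > 0:
            bound = max(bound, t / w)
    return bound

g = nx.DiGraph()
g.add_nodes_from([("A", {"time": 2}), ("B", {"time": 4})])
g.add_edge("A", "B", delay=0)
g.add_edge("B", "A", delay=1)    # one delay element closes the loop
print(iteration_bound(g))        # (2 + 4) / 1 = 6.0
```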

A Novel Reversible Watermarking Method based on Adaptive Thresholding and Companding Technique

Embedding and extraction of secret information, as well as restoration of the original un-watermarked image, is highly desirable in sensitive applications such as military, medical, and law enforcement imaging. This paper presents a novel reversible data-hiding method for digital images using the integer-to-integer wavelet transform and a companding technique, which can embed and recover the secret information and restore the image to its pristine state. The method combines block-based watermarking with iterative optimization of the companding threshold, which avoids histogram pre- and post-processing. Consequently, it reduces the overhead usually required by most reversible watermarking techniques and keeps the distortion between the marked and original images small. Experimental results show that the proposed method outperforms the existing reversible data-hiding schemes reported in the literature.
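
The abstract does not spell out the companding details; as a rough illustration in the same spirit, here is a reversible threshold-based expansion-and-shift embedding on integer-to-integer Haar detail coefficients. Block processing, the iterative threshold optimization, and overflow handling are all omitted, and the threshold value is an assumption.

```python
import numpy as np

T = 4                                           # companding threshold (assumed fixed)

def int_haar(x):                                # integer-to-integer Haar via lifting
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = b - a                                   # detail coefficients
    s = a + d // 2                              # approximation (floor keeps it invertible)
    return s, d

def int_haar_inv(s, d):
    a = s - d // 2
    out = np.empty(2 * a.size, dtype=np.int64)
    out[0::2], out[1::2] = a, a + d
    return out

def embed(d, bits):
    d = d.copy(); k = 0
    for i in range(d.size):
        if abs(d[i]) < T:                       # expandable: carries one bit
            b = bits[k] if k < len(bits) else 0 # pad so extraction stays in sync
            d[i] = 2 * d[i] + b; k += 1
        else:
            d[i] += T if d[i] > 0 else -T       # shift out of the embedding range
    return d

def extract(d):
    d = d.copy(); bits = []
    for i in range(d.size):
        if abs(d[i]) < 2 * T:
            bits.append(int(d[i]) & 1); d[i] >>= 1  # floor shift inverts 2d + b
        else:
            d[i] -= T if d[i] > 0 else -T
    return d, bits

x = np.array([100, 103, 98, 97, 120, 111], dtype=np.int64)
s, d = int_haar(x)
d2 = embed(d, [1, 0])
d3, bits = extract(d2)
print(np.array_equal(int_haar_inv(s, d3), x), bits)  # True [1, 0]
```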

A Vehicular Visual Tracking System Incorporating Global Positioning System

Surveillance systems are widely used in traffic monitoring, and the deployment of cameras is moving toward a ubiquitous-camera (UbiCam) environment. In our previous study, a novel service called GPS-VT was first proposed, incorporating the global positioning system (GPS) and visual tracking techniques for the UbiCam environment. The first prototype, GODTA (GPS-based Moving Object Detection and Tracking Approach), tracks a person carrying a GPS-enabled mobile device once he enters the field of view (FOV) of a camera, according to his real-time GPS coordinates. In this paper, the GPS-VT service is applied to the tracking of vehicles. A vehicle moves much faster than a person, so its time passing through the FOV is much shorter. Moreover, GPS coordinates are updated only once per second, asynchronously with the frame rate of the real-time video, and this asynchrony is worsened by network transmission delay. These factors are the main challenges in providing the GPS-VT service for vehicles. To overcome them, a back-propagation neural network (BPNN) is used to predict the likely lane before the vehicle enters the FOV of a camera, and a template matching technique is then used for visual tracking of the target vehicle. Experimental results show that the target vehicle can be located and tracked successfully, and the successful location rate of the implemented prototype is higher than that of the previous GODTA.
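
A minimal sketch of the template-matching step, assuming OpenCV and hypothetical image files; the BPNN lane prediction that narrows the search region, and the acceptance threshold, are assumptions.

```python
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)             # hypothetical files
template = cv2.imread("vehicle_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation is robust to uniform lighting changes.
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)
if max_val > 0.8:                 # assumed acceptance threshold
    print("vehicle at", max_loc, "score", max_val)   # (x, y) of best match
```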

An Enterprise Intelligent System Development and Solution Framework

The recent trend has been to use hybrid approaches rather than a single intelligent technique to solve problems. In this paper, we describe and discuss a framework for developing enterprise solutions that are backed by intelligent techniques. The framework does not merely apply the intelligent techniques themselves; it is a complete environment that includes the interfaces and components needed to develop intelligent solutions. The framework is entirely Web-based and uses XML extensively. It can serve as a shared platform accessed by multiple developers, users, and decision makers.

Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks

Least Developed Countries (LDCs) such as Bangladesh, which earns about 25% of its revenue from textile exports, need to produce less defective textiles to minimize production cost and time. Inspection processes in these industries are mostly manual and time consuming; reducing errors in identifying fabric defects requires a more automated and accurate inspection process. To address this gap, this research implements a textile defect recognizer that combines computer vision methodology with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at economical cost and yields a less error-prone inspection system operating in real time. To generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images through restoration and local thresholding. The outputs of this processing stage, the area of the faulty portion, the number of objects in the image, and the sharp factor of the image, are then fed as the input layer to the neural network, which uses the back-propagation algorithm to compute the weights and produces the desired defect classification as output.
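
A hedged sketch of this pipeline with assumed feature definitions (the paper's exact restoration step and "sharp factor" are not specified here, so a Laplacian-variance proxy and adaptive mean thresholding stand in), using OpenCV and scikit-learn on hypothetical image files.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier   # backprop-trained MLP

def features(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 35, 10)  # local threshold
    n_objects, _ = cv2.connectedComponents(binary)
    area = int(np.count_nonzero(binary))             # faulty-region area in pixels
    sharp = cv2.Laplacian(gray, cv2.CV_64F).var()    # sharpness proxy (assumption)
    return [area, n_objects - 1, sharp]              # drop the background label

# X: feature rows per image, y: one of four defect classes (hypothetical data)
X = [features(p) for p in ["hole.png", "scratch.png"]]
y = ["hole", "scratch"]
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)
print(clf.predict([features("test.png")]))
```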

Beta-spline Surface Fitting to Multi-slice Images

The Beta-spline is built on G2 continuity, which guarantees the smoothness of curves and surfaces generated with it. This curve is usually preferred for object design rather than reconstruction. This study, however, employs the Beta-spline to reconstruct a 3-dimensional G2 image of the Stanford Rabbit. The original data consist of multi-slice binary images of the rabbit. The result is then compared with related works that use other techniques.
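
The abstract gives no formulas; as a stand-in sketch of surface fitting to slice samples, the following fits a smooth bicubic B-spline surface with SciPy. The cubic Beta-spline reduces to the cubic B-spline for bias β1 = 1 and tension β2 = 0, and synthetic points replace the rabbit slices here.

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

# Hypothetical data: scattered samples of a smooth height field z(x, y),
# standing in for points extracted from the binary slice images.
rng = np.random.default_rng(0)
x, y = rng.random(200), rng.random(200)
z = np.sin(3 * x) * np.cos(3 * y)

tck = bisplrep(x, y, z, kx=3, ky=3)                 # bicubic smoothing fit
grid = bisplev(np.linspace(0, 1, 50), np.linspace(0, 1, 50), tck)
print(grid.shape)                                   # (50, 50) fitted surface
```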

Order Reduction by Least-Squares Methods about General Point ''a''

The concept of order reduction by least-squares moment matching and generalised least-squares methods has been extended about a general point 'a' to obtain reduced-order models for linear, time-invariant dynamic systems. Some heuristic criteria, based upon the means (arithmetic, harmonic, and geometric) of the real parts of the poles of the high-order system, have been employed for selecting the linear shift point 'a'. It is shown that the resulting model depends critically on the choice of the linear shift point 'a'. The validity of the criteria is illustrated by solving a numerical example, and the results are compared with those of other existing techniques.
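
A small sketch of the shift-point heuristics named above: candidate values of 'a' from the arithmetic, harmonic, and geometric means of the real parts of the high-order system's poles. The example denominator polynomial is hypothetical.

```python
import numpy as np

poles = np.roots([1, 9, 26, 24])        # s^3 + 9s^2 + 26s + 24 -> poles -2, -3, -4
re = -np.real(poles)                    # magnitudes of real parts (stable poles)

arithmetic = re.mean()
harmonic = len(re) / np.sum(1.0 / re)
geometric = re.prod() ** (1.0 / len(re))
print(arithmetic, harmonic, geometric)  # candidate shift points 'a'
```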

Biospeckle Supported Fruit Bruise Detection

This research work proposes a study of fruit bruise detection by means of a biospeckle method, selecting the papaya fruit (Carica papaya) as the test subject. Papaya is recognized as a fruit of outstanding nutritional qualities, with high vitamin A, calcium, and carbohydrate content, and it enjoys high popularity all over the world in terms of consumption and acceptability. The commercialization of papaya faces particular problems associated with bruise generation during harvesting, packing, and transportation. Papaya is classified as a climacteric fruit, so it can be harvested before maturation is complete. On the one hand, bruise generation at this stage is partially controlled, since the fruit flesh exhibits high mechanical firmness. On the other hand, mechanical loads at that maturation stage can set up a future bruise that cannot yet be detected by conventional methods. Mechanical damage to the fruit skin leaves an entrance for microorganisms and pathogens, which cause severe losses in quality attributes. Traditional techniques of fruit quality inspection include determination of total soluble solids, mechanical firmness tests, and visual inspection, which would hardly meet the requirements of a fully automated process. The pertinent literature, however, reveals a method named biospeckle, which is based on laser reflectance and interference phenomena. The laser biospeckle, or dynamic speckle, is quantified by means of the Moment of Inertia, named after its mechanical counterpart because of the similarity between the defining formulae. Biospeckle techniques are able to quantify the biological activity of living tissues and have been applied to seed viability analysis, vegetable senescence, and similar topics. Since biospeckle techniques can monitor tissue physiology, they could also detect changes in the fruit caused by mechanical damage. The proposed technique is non-invasive and generates numerical results suitable for automation. The experimental tests in this research work included selecting papaya fruit at different maturation stages and submitting them to artificial mechanical bruising. The damage was visually compared with the frequency maps yielded by the biospeckle technique, and the results were found to be in close agreement.
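
A minimal sketch of the Moment of Inertia computation from a time-history speckle pattern (THSP), following the standard co-occurrence formulation; the THSP here is random stand-in data rather than real speckle frames.

```python
import numpy as np

def inertia_moment(thsp, levels=256):
    """Moment of Inertia of the intensity co-occurrence matrix of a THSP."""
    com = np.zeros((levels, levels))
    for row in thsp:                        # intensity transitions over time
        for a, b in zip(row[:-1], row[1:]):
            com[a, b] += 1
    norm = com / np.maximum(com.sum(axis=1, keepdims=True), 1)  # row-normalize
    i, j = np.indices(com.shape)
    return float(np.sum(norm * (i - j) ** 2))   # spread away from the diagonal

thsp = np.random.randint(0, 256, size=(100, 64))  # 100 pixels x 64 frames
print(inertia_moment(thsp))                       # higher => more biological activity
```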

A System of Automatic Speech Recognition based on the Technique of Temporal Retiming

We report in this paper the design of an automatic speech recognition system based on dynamic programming techniques. Temporal retiming is a technique used to synchronize the two patterns being compared, and we show how it can be adapted to automatic speech recognition. We first present the theory of the retiming function, which is used to compare and align an unknown pattern with a set of reference patterns constituting the vocabulary of the application. We then give the various algorithms needed for its implementation. The algorithms presented here were tested on part of the Arabic-language word corpus Arabdic-10 [4] and gave fully satisfactory results. These algorithms are effective insofar as they are applied to small or medium vocabularies.
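
The retiming alignment described above follows the classical dynamic-time-warping scheme; here is a minimal dynamic-programming sketch on scalar feature sequences. The toy templates below are illustrative, not drawn from Arabdic-10.

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic-programming alignment cost between two feature sequences."""
    n, m = len(x), len(y)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])        # local distance
            d[i, j] = cost + min(d[i - 1, j],      # insertion
                                 d[i, j - 1],      # deletion
                                 d[i - 1, j - 1])  # match
    return d[n, m]

# Recognize by the nearest reference word in the vocabulary.
references = {"word1": [1, 2, 3, 2], "word2": [3, 3, 1, 0]}   # toy templates
unknown = [1, 2, 2, 3, 2]
print(min(references, key=lambda w: dtw_distance(unknown, references[w])))
```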

Arabic Word Semantic Similarity

This paper is concerned with the production of an Arabic word semantic similarity benchmark dataset. It is the first of its kind for Arabic and was developed specifically to assess the accuracy of word semantic similarity measures. Semantic similarity is an essential component of numerous applications in fields such as natural language processing, artificial intelligence, linguistics, and psychology. Most of the reported work has been done for English, and to the best of our knowledge there is no word similarity measure developed specifically for Arabic. In this paper, an Arabic benchmark dataset of 70 word pairs is presented. New methods and the best available techniques have been used in this study to produce the Arabic dataset, including selecting and creating the materials, collecting human ratings from a representative sample of participants, and calculating the overall ratings. This dataset will make a substantial contribution to future work in the field of Arabic WSS, and we hope it will serve as a reference basis for evaluating and comparing different methodologies in the field.
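
A short sketch of how such a benchmark is typically used: correlate a similarity measure's scores with the collected human ratings over the word pairs. The pairs' ratings and scores below are illustrative placeholders, not values from the dataset.

```python
from scipy.stats import spearmanr, pearsonr

human = [3.92, 0.55, 2.31, 3.10]        # overall human ratings per word pair
measure = [0.95, 0.20, 0.61, 0.77]      # scores from the measure under test

rho, _ = spearmanr(human, measure)      # rank agreement
r, _ = pearsonr(human, measure)
print(f"Spearman={rho:.3f} Pearson={r:.3f}")
```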

Performance Comparison of Neural Architectures for On-Line Speed Estimation in Sensorless IM Drives

The performance of a sensorless controlled induction motor drive depends on the accuracy of the estimated speed. Conventional estimation techniques are mathematically complex and require long execution times, resulting in poor dynamic response. The nonlinear mapping capability and powerful learning algorithms of neural networks provide a promising alternative for on-line speed estimation. An on-line speed estimator requires the NN model to be accurate, simple in design, structurally compact, and computationally light to ensure fast execution and effective control in real-time implementation; this in turn depends to a large extent on the type of neural architecture. This paper investigates three types of neural architectures for on-line speed estimation and compares their performance in terms of accuracy, structural compactness, computational complexity, and execution time. The neural architecture best suited for on-line speed estimation is identified, and the promising results obtained are presented.
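
The abstract does not name the three architectures; as a hedged stand-in, here is a minimal feed-forward regressor mapping stator quantities to rotor speed. The inputs, network size, and training data are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: [v_alpha, v_beta, i_alpha, i_beta] -> speed
X = np.random.randn(1000, 4)
y = X @ np.array([0.5, -0.2, 0.8, 0.1])        # stand-in target mapping

est = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   max_iter=2000).fit(X, y)
print(est.predict(X[:1]))                       # one on-line estimate per sample
```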

A Quality Optimization Approach: An Application on Next Generation Networks

The next generation wireless systems, especially cognitive radio networks, aim to utilize network resources more efficiently by sharing a wide range of available spectrum in an opportunistic manner. In this paper, we propose a quality management model for short-term sub-lease of unutilized spectrum bands to different service providers. We build our model on a competitive secondary-market architecture and use techniques from game theory to establish the necessary conditions for convergent behavior. The proposed model is based on a potential game approach, which is suitable for systems with dynamic decision making. The Nash equilibrium point gives the spectrum holders the ideal prices at which profit is maximized at the highest level of customer satisfaction. Our numerical results show that the price decisions of the network providers depend on the price and QoS of their own bands as well as on the prices and QoS levels of their opponents' bands.
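
A toy sketch of the pricing-game dynamics: each provider repeatedly best-responds to its opponent's price, and in a potential game such dynamics converge to a Nash equilibrium. The linear demand model, QoS values, and price grid are illustrative assumptions, not the paper's model.

```python
import numpy as np

prices = np.linspace(1, 10, 91)                    # admissible price grid
qos = {0: 0.9, 1: 0.7}                             # providers' QoS levels (assumed)

def profit(p_own, p_other, q_own, q_other):
    demand = np.maximum(10 + 2 * (q_own - q_other) - p_own + 0.5 * p_other, 0)
    return p_own * demand

p = [5.0, 5.0]
for _ in range(100):                               # best-response dynamics
    new = [prices[np.argmax(profit(prices, p[1], qos[0], qos[1]))],
           prices[np.argmax(profit(prices, p[0], qos[1], qos[0]))]]
    if new == p:
        break                                      # fixed point = Nash equilibrium
    p = new
print(p)                                           # equilibrium prices on the grid
```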

Evolutionary Computing Approach for the Solution of Initial Value Problems in Ordinary Differential Equations

An evolutionary computing technique for solving initial value problems in ordinary differential equations is proposed in this paper. A neural network is used as a universal approximator, while the adaptive parameters of the network are optimized by a genetic algorithm. The solution is obtained on a continuous time grid, rather than the discrete grids of other numerical techniques. A comparison is carried out with classical numerical techniques, and the solution is found with a uniform accuracy of MSE ≈ 10⁻⁹.
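
A compact sketch of the scheme under simple assumptions: a one-hidden-layer network forms a trial solution that satisfies the initial condition by construction, and a basic evolutionary loop (truncation selection plus Gaussian mutation, standing in for a full genetic algorithm) searches its weights to minimize the ODE residual on a collocation grid. The test problem y' = -y, y(0) = 1 and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 40)                      # continuous-time collocation grid
H = 6                                          # hidden neurons; 3*H parameters

def trial(w, ts):
    a, b, v = w[:H], w[H:2 * H], w[2 * H:]     # input weights, biases, output weights
    net = np.tanh(np.outer(ts, a) + b) @ v     # N(t; w)
    return 1.0 + ts * net                      # y(0) = 1 enforced exactly

def fitness(w):
    h = 1e-4                                   # numerical dy/dt
    dy = (trial(w, t + h) - trial(w, t - h)) / (2 * h)
    return np.mean((dy + trial(w, t)) ** 2)    # residual of y' = -y

pop = rng.normal(size=(80, 3 * H))
for _ in range(300):                           # evolve: select and mutate
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[:20]]     # truncation selection
    kids = parents[rng.integers(20, size=60)] + 0.1 * rng.normal(size=(60, 3 * H))
    pop = np.vstack([parents, kids])
best = pop[np.argmin([fitness(w) for w in pop])]
print(np.max(np.abs(trial(best, t) - np.exp(-t))))   # error vs exact solution
```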

Probabilistic Bayesian Framework for Infrared Face Recognition

Face recognition in the infrared spectrum has attracted considerable interest in recent years. Many of the techniques used in the infrared are based on their visible-spectrum counterparts, especially linear techniques such as PCA and LDA. In this work, we introduce a probabilistic Bayesian framework for face recognition in the infrared spectrum. In the infrared, variations can occur between face images of the same individual due to pose, metabolic, and temporal changes, among others. Bayesian approaches make it possible to reduce this intrapersonal variation, which makes them very attractive for infrared face recognition. The framework is compared with classical linear techniques. Nonlinear techniques we developed recently for infrared face recognition are also presented and compared with the Bayesian framework, and a new approach for infrared face extraction based on SVM is introduced. Experimental results show that the Bayesian technique is promising and leads to good results in the infrared spectrum when a sufficient number of face images is used in the intrapersonal learning process.
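
A minimal sketch of the Bayesian idea under stated assumptions: model intrapersonal difference images with a Gaussian in a PCA subspace and score a probe/gallery pair by the likelihood of their difference. The data are random stand-ins and the paper's exact model may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import multivariate_normal

# Hypothetical intrapersonal differences: same person, different IR captures.
deltas = np.random.randn(200, 1024)             # 200 difference images, 32x32

pca = PCA(n_components=20).fit(deltas)          # intrapersonal subspace
z = pca.transform(deltas)
model = multivariate_normal(z.mean(axis=0), np.cov(z.T) + 1e-6 * np.eye(20))

def similarity(img_a, img_b):
    d = pca.transform((img_a - img_b).reshape(1, -1))
    return model.logpdf(d)                      # higher => more likely same person

print(similarity(np.random.randn(1024), np.random.randn(1024)))
```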

Heuristic Optimization Techniques for Network Reconfiguration in Distribution Systems

Network reconfiguration is an operation that modifies the network topology. Implementing network reconfiguration has many advantages, such as loss minimization and increased system security, among others. In this paper, two topics concerning network reconfiguration in distribution systems are briefly described: the first summarizes its impacts, while the second explains some heuristic optimization techniques for solving the network reconfiguration problem.
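
As a concrete illustration of one heuristic family such surveys cover, here is a toy simulated-annealing sketch over switch states. The loss evaluator and radiality check are placeholders that a real implementation would back with a power-flow routine; the switch weights are invented.

```python
import math, random

WEIGHTS = [3.2, 1.1, 2.4, 0.7, 1.9]        # stand-in per-switch loss contributions

def losses(config):                         # placeholder for a power-flow run
    return sum(w for w, closed in zip(WEIGHTS, config) if closed)

def is_radial(config):                      # placeholder radiality/connectivity check
    return sum(config) >= 2

random.seed(1)
current = [1, 0, 1, 1, 0]                   # open/closed switch states
best, T = current[:], 1.0
for _ in range(500):
    cand = current[:]
    cand[random.randrange(len(cand))] ^= 1  # toggle one tie/sectionalizing switch
    if is_radial(cand):
        delta = losses(cand) - losses(current)
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = cand                  # accept improving or occasional worse moves
            if losses(current) < losses(best):
                best = current[:]
    T *= 0.99                               # geometric cooling
print(best, losses(best))
```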

State of the Art: A Study on Fall Detection

Unintentional falls are common at all ages and are a frequent cause of serious or critical injuries, especially among the elderly. Fortunately, owing to recent rapid advances in technology, fall detection systems have become possible, enabling detection of falling events, monitoring of the patient, and provision of emergency support in the event of a fall. This paper presents a review of the three main categories of fall detection techniques published from 2005 to 2010, discussing the techniques along with a summary and conclusions for each.
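
As an illustration of the simplest wearable-sensor category that such reviews cover, here is a threshold-based detector on the acceleration magnitude: an impact spike followed by a period of lying still. The thresholds and timing are illustrative, not taken from the surveyed papers.

```python
import numpy as np

def detect_fall(acc, fs=50, impact_g=2.5, still_g=0.3, still_s=1.0):
    mag = np.linalg.norm(acc, axis=1) / 9.81           # |a| in units of g
    for i in np.where(mag > impact_g)[0]:              # candidate impact samples
        w = mag[i + fs // 2 : i + fs // 2 + int(still_s * fs)]
        if w.size and np.all(np.abs(w - 1.0) < still_g):
            return True                                # impact, then near-1g stillness
    return False

acc = np.tile([0.0, 0.0, 9.81], (200, 1))              # 4 s of standing at 50 Hz
acc[100] = [0.0, 0.0, 35.0]                            # simulated impact spike
print(detect_fall(acc))                                # True
```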

Mining Educational Data to Analyze Student Motivation Behavior

This research aims to discover knowledge for analyzing student motivation behavior in e-Learning using data mining techniques, in the case of the Information Technology for Communication and Learning course at Suan Sunandha Rajabhat University. The data mining techniques applied in this research include association rules and classification techniques. The results show that data mining techniques can identify the important variables that influence student motivation behavior in e-Learning.
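
A brief sketch of the association-rule step, assuming the mlxtend library and hypothetical one-hot activity logs; the column names and thresholds are illustrative, not the study's actual variables.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

logs = pd.DataFrame({                      # one row per student (hypothetical)
    "watched_lectures": [1, 1, 0, 1, 1],
    "posted_in_forum":  [1, 0, 0, 1, 1],
    "high_motivation":  [1, 0, 0, 1, 1],
}).astype(bool)

frequent = apriori(logs, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```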

An Energy Efficient Cluster Formation Protocol with Low Latency in Wireless Sensor Networks

Data gathering is an essential operation in wireless sensor network applications, so energy-efficient techniques are required to increase the lifetime of the network. Clustering is likewise an effective technique for improving the energy efficiency and network lifetime of wireless sensor networks. In this paper, an energy-efficient cluster formation protocol is proposed with the objective of achieving low energy dissipation and latency without sacrificing application-specific quality. The objective is achieved through randomized, adaptive, self-configuring cluster formation and localized control for data transfers, involving application-specific data processing such as data aggregation or compression. The cluster formation algorithm allows each node to make independent decisions, so as to generate good clusters in the end. Simulation results show that the proposed protocol requires minimal energy and latency for cluster formation, thereby reducing the overhead of the protocol.
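
Randomized, adaptive, self-configuring cluster formation of this kind is in the spirit of LEACH-style cluster-head election, in which each node decides for itself per round using only a local random draw. A minimal sketch with assumed parameters:

```python
import random

P = 0.05                                    # desired fraction of cluster heads
EPOCH = round(1 / P)                        # rounds before a node may serve again

def elect_cluster_head(node, r):
    """Each node decides independently in round r; no global coordination."""
    if node["last_ch"] is not None and r - node["last_ch"] < EPOCH:
        return False                        # served recently; conserve energy
    threshold = P / (1 - P * (r % EPOCH))   # rises as the epoch progresses
    if random.random() < threshold:
        node["last_ch"] = r
        return True
    return False

nodes = [{"id": i, "last_ch": None} for i in range(100)]
heads = [n["id"] for n in nodes if elect_cluster_head(n, r=0)]
print(len(heads), "cluster heads this round")   # about 5 expected for P = 0.05
```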

Soft Computing based Retrieval System for Medical Applications

With the increasing amount of data in medical databases, medical data retrieval is growing in popularity. Some of this analysis involves inducing propositional rules from databases using soft computing techniques and then using these rules in an expert system; diagnostic rules and information on features are extracted from clinical databases on diseases of congenital anomaly. This paper explains the latest soft computing techniques and some of the adaptive techniques, an extensive group of methods that have been applied in the medical domain for the discovery of data dependencies, feature importance, patterns in sample data, and feature-space dimensionality reduction. These approaches pave the way for new and interesting avenues of research in medical imaging and represent an important challenge for researchers.
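
As one concrete example of inducing propositional rules, the sketch below reads if-then rules off a decision tree trained on synthetic data; it is illustrative only, not the clinical databases mentioned above.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Each root-to-leaf path is an if-then rule usable in an expert system.
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(4)]))
```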

The Relevance of Data Warehousing and Data Mining in the Field of Evidence-based Medicine to Support Healthcare Decision Making

Evidence-based medicine is a new direction in modern healthcare. Its task is to prevent, diagnose, and treat diseases using medical evidence, analyzing data about a large patient population to support healthcare management and medical research. To obtain the best evidence for a given disease, external clinical expertise as well as internal clinical experience must be available to healthcare practitioners at the right time and in the right manner. External evidence-based knowledge cannot be applied directly to the patient without adjusting it to the patient's health condition. We propose a data-warehouse-based approach as a suitable solution for integrating external evidence-based data sources into the existing clinical information system, together with data mining techniques for finding an appropriate therapy for a given patient and disease. By integrating data warehousing, OLAP, and data mining techniques in the healthcare area, an easy-to-use decision support platform is built that supports the decision-making processes of caregivers and clinical managers. We present three case studies which show that a clinical data warehouse that facilitates evidence-based medicine is a reliable, powerful, and user-friendly platform for strategic decision making, and that it has great relevance for the practice and acceptance of evidence-based medicine.
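
As a small illustration of the OLAP layer of such a platform, the sketch below pivots a hypothetical clinical fact table with pandas into the kind of aggregate that would feed the decision-support front end; the schema and values are invented for illustration.

```python
import pandas as pd

facts = pd.DataFrame({
    "disease": ["diabetes", "diabetes", "asthma", "asthma"],
    "therapy": ["A", "B", "A", "B"],
    "outcome": [1, 0, 1, 1],           # 1 = patient improved
})

cube = facts.pivot_table(index="disease", columns="therapy",
                         values="outcome", aggfunc="mean")
print(cube)                            # improvement rate per disease/therapy cell
```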