Voice Command Recognition System Based on MFCC and VQ Algorithms

The goal of this project is to design a system for recognizing voice commands. Most voice recognition systems consist of two main modules: feature extraction and feature matching. In this project, the MFCC algorithm is used to implement the feature extraction module; using this algorithm, the cepstral coefficients are calculated on the mel frequency scale. Vector quantization (VQ) is then used to reduce the amount of data and thus the computation time. In the feature matching stage, Euclidean distance is applied as the similarity criterion. The combination of these algorithms gives the system high recognition accuracy: with each command repeated at least five times in a single training session and twice in each testing session, a zero error rate in command recognition is achieved.
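
A minimal sketch of this pipeline, assuming the librosa and scikit-learn libraries (the codebook size and MFCC count are illustrative, not the paper's settings):

    # Train one VQ codebook per command from its MFCC frames; recognize a test
    # utterance by the codebook with the smallest average Euclidean distortion.
    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    def mfcc_frames(wav_path):
        signal, sr = librosa.load(wav_path, sr=None)
        return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T  # frames x 13

    def train_codebook(wav_paths, n_codewords=16):
        frames = np.vstack([mfcc_frames(p) for p in wav_paths])
        return KMeans(n_clusters=n_codewords, n_init=10).fit(frames).cluster_centers_

    def distortion(frames, codebook):
        # Average Euclidean distance from each frame to its nearest codeword.
        d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
        return d.min(axis=1).mean()

    def recognize(wav_path, codebooks):   # codebooks: {command: centroid array}
        frames = mfcc_frames(wav_path)
        return min(codebooks, key=lambda c: distortion(frames, codebooks[c]))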

Face Recognition using Features Combination and a New Non-linear Kernel

To improve the classification rate of face recognition, a features combination and a novel non-linear kernel are proposed. The feature vector concatenates local binary patterns computed at three different radii with Gabor wavelet features; the Gabor features are the mean, standard deviation, and skew for each scale and orientation. The aim of the new kernel is to combine the power of kernel methods with an optimal balance between the features. To verify the effectiveness of the proposed method, numerous methods are tested on four datasets covering various emotions, orientations, configurations, expressions and lighting conditions. Empirical results show the superiority of the proposed technique compared to other methods.
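
A hedged sketch of such a combined feature vector, assuming scikit-image and SciPy (the radii, frequencies and orientations are illustrative, not the paper's settings):

    import numpy as np
    from scipy.stats import skew
    from skimage.feature import local_binary_pattern
    from skimage.filters import gabor

    def combined_features(gray):
        # gray: 2D float array. Concatenate multi-radius LBP histograms with
        # per-(scale, orientation) Gabor statistics: mean, std and skew.
        parts = []
        for radius in (1, 2, 3):
            n_points = 8 * radius
            lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=n_points + 2,
                                   range=(0, n_points + 2), density=True)
            parts.append(hist)
        for freq in (0.1, 0.2, 0.3):                  # illustrative scales
            for theta in np.arange(4) * np.pi / 4:    # illustrative orientations
                real, _ = gabor(gray, frequency=freq, theta=theta)
                parts.append([real.mean(), real.std(), skew(real.ravel())])
        return np.concatenate(parts)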

A New Extended Group Mutual Exclusion Algorithm with Low Message Complexity in Distributed Systems

The group mutual exclusion (GME) problem is an interesting generalization of the mutual exclusion problem. In group mutual exclusion, multiple processes can enter a critical section simultaneously if they belong to the same group. In the extended group mutual exclusion, each process is a member of multiple groups at the same time. As a result, once a process has selected a group and entered the critical section, other processes belonging to that same group can select it as well and enter the critical section concurrently, which avoids unnecessary blocking. This paper presents a quorum-based distributed algorithm for the extended group mutual exclusion problem. The message complexity of our algorithm is O(4Q) in the best case and O(5Q) in the worst case, where Q is the quorum size.
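
An illustrative sketch of the quorum-based entry step (the message flow and grant rule here are assumptions for exposition, not the paper's exact protocol, which must also queue, defer and release requests):

    # A process sends a REQUEST carrying its chosen group to every member of
    # its quorum and enters the critical section only if all of them GRANT.
    from dataclasses import dataclass

    @dataclass
    class Request:
        pid: int      # requesting process
        group: str    # the group it selected for this critical-section entry

    class QuorumMember:
        # Grants a request when idle or when the occupying group matches.
        def __init__(self):
            self.current_group = None
            self.occupants = 0

        def on_request(self, req):
            if self.current_group in (None, req.group):
                self.current_group = req.group
                self.occupants += 1
                return True    # GRANT: same group may share the section
            return False       # conflicting group: deferred in a real protocol

    def try_enter(quorum, req):
        return all(m.on_request(req) for m in quorum)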

Augmenting Use Case View for Modeling

Mathematical, graphical and intuitive models are often constructed in the development process of computational systems. The Unified Modeling Language (UML) is one of the most popular modeling languages used by practicing software engineers. This paper critically examines UML models and suggests an augmented use case view with the addition of new constructs for modeling software. It also shows how a use case diagram can be enhanced. The improved modeling constructs are presented with examples for clarifying important design and implementation issues.

An Integrated Natural Language Processing Approach for Conversation System

The main aim of this research is to investigate a novel technique for implementing a more natural and intelligent conversation system. Conversation systems are designed to converse like a human as much as their intelligence allows; sometimes we can think of them as the embodiment of Turing's vision. Such systems usually return a predetermined answer in a predetermined order, but real conversations abound with uncertainties of various kinds. This research focuses on an integrated natural language processing approach that includes an integrated knowledge-base construction module, a conversation understanding and generation module, and a state manager module. We discuss the effectiveness of this approach based on an experiment.
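
An illustrative skeleton of how the three modules could fit together (the class and method names are assumptions, not the paper's implementation):

    # Knowledge base, understanding/generation, and state management, wired
    # together so replies can depend on dialogue state rather than fixed order.
    class KnowledgeBase:
        def __init__(self):
            self.facts = {}

        def add(self, topic, fact):
            self.facts.setdefault(topic, []).append(fact)

    class StateManager:
        # Tracks dialogue history so answers need not follow a fixed order.
        def __init__(self):
            self.history = []

    class ConversationEngine:
        def __init__(self, kb, state):
            self.kb, self.state = kb, state

        def reply(self, utterance):
            self.state.history.append(utterance)
            facts = self.kb.facts.get(utterance.lower().strip("?"), [])
            return facts[0] if facts else "Tell me more."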

Performance Analysis of Parallel Client-Server Model Versus Parallel Mobile Agent Model

Mobile agents have motivated the creation of a new methodology for parallel computing. We introduce a methodology for the creation of parallel applications on the network. The proposed mobile-agent parallel processing framework uses multiple Java mobile agents, each of which can travel to a specified machine in the network to perform its tasks. We also introduce the concept of a master agent, a Java object capable of implementing a particular task of the target application; the master agent dynamically assigns tasks to the mobile agents. We have developed and tested a prototype application: Mobile Agent Based Parallel Computing. Boosted by the inherited benefits of using Java and mobile agents, the proposed methodology breaks the barriers between environments and could potentially exploit, in parallel, all the available computational resources on the network. This paper elaborates on the performance issues of mobile agents for parallel computing.
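
A stand-in sketch of the master/worker pattern described above, using local threads in place of mobile agents migrating between machines (the placeholder task and worker count are illustrative):

    import queue
    import threading

    tasks, results = queue.Queue(), queue.Queue()

    def agent():
        # Stand-in for a mobile agent: take a task, perform it, report back.
        while True:
            task = tasks.get()
            if task is None:
                break
            results.put((task, task ** 2))   # placeholder computation

    workers = [threading.Thread(target=agent) for _ in range(4)]
    for w in workers:
        w.start()
    for t in range(10):      # the master agent dynamically assigns tasks
        tasks.put(t)
    for _ in workers:        # poison pills shut the agents down
        tasks.put(None)
    for w in workers:
        w.join()
    gathered = [results.get() for _ in range(10)]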

Text Summarization for Oil and Gas News Article

Information volume keeps increasing; companies are so overloaded with information that they may lose track of the information they actually need, and scanning through each lengthy document is time consuming. A shorter version of a document containing only the gist is more favourable for most information seekers. Therefore, in this paper we implement a text summarization system that produces a summary containing the gist of oil and gas news articles. The summarization is intended to provide important information for oil and gas companies to monitor their competitors' behaviour and to help them formulate business strategies. The system integrates a statistical approach with three underlying concepts: keyword occurrence, the title of the news article, and the location of the sentence. The generated summaries were compared with human-generated summaries from an oil and gas company, and precision and recall ratios were used to evaluate their accuracy. Based on the experimental results, the system is able to produce an effective summary with an average recall of 83% at a compression rate of 25%.
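
A hedged sketch of a sentence scorer built on those three concepts (the equal weighting of the three terms is an illustrative assumption; the paper does not specify it here):

    import re
    from collections import Counter

    def summarize(title, text, ratio=0.25):
        sents = re.split(r'(?<=[.!?])\s+', text.strip())
        freq = Counter(re.findall(r'[a-z]+', text.lower()))
        title_words = set(re.findall(r'[a-z]+', title.lower()))

        def score(i, sent):
            toks = re.findall(r'[a-z]+', sent.lower())
            kw = sum(freq[t] for t in toks) / max(len(toks), 1)  # keyword occurrence
            tt = len(title_words & set(toks)) / max(len(title_words), 1)  # title overlap
            loc = 1.0 / (i + 1)               # earlier sentences score higher
            return kw + tt + loc              # illustrative equal weighting

        ranked = sorted(range(len(sents)), key=lambda i: score(i, sents[i]),
                        reverse=True)
        keep = sorted(ranked[:max(1, int(len(sents) * ratio))])
        return ' '.join(sents[i] for i in keep)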

The Synthetic T2 Quality Control Chart and its Multi-Objective Optimization

In some real applications of Statistical Process Control it is necessary to design a control chart that does not react to small process shifts while keeping good performance in detecting moderate and large shifts in quality. In this work we develop a new quality control chart, the synthetic T2 control chart, that can be designed to meet this objective. A multi-objective optimization is carried out employing Genetic Algorithms, finding the Pareto-optimal front of non-dominated solutions for this optimization problem.
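
For intuition, a minimal sketch of the synthetic chart's decision rule as commonly defined in the literature (the variable names and interface are illustrative, not the paper's design):

    import numpy as np

    def synthetic_t2_signal(samples, mean, cov_inv, ucl, L):
        # A sample whose Hotelling T2 exceeds the limit is "nonconforming";
        # the chart signals only when two nonconforming samples occur within
        # L samples of each other (conforming run length <= L).
        last_nc = None
        for i, x in enumerate(samples):
            d = np.asarray(x) - mean
            t2 = d @ cov_inv @ d           # Hotelling T2 statistic
            if t2 > ucl:
                if last_nc is not None and i - last_nc <= L:
                    return i               # out-of-control signal at sample i
                last_nc = i
        return None                        # no signal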

Performance of Histogram-Based Skin Colour Segmentation for Arms Detection in Human Motion Analysis Application

Arms detection is one of the fundamental problems in human motion analysis applications. The arms are considered the most challenging body parts to detect since their pose and speed vary across image sequences; moreover, the arms are usually occluded by other body parts such as the head and torso. In this paper, histogram-based skin colour segmentation is proposed to detect the arms in image sequences. Six different colour spaces, namely RGB, rgb, HSI, TSL, SCT and CIELAB, are evaluated to determine the best colour space for this segmentation procedure. The evaluation is divided into three categories: single colour component, colour without luminance, and colour with luminance. Performance is measured using True Positive (TP) and True Negative (TN) rates on 250 images with manual ground truth, and the best colour space is selected based on the highest TN value followed by the highest TP value.
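
A hedged sketch of histogram-based skin segmentation (the bin count and threshold are illustrative; the choice of the two colour components is exactly what the evaluation above is meant to decide):

    import numpy as np

    def build_skin_histogram(skin_pixels, bins=32):
        # skin_pixels: (N, 2) array of two chosen colour components in [0, 1],
        # taken from manually labelled skin regions.
        hist, _, _ = np.histogram2d(skin_pixels[:, 0], skin_pixels[:, 1],
                                    bins=bins, range=[[0, 1], [0, 1]])
        return hist / hist.max()            # normalise to [0, 1]

    def segment(image_2c, hist, threshold=0.1, bins=32):
        # image_2c: (H, W, 2) image in the same two components.
        idx = np.clip((image_2c * bins).astype(int), 0, bins - 1)
        return hist[idx[..., 0], idx[..., 1]] > threshold   # boolean skin mask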

Prediction of a Human Facial Image by ANN using Image Data and its Content on Web Pages

Choosing the right metadata is critical, as good information (metadata) attached to an image facilitates its visibility among a pile of other images. The image's value is enhanced not only by the quality of the attached metadata but also by the search technique. This study proposes a simple but efficient technique for predicting a single human image from a website using the basic image data and the embedded metadata of the image's content appearing on web pages. The result is very encouraging, with a prediction accuracy of 95%. This technique may become a great aid to librarians, researchers and many others in automatically and efficiently identifying a set of human images out of a greater set of images.

Multiple Sequence Alignment Using Three-Dimensional Fragments

Background: DIALIGN is a DNA/protein alignment tool for performing pairwise and multiple alignments through the comparison of gap-free segments (fragments) between sequence pairs. An alignment of two sequences is a chain of fragments, i.e. local gap-free pairwise alignments, with the highest total score. Method: This article defines a new approach that relies on using three-dimensional fragments, i.e. local three-way alignments, in the alignment process instead of two-dimensional ones. These three-dimensional fragments are gap-free alignments consisting of equal-length segments belonging to three distinct sequences. Results: The obtained results show good improvements over the performance of DIALIGN.
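
A hedged sketch of scoring one such three-dimensional fragment; DIALIGN-style scoring uses probabilistic fragment weights, whereas simple pairwise agreement is used here for illustration:

    def fragment_score(s1, s2, s3, i, j, k, length):
        # Score the gap-free fragment of the given length that starts at
        # positions i, j, k of three distinct sequences.
        score = 0
        for off in range(length):
            a, b, c = s1[i + off], s2[j + off], s3[k + off]
            score += (a == b) + (a == c) + (b == c)   # pairwise agreement
        return score

    # Example: fragment_score("GATTACA", "GATCACA", "GACTACA", 0, 0, 0, 7) -> 17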

Data Traffic Dynamics and Saturation on a Single Link

The dynamics of User Datagram Protocol (UDP) traffic over Ethernet between two computers are analyzed using nonlinear dynamics, which shows two clear regimes in the data flow: free flow and saturated. The two most important variables affecting this are the packet size and the packet flow rate. The transition, however, is due to a transcritical bifurcation rather than to the phase transitions found in models of vehicle traffic or of theorized large-scale computer network congestion. It is hoped this model will help lay the groundwork for further research on the dynamics of networks, especially computer networks.
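
For reference, the textbook normal form of a transcritical bifurcation (the canonical form, not the paper's specific traffic model) is

    \dot{x} = rx - x^{2},

whose fixed points $x^{*} = 0$ and $x^{*} = r$ exchange stability as the parameter $r$ crosses zero. Loosely, one may read $x$ as a measure of congestion and $r$ as the offered load.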

Mobile Phone as a Tool for Data Collection in Field Research

The necessity of accurate and timely field data is shared among organizations engaged in fundamentally different activities, whether public services or commercial operations. Basically, there are three major components in the qualitative research process: data collection, interpretation and organization of the data, and the analytic process. Representative technological advancements have been made in mobile devices (mobile phones, PDAs, tablets, laptops, etc.), resources that can potentially be applied to the data collection activity of field research in order to improve this process. This paper presents and discusses the main features of a mobile phone based solution for field data collection, composed of three modules: a survey editor, a server web application and a client mobile application. The data gathering process begins with the survey creation module, which enables the production of tailored questionnaires. The field workforce receives the questionnaire(s) on their mobile phones, collects the interview responses and sends them back to a server for immediate analysis.

A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding

In this paper, we propose a Perceptually Optimized Foveation based Embedded ZeroTree Image Coder (POEFIC) that applies a perceptual weighting to wavelet coefficients prior to SPIHT encoding in order to reach a targeted bit rate with improved perceptual quality, given a bit rate and a fixation point that determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model integrating the properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception; it weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique, and the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
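
A hedged sketch of one way such foveated weighting can be formed, attenuating coefficients with eccentricity from the fixation point (the exponential falloff and its constants are illustrative assumptions, not the paper's weights):

    import numpy as np

    def foveation_weights(shape, fixation, level, falloff=0.05):
        # Weight map for one subband: weights shrink with distance from the
        # fixation point; level 0 (the finest subband) is attenuated fastest,
        # so high frequencies are suppressed in peripheral regions.
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        fy, fx = fixation
        ecc = np.hypot(ys - fy, xs - fx)       # eccentricity in pixels
        return np.exp(-falloff * ecc / (level + 1))

    # Usage: weighted = coeffs * foveation_weights(coeffs.shape, (fy, fx), level)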

Improving Fault Resilience and Reconstruction of Overlay Multicast Tree Using Leaving Time of Participants

Network layer multicast, i.e. IP multicast, even after many years of research, development and standardization, is not deployed at large scale due to both technical (e.g. upgrading of routers) and political (e.g. policy making and negotiation) issues. Researchers have looked for alternatives and proposed application/overlay multicast, where multicast functions are handled by end hosts rather than network layer routers. Member hosts wishing to receive multicast data form a multicast delivery tree; the intermediate hosts in the tree also act as routers, forwarding data to the hosts below them. Unlike IP multicast, where a router cannot leave the tree until all members below it leave, in overlay multicast any member can leave the tree at any time, partitioning the tree and disrupting data dissemination; all the disrupted hosts then have to rejoin the tree. This characteristic of overlay multicast makes the multicast tree unstable and causes data loss and rejoin overhead. In this paper, we propose that each node set its leaving time from the tree and send a join request to a number of nodes in the tree. A node in the tree rejects the request if its leaving time is earlier than that of the requesting node, and otherwise accepts it; the requesting node can then join at one of the accepting nodes. This makes the tree more stable, as the nodes join the tree according to their leaving time, with the earliest-leaving nodes placed at the leaves of the tree. Some intermediate nodes may not keep to their leaving time and leave earlier, thus disrupting the tree; for this, we propose a proactive recovery mechanism so that disrupted nodes can immediately rejoin the tree at predetermined nodes. We have shown by simulation that there is less overhead when joining the multicast tree and that the recovery time of the disrupted nodes is much less than in previous works.
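
A minimal sketch of the leaving-time join rule (the attachment policy in the last lines is an assumption; the paper may choose among accepting nodes differently):

    class Node:
        def __init__(self, leaving_time):
            self.leaving_time = leaving_time
            self.children = []

        def accepts(self, requester):
            # Reject if we would leave the tree before the requester does.
            return self.leaving_time >= requester.leaving_time

    def join(tree_nodes, requester):
        accepting = [n for n in tree_nodes if n.accepts(requester)]
        if not accepting:
            return None                       # must retry with other nodes
        parent = max(accepting, key=lambda n: n.leaving_time)
        parent.children.append(requester)     # earliest leavers end up as leaves
        return parent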

Adaptive MPC Using a Recursive Learning Technique

A model predictive controller based on recursive learning is proposed. In this SISO adaptive controller, a model is automatically updated using simple recursive equations. The identified models are then stored in memory to be reused in the future, and the decision to update the model is taken based on a new control performance index. The new controller allows the use of simple linear model predictive controllers in the control of nonlinear, time-varying processes.
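
One common form such simple recursive equations can take is recursive least squares (RLS) with a forgetting factor; this sketch is illustrative, not necessarily the paper's exact update law:

    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.99):
        # One RLS step. theta: model parameters, P: covariance matrix,
        # phi: regressor of past inputs/outputs, y: measured output,
        # lam: forgetting factor that discounts old data.
        k = P @ phi / (lam + phi @ P @ phi)     # gain vector
        err = y - phi @ theta                   # one-step prediction error
        theta = theta + k * err                 # parameter update
        P = (P - np.outer(k, phi @ P)) / lam    # covariance update
        return theta, P, err                    # err could feed a performance index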

Genetic Algorithms with Oracle for the Traveling Salesman Problem

By introducing the concept of an Oracle, we propose an approach for improving the performance of genetic algorithms on large-scale asymmetric Traveling Salesman Problems. The results show that the proposed approach overcomes some traditional obstacles to creating efficient genetic algorithms.

Drainage Prediction for Dam using Fuzzy Support Vector Regression

Drainage estimation is an important factor in dam management. In this paper, we use fuzzy support vector regression (FSVR) to predict the drainage of the Sirikrit Dam in Uttaradit province, Thailand. The results show that FSVR is a suitable method for drainage estimation.

A Hybrid Ontology Based Approach for Ranking Documents

The increasing growth of information volume on the internet creates an increasing need to develop new (semi)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods; this combination preserves the precision of ranking without losing speed. The approach exploits natural language processing techniques to extract phrases from the documents and the query and to stem words. An ontology based conceptual method is then used to annotate documents and expand the query; to expand a query, the spread activation algorithm is improved so that the expansion can be done flexibly and along various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to do the expansion based on a weighted combination of different conceptual relationships and (5) allowing variable document vector dimensions. A ranking system called ORank was developed to implement and test the proposed model; the test results are included at the end of the paper.
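
A hedged sketch of spread activation based query expansion over a concept graph (the decay factor, hop count, threshold and graph representation are assumptions for illustration, not ORank's implementation):

    def expand_query(query_concepts, graph, hops=2, decay=0.5, threshold=0.2):
        # graph: {concept: [(neighbour, relation_weight), ...]}, where the
        # relation weight reflects the type of conceptual relationship.
        activation = {c: 1.0 for c in query_concepts}
        frontier = dict(activation)
        for _ in range(hops):
            nxt = {}
            for concept, act in frontier.items():
                for neighbour, w in graph.get(concept, []):
                    # activation spreads outward, attenuated per hop and edge
                    nxt[neighbour] = nxt.get(neighbour, 0.0) + act * w * decay
            for c, a in nxt.items():
                activation[c] = max(activation.get(c, 0.0), a)
            frontier = nxt
        # concepts activated above the threshold join the expanded query
        return {c: a for c, a in activation.items() if a >= threshold}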

A New Approach to Face Recognition Using Dual Dimension Reduction

In this paper a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient, improving recognition results, and outperforming the common DCT technique of face recognition. In pattern recognition, the discriminative information of an image increases with resolution only up to a certain point; consequently, face recognition results change with face image resolution and are optimal at a certain resolution level. In the proposed model, an image decimation algorithm is first applied to the face image, reducing its dimension to the resolution level that provides the best recognition results. The Discrete Cosine Transform (DCT) is then applied to the decimated image, owing to its computational speed and feature extraction potential, and a subset of low to mid frequency DCT coefficients that represents the face adequately and provides the best recognition results is retained. A tradeoff between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase robustness against variations in pose and illumination. The new model has been tested on different databases, including the ORL, Yale and EME color databases.
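
A minimal sketch of the dual reduction, assuming SciPy (the decimation factor and the size of the retained coefficient block are illustrative; the paper tunes both):

    import numpy as np
    from scipy.fftpack import dct

    def dual_reduction_features(gray, decimation=2, keep=16):
        # First reduction: decimate the face image to a lower resolution.
        small = np.asarray(gray, dtype=float)[::decimation, ::decimation]
        # Second reduction: 2D DCT, keeping only a low-to-mid frequency block.
        coeffs = dct(dct(small.T, norm='ortho').T, norm='ortho')
        return coeffs[:keep, :keep].ravel()   # retained coefficient subset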