Improved Closed Set Text-Independent Speaker Identification by Combining MFCC with Evidence from Flipped Filter Banks

A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This paper proposes a new set of features using a complementary filter bank structure which improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high-level features that are difficult to extract, the proposed feature set involves little computational burden during the extraction process. When combined with MFCC via a parallel implementation of speaker models, the proposed feature set outperforms baseline MFCC significantly. This proposition is validated by experiments conducted on two different kinds of public databases, namely YOHO (microphone speech) and POLYCOST (telephone speech), with Gaussian Mixture Models (GMM) as the classifier for various model orders.
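
As an illustration of the complementary filter bank idea, the mel spacing can be mirrored about the Nyquist frequency so that the triangular filters become dense in the high-frequency zone. The sketch below is only a plausible reading of that idea under stated assumptions; the function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def hz_to_mel(f):
    """Standard mel warping; compresses high frequencies."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def triangular_filter_bank(edges_hz, n_fft, sample_rate):
    """Build triangular filters whose peaks sit at the given edge frequencies."""
    bins = np.floor((n_fft + 1) * edges_hz / sample_rate).astype(int)
    n_filters = len(edges_hz) - 2
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mel_and_flipped_edges(n_filters, sample_rate):
    """Mel-spaced filter edges plus their mirror image about the Nyquist
    frequency, giving a complementary bank that is dense at high frequencies."""
    nyquist = sample_rate / 2.0
    mel_edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(nyquist), n_filters + 2))
    flipped_edges = nyquist - mel_edges[::-1]   # mirror the spacing
    return mel_edges, flipped_edges

if __name__ == "__main__":
    sr, n_fft = 16000, 512
    mel_e, flip_e = mel_and_flipped_edges(20, sr)
    mel_fb = triangular_filter_bank(mel_e, n_fft, sr)
    flip_fb = triangular_filter_bank(flip_e, n_fft, sr)
    print(mel_fb.shape, flip_fb.shape)   # (20, 257) each
```

Cepstral features from the flipped bank could then be computed exactly as for MFCC (log filter energies followed by a DCT), which is consistent with the low extraction cost claimed in the abstract.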

Experimental Studies on Multiphase Flow in Porous Media and Pore Wettability

Multiphase flow transport in porous media is very common and significant in science and engineering applications. For example, in CO2 storage and enhanced oil recovery processes, CO2 has to be delivered to the pore spaces in reservoirs and aquifers. CO2 storage and enhanced oil recovery are essentially displacement processes, in which oil or water is displaced by CO2. This displacement is controlled by pore size, the chemical and physical properties of pore surfaces and fluids, and also pore wettability. In this study, a technique was developed to measure the pressure profile for driving gas/liquid to displace water in pores. Through this pressure profile, the impact of pore size on multiphase flow transport and displacement can be analyzed. A second rig was developed to measure static and dynamic pore wettability and to investigate the effects of pore size, surface tension, viscosity and chemical structure of liquids on pore wettability.
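
For context, the entry pressure needed for a non-wetting phase to displace a wetting phase from a cylindrical pore of radius r is commonly estimated with the Young-Laplace relation, which ties together the pore size, interfacial tension σ and contact angle θ (wettability) discussed above. This is the standard textbook relation, not the measured pressure profile of this study:

```latex
P_c \;=\; P_{\text{non-wetting}} - P_{\text{wetting}} \;=\; \frac{2\,\sigma\cos\theta}{r}
```

Smaller pores and stronger water-wetting surfaces (smaller θ) raise the pressure that must be applied to drive the displacing phase into the pore.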

Estimation of Broadcast Probability in Wireless Ad Hoc Networks

Most routing protocols (DSR, AODV, etc.) that have been designed for wireless ad hoc networks incorporate the broadcasting operation in their route discovery scheme. Probabilistic broadcasting techniques have been developed to optimize the broadcast operation, which is otherwise very expensive in terms of the redundancy and traffic it generates. In this paper we explore percolation theory to gain a different perspective on probabilistic broadcasting schemes, which have been actively researched in recent years. This theory has helped us estimate the value of broadcast probability in a wireless ad hoc network as a function of the size of the network. We also show that, operating at these optimal values of broadcast probability, there is at least a 25-30% reduction in packet regeneration during successful broadcasting.
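
A minimal sketch of probabilistic (gossip) broadcasting on a random geometric graph, the kind of setting percolation arguments reason about. The node count, radio range and rebroadcast probabilities below are illustrative placeholders, not values from the paper.

```python
import math
import random

def random_geometric_graph(n, radius, seed=0):
    """Nodes scattered in the unit square; edges between nodes within radio range."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pos[i], pos[j]) <= radius:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def probabilistic_broadcast(adj, source, p, seed=0):
    """Each node that receives the packet rebroadcasts it with probability p.
    Returns (fraction of nodes reached, number of transmissions)."""
    rng = random.Random(seed)
    received = {source}
    frontier = [source]
    transmissions = 0
    while frontier:
        nxt = []
        for node in frontier:
            # the source always transmits; every other node gossips with probability p
            if node != source and rng.random() > p:
                continue
            transmissions += 1
            for nb in adj[node]:
                if nb not in received:
                    received.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(received) / len(adj), transmissions

if __name__ == "__main__":
    g = random_geometric_graph(n=500, radius=0.08)
    for p in (0.4, 0.6, 0.8, 1.0):
        reach, tx = probabilistic_broadcast(g, source=0, p=p, seed=1)
        print(f"p={p:.1f}  reach={reach:.2f}  transmissions={tx}")
```

Sweeping p in such a simulation exhibits the percolation-style threshold behavior the paper exploits: above a critical probability almost all nodes are reached while the number of redundant retransmissions drops well below flooding (p = 1.0).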

Abstraction Hierarchies for Engineering Design

Complex engineering design problems consist of numerous factors of varying criticality. Treating fundamental design features and minor details alike results in an extensive waste of time and effort. Design parameters should be introduced gradually, as appropriate, based on their significance in the problem context. This motivates the representation of design parameters at multiple levels of an abstraction hierarchy. However, developing abstraction hierarchies is an area that is not well understood. Our research proposes a novel hierarchical abstraction methodology to plan effective engineering designs and processes. It provides a theoretically sound foundation to represent, abstract and stratify engineering design parameters and tasks according to causality and criticality. The methodology creates abstraction hierarchies in a recursive, bottom-up approach that guarantees no backtracking across any of the abstraction levels. The methodology consists of three main phases: representation, abstraction, and layering into multiple hierarchical levels. The effectiveness of the developed methodology is demonstrated by a design problem.

Optical Coherence Tomography Combined with the Confocal Microscopy Method and Fluorescence for Class V Cavity Investigations

The purpose of this study is to present a non-invasive method for evaluating marginal adaptation in class V composite restorations. Standardized class V cavities, prepared in extracted human teeth, were filled with Premise (Kerr) composite. The specimens were thermocycled. The interfaces were examined by Optical Coherence Tomography (OCT) combined with confocal microscopy and fluorescence. The optical configuration uses two single-mode directional couplers with a superluminescent diode as the source at 1300 nm. The scanning procedure is similar to that used in any confocal microscope, where the fast scanning is en-face (at the line rate) and the depth scanning is much slower (at the frame rate). Gaps at the interfaces as well as inside the composite resin materials were identified. OCT has numerous advantages which justify its use in vivo as well as in vitro in comparison with conventional techniques.

Corporate Credit Rating using Multiclass Classification Models with Order Information

Corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has been one of the attractive research topics in the literature. In recent years, multiclass classification models such as the artificial neural network (ANN) or the multiclass support vector machine (MSVM) have become very appealing machine learning approaches due to their good performance. However, most of them have focused only on classifying samples into nominal categories, so the unique characteristic of credit ratings, namely ordinality, has seldom been considered in their approaches. This study proposes new types of ANN and MSVM classifiers, named OMANN and OMSVM respectively. OMANN and OMSVM are designed to extend binary ANN or SVM classifiers by applying the ordinal pairwise partitioning (OPP) strategy. These models can handle ordinal multiple classes efficiently and effectively. To validate the usefulness of these two models, we applied them to a real-world bond rating case and compared the results of our models to those of conventional approaches. The experimental results showed that our proposed models improve classification accuracy over typical multiclass classification techniques while requiring fewer computational resources.
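
One common way to read the ordered-partitioning idea is to train K-1 binary models of the form "is the rating above grade k?" and combine their outputs into ordinal class probabilities. The sketch below follows that simplified scheme and uses scikit-learn logistic regression purely as a stand-in for the binary ANN/SVM base learners of OMANN/OMSVM; it is not the paper's exact OPP formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class OrderedPartitionClassifier:
    """Simplified ordinal classifier: one binary model per threshold 'label > k',
    combined into class probabilities. A stand-in for OPP-style base learners."""

    def __init__(self, n_classes):
        self.n_classes = n_classes
        self.models = [LogisticRegression(max_iter=1000) for _ in range(n_classes - 1)]

    def fit(self, X, y):
        for k, model in enumerate(self.models):
            model.fit(X, (y > k).astype(int))   # binary target: rating above grade k?
        return self

    def predict(self, X):
        # P(y > k) for each threshold, forced to be monotone non-increasing
        greater = np.column_stack([m.predict_proba(X)[:, 1] for m in self.models])
        greater = np.minimum.accumulate(greater, axis=1)
        # P(y = k) = P(y > k-1) - P(y > k), with P(y > -1) = 1 and P(y > K-1) = 0
        bounds = np.hstack([np.ones((len(X), 1)), greater, np.zeros((len(X), 1))])
        probs = bounds[:, :-1] - bounds[:, 1:]
        return probs.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))
    y = np.digitize(X @ rng.normal(size=5), bins=[-1.0, 0.0, 1.0])  # 4 ordered grades
    clf = OrderedPartitionClassifier(n_classes=4).fit(X, y)
    print("training accuracy:", (clf.predict(X) == y).mean())
```

The appeal of such a decomposition is that each binary subproblem respects the rating order, rather than treating the grades as unrelated nominal labels.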

Selective Transverse Modes in a Diode End-Pumped Nd:YAG Pulsed Laser

The output beam quality of a laser operating on multiple transverse modes is relatively poor. In order to obtain better beam quality, one may place an aperture inside the laser resonator, so that particular transverse modes can be selected. We have selected various transverse modes both by simulation and by experiment. By inserting a circular aperture inside the diode end-pumped Nd:YAG pulsed laser resonator, we have obtained the TEM00, TEM01 and TEM20 modes and have studied which parameters can change the mode shape. We have then determined the beam quality factor of the TEM00 Gaussian mode.
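
For reference, the beam quality factor compares the measured waist-divergence product with that of an ideal Gaussian beam; this is the standard definition, independent of the specific resonator studied here:

```latex
M^2 = \frac{\pi\, w_0\, \theta}{\lambda}, \qquad M^2_{\mathrm{TEM}_{00}} = 1
```

where w0 is the beam waist radius, θ the far-field half-angle divergence, and λ the wavelength (1064 nm for Nd:YAG). Higher-order transverse modes yield M2 values greater than 1, which is why selecting the TEM00 mode improves beam quality.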

Qualification and Provisioning of xDSL Broadband Lines using a GIS Approach

This paper presents a Geographic Information System (GIS) approach for qualifying and monitoring broadband lines in an efficient way. The methodology used for interpolation is the Delaunay Triangular Irregular Network (TIN). The method is applied to a case study of an ISP in Greece monitoring 120,000 broadband lines.
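
A minimal sketch of TIN-style interpolation over scattered line-quality measurements using SciPy's Delaunay-based linear interpolator. The coordinates and attenuation values below are made-up placeholders, not data from the Greek deployment.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Scattered sample points: (x, y) locations of measured lines and, for example,
# their estimated attenuation in dB (placeholder values).
points = np.array([
    [0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
    [1.0, 1.0], [0.5, 0.5], [0.2, 0.8],
])
attenuation_db = np.array([12.0, 35.0, 28.0, 50.0, 30.0, 25.0])

# LinearNDInterpolator builds a Delaunay triangulation (the TIN) internally
# and interpolates linearly inside each triangle.
tin = LinearNDInterpolator(points, attenuation_db)

# Qualify an arbitrary location: estimate attenuation at a candidate subscriber.
candidate = np.array([[0.4, 0.3]])
print("estimated attenuation (dB):", float(tin(candidate)[0]))

# Rasterize onto a grid, e.g. to produce a GIS coverage layer.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
coverage = tin(gx, gy)   # NaN outside the convex hull of the samples
print("grid shape:", coverage.shape)
```

The interpolated surface can then be thresholded against xDSL service limits to decide which locations can be qualified for a given broadband profile.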

Extended Dynamic Source Routing Protocol for Non-Cooperating Nodes in Mobile Ad Hoc Networks

In this paper, a new approach based on the extent of friendship between nodes is proposed, which makes nodes cooperate in an ad hoc environment. The extended DSR protocol is tested under different scenarios by varying the number of malicious nodes and the node moving speed; it is also tested by varying the number of nodes in the simulation. The results indicate that the throughput achieved by the extended DSR is greater than that of the standard DSR, and that the percentage of malicious drops over total drops is lower for the extended DSR than for the standard DSR.
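
The abstract does not spell out how the friendship metric is maintained, so the sketch below is only one plausible reading: each node keeps a friendship score per neighbor, raises it on observed successful forwards, lowers it on observed drops, and agrees to forward packets only for neighbors above a threshold. All names and values are hypothetical.

```python
class FriendshipTable:
    """Illustrative friendship bookkeeping for a DSR-like node.
    Scores grow with observed cooperation and shrink with observed drops."""

    def __init__(self, reward=0.1, penalty=0.3, threshold=0.4):
        self.scores = {}          # neighbor id -> friendship score in [0, 1]
        self.reward = reward
        self.penalty = penalty
        self.threshold = threshold

    def observe_forward(self, neighbor):
        s = self.scores.get(neighbor, 0.5)
        self.scores[neighbor] = min(1.0, s + self.reward)

    def observe_drop(self, neighbor):
        s = self.scores.get(neighbor, 0.5)
        self.scores[neighbor] = max(0.0, s - self.penalty)

    def willing_to_forward_for(self, neighbor):
        """Cooperate only with neighbors whose friendship exceeds the threshold."""
        return self.scores.get(neighbor, 0.5) >= self.threshold

if __name__ == "__main__":
    table = FriendshipTable()
    table.observe_forward("node_7")
    table.observe_drop("node_9")
    table.observe_drop("node_9")
    print(table.willing_to_forward_for("node_7"))   # True
    print(table.willing_to_forward_for("node_9"))   # False: looks selfish/malicious
```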

Effect of Step Size in the Response Surface Methodology using Nonlinear Test Functions

The response surface methodology (RSM) is a collection of mathematical and statistical techniques useful for modeling and analyzing problems in which a dependent variable is influenced by several independent variables, in order to determine the conditions under which these variables should operate to optimize a production process. The RSM estimates a first-order regression model and sets the search direction using the method of maximum/minimum slope up/down (MMS U/D). However, this method selects the step size intuitively, which can affect the efficiency of the RSM. This paper assesses how the step size affects the efficiency of the methodology. The numerical examples are carried out through Monte Carlo experiments, evaluating three response variables: the efficiency of the gain function, the distance to the optimum, and the number of iterations. The simulation results showed that the efficiency of the gain function and the distance to the optimum were not affected by the step size, whereas the number of iterations was affected by both the step size and the type of test function used.
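
A minimal sketch of the first-order RSM step being studied: fit a local first-order model around the current design point and move a chosen step size along the steepest-ascent direction. The test function, design, and step sizes below are illustrative choices, which is precisely the sensitivity the paper examines.

```python
import numpy as np

def first_order_fit(X, y):
    """Fit y = b0 + b1*x1 + ... + bk*xk by least squares; return the slope vector."""
    A = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1:]

def steepest_ascent_path(f, x0, step_size, n_steps, design_radius=0.1, seed=0):
    """Repeatedly fit a local first-order model and move 'step_size' along its gradient."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        # small experimental design around the current point
        X = x + design_radius * rng.uniform(-1, 1, size=(8, len(x)))
        y = np.array([f(p) for p in X])
        slope = first_order_fit(X, y)
        direction = slope / (np.linalg.norm(slope) + 1e-12)
        x = x + step_size * direction
        path.append(x.copy())
    return np.array(path)

if __name__ == "__main__":
    # Nonlinear test function with its maximum at (1, 3) (illustrative).
    f = lambda p: -(p[0] - 1.0) ** 2 - (p[1] - 3.0) ** 2
    for step in (0.1, 0.5, 1.0):
        path = steepest_ascent_path(f, x0=(-2.0, -2.0), step_size=step, n_steps=15)
        print(f"step={step:.1f}  final point={path[-1].round(2)}  f={f(path[-1]):.3f}")
```

Running the sweep shows the trade-off the paper quantifies: small steps converge slowly (more iterations), while large steps overshoot and oscillate around the optimum.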

Performance Improvements of DSP Applications on a Generic Reconfigurable Platform

Speedups from mapping four real-life DSP applications onto an embedded system-on-chip that couples coarse-grained reconfigurable logic with an instruction-set processor are presented. The reconfigurable logic is realized by a 2-dimensional array of Processing Elements. A design flow for improving application performance is proposed. Critical software parts, called kernels, are accelerated on the Coarse-Grained Reconfigurable Array. The kernels are detected by profiling the source code. For mapping the detected kernels onto the reconfigurable logic, a priority-based mapping algorithm has been developed. Two 4x4 array architectures, which differ in their interconnection structure among the Processing Elements, are considered. The experiments for eight different instances of a generic system show significant overall application speedups for the four applications. The performance improvements range from 1.86 to 3.67, with an average value of 2.53, compared with an all-software execution. These speedups are quite close to the maximum theoretical speedups imposed by Amdahl's law.
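
For context, the theoretical ceiling referred to follows from Amdahl's law: if a fraction f of the execution time is spent in kernels accelerated by a factor s, the overall speedup is bounded as

```latex
\text{Speedup}_{\text{overall}} = \frac{1}{(1 - f) + \frac{f}{s}} \;\le\; \frac{1}{1 - f}
```

For illustration (numbers not from the paper): if profiling shows f = 0.8 of the run time in kernels and the array accelerates those kernels by s = 10, the overall speedup is 1 / (0.2 + 0.08) ≈ 3.6, and even with unbounded kernel acceleration it cannot exceed 1 / 0.2 = 5.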

Where Has All the Physical Education Gone? Results of a Generalist Primary School Teachers' Survey on Teaching Physical Education

Concerns about low levels of children's physical activity and motor skill development prompted the Ministry of Education to trial a physical activity pilot project (PAPP) in 16 New Zealand primary schools. The project comprised professional development and training in physical education for lead teachers and introduced four physical activity coordinators to liaise with the pilot schools and increase physical activity opportunities in them. A survey of generalist teachers (128 at baseline, 155 post-intervention) from these schools looked at timetabled physical activity sessions and issues related to teaching physical education. The authors calculated means and standard deviations of data relating to timetabled PE sessions and used a one-way analysis of variance to determine significant differences. Results indicated that the time devoted to physical activity related subjects increased significantly over the course of the intervention. Teachers reported improved confidence and competence, which resulted in quality physical education being delivered more often.

Optimal Facility Layout Problem Solution Using Genetic Algorithm

The Facility Layout Problem (FLP) is one of the essential problems in several types of manufacturing and service sectors. It is an optimization problem in which the main objective is to obtain efficient locations, arrangement and ordering of the facilities. Numerous facility layout studies in the literature have used meta-heuristic approaches to achieve optimal facility layout designs. This paper presents a genetic algorithm to solve the facility layout problem by minimizing a total cost function. The performance of the proposed approach was verified and compared using problems from the literature.
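
A minimal sketch of a permutation-encoded genetic algorithm for a QAP-style facility layout cost (material flow weighted by the distance between assigned locations). The flow/distance matrices, population size and operators are illustrative choices, not necessarily those used in the paper.

```python
import random

def layout_cost(perm, flow, dist):
    """Total cost: sum of flow[i][j] * distance between the locations
    that facilities i and j are assigned to."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def order_crossover(p1, p2, rng):
    """OX-style crossover: copy a slice from parent 1, fill the rest in parent-2 order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child[a:b]]
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

def genetic_layout(flow, dist, pop_size=60, generations=200, mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    n = len(flow)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: layout_cost(p, flow, dist))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            child = order_crossover(p1, p2, rng)
            if rng.random() < mutation_rate:          # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda p: layout_cost(p, flow, dist))
    return best, layout_cost(best, flow, dist)

if __name__ == "__main__":
    # Tiny illustrative instance: 5 facilities, 5 candidate locations.
    flow = [[0, 3, 0, 2, 1], [3, 0, 4, 0, 0], [0, 4, 0, 5, 0],
            [2, 0, 5, 0, 6], [1, 0, 0, 6, 0]]
    dist = [[0, 1, 2, 3, 4], [1, 0, 1, 2, 3], [2, 1, 0, 1, 2],
            [3, 2, 1, 0, 1], [4, 3, 2, 1, 0]]
    layout, cost = genetic_layout(flow, dist)
    print("best assignment (facility -> location):", layout, "cost:", cost)
```

The permutation encoding guarantees that every chromosome is a feasible one-to-one assignment of facilities to locations, which is why order-preserving crossover and swap mutation are typical choices for FLP.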

An Effective Framework for Chinese Syntactic Parsing

This paper presents an effective framework for Chinese syntactic parsing, which consists of two parts. The first is a parsing framework based on an improved bottom-up chart parsing algorithm, which integrates the beam search strategy of the N-best algorithm and the heuristic function of the A* algorithm for pruning, and then obtains multiple parsing trees. The second is a novel evaluation model, which integrates contextual and partial lexical information into the traditional PCFG model and defines a new score function. Using this model, the tree with the highest score is selected as the best parsing tree. Finally, contrasting experimental results are given. Keywords: syntactic parsing, PCFG, pruning, evaluation model.
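
A minimal sketch of the pruning idea: each chart cell keeps only its N best-scoring constituents, where an item's score combines a PCFG-style inside score with a heuristic estimate of the outside part, in the spirit of A*. The grammar labels, scores and beam width are illustrative, not the paper's.

```python
import heapq
from collections import defaultdict

def beam_prune(cell_items, beam_width):
    """Keep only the beam_width highest-scoring items in a chart cell.
    Each item is (score, label, backpointer); score = inside log-probability
    plus a heuristic estimate of the outside part (A*-style priority)."""
    return heapq.nlargest(beam_width, cell_items, key=lambda item: item[0])

# Example: a chart indexed by (start, end) spans; toy constituents with
# combined scores (inside log-probability + heuristic outside estimate).
chart = defaultdict(list)
chart[(0, 2)] = [
    (-2.1, "NP", None), (-3.5, "VP", None), (-4.0, "NP", None),
    (-5.2, "QP", None), (-6.7, "NP", None),
]

beam_width = 3
chart[(0, 2)] = beam_prune(chart[(0, 2)], beam_width)
print(chart[(0, 2)])   # only the 3 best constituents survive for this span
```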

Block Sorting: A New Characterization and a New Heuristic

The Block Sorting problem is to sort a given permutation by moving blocks. A block is defined as a substring of the given permutation which is also a substring of the identity permutation. Block Sorting has been proved to be NP-hard. To date, two different 2-approximation algorithms have been presented for Block Sorting; these remain the best known algorithms for the problem. In this work we present a different characterization of Block Sorting in terms of a transposition cycle graph. We then suggest a heuristic, which we show to exhibit a 2-approximation performance guarantee for most permutations.
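
To make the block notion concrete, the sketch below decomposes a permutation into maximal blocks and sorts it with a naive greedy rule (move the block starting with value x to sit right after x-1). This is only an illustration of the problem's moves; it is not the paper's cycle-graph heuristic and carries no approximation guarantee.

```python
def blocks(perm):
    """Decompose a permutation into maximal blocks: maximal substrings
    that are also substrings of the identity (runs of consecutive values)."""
    result, current = [], [perm[0]]
    for v in perm[1:]:
        if v == current[-1] + 1:
            current.append(v)
        else:
            result.append(current)
            current = [v]
    result.append(current)
    return result

def naive_block_sort(perm):
    """Greedy illustration: repeatedly move the block starting with value x
    to sit right after the element x-1 (or to the front when x == 1).
    Every move merges blocks, so the loop terminates with the identity."""
    perm = list(perm)
    moves = 0
    while len(blocks(perm)) > 1:
        for blk in blocks(perm):
            x = blk[0]
            target = 0 if x == 1 else perm.index(x - 1) + 1
            start = perm.index(x)
            if start != target:
                rest = perm[:start] + perm[start + len(blk):]
                target = 0 if x == 1 else rest.index(x - 1) + 1
                perm = rest[:target] + blk + rest[target:]
                moves += 1
                break
        else:
            break
    return perm, moves

if __name__ == "__main__":
    p = [4, 5, 1, 2, 6, 3]
    print(blocks(p))              # [[4, 5], [1, 2], [6], [3]]
    print(naive_block_sort(p))    # ([1, 2, 3, 4, 5, 6], number of block moves)
```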

Property Aggregation and Uncertainty with Links to the Management and Determination of Critical Design Features

Within the domain of Systems Engineering, the need to perform property aggregation in order to understand, analyze and manage complex systems is unequivocal. This can be seen in numerous areas such as capability analysis, Mission Essential Competencies (MEC) and Critical Design Features (CDF). Furthermore, the need to consider uncertainty propagation as well as the sensitivity of related properties within such analysis is equally important when determining a set of critical properties within such a system. This paper describes this property breakdown in a number of domains within Systems Engineering and, within the area of CDFs, emphasizes the importance of uncertainty analysis. As part of this, a section of the paper describes possible techniques for uncertainty propagation, and in conclusion an example is given that applies one of these techniques to property and uncertainty aggregation within an aircraft system to aid the determination of Critical Design Features.
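
A minimal sketch of one of the simplest propagation techniques such an analysis could draw on: Monte Carlo sampling of uncertain leaf properties through an aggregation function, yielding both the spread of the aggregated property and a crude sensitivity ranking. The property names and aggregation formula are illustrative placeholders, not taken from the aircraft example.

```python
import numpy as np

def propagate(aggregate, distributions, n_samples=20000, seed=0):
    """Monte Carlo propagation: sample each uncertain input property,
    push the samples through the aggregation function, and report the
    mean, standard deviation, and a correlation-based sensitivity ranking."""
    rng = np.random.default_rng(seed)
    samples = {name: draw(rng, n_samples) for name, draw in distributions.items()}
    result = aggregate(samples)
    sensitivities = {
        name: abs(np.corrcoef(vals, result)[0, 1]) for name, vals in samples.items()
    }
    return result.mean(), result.std(), sensitivities

if __name__ == "__main__":
    # Illustrative aggregated property built from three uncertain leaf properties
    # (names, distributions and formula are placeholders).
    aggregate = lambda s: s["efficiency"] * s["fuel_mass"] / s["drag_factor"]
    distributions = {
        "efficiency":  lambda rng, n: rng.normal(0.82, 0.03, n),
        "fuel_mass":   lambda rng, n: rng.normal(1200.0, 50.0, n),
        "drag_factor": lambda rng, n: rng.normal(2.4, 0.15, n),
    }
    mean, std, sens = propagate(aggregate, distributions)
    print(f"aggregated value: {mean:.1f} +/- {std:.1f}")
    print("sensitivity ranking:", sorted(sens, key=sens.get, reverse=True))
```

Properties whose uncertainty dominates the aggregated spread are natural candidates for designation as critical design features.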

Ensuring Data Security and Consistency in FTIMA - A Fault Tolerant Infrastructure for Mobile Agents

Transaction management is one of the most crucial requirements of enterprise application development, which often requires concurrent access to distributed data shared among multiple applications/nodes. Transactions guarantee the consistency of data records when multiple users or processes perform concurrent operations. The existing Fault Tolerant Infrastructure for Mobile Agents (FTIMA) provides fault-tolerant behavior in distributed transactions and uses a multi-agent system for distributed transaction processing. In the existing FTIMA architecture, data flows through the network and contains personal, private or confidential information. In banking transactions, a minor change in the transaction can cause a great loss to the user. In this paper we have modified the FTIMA architecture to ensure that the user request reaches the destination server securely and without any change. We have used Triple DES for encryption/decryption and the MD5 algorithm to verify message integrity.
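
A minimal sketch of the integrity side of the scheme using Python's standard hashlib: compute an MD5 digest before the request leaves the sender and verify it at the destination. The Triple DES encryption step is omitted here because it depends on the specific crypto library deployed in FTIMA; the message format is a made-up placeholder.

```python
import hashlib

def md5_digest(message: bytes) -> str:
    """Digest attached to the transaction message before it leaves the sender."""
    return hashlib.md5(message).hexdigest()

def verify_message(message: bytes, received_digest: str) -> bool:
    """Recomputed at the destination server; any change in transit breaks the match."""
    return md5_digest(message) == received_digest

if __name__ == "__main__":
    request = b"transfer;from=ACC-001;to=ACC-042;amount=250.00"
    digest = md5_digest(request)

    # Unmodified request passes the check.
    print(verify_message(request, digest))          # True

    # A single tampered field (the amount) is detected.
    tampered = b"transfer;from=ACC-001;to=ACC-042;amount=950.00"
    print(verify_message(tampered, digest))         # False
```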

Simulation Study of Lateral Trench Gate Power MOSFET on 4H-SiC

A lateral trench-gate power metal-oxide-semiconductor field-effect transistor on 4H-SiC is proposed. The device consists of two separate trenches in which two gates are placed on either side of the P-body region, resulting in two parallel channels. Enhanced current conduction and the reduced-surface-field effect in the structure provide a substantial improvement in device performance. Using two-dimensional simulations, the performance of the proposed device is evaluated and compared with that of the conventional device for the same cell pitch. It is demonstrated that the proposed structure provides two times higher output current, an 11% decrease in threshold voltage, a 70% improvement in transconductance, a 70% reduction in specific ON-resistance, a 52% increase in breakdown voltage, and a nearly eight-fold improvement in the figure of merit over the conventional device.

An Assessment of Software Process Optimization Compared to International Best Practice in Bangladesh

The challenge for software development houses in Bangladesh is to find a path that relies on a minimal process rather than the extensive practice and process areas of CMMI or ISO. Small and medium-sized organizations in Bangladesh want to ensure minimum basic Software Process Improvement (SPI) in day-to-day operational activities, and these basic practices should help them realize their companies' improvement goals. This paper focuses on the key issues in basic software practices for small and medium-sized software organizations that cannot afford CMMI, ISO, ITIL, etc. compliance certifications. This research also suggests a basic software process practice model for Bangladesh and shows how our suggestions map to international best practice. In today's competitive IT world, small and medium-sized software companies require collaboration and strengthening to make their current practices part of the global IT scenario. This research performed investigations and analysis of project life cycles, current good practices, effective approaches, and the realities and pain areas of practitioners. We carried out reasoning, root cause analysis, and comparative analysis of various approaches, methods and practices, and of the justifications of CMMI against real-life practice. We avoided reinventing the wheel; our focus is on a minimal set of practices that will ensure satisfaction for both organizations and software customers.

An Approach for Data Analysis, Evaluation and Correction: A Case Study from Man-Made River Project in Libya

The world's largest Pre-stressed Concrete Cylinder Pipe (PCCP) water supply project had a series of pipe failures which occurred between 1999 and 2001. This led the Man-Made River Authority (MMRA), the authority in charge of the implementation and operation of the project, to set up a rehabilitation plan for the conveyance system while maintaining the uninterrupted flow of water to consumers. At the same time, MMRA recognized the need for a long-term management tool that would facilitate repair and maintenance decisions and enable the appropriate preventive measures to be taken through continuous monitoring and estimation of the remaining life of each pipe. This management tool is known as the Pipe Risk Management System (PRMS) and is now in operation at MMRA. Both the rehabilitation plan and the PRMS require the availability of complete and accurate pipe construction and manufacturing data. This paper describes a systematic approach to data collection, analysis, evaluation and correction for the construction and manufacturing data files of Phase I pipes, which form the platform for the PRMS database and any other related decision support system.