Image Magnification Using Adaptive Interpolation by Pixel Level Data-Dependent Geometrical Shapes

With the 21st century, computer graphics and digital cameras have become prevalent, and high-resolution displays and printers are widely available. High-resolution images are therefore needed to produce high-quality displayed images and high-quality prints. However, since high-resolution images are not usually available, the original images must be magnified. A common difficulty in previous magnification techniques is preserving details, i.e. edges, while at the same time smoothing the data so as not to introduce spurious artefacts; a definitive solution is still an open issue. In this paper an image magnification method using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed that takes into account information about edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside the interpolation region while preserving sharp luminance variations and smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly outperforms nearest-neighbour (NN), bilinear (BL) and bicubic (BC) interpolation, while the quantitative results are competitive and consistent with NN, BL, BC and the others.
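The abstract does not spell out the threshold or shape-classification rules. As a rough illustration only, the following Python sketch uses a hypothetical rule: a global luminance-range threshold separates "edge" neighbourhoods, filled by pixel replication, from "smooth" neighbourhoods, filled bilinearly, which conveys the general idea of data-dependent interpolation for 2x magnification.

```python
import numpy as np

def magnify_2x(img):
    """Toy 2x magnification with an edge-aware choice between pixel replication
    and bilinear averaging. The threshold rule and the region classification are
    simplified stand-ins; the paper's data-dependent geometrical shapes are richer."""
    img = img.astype(float)
    h, w = img.shape
    # local luminance range of every 2x2 neighbourhood
    ranges = np.array([[img[y:y + 2, x:x + 2].max() - img[y:y + 2, x:x + 2].min()
                        for x in range(w - 1)] for y in range(h - 1)])
    thr = ranges.mean()                        # assumed global threshold
    out = np.zeros((2 * h - 1, 2 * w - 1))
    out[::2, ::2] = img                        # original pixels stay fixed
    for y in range(h - 1):
        for x in range(w - 1):
            block = img[y:y + 2, x:x + 2]
            if ranges[y, x] > thr:             # edge-like block: replicate, no blurring
                out[2 * y, 2 * x + 1] = block[0, 0]
                out[2 * y + 1, 2 * x] = block[0, 0]
                out[2 * y + 1, 2 * x + 1] = block[0, 0]
            else:                              # smooth block: bilinear averages
                out[2 * y, 2 * x + 1] = block[0].mean()
                out[2 * y + 1, 2 * x] = block[:, 0].mean()
                out[2 * y + 1, 2 * x + 1] = block.mean()
    # simple averaging for the remaining interpolants on the image border
    out[-1, 1::2] = (img[-1, :-1] + img[-1, 1:]) / 2
    out[1::2, -1] = (img[:-1, -1] + img[1:, -1]) / 2
    return out
```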

Automatic Voice Classification System Based on Traditional Korean Medicine

This paper introduces an automatic voice classification system for the diagnosis of individual constitution based on Sasang Constitutional Medicine (SCM) in Traditional Korean Medicine (TKM). To develop the algorithm, we used the voices of 309 female speakers and extracted a total of 134 speech features from voice data consisting of five sustained vowels and one sentence. The classification system, based on a rule-based algorithm derived from a nonparametric statistical method, produces three types of decision: reserved, positive and negative. In conclusion, 71.5% of the voice data were diagnosed by this system, of which 47.7% were correct positive decisions and 69.7% were correct negative decisions.
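A minimal sketch of how a rule-based three-way (positive / negative / reserved) decision can be organized is given below; the interval rules and cut-offs are hypothetical placeholders, not the rules the system derives from its nonparametric statistics over the 134 speech features.

```python
def classify_constitution(features, rules):
    """Three-way rule-based decision sketch.
    `rules` is a hypothetical list of (feature_name, low, high) intervals that a
    given constitution type should satisfy; cut-offs below are assumed values."""
    hits = sum(low <= features[name] <= high for name, low, high in rules)
    ratio = hits / len(rules)
    if ratio >= 0.7:          # strong agreement with the rules -> positive
        return "positive"
    if ratio <= 0.3:          # strong disagreement -> negative
        return "negative"
    return "reserved"         # otherwise withhold the decision
```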

A Combined Fuzzy Decision Making Approach to Supply Chain Risk Assessment

Many firms have implemented initiatives such as outsourced manufacturing that can make a supply chain (SC) more vulnerable to various types of disruption, so managing risk has become a critical component of SC management. Different SC vulnerability management methodologies have been proposed for managing SC risk, but most offer only point-based solutions that deal with a limited set of risks. This research aims to reinforce SC risk management by proposing an integrated approach. SC risks are identified and a risk index classification structure is created. We then develop an SC risk assessment approach based on the analytic network process (ANP) and VIKOR methods in a fuzzy environment, where vagueness and subjectivity are handled with linguistic terms parameterized by triangular fuzzy numbers. Using FANP, risk weights are calculated and then fed into FVIKOR to rank the SC members and identify the most risky partner.
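As an illustration of the fuzzy machinery involved, the sketch below defines a triangular fuzzy number (TFN) with the basic arithmetic typically needed in FANP/FVIKOR-style computations; the graded-mean defuzzification and the sample values are assumptions for illustration, not the paper's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (l, m, u) parameterizing a linguistic term."""
    l: float
    m: float
    u: float

    def __add__(self, other):
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def scale(self, k):
        """Multiply by a crisp (non-negative) weight, e.g. an FANP risk weight."""
        return TFN(k * self.l, k * self.m, k * self.u)

    def defuzzify(self):
        # graded-mean integration -- one common choice, not necessarily the paper's
        return (self.l + 4 * self.m + self.u) / 6

# usage sketch with hypothetical values
rating = TFN(3, 5, 7)              # linguistic "medium risk"
weight = 0.25                      # crisp FANP weight
print(rating.scale(weight).defuzzify())
```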

Customer Need Type Classification Model using Data Mining Techniques for Recommender Systems

Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users, including user context such as location, time and interest, to make accurate, personalized recommendations for mobile users. Information about location and time is easy to collect because mobile devices communicate with the service provider's base stations. However, information about user interest cannot be collected as easily, because interest cannot be captured automatically without the user's approval. User interest is usually represented as a need. In this study, we classify needs into two types according to prior research and investigate the usefulness of data mining techniques for classifying user need type for recommender systems. We employ several data mining techniques, including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models for classifying user need type. We also perform McNemar tests to examine the statistical significance of the differences between the classification results; these tests likewise show that CHAID performs better than the other models with statistical significance.
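The McNemar test used to compare classifiers is computed from the two classifiers' disagreements on the same test set; a small sketch of the standard continuity-corrected form is shown below.

```python
from scipy.stats import chi2

def mcnemar(y_true, pred_a, pred_b):
    """McNemar test for two classifiers evaluated on the same samples.
    b = cases where A is right and B is wrong; c = the reverse."""
    b = sum(a == t and p != t for t, a, p in zip(y_true, pred_a, pred_b))
    c = sum(a != t and p == t for t, a, p in zip(y_true, pred_a, pred_b))
    stat = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) else 0.0   # continuity-corrected
    return stat, chi2.sf(stat, df=1)                             # statistic, p-value
```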

Discovery of Production Rules with Fuzzy Hierarchy

In this paper a novel algorithm is proposed that integrates the process of fuzzy hierarchy generation and rule discovery for the automated discovery of Production Rules with Fuzzy Hierarchy (PRFH) in large databases. A concept of frequency matrix (Freq) is introduced to summarize the large database; it helps minimize the number of database accesses and supports the identification and removal of irrelevant attribute values and weak classes during fuzzy hierarchy generation. Experimental results have established the effectiveness of the proposed algorithm.
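A minimal sketch of a frequency matrix built in a single database scan is shown below, assuming records are dictionaries of attribute values with a designated class attribute; the paper's Freq structure may differ in detail.

```python
from collections import defaultdict

def frequency_matrix(records, class_attr):
    """freq[(attribute, value)][class] = record count, built in one pass so that
    the later hierarchy and rule-discovery steps need no further database access."""
    freq = defaultdict(lambda: defaultdict(int))
    for rec in records:
        cls = rec[class_attr]
        for attr, value in rec.items():
            if attr != class_attr:
                freq[(attr, value)][cls] += 1
    return freq

# usage sketch with hypothetical records
records = [{"fever": "high", "cough": "yes", "diagnosis": "flu"},
           {"fever": "low", "cough": "no", "diagnosis": "cold"}]
freq = frequency_matrix(records, class_attr="diagnosis")
```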

A CTL Specification of Serializability for Transactions Accessing Uniform Data

Existing work in temporal logic on representing the execution of infinitely many transactions uses linear-time temporal logic (LTL) and models only two-step transactions. In this paper, we use the comparatively efficient branching-time computational tree logic CTL and extend the transaction model to a class of multi-step transactions, introducing distinguished propositional variables to represent the read and write steps of n multi-step transactions accessing m data items infinitely many times. We prove that the well-known correspondence between acyclicity of conflict graphs and serializability for finite schedules extends to infinite schedules. Furthermore, in the case of transactions accessing the same set of data items in (possibly) different orders, serializability corresponds to the absence of cycles of length two. This result is used to give an efficient encoding of the serializability condition into CTL.
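For the finite-schedule case mentioned above, the conflict-graph test can be sketched directly; the schedule encoding below is an assumption for illustration, and the CTL encoding of the infinite case is of course not captured here.

```python
from itertools import combinations

def conflict_edges(schedule):
    """schedule = [(txn, op, item), ...] in execution order, op in {'r', 'w'}.
    Edge Ti -> Tj whenever an operation of Ti precedes a conflicting one of Tj."""
    edges = set()
    for (t1, op1, x1), (t2, op2, x2) in combinations(schedule, 2):
        if t1 != t2 and x1 == x2 and "w" in (op1, op2):
            edges.add((t1, t2))
    return edges

def conflict_serializable(schedule):
    """Finite-schedule test: serializable iff the conflict graph is acyclic."""
    succ = {}
    for a, b in conflict_edges(schedule):
        succ.setdefault(a, set()).add(b)
    state = {}                                  # 1 = on DFS stack, 2 = done
    def has_cycle(v):
        state[v] = 1
        for w in succ.get(v, ()):
            if state.get(w) == 1 or (state.get(w) is None and has_cycle(w)):
                return True
        state[v] = 2
        return False
    return not any(state.get(v) is None and has_cycle(v) for v in succ)

# usage sketch: T1 -> T2 and T2 -> T1 form a cycle, so the schedule is not serializable
s = [("T1", "r", "x"), ("T2", "w", "x"), ("T1", "w", "x")]
print(conflict_serializable(s))   # False
```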

An Interactive Tool for Teaching and Learning English at Upper Primary Level for Mauritius

E-learning refers to the specific kind of learning experienced within the domain of educational technology, which can be used in or out of the classroom. In this paper, we give an overview of 'An Innovative Interactive and Online English Platform for Upper Primary Students', an interactive web-based application that serves as an aid to primary school students in Mauritius. The objectives of this platform are to offer quality learning resources for the English subject at the primary level of education, to encourage self-learning, and hence to promote e-learning. The platform consists of several interesting features, for example the English Verb Conjugation tool, the Negative Form tool, the Interrogative Form tool and the Close Test Generator. This learning platform will therefore be useful at a time when the country is looking for an alternative to private tuition and is also seeking to increase the pass rate.

Post-ERP Feral System and Use of 'Feral System' as Coping Mechanism

A number of studies have highlighted problems related to ERP systems, yet most of them focus on problems during the project and implementation stages rather than during post-implementation use. Problems encountered in using ERP hinder effective exploitation as well as the extended and continued use of ERP systems and their value to organisations. This paper investigates the different types of problems users (operational, supervisory and managerial) face in using ERP and how 'feral systems' are used as the coping mechanism. The paper adopts a qualitative method and uses data collected from two cases and 26 interviews to inductively develop a causal network model of ERP usage problems and their coping mechanisms. The model classifies post-ERP usage problems into data quality, system quality, interface and infrastructure. It also categorises the different coping mechanisms enabled through the use of 'feral systems', including feral information systems, feral data and feral use of technology.

Algebraic Quantum Error Correction Codes

A systematic and exhaustive method based on the group structure of a unitary Lie algebra is proposed to generate an enormous number of quantum codes. With respect to the algebraic structure, the orthogonality condition, which is the central rule of generating quantum codes, is proved to be fully equivalent to the distinguishability of the elements in this structure. In addition, four types of quantum codes are classified according to the relation of the codeword operators and some initial quantum state. By linking the unitary Lie algebra with the additive group, the classical correspondences of some of these quantum codes can be rendered.

Face Detection Using Variance-Based Haar-Like Feature and SVM

This paper proposes a new approach to the problem of real-time face detection. The proposed method combines the primitive Haar-like feature with a variance value to construct a new feature, the so-called variance-based Haar-like feature, with which a face in an image can be represented by a small number of features. We use SVM instead of AdaBoost for training and classification. For learning purposes we built a database containing 5,000 face samples and 10,000 non-face samples extracted from real images; the 5,000 face samples include many images captured under widely varying lighting conditions. Experiments showed that a face detection system using the variance-based Haar-like feature and SVM can be much more efficient than one using the primitive Haar-like feature and AdaBoost. We tested our method on two face databases and one non-face database, and obtained a correct detection rate of 96.17% on the YaleB face database, which is 4.21% higher than that obtained using the primitive Haar-like feature and AdaBoost.
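A sketch of how a variance value over a rectangle can be computed in constant time from integral images, which is the usual way such features are evaluated, is given below; how exactly the variance is combined with the primitive Haar-like difference is an assumption here, not taken from the paper.

```python
import numpy as np

def integral_images(img):
    """Summed-area tables of the image and of its square, zero-padded on top/left."""
    img = img.astype(np.float64)
    ii = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    ii2 = np.pad((img ** 2).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return ii, ii2

def rect_sum(ii, y, x, h, w):
    """Sum of pixel values in the h x w rectangle with top-left corner (y, x)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def rect_variance(ii, ii2, y, x, h, w):
    """Variance of pixel values inside the rectangle, in O(1) per query."""
    n = h * w
    s = rect_sum(ii, y, x, h, w)
    s2 = rect_sum(ii2, y, x, h, w)
    return s2 / n - (s / n) ** 2
```

One plausible use (assumed) is to append such variances to the usual rectangle-difference Haar responses, forming the feature vector that is fed to the SVM.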

Robust Quadratic Stabilization of Uncertain Impulsive Switched Systems

This paper focuses on the quadratic stabilization problem for a class of uncertain impulsive switched systems. The uncertainty is assumed to be norm-bounded and enters both the state and the input matrices. Based on the Lyapunov methods, some results on robust stabilization and quadratic stabilization for the impulsive switched system are obtained. A stabilizing state feedback control law realizing the robust stabilization of the closed-loop system is constructed.
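For background, in the standard quadratic stabilization setting (general formulation, not taken from the paper), a state feedback u = Kx quadratically stabilizes x' = (A + ΔA + (B + ΔB)K)x if there exists a single P = Pᵀ > 0 such that, for every admissible norm-bounded uncertainty pair (ΔA, ΔB), the Lyapunov function V(x) = xᵀPx satisfies V̇(x) = xᵀ[(A + ΔA + (B + ΔB)K)ᵀP + P(A + ΔA + (B + ΔB)K)]x < 0 for all x ≠ 0. In an impulsive switched setting, such a decrease condition is typically required along the flow of each subsystem together with a non-increase condition on V across the impulse and switching instants.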

Application of Micro-continuum Approach in the Estimation of Snow Drift Density, Velocity and Mass Transport in Hilly Bound Cold Regions

We estimate snow velocity and snow drift density on hilly terrain under the assumption that the drifting snow mass can be represented using a micro-continuum approach (i.e. a non-classical mechanics approach assuming a class of fluids for which the basic equations of mass, momentum and energy have been derived). In our model, the theory of couple stress fluids proposed by Stokes [1] is employed to compute the flow parameters. Analyses of bulk drift velocity, drift density, drift transport and mass transport of snow particles have been carried out, and computations made for various parametric effects. Results are compared with those of classical mechanics (logarithmic wind profile). The results indicate that particle size affects the flow characteristics significantly.
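For reference, the classical comparison case uses the standard logarithmic wind profile u(z) = (u*/κ) ln(z/z₀), where u* is the friction velocity, κ ≈ 0.4 the von Kármán constant, and z₀ the aerodynamic roughness length; this is the textbook form of the log law, and the paper's exact parameterization may differ.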

Comparing Arabic and Latin Handwritten Digits Recognition Problems

A comparison of the performance of Latin and Arabic handwritten digit recognition is presented. The performance of ten different classifiers is tested on two similar Arabic and Latin handwritten digit databases. The analysis shows that the Arabic handwritten digit recognition problem is easier than the Latin one, because the inter-class differences for Latin digits are smaller than for Arabic digits and the variance in writing Latin digits is larger. Consequently, weaker yet fast classifiers can be expected to play a more prominent role in Arabic handwritten digit recognition.

Support Vector Machines Approach for Detecting the Mean Shifts in Hotelling's T2 Control Chart with Sensitizing Rules

In many industries, control charts are among the most frequently used tools for quality management, and Hotelling's T2 is widely used as a multivariate control chart. However, it is weak at detecting small or medium process shifts, and the use of supplementary sensitizing rules can improve detection performance. This study applies sensitizing rules to the Hotelling's T2 control chart to improve detection, and uses a support vector machines (SVM) classifier to identify the characteristic or group of characteristics responsible for the signal and to classify the magnitude of the mean shift. The experimental results demonstrate that the SVM classifier can effectively identify the characteristic or group of characteristics that caused the process mean shift as well as the magnitude of the shift.
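For reference, the Hotelling's T2 statistic monitored by the chart can be sketched as follows; the chi-square control limit shown assumes known in-control parameters, which may not match the paper's setup.

```python
import numpy as np
from scipy.stats import chi2

def hotelling_t2(x, mean, cov):
    """Hotelling's T^2 statistic for a single p-variate observation."""
    d = np.asarray(x) - np.asarray(mean)
    return float(d @ np.linalg.inv(cov) @ d)

# usage sketch: signal when T^2 exceeds an (assumed) chi-square control limit,
# then hand the out-of-control observation to the trained SVM classifier
p, alpha = 3, 0.0027
ucl = chi2.ppf(1 - alpha, df=p)
```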

400 kW Six Analytical High Speed Generator Designs for Smart Grid Systems

High-speed PM generators driven by micro-turbines are widely used in smart grid systems, so this paper presents a comparative study of six analytical design cases, classical, optimized and genetic, for 400 kW output power at a tip speed of 200 m/s. The six design trials of High Speed Permanent Magnet Synchronous Generators (HSPMSGs) are: classical sizing; unconstrained optimization minimizing total losses; constrained optimization of total mass with bounded constraints introduced in the problem formulation; a genetic algorithm formulated to obtain maximum efficiency while minimizing machine size; a second genetic formulation seeking minimum mass, with the machine sizing constrained by the non-linear function of machine losses; and, finally, an optimum torque-per-ampere genetic sizing. All results are simulated with MATLAB, the Optimization Toolbox and its Genetic Algorithm. Finally, the six analytical design examples are compared, with a study of the machines' waveforms, THD and rotor losses.

Selection of the Best Band Combination for Soil Salinity Studies Using ETM+ Satellite Images (A Case Study: Neyshaboor Region, Iran)

Soil salinity is one of the main environmental problems affecting extensive areas of the world. Traditional data collection methods are neither sufficient for addressing this problem nor accurate enough for soil studies, and remote sensing data can overcome most of these limitations. Although satellite images are commonly used for such studies, the best calibration between the data and the real situation in each specific area still needs to be found. The Neyshaboor area, in north-east Iran, was selected as the field study of this research. Landsat satellite images of this area were used to prepare suitable learning samples for processing and classifying the images. A total of 300 locations were selected randomly in the area for soil sampling, and 273 of them were retained for further laboratory work and image processing analysis. The electrical conductivity of all samples was measured. Six reflective bands of ETM+ satellite images of the study area taken in 2002 were used for soil salinity classification. The classification was carried out using common algorithms based on the best band composition. The results showed that reflective bands 7, 3, 4 and 1 are the best band composition for preparing colour composite images. We also found that hybrid classification is a suitable method for identifying and delineating the different salinity classes in the area.

A Novel SVM-Based OOK Detector in Low SNR Infrared Channels

Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques that plays an increasing role in detection problems across engineering, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to classical binary maximum-likelihood detection using a matched filter, with On-Off Keying (OOK) modulation. We found that the performance of SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially at very low signal-to-noise ratio (SNR); for large SNR, the performance of SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial for IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
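The classical baseline referred to above, matched-filter detection of OOK, can be sketched as follows; the threshold and pulse shape are left as inputs since they depend on the channel model, and the SVM detector in the paper replaces this decision rule.

```python
import numpy as np

def ook_matched_filter(received, pulse, threshold):
    """Classical OOK detection: correlate each bit interval with the pulse shape
    and compare the matched-filter output to a decision threshold."""
    samples_per_bit = len(pulse)
    n_bits = len(received) // samples_per_bit
    bits = []
    for k in range(n_bits):
        segment = received[k * samples_per_bit:(k + 1) * samples_per_bit]
        stat = np.dot(segment, pulse)          # matched-filter (correlator) output
        bits.append(1 if stat > threshold else 0)
    return np.array(bits)

# for equiprobable bits over AWGN, a common choice (assumption) is a threshold
# of half the expected "on" correlation energy: threshold = 0.5 * pulse @ pulse
```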

Performance Comparison and Evaluation of AdaBoost and SoftBoost Algorithms on Generic Object Recognition

SoftBoost is a recently presented boosting algorithm which trades off the size of the achieved classification margin against generalization performance. This paper presents a performance evaluation of the SoftBoost algorithm on the generic object recognition problem, using an appearance-based generic object recognition model. The evaluation experiments are performed on a difficult object recognition benchmark, with an assessment under different degrees of label noise as well as a comparison to the well-known AdaBoost algorithm. The obtained results reveal that SoftBoost is recommended when the training data are known to have a high degree of noise; otherwise, AdaBoost can achieve better performance.
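SoftBoost itself is not part of common libraries, but the AdaBoost baseline used for comparison can be reproduced with scikit-learn; the synthetic data below merely stands in for the appearance-based features of the benchmark.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# synthetic stand-in for appearance-based features (the paper uses a real benchmark)
X, y = make_classification(n_samples=500, n_features=50, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ada = AdaBoostClassifier(n_estimators=100, random_state=0)   # decision-stump base learners by default
ada.fit(X_tr, y_tr)
print("AdaBoost accuracy:", ada.score(X_te, y_te))
```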

Object Speed Estimation by using Fuzzy Set

Speed estimation is one of the important and practical tasks in machine vision, robotics and mechatronics. The availability of high-quality, inexpensive video cameras and the increasing need for automated video analysis have generated a great deal of interest in machine vision algorithms, and numerous approaches for speed estimation have been proposed, so a classification and survey of these methods is very useful. The goal of this paper is first to review and verify these methods, and then to propose a novel algorithm for estimating the speed of a moving object using fuzzy concepts. There is a direct relation between motion-blur parameters and object speed. In our new approach, we use the Radon transform to find the direction of the blur in the image, and fuzzy sets to estimate the motion-blur length. The main benefit of this algorithm is its robustness and precision in noisy images. Our method was tested on many images over a wide range of SNR, and the results are satisfactory.
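A sketch of the Radon-transform step for estimating the blur direction is given below, using the common heuristic that the projection of the log-magnitude spectrum with maximum variance indicates the blur angle; the fuzzy estimation of the blur length is not reproduced here.

```python
import numpy as np
from skimage.transform import radon

def blur_direction(img, angles=np.arange(0, 180)):
    """Estimate the motion-blur direction (in degrees) of a uniformly blurred image.
    The ripples in the log-magnitude spectrum are aligned with the blur, so the
    Radon projection with the largest variance is taken as the blur angle."""
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
    sinogram = radon(spectrum, theta=angles, circle=False)   # one column per angle
    return angles[np.argmax(sinogram.var(axis=0))]
```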

Formant Tracking Linear Prediction Model using HMMs for Noisy Speech Processing

This paper presents a formant-tracking linear prediction (FTLP) model for speech processing in noise. The main focus of this work is the detection of formant trajectories based on Hidden Markov Models (HMMs), for improved formant estimation in noise. The approach provides a systematic framework for modelling and using a time sequence of spectral peaks that satisfies continuity constraints on the parameters; the peaks within each frame are modelled by the LP parameters. The formant-tracking LP model estimation is composed of three stages: (1) a pre-cleaning multi-band spectral subtraction stage to reduce the effect of residual noise on the formants; (2) an estimation stage in which an initial estimate of the LP model of speech is obtained for each frame; and (3) a formant classification stage using probability models of formants and Viterbi decoders. Evaluation of the formant-tracking LP model in a Gaussian white noise background demonstrates that the proposed combination of the initial noise reduction stage with formant tracking and variable-order LPC analysis results in a significant reduction in errors and distortions. The performance was evaluated with noisy natural vowels extracted from French and English vocabulary speech signals at an SNR of 10 dB; in each case, the estimated formants are compared to reference formants.
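For stage (2), an initial per-frame LP estimate and candidate formants can be obtained with a standard autocorrelation (Yule-Walker) LP analysis followed by root-finding on the predictor polynomial; this is a generic sketch, not the paper's exact estimator.

```python
import numpy as np

def lpc_formants(frame, order, fs):
    """Candidate formant frequencies (Hz) of one speech frame from LP analysis:
    Yule-Walker solution for the predictor, then angles of the polynomial roots."""
    frame = frame * np.hamming(len(frame))
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]   # r[0..order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])        # predictor coefficients
    poly = np.concatenate(([1.0], -a))            # A(z) = 1 - sum a_k z^-k
    roots = [z for z in np.roots(poly) if z.imag > 0]
    freqs = sorted(np.angle(roots) * fs / (2 * np.pi))
    return [f for f in freqs if f > 90]           # discard near-DC roots
```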