A Robust Visual Tracking Algorithm with Low-Rank Region Covariance

The region covariance (RC) descriptor is an effective and efficient feature for visual tracking. Current RC-based tracking algorithms use the whole RC matrix directly to track the target in video. However, these whole-RC-based algorithms suffer from two issues. First, if some features are contaminated, the whole RC matrix becomes unreliable, which can cause the tracker to lose the object. Second, if a few features alone are highly discriminative against the background, the remaining features are still processed, which reduces efficiency. In this paper a new robust tracking method is proposed, in which the whole RC matrix is decomposed into several low-rank matrices. These matrices are dynamically chosen and processed so as to achieve a good trade-off between discriminability and complexity. Experimental results show that, compared with other RC-based methods, our method is more robust to complex environmental changes, especially when occlusion occurs or when the background is similar to the target.
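
As background for the descriptor itself, the sketch below computes a standard region covariance matrix from per-pixel feature vectors, following the usual RC construction; the paper's particular feature set and its low-rank decomposition are not specified in the abstract, so this is only the common baseline:

```python
import numpy as np

def region_covariance(features):
    """Standard region covariance (RC) descriptor: the d x d covariance
    of per-pixel feature vectors inside an image region.

    features: (n_pixels, d) array, each row a feature vector such as
    (x, y, intensity, |Ix|, |Iy|) sampled inside the region.
    """
    mu = features.mean(axis=0)
    centered = features - mu
    return centered.T @ centered / (features.shape[0] - 1)

# Toy example: 100 pixels with 5 features each.
rng = np.random.default_rng(0)
F = rng.normal(size=(100, 5))
C = region_covariance(F)   # 5 x 5, symmetric positive semi-definite
```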

Distributed Estimation Using an Improved Incremental Distributed LMS Algorithm

In this paper we consider the problem of distributed adaptive estimation in wireless sensor networks under two different observation-noise conditions. In the first case, we assume that the network contains some sensors with high observation-noise variance (noisy sensors). In the second case, the observation-noise variance is assumed to differ from sensor to sensor, which is closer to a real scenario. In both cases, an initial estimate of each sensor's observation noise is obtained. For the first case, we show that when such sensors are present, the performance of conventional distributed adaptive estimation algorithms, such as the incremental distributed least mean square (IDLMS) algorithm, degrades drastically, and that detecting and ignoring these sensors leads to better estimation performance. We then propose a simple algorithm to detect these noisy sensors and modify the IDLMS algorithm to deal with them. For the second case, we propose a new algorithm in which the step-size parameter is adjusted for each sensor according to its observation-noise variance. As the simulation results show, the proposed methods outperform the IDLMS algorithm under the same conditions.
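
A minimal sketch of one incremental LMS cycle with a variance-dependent step size is given below. The exact adjustment rule used in the paper is not stated in the abstract, so the scaling here is illustrative only:

```python
import numpy as np

def idlms_cycle(psi, data, mu_max, noise_var):
    """One incremental cycle over the ring of sensors.

    psi:       current global estimate (length-M weight vector)
    data:      list of (u_k, d_k) pairs, one regressor/measurement per sensor
    mu_max:    base step size
    noise_var: estimated observation-noise variance per sensor; the step
               size is scaled down for noisier sensors (an illustrative
               rule -- the paper's exact adjustment may differ)
    """
    for k, (u, d) in enumerate(data):
        mu_k = mu_max / (1.0 + noise_var[k])   # variance-dependent step size
        e = d - u @ psi                        # local prediction error
        psi = psi + mu_k * e * u               # LMS update passed to next sensor
    return psi

# Toy usage: a 4-sensor ring estimating a 3-tap parameter vector.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])
noise_var = np.array([0.01, 0.01, 1.0, 0.01])  # sensor 2 is the noisy one
psi = np.zeros(3)
for _ in range(500):                           # one cycle per time instant
    data = []
    for v in noise_var:
        u = rng.normal(size=3)
        data.append((u, u @ w_true + rng.normal(0.0, v ** 0.5)))
    psi = idlms_cycle(psi, data, mu_max=0.05, noise_var=noise_var)
```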

High Performance Liquid Chromatography Determination of Urinary Hippuric Acid and Benzoic Acid as Indices for Glue Sniffer Urine

A simple method for the simultaneous determination of hippuric acid and benzoic acid in urine using reversed-phase high performance liquid chromatography is described. Chromatography was performed on a Nova-Pak C18 (3.9 x 150 mm) column with a mobile phase of methanol:water:acetic acid (20:80:0.2) and UV detection at 254 nm. The calibration curve was linear within the concentration range of 0.125 to 6.0 mg/ml for both hippuric acid and benzoic acid. The recovery, accuracy, and coefficient of variation were 104.54%, 0.2%, and 0.2% for hippuric acid and 98.48%, 1.25%, and 0.60% for benzoic acid, respectively. The detection limit of the method was 0.01 ng/l for hippuric acid and 0.06 ng/l for benzoic acid. The method has been applied to the analysis of urine samples from suspected toluene abusers or glue sniffers among secondary school students in Johor Bahru.

An Improved Pilot-Based Channel Estimation Algorithm for OFDM Systems

This paper presents a new pilot-based channel estimation algorithm for OFDM systems, aimed at the new generation of high-data-rate communication systems. In orthogonal frequency division multiplexing (OFDM) systems over fast-varying fading channels, channel estimation and tracking are generally carried out by transmitting known pilot symbols at given positions of the frequency-time grid. We derive an improved algorithm that, for a specific distribution of the pilot signals in the OFDM frequency-time grid, computes the mean and the variance of the adjacent pilot signals and then obtains all unknown channel coefficients from the resulting mean and variance equations. Simulation results show that the performance of the OFDM system improves as the channel length increases, since the accuracy of the estimated channel increases under this low-complexity algorithm. Moreover, fewer pilot signals need to be inserted into the OFDM signal, which increases the throughput of the system compared with other pilot distributions such as comb-type and block-type channel estimation.
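
For orientation, the sketch below shows the conventional comb-type baseline (least-squares estimates at the pilot subcarriers followed by interpolation); the proposed mean/variance computation over adjacent pilots would replace the interpolation step, and its details are not given in the abstract:

```python
import numpy as np

def ls_pilot_estimate(rx, tx_pilots, pilot_idx, n_subcarriers):
    """Comb-type baseline: LS channel estimates at pilot subcarriers,
    linear interpolation on the data subcarriers in between."""
    h_p = rx[pilot_idx] / tx_pilots           # LS estimate at each pilot
    k = np.arange(n_subcarriers)
    h_re = np.interp(k, pilot_idx, h_p.real)  # interpolate real part
    h_im = np.interp(k, pilot_idx, h_p.imag)  # interpolate imaginary part
    return h_re + 1j * h_im

# Toy usage: 64 subcarriers, pilots on every 8th subcarrier, noiseless,
# all-ones symbols transmitted so the received signal equals the channel.
n = 64
pilot_idx = np.arange(0, n, 8)
rng = np.random.default_rng(0)
taps = rng.normal(size=4) + 1j * rng.normal(size=4)   # 4-tap channel
h_true = np.fft.fft(taps, n) / 2
tx_pilots = np.ones(len(pilot_idx), dtype=complex)
h_hat = ls_pilot_estimate(h_true, tx_pilots, pilot_idx, n)
```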

Automated Service Scene Detection for Badminton Game Analysis Using CHLAC and MRA

Extracting in-play scenes from sport videos is essential for quantitative analysis and effective video browsing of sport activities. Game analysis of badminton, as with other racket sports, requires detecting the start and end of each rally period in an automated manner. This paper describes an automatic serve-scene detection method employing cubic higher-order local auto-correlation (CHLAC) and multiple regression analysis (MRA). By virtue of its shift-invariance and additivity, CHLAC can extract features of the postures and motions of multiple persons without segmenting and tracking each person, and it requires no prior knowledge. Specific scenes, such as serves, are then detected from the CHLAC features by linear regression (MRA). To demonstrate the effectiveness of the method, experiments were conducted on video sequences of five badminton matches captured by a single ceiling camera. The average precision and recall rates for serve-scene detection were 95.1% and 96.3%, respectively.
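
A minimal sketch of the MRA detection stage follows, with dummy feature vectors standing in for CHLAC (whose mask-based extraction is beyond the scope of this abstract): a linear model is fit from frame-wise features to a serve/non-serve indicator and then thresholded.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, d = 500, 50                      # feature dimension arbitrary here
X = rng.normal(size=(n_frames, d))         # stand-in for CHLAC features per frame
y = (rng.random(n_frames) > 0.9).astype(float)  # dummy serve/non-serve labels

Xb = np.hstack([X, np.ones((n_frames, 1))])     # add intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)      # multiple regression fit
scores = Xb @ w
detected = scores > 0.5                    # detection threshold (illustrative)
```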

Fuzzy Numbers and MCDM Methods for Portfolio Optimization

A new deployment of two multiple criteria decision making (MCDM) techniques, Simple Additive Weighting (SAW) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), for portfolio allocation is demonstrated in this paper. Rather than referring exclusively to mean and variance, as in the traditional mean-variance method, the criteria used in this demonstration are the first four moments of the portfolio distribution. Each asset is evaluated on its marginal impact on the portfolio's higher moments, which is characterized by trapezoidal fuzzy numbers. Centroid-based defuzzification is then applied to convert the fuzzy numbers into crisp numbers on which SAW and TOPSIS can operate. Experimental results suggest that the two MCDM approaches are similarly efficient at selecting dominant assets for an optimal portfolio under higher moments. The proposed approaches allow investors to flexibly adjust their risk preferences regarding higher moments via different schemes, adapting to various kinds of investors, from conservative to risky. Another significant advantage is that, compared with mean-variance analysis, the portfolio weights obtained by SAW and TOPSIS are consistently well diversified.
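
The defuzzification and SAW steps can be made concrete. The sketch below uses the standard centroid formula for a trapezoidal fuzzy number (a, b, c, d) and assumes the crisp scores are already normalized and benefit-oriented; the weights and numbers are purely illustrative:

```python
import numpy as np

def trapezoid_centroid(a, b, c, d):
    """Centroid defuzzification of a trapezoidal fuzzy number (a, b, c, d)
    with support [a, d] and core [b, c]."""
    return (d**2 + c**2 + c*d - a**2 - b**2 - a*b) / (3.0 * (d + c - a - b))

def saw_scores(crisp, weights):
    """Simple Additive Weighting over an (n_assets, n_criteria) matrix of
    crisp scores, assumed normalized to [0, 1] and benefit-oriented."""
    return crisp @ weights

# Two assets scored on two criteria, each score a trapezoidal fuzzy number.
fuzzy = np.array([[[0.2, 0.3, 0.4, 0.5], [0.1, 0.2, 0.2, 0.3]],
                  [[0.4, 0.5, 0.6, 0.7], [0.3, 0.4, 0.5, 0.6]]])
crisp = np.array([[trapezoid_centroid(*f) for f in asset] for asset in fuzzy])
ranking = saw_scores(crisp, np.array([0.6, 0.4]))  # higher score = preferred
```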

Satellite Data Classification Accuracy Assessment Based on Reference Datasets

In order to develop forest management strategies for tropical forests in Malaysia, surveying forest resources and monitoring the forest area affected by logging activities are essential. Tremendous effort has gone into the classification of land cover related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technologies such as GIS; indeed, classification is a compulsory step in any remote sensing research. The main objective of this paper is therefore to assess the classification accuracy of a classified forest map derived from Landsat TM data using different numbers of reference data points (200 and 388). The comparison was made using an observation approach (200 reference points) and a combined interpretation-and-observation approach (388 reference points). Five land cover classes, namely primary forest, logged-over forest, water bodies, bare land, and agricultural crop/mixed horticulture, could be identified by their differences in spectral wavelength. Results showed that the overall accuracy with 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. When the confusion matrix was built from 388 rather than 200 reference points, however, the accuracy improved from 83.5% to 89.17%, with the kappa statistic increasing from 0.7502459 to 0.8026135. The accuracy achieved suggests that the strategy for selecting training areas, the interpretation approaches, and the number of reference data points used are all important for producing better classification results.
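
The accuracy figures above follow from the standard confusion matrix formulas; a small sketch with a toy matrix:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2    # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Toy 3-class confusion matrix (not the paper's data).
cm = np.array([[50,  3,  2],
               [ 4, 60,  6],
               [ 1,  5, 69]])
overall_accuracy, kappa = accuracy_and_kappa(cm)
```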

A New Algorithm for Cluster Initialization

Clustering is a well-known technique in data mining, and one of the most widely used clustering techniques is the k-means algorithm. Solutions obtained by this technique depend on the initialization of the cluster centers. In this article we propose a new algorithm to initialize the clusters. The proposed algorithm is based on finding a set of medians extracted from the dimension with maximum variance. The algorithm has been applied to different data sets and good results are obtained.
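
One plausible reading of the initialization is sketched below: sort the data along the maximum-variance dimension, split it into k equal-frequency groups, and take the median point of each group as an initial center. The paper's exact grouping rule may differ.

```python
import numpy as np

def init_centers(X, k):
    """Initialize k cluster centers from medians along the dimension
    with maximum variance (one plausible reading of the abstract)."""
    dim = np.argmax(X.var(axis=0))       # most spread-out attribute
    order = np.argsort(X[:, dim])        # data indices sorted along that dim
    groups = np.array_split(order, k)    # k contiguous equal-frequency blocks
    return np.array([X[g[len(g) // 2]] for g in groups])  # median point of each

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
centers = init_centers(X, k=3)           # feed these to k-means as seeds
```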

Carbon Disulfide Production via Hydrogen Sulfide Methane Reformation

Carbon disulfide is widely used for the production of viscose rayon, rubber, and other organic materials, and it is a feedstock for the synthesis of sulfuric acid. The objective of this paper is to analyze possibilities for the efficient production of CS2 by reformation of sour natural gas (H2SMR): 2H2S + CH4 = CS2 + 4H2. The effects of the H2S-to-CH4 feed ratio and the reaction temperature on carbon disulfide production are investigated numerically in a reforming reactor. The chemical reaction model is based on an assumed probability density function (PDF) of β shape, parameterized by the mean and variance of the mixture fraction. The results show that the major factor influencing CS2 production is reactor temperature. The yield of carbon disulfide increases with increasing H2S-to-CH4 feed ratio (H2S/CH4 ≤ 4). The yield of solid carbon C(s) also increases with temperature until about 1000 K; beyond that point, as CS2 production rises and C(s) is consumed, the C(s) yield drops with further increases in temperature. The predicted CH4 and H2S conversions and carbon disulfide yield are in good agreement with the results of Huang and T-Raissi.
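
For reference, the assumed β-PDF is fully determined by the mean and variance of the mixture fraction, in the standard form used in turbulent combustion modeling:

```latex
P(f) = \frac{f^{\alpha-1}(1-f)^{\beta-1}}
            {\int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\,dt},
\qquad
\alpha = \bar{f}\,\gamma,\quad
\beta = (1-\bar{f})\,\gamma,\quad
\gamma = \frac{\bar{f}\,(1-\bar{f})}{\overline{f'^2}} - 1,
```

where $\bar{f}$ is the mean and $\overline{f'^2}$ the variance of the mixture fraction.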

A Robust Controller for Output Variance Reduction and Minimum Variance with Application on a Permanent Field DC-Motor

In this paper, we present experimental testing of a new algorithm that determines optimal controller coefficients for output variance reduction in Linear Time Invariant (LTI) systems. The algorithm features simplicity of calculation, generalizes to minimum- and non-minimum-phase systems, and can be configured to achieve reference tracking as well as variance reduction by accepting a compromise on the output variance. An experiment on DC-motor velocity control demonstrates the application of the new algorithm to controller design. The results show that the controller achieves minimum variance and reference tracking for a preset velocity reference, relying on an identified model of the motor.

The Statistical Properties of Filtered Signals

In this paper, the statistical properties of filtered or convolved signals are considered by deriving the resulting density functions, as well as exact mean and variance expressions, given prior knowledge of the statistics of the individual signals in the filtering or convolution process. It is shown that the density function after linear convolution is a mixture density, where the number of mixture components equals the number of observations of the shortest signal. For circular convolution, the observed samples are characterized by a single density function, which is a sum of products.
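
A quick Monte Carlo check of the linear convolution case is sketched below; the closed-form mixture densities are derived in the paper itself, and this only visualizes the output statistics empirically:

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 20000
x = rng.normal(size=(trials, 8))            # the longer random signal (8 samples)
h = rng.uniform(-1, 1, size=(trials, 3))    # the shorter random signal (3 samples)

# One fixed output sample of the linear convolution y = h * x, namely
# y[2] = h[0] x[2] + h[1] x[1] + h[2] x[0], across many independent trials:
y2 = h[:, 0] * x[:, 2] + h[:, 1] * x[:, 1] + h[:, 2] * x[:, 0]
density, edges = np.histogram(y2, bins=60, density=True)  # empirical density
```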

An Interval-Based Multi-Attribute Decision Making Approach for Electric Utility Resource Planning

This paper presents an interval-based multi-attribute decision making (MADM) approach that supports the decision process under imprecise information. The proposed decision methodology is based on the model of a linear additive utility function, but extends the problem formulation with a measure of composite utility variance. A sample study concerning the evaluation of electric generation expansion strategies is provided, showing how imprecise data may affect the choice of the best solution and how a set of alternatives acceptable to the decision maker (DM) may be identified with certain confidence.
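
With non-negative weights, the bounds of a linear additive utility under interval-valued attribute scores are attained at the interval endpoints, which is enough for a small sketch; the paper's composite utility variance measure is not reproduced here, and the numbers are illustrative:

```python
import numpy as np

# Utility bounds for two expansion strategies scored on three attributes,
# each score known only as an interval [lo, hi].
lo = np.array([[0.6, 0.4, 0.7],
               [0.5, 0.6, 0.6]])
hi = np.array([[0.8, 0.5, 0.9],
               [0.7, 0.8, 0.7]])
w = np.array([0.5, 0.3, 0.2])               # non-negative attribute weights

u_lo, u_hi = lo @ w, hi @ w                 # per-strategy utility intervals
# Strategy i dominates strategy j with certainty only if u_lo[i] > u_hi[j];
# overlapping intervals are exactly the cases where imprecision matters.
```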

Determinants of the U.S. Current Account

This article provides empirical evidence on the effects of domestic and international factors on the U.S. current account deficit. Linear dynamic regression and vector autoregression models are employed to estimate the relationships over the period from 1986 to 2011. The findings of this study suggest that the current and lagged private saving rates and the current accounts of East Asian economies have played a vital role in shaping the U.S. current account. Additionally, Granger causality tests and variance decompositions show that changes in productivity growth and foreign domestic demand significantly influence changes in the U.S. current account. In sum, the empirical relationship between the U.S. current account deficit and its determinants is sensitive to the choice of regression model and specification.

Awareness of Reading Strategies among EFL Learners at Bangkok University

This questionnaire-based study aimed to measure and compare the awareness of English reading strategies among EFL learners at Bangkok University (BU), classified by gender, field of study, and length of English learning experience. Proportional stratified random sampling was employed to form a sample of 380 BU students. The data were statistically analyzed in terms of means and standard deviations. A t-test was used to find differences in awareness of reading strategies between two groups (male and female; science and social-science students). In addition, one-way analysis of variance (ANOVA) was used to compare reading strategy awareness among BU students with different lengths of English learning experience. The results indicated that the overall awareness of reading strategies of EFL learners at BU was at a high level (mean = 3.60) and that there was no statistically significant difference between males and females, or among students with different lengths of English learning experience, at the 0.05 significance level. However, significant differences were found among students from different fields of study at the same level of significance.

Efficient Detection Using Sequential Probability Ratio Test in Mobile Cognitive Radio Systems

This paper proposes a smart design strategy for a sequential detector that reliably detects the primary user's signal, especially in fast-fading environments. We study the computation of the log-likelihood ratio when coping with fast-changing received signal and noise sample variances, which are treated as random variables. First, we analyze the detectability of the conventional generalized log-likelihood ratio (GLLR) scheme under the fast-changing statistics of unknown parameters caused by fast-fading effects. Secondly, we propose an efficient sensing algorithm that performs the sequential probability ratio test in a robust and efficient manner when the channel statistics are unknown. Finally, the proposed scheme is compared with the conventional method via simulation, in terms of the average number of samples required to reach a detection decision.
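
A minimal sketch of Wald's classical sequential probability ratio test, on which the proposed sensing algorithm builds, is given below; the robust handling of unknown, fast-changing statistics is the paper's contribution and is not reproduced here:

```python
import numpy as np

def sprt(samples, llr_fn, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test: accumulate the
    log-likelihood ratio sample by sample, stop at the first
    threshold crossing, and report the sample count used."""
    upper = np.log((1 - beta) / alpha)   # cross -> accept H1 (signal present)
    lower = np.log(beta / (1 - alpha))   # cross -> accept H0 (noise only)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += llr_fn(x)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# Example: detecting a known mean shift mu1 in unit-variance Gaussian noise;
# log[N(x; mu1, 1) / N(x; 0, 1)] = mu1*x - mu1^2/2.
mu1 = 0.5
llr_gauss = lambda x: mu1 * x - mu1**2 / 2
rng = np.random.default_rng(4)
decision, n_used = sprt(rng.normal(mu1, 1.0, size=1000), llr_gauss)
```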

A New Approach for Prioritization of Failure Modes in Design FMEA using ANOVA

Traditional Failure Mode and Effects Analysis (FMEA) uses the Risk Priority Number (RPN) to evaluate the risk level of a component or process. The RPN index is calculated as the product of the severity, occurrence, and detection indexes. The most critically debated disadvantage of this approach is that different combinations of these three indexes may produce an identical RPN value. This paper addresses the drawbacks of traditional FMEA and proposes a new approach to overcome these shortcomings. A Risk Priority Code (RPC) is used to prioritize failure modes when two or more failure modes have the same RPN, and a new method is proposed to prioritize failure modes when there is disagreement in the ranking scales for severity, occurrence, and detection. An analysis of variance (ANOVA) is used to compare the means of RPN values, with the SPSS (Statistical Package for the Social Sciences) statistical package used to analyze the data. The results, based on two case studies, show that the proposed methodology resolves the limitations of the traditional FMEA approach.
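
The RPN computation and the tie cases that motivate the RPC are easy to illustrate with toy scores:

```python
import numpy as np

# RPN calculation and tie detection for four hypothetical failure modes;
# ties are exactly the cases the paper's Risk Priority Code (RPC) breaks.
sev = np.array([7, 5, 9, 3])    # severity (1-10)
occ = np.array([4, 7, 2, 6])    # occurrence (1-10)
det = np.array([3, 4, 7, 7])    # detection (1-10)

rpn = sev * occ * det           # e.g. 7*4*3 = 84
order = np.argsort(-rpn)        # failure modes ranked highest risk first
ties = [tuple(np.flatnonzero(rpn == v))
        for v in np.unique(rpn) if (rpn == v).sum() > 1]
# Here modes 2 and 3 both have RPN = 126 despite very different severities.
```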

Modeling of Material Removal on Machining of Ti-6Al-4V through EDM using Copper Tungsten Electrode and Positive Polarity

This paper presents an optimized model for investigating the effects of peak current, pulse-on time, and pulse-off time on the material removal rate (MRR) in EDM of the titanium alloy Ti-6Al-4V, using a copper tungsten electrode with positive polarity. Experiments were conducted by varying the peak current, pulse-on time, and pulse-off time, and a mathematical model was developed to correlate these variables with the material removal rate of the workpiece. Design of experiments (DOE) and response surface methodology (RSM) techniques were applied, and the fit and adequacy of the proposed model were validated through analysis of variance (ANOVA). The results show that the material removal rate increases as peak current and pulse-on time increase, while the effect of pulse-off time on MRR changes with the peak current. The optimum machining conditions for material removal rate were estimated and verified against the proposed optimized results, and the developed model agrees with the experimental results within an acceptable error of about 4%. These findings support desirable material removal rates and economical industrial machining through optimization of the input parameters.
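
A minimal sketch of fitting a full second-order RSM model by least squares is shown below, with dummy data standing in for the measured MRR values:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Least-squares fit of a full second-order response-surface model
    y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj) in the three
    EDM factors (peak current, pulse-on time, pulse-off time)."""
    x1, x2, x3 = X.T
    A = np.column_stack([np.ones(len(y)), x1, x2, x3,
                         x1**2, x2**2, x3**2,
                         x1*x2, x1*x3, x2*x3])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(20, 3))            # coded factor levels (dummy)
y = 2 + 3*X[:, 0] + X[:, 1] - 0.5*X[:, 2] + rng.normal(0, 0.1, 20)
b = fit_quadratic_rsm(X, y)                    # MRR model coefficients
```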

Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques

The world economic crisis and budget constraints have caused authorities, especially those in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation: if two variables exhibit high correlation, some of the information produced may be redundant, so one variable can be discontinued while the other continues to be measured. Ordinary least squares (OLS) regression is later employed to reconstitute information about the discontinued variable, using the continuously measured one as an explanatory variable. In this paper, two record extension techniques are employed to reconstitute information about discontinued water quality variables: OLS and the Line of Organic Correlation (LOC). An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt, and the two techniques are compared for their ability to predict different statistical parameters of the discontinued variables. Results show that OLS is better at estimating individual water quality records but underestimates the variance of the extended records. The LOC technique is superior at preserving the characteristics of the entire distribution and avoids this underestimation of the variance. It is concluded that OLS can be used for the substitution of missing values, while LOC is preferable for inferring statements about the probability distribution.
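
The two estimators differ only in their slope, which is what drives the variance behavior described above; a compact sketch with synthetic records:

```python
import numpy as np

def extend_record(x, y):
    """Intercepts and slopes of OLS and the Line of Organic Correlation
    (LOC) for extending a discontinued record y from a continued record x.
    OLS minimizes squared error but shrinks the variance of the extended
    record by r^2; LOC uses slope sign(r)*s_y/s_x, which preserves it."""
    r = np.corrcoef(x, y)[0, 1]
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    b_ols = r * sy / sx
    b_loc = np.sign(r) * sy / sx
    return ((y.mean() - b_ols * x.mean(), b_ols),
            (y.mean() - b_loc * x.mean(), b_loc))

rng = np.random.default_rng(6)
x = rng.normal(10, 2, 200)                    # continuously measured variable
y = 0.8 * x + rng.normal(0, 1, 200)           # correlated, later discontinued
ols, loc = extend_record(x[:100], y[:100])    # calibrate on the overlap period
y_ols = ols[0] + ols[1] * x[100:]             # var(y_ols) < var(y): shrinkage
y_loc = loc[0] + loc[1] * x[100:]             # LOC keeps the variance
```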

The Development of Positive Emotion Regulation Strategies Scale for Children and Adolescents

This study was designed to develop the Positive Emotion Regulation Questionnaire (PERQ), a self-report measure of positive emotion regulation strategies. The 14 items developed for the survey instrument were based on the literature on elements of positive regulation strategies. A total of 319 elementary students (ages 12 to 14) were recruited from three public elementary schools and surveyed on their use of positive emotion regulation strategies. Of the 319 questionnaires, 20 were invalid, yielding a response rate of 92%. The collected data were analyzed through item analysis, factor analysis, and structural equation modeling. Based on the item analysis, the formal survey instrument was reduced to 11 items. A principal axis factor analysis with varimax rotation yielded a two-factor solution (savoring strategy and neutralizing strategy) that accounted for 55.5% of the total variance, and the two-factor structure of the scale was confirmed by structural equation modeling. The reliability coefficients (Cronbach's α) of the two factors were .92 and .74. A gender difference was found only for the savoring strategy. In conclusion, the positive emotion regulation strategies questionnaire offers a brief, internally consistent, and valid self-report measure for understanding the emotion regulation strategies of children, which may be useful to researchers and applied professionals.

Investigations into the Effect of Turning Parameters on the Surface Roughness of Flame-Hardened Medium Carbon Steel with TiN-Al2O3-TiCN Coated Inserts Based on Taguchi Techniques

The aim of this research is to evaluate surface roughness and to develop a multiple regression model for surface roughness as a function of the cutting parameters during turning of flame-hardened medium carbon steel with TiN-Al2O3-TiCN coated inserts. An experimental plan and the signal-to-noise (S/N) ratio were used to relate the influence of the turning parameters to the workpiece surface finish, following the Taguchi methodology, and the effects of the turning parameters were studied using analysis of variance (ANOVA). The evaluated parameters were feed, cutting speed, and depth of cut. The most significant interaction among the considered turning parameters was found to be between depth of cut and feed. The average surface roughness (Ra) obtained with the TiN-Al2O3-TiCN coated inserts was about 2.44 μm, with a minimum value of 0.74 μm. In addition, the regression model was able to predict surface roughness values that agreed with the experimental values within reasonable limits.
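
Since surface roughness is a smaller-the-better response, the Taguchi S/N ratio used in such studies takes the standard form below:

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi signal-to-noise ratio for a smaller-the-better response
    such as surface roughness Ra: S/N = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Replicated Ra measurements (um) for one combination of feed, cutting
# speed, and depth of cut (values illustrative only).
print(sn_smaller_is_better([0.74, 0.81, 0.78]))   # higher S/N = better finish
```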