Image Segment Matching Using Affine-Invariant Regions

In this paper, a method for matching image segments using triangle-based (geometrical) regions is proposed. Triangular regions are formed from triples of vertex points obtained from a keypoint detector (SIFT). However, triangular regions are subject to noise and distortion around their edges and vertices (especially at acute angles); these triangles are therefore expanded into parallelogram-shaped regions. The extracted image segments inherit an important property of triangles: invariance to affine distortion. Given two images, corresponding regions are matched by computing the relative affine matrix, rectifying one of the regions with respect to the other, and then calculating the similarity between the reference and rectified regions. Experimental tests show the efficiency and robustness of the proposed algorithm against geometrical distortion.
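
To make the rectify-and-compare step concrete, the following Python sketch (not the authors' implementation) estimates the affine transform between two corresponding triangles with OpenCV, warps one region onto the other, and scores similarity with normalized cross-correlation. The function and variable names (rectify_and_score, img_ref, img_test, tri_ref, tri_test) and the choice of similarity measure are placeholder assumptions.

    import numpy as np
    import cv2

    def rectify_and_score(img_ref, img_test, tri_ref, tri_test):
        """Warp img_test so tri_test maps onto tri_ref, then compare regions.

        tri_ref, tri_test: (3, 2) arrays of corresponding triangle vertices.
        """
        # Affine matrix taking the test triangle onto the reference triangle.
        A = cv2.getAffineTransform(tri_test.astype(np.float32),
                                   tri_ref.astype(np.float32))
        h, w = img_ref.shape[:2]
        rectified = cv2.warpAffine(img_test, A, (w, h))

        # Restrict the comparison to the filled reference triangle.
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, tri_ref.astype(np.int32), 1)
        a = img_ref[mask == 1].astype(np.float64).ravel()
        b = rectified[mask == 1].astype(np.float64).ravel()

        # Normalized cross-correlation as the similarity measure.
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

In practice one would enlarge the triangles into the parallelogram-shaped regions described above before masking; the sketch keeps the plain triangle only for brevity.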

Variance Based Component Analysis for Texture Segmentation

This paper presents a comparative analysis of a new unsupervised PCA-based technique for steel plate texture segmentation aimed at defect detection. The proposed scheme, called Variance Based Component Analysis (VBCA), employs PCA for feature extraction, applies a feature reduction algorithm based on the variance of eigenpictures, and classifies pixels as defective or normal. Whereas classic PCA uses a clusterer such as K-means for pixel clustering, VBCA employs thresholding and some post-processing operations to label pixels as defective or normal. Experimental results show that VBCA is 12.46% more accurate and 78.85% faster than classic PCA.
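
As a rough illustration of the general idea (PCA on image patches, variance-based component selection, and thresholding rather than K-means), the Python sketch below flags patches whose reconstruction error is unusually large. It is not the VBCA pipeline itself; the patch size, retained-variance fraction, and threshold multiplier are placeholder assumptions.

    import numpy as np

    def pca_defect_map(image, patch=8, keep_var=0.95, thresh_k=2.0):
        """Generic PCA + variance-threshold defect labelling (not exact VBCA)."""
        h, w = image.shape
        H, W = h // patch, w // patch
        blocks = (image[:H * patch, :W * patch]
                  .reshape(H, patch, W, patch)
                  .swapaxes(1, 2)
                  .reshape(H * W, patch * patch)
                  .astype(np.float64))
        X = blocks - blocks.mean(axis=0)

        # Principal components of the patches ("eigenpictures").
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        var = S ** 2
        k = int(np.searchsorted(np.cumsum(var) / var.sum(), keep_var)) + 1

        # Reconstruction error per patch from the retained components.
        proj = X @ Vt[:k].T
        err = np.linalg.norm(X - proj @ Vt[:k], axis=1)

        labels = err > err.mean() + thresh_k * err.std()   # True = defective
        return labels.reshape(H, W)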

Robust Clustering with Dimension Reduction

Clustering is the process of identifying homogeneous groups of objects, called clusters, and is an interesting topic in data mining; objects within a group share similar characteristics. This paper discusses a robust clustering process for image data using two dimension-reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard way to handle high dimensionality is dimension reduction, which transforms high-dimensional data into a lower-dimensional space with limited loss of information; one of the most common forms of dimensionality reduction is PCA. 2DPCA, often regarded as a variant of PCA, treats the image matrices directly as 2D matrices; they do not need to be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. The decomposed classical covariance matrix, however, is very sensitive to outlying observations. The objective of this paper is to compare the performance of robust minimizing vector variance (MVV) in the two-dimensional projection (2DPCA) and in PCA for clustering arbitrary image data when outliers are hidden in the data set. Simulations of the robustness aspects and an illustration of clustering images are discussed at the end of the paper.
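
For readers unfamiliar with 2DPCA, the following Python sketch shows how the image covariance matrix is built directly from the 2D image matrices and how images are projected onto its leading eigenvectors. Only plain (non-robust) 2DPCA is shown; the robust MVV estimator studied in the paper is not reproduced, and the function name two_d_pca is a placeholder.

    import numpy as np

    def two_d_pca(images, n_components):
        """Plain 2DPCA sketch (robust MVV covariance estimation not shown).

        images: array of shape (n, h, w); each image stays a 2D matrix.
        Returns the projection matrix W (w x n_components) and the features.
        """
        images = np.asarray(images, dtype=np.float64)
        centered = images - images.mean(axis=0)

        # Image covariance built directly from 2D matrices:
        # G = (1/n) * sum_i (A_i - mean)^T (A_i - mean), of size w x w.
        G = np.einsum('nhw,nhv->wv', centered, centered) / len(images)

        eigvals, eigvecs = np.linalg.eigh(G)
        order = np.argsort(eigvals)[::-1][:n_components]
        W = eigvecs[:, order]            # leading projection axes
        features = centered @ W          # shape (n, h, n_components)
        return W, features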

Credit Spread Changes and Volatility Spillover Effects

The purpose of this paper is to investigate the influence of a number of variables on the conditional mean and conditional variance of credit spread changes. The empirical analysis is conducted within the context of bivariate GARCH-in-Mean models, using the so-called BEKK parameterization. We show that credit spread changes are determined by interest-rate and equity-return variables, which is in line with the theory provided by structural models of default. We also identify credit spread change volatility as an important determinant of credit spread changes, and provide evidence on the transmission of volatility between the variables under study.
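
For reference, a common textbook form of the bivariate GARCH(1,1)-in-Mean model with the BEKK parameterization (the paper's exact specification, including any exogenous regressors, is not reproduced here) can be written as

    y_t = \mu + \Lambda\,\operatorname{vech}(H_t) + \varepsilon_t, \qquad
    \varepsilon_t \mid \mathcal{F}_{t-1} \sim N(0, H_t),

    H_t = C'C + A'\,\varepsilon_{t-1}\varepsilon_{t-1}'\,A + B'\,H_{t-1}\,B,

where y_t stacks the two series under study, C is triangular, and the in-mean term \Lambda\,\operatorname{vech}(H_t) lets the conditional (co)variances enter the conditional mean, which is how volatility can act as a determinant of credit spread changes.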

Evaluation of Chlorophyll Content and Chlorophyll Fluorescence Parameters and Relationships between Chlorophyll a, b and Chlorophyll Content Index under Water Stress in Olea europaea cv. Dezful

This study was conducted to determine the effect of water stress on chlorophyll content and chlorophyll fluorescence parameters in young 'Dezful' olive trees. Three irrigation regimes (40% ETcrop, 65% ETcrop and 100% ETcrop) were used. After the irrigation treatments were applied, several biochemical parameters, including chlorophyll a, chlorophyll b, total chlorophyll, chlorophyll fluorescence and the chlorophyll content index (C.C.I.), were measured. Analysis of variance showed that the irrigation treatments had a significant effect on chlorophyll a, total chlorophyll (chl a+b), C.C.I. and the Fv/Fm ratio. The decreases in chlorophyll a and total chlorophyll in plants receiving 40% ETcrop were 51.55% and 46.86%, respectively, compared with 100% ETcrop.

A Fast Object Detection Method with Rotation Invariant Features

Based on combined shape and texture features, a fast object detection method with rotation-invariant features is proposed in this paper. A fast template-matching scheme based on online learning, designed for online applications, is also introduced. The experimental results show that the proposed approach has lower computational complexity and a higher detection rate while maintaining nearly the same performance as the HOG-based method, making it more suitable for run-time applications.

Effects of Computer-Based Instructional Designs among Pupils of Different Music Intelligence Levels

The purpose of this study was to investigate the effects of computer-based instructional designs, namely the modality and redundancy principles, on the attitude and learning of music theory among primary pupils of different Music Intelligence levels. The music theory lesson was developed in three different modes: audio with image (AI), text with image (TI), and audio with image and text (AIT). The independent variable was the mode of courseware, the moderator variable was music intelligence, and the dependent variable was the post-test score. ANOVA was used to determine whether there were significant differences in the pretest scores among the three groups. Analysis of covariance (ANCOVA) and post hoc tests were carried out to examine the main effects as well as the interaction effects of the independent variable on the dependent variable. High music intelligence pupils performed significantly better than low music intelligence pupils in all three treatment modes. The AI mode was found to help pupils with low music intelligence significantly more than the TI and AIT modes.

Analysis of Chatter in Ball End Milling by Wavelet Transform

Chatter is one of the major limitations on productivity in the ball end milling process; it affects the surface roughness, the dimensional accuracy and the tool life. The aim of this research is to propose a new system to detect chatter during the ball end milling process using the wavelet transform. The proposed method is implemented on a 5-axis CNC machining center, and three new parameters are introduced from the three dynamic cutting forces, calculated as the ratio of the average variance of each dynamic cutting force to its absolute variance. It is shown that chatter can be detected more easily during in-process cutting by using the parameters proposed in this research. The experimentally obtained results show that the wavelet transform can provide reliable results for detecting chatter under various cutting conditions.

Statistical Approach to Basis Function Truncation in Digital Interpolation Filters

In this paper, an alternative analysis in the time domain is described, and the results of the interpolation process are presented by means of functions based on the rule of conditional mathematical expectation and the covariance function. A comparison between the interpolation error caused by low-order filters and the classic truncated sinc(t) function is also presented. When fewer samples are used, low-order filters have less error; as the number of samples increases, sinc(t)-type functions become the better alternative. Generally speaking, there is an optimal filter for each input signal, which depends on the filter length and the covariance function of the signal. A novel scheme of work for adaptive interpolation filters is also presented.
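
To illustrate interpolation by conditional expectation, the Python sketch below computes the minimum-mean-square-error estimate of a zero-mean stationary process from its samples, which for a Gaussian process equals the conditional expectation given those samples. The exponential covariance and the sample values in the example are placeholder assumptions, not data from the paper.

    import numpy as np

    def mmse_interpolate(t_samples, x_samples, t_query, cov):
        """E[x(t*) | samples] = k' K^{-1} x for a zero-mean stationary process,
        where K[i, j] = R(t_i - t_j) and k[i] = R(t* - t_i)."""
        t = np.asarray(t_samples, dtype=float)
        x = np.asarray(x_samples, dtype=float)
        K = cov(t[:, None] - t[None, :])                     # sample covariance matrix
        k = cov(np.asarray(t_query)[:, None] - t[None, :])   # cross-covariances
        weights = np.linalg.solve(K, x)
        return k @ weights

    # Example with covariance R(tau) = exp(-|tau|) and synthetic samples.
    rng = np.random.default_rng(0)
    t_s = np.arange(8.0)
    x_s = rng.standard_normal(8)
    t_q = np.linspace(0.0, 7.0, 50)
    x_hat = mmse_interpolate(t_s, x_s, t_q, lambda tau: np.exp(-np.abs(tau)))

The weights obtained from the covariance function play the role of the interpolation filter taps; replacing them with samples of a truncated sinc(t) gives the classic interpolator used for comparison in the paper.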

Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis

The wavelet-transform approach has been widely used for image denoising because of its multi-resolution nature, its ability to produce high levels of noise reduction, and the low level of distortion it introduces. However, in removing noise, high-frequency components belonging to edges are also removed, which blurs signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 x 32 pixel blocks and transforming the data into the wavelet domain. A fast lifting wavelet spatial-frequency decomposition and reconstruction is developed, with the advantages of being computationally efficient and minimizing boundary effects. Adaptive thresholding based on local variance estimation and edge strength measurement can effectively reduce image noise while preserving the features of the original image corresponding to object boundaries. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise, and that it can be adapted to different classes of images and types of noise. The proposed algorithm offers a potential parallel-computation solution for real-time or embedded system applications.
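
A simplified Python sketch of variance-adaptive wavelet thresholding is given below. It omits the paper's lifting implementation and edge-strength measurement, uses PyWavelets rather than a custom transform, and applies a BayesShrink-style threshold per block of each detail subband; the wavelet, level and block size are placeholder assumptions.

    import numpy as np
    import pywt

    def local_variance_denoise(image, wavelet="db2", level=2, block=32):
        """Variance-adaptive soft thresholding per block (edge analysis omitted)."""
        coeffs = pywt.wavedec2(np.asarray(image, dtype=np.float64), wavelet, level=level)
        # Robust noise estimate from the finest diagonal subband.
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        out = [coeffs[0]]
        for detail in coeffs[1:]:
            new_detail = []
            for sub in detail:
                den = sub.copy()
                for i in range(0, sub.shape[0], block):
                    for j in range(0, sub.shape[1], block):
                        blk = sub[i:i + block, j:j + block]
                        # BayesShrink-style threshold from the block's local variance.
                        sig_x = np.sqrt(max(blk.var() - sigma ** 2, 1e-12))
                        thr = sigma ** 2 / sig_x
                        den[i:i + block, j:j + block] = pywt.threshold(blk, thr, mode="soft")
                new_detail.append(den)
            out.append(tuple(new_detail))
        return pywt.waverec2(out, wavelet)

Blocks with high local variance (likely containing edges) receive a small threshold and are barely modified, while smooth blocks are thresholded more aggressively, which is the behaviour the local-variance analysis in the paper is designed to achieve.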

Adaptive Block State Update Method for Separating Background

In this paper, we propose a robust moving-object detection method that handles lighting effects in night-time street images by updating a block-based reference background model through block-state analysis. The test data are color video sequences acquired from a stationary camera. When artificial illumination such as street lights or sign lights appears suddenly, the reference background model must incorporate this information; natural illumination changes gradually over time, whereas artificial illumination appears abruptly. To detect artificial illumination accurately, the proposed method uses a two-stage process. The first stage compares the current image with the reference background block by block to identify changed blocks. The second stage compares the edge map of the current image with the edge map of the reference background, which makes it possible to estimate the illumination within each block. This information allows objects and artificial illumination to be detected accurately and yields a cleaner reference background. Each block is classified by block-state analysis into one of four states: transient, stationary, background, or artificial illumination [1]. Experimental results show that the presented approach works well in the presence of illumination variation.
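
A rough Python sketch of the two-stage block test is shown below. It reduces the paper's four block states to a simplified three-way labeling (background, object, illumination), and the block size, intensity threshold, and edge-density threshold are placeholder assumptions rather than values from the paper.

    import numpy as np
    import cv2

    def classify_blocks(current, background, block=16, diff_thresh=15.0, edge_thresh=0.05):
        """Stage 1: block-wise intensity difference against the reference background.
        Stage 2: block-wise edge-map difference, used to separate sudden
        (artificial) illumination from real objects."""
        cur_gray = cv2.cvtColor(current, cv2.COLOR_BGR2GRAY)
        bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
        cur_edges = cv2.Canny(cur_gray, 50, 150) > 0
        bg_edges = cv2.Canny(bg_gray, 50, 150) > 0
        cur = cur_gray.astype(np.float64)
        bg = bg_gray.astype(np.float64)

        states = {}
        h, w = cur.shape
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                sl = (slice(i, i + block), slice(j, j + block))
                changed = np.abs(cur[sl] - bg[sl]).mean() > diff_thresh
                edge_changed = np.abs(cur_edges[sl].mean() - bg_edges[sl].mean()) > edge_thresh
                if not changed:
                    states[(i, j)] = "background"
                elif edge_changed:
                    states[(i, j)] = "object"         # structure changed
                else:
                    states[(i, j)] = "illumination"   # brightness-only change
        return states

Blocks labeled "illumination" would be folded back into the reference background model, while "object" blocks are kept out of the update, which is the intent of the block-state update described above.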

Optimization of Control Parameters for MRR in Injection Flushing Type of EDM on Stainless Steel 304 Workpiece

The operating control parameters of the injection flushing type of electrical discharge machining (EDM) process on a stainless steel 304 workpiece with copper tools are optimized according to an individual machining characteristic, the material removal rate (MRR). A low MRR during the EDM machining process decreases machining productivity; hence, the quality characteristic for MRR is set to higher-the-better to achieve optimum machining productivity. The Taguchi method has been used for the construction, layout and analysis of the experiment for this machining characteristic, and its use saves considerable time and cost in preparing and machining the experimental samples. An L18 orthogonal array, a fundamental component of the statistical design of experiments, has been used to plan the experiments, and analysis of variance (ANOVA) is used to determine the optimum machining parameters for this characteristic. The control parameters selected for the optimization experiments are polarity, pulse-on duration, discharge current, discharge voltage, machining depth, machining diameter and dielectric liquid pressure. The results show that the higher the discharge voltage, the higher the MRR.
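
For the higher-the-better quality characteristic used here, the Taguchi signal-to-noise ratio is S/N = -10 log10((1/n) Σ 1/y_i²), and the factor levels that maximize the mean S/N are selected. The minimal Python sketch below computes it; the replicate MRR values are purely hypothetical.

    import numpy as np

    def sn_higher_the_better(responses):
        """Taguchi S/N ratio for a higher-the-better characteristic such as MRR."""
        y = np.asarray(responses, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    # Hypothetical MRR replicates (mm^3/min) for one L18 trial.
    print(round(sn_higher_the_better([4.2, 4.5, 4.1]), 2))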

Multivariate Statistical Analysis of Decathlon Performance Results in Olympic Athletes (1988-2008)

The performance results of the athletes who competed in the 1988-2008 Olympic Games were analyzed (n = 166). The data were obtained from the official IAAF protocols. In the principal component analysis, the first three principal components explained 70% of the total variance. In the 1st principal component (explaining 43.1% of the total variance) the largest factor loadings were for the 100 m (0.89), 400 m (0.81), 110 m hurdles (0.76), and long jump (–0.72); this factor can be interpreted as 'sprinting performance'. The loadings on the 2nd factor (15.3% of the total variance) presented a counter-intuitive throwing-jumping combination: the highest loadings were for throwing events (javelin throw 0.76, shot put 0.74, discus throw 0.73) and also for jumping events (high jump 0.62, pole vault 0.58). On the 3rd factor (11.6% of the total variance), the largest loading was for the 1500 m run (0.88); all other loadings were below 0.4.

Optimal Controller Design for Linear Magnetic Levitation Rail System

In many applications, magnetic suspension systems are required to operate over large variations in air gap. As a result, the nonlinearities inherent in most types of suspension have a significant impact on performance; specifically, it may be difficult to design a linear controller that gives satisfactory performance, stability, and disturbance rejection over a wide range of operating points. In this paper, an optimal controller based on a discontinuous mathematical model of the system is designed for an electromagnetic suspension system used in magnetic levitation trains. Simulations show that the new controller adapts well to variations in suspension mass and air gap and maintains its dynamic performance; it is therefore superior to the classic controller.
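
For context, a standard linear-quadratic optimal (LQR) gain for a linearized suspension model can be computed as in the Python sketch below. The state-space matrices are placeholder numbers for illustration only and do not come from the paper, whose discontinuous model and controller design are not reproduced here.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr(A, B, Q, R):
        """LQR gain K = R^{-1} B' P from A'P + PA - P B R^{-1} B'P + Q = 0."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    # Placeholder linearized suspension model, x = [gap error, velocity, coil current].
    A = np.array([[0.0,   1.0,   0.0],
                  [980.0, 0.0,  -2.8],
                  [0.0,   0.0, -100.0]])
    B = np.array([[0.0], [0.0], [100.0]])
    K = lqr(A, B, Q=np.diag([1e4, 1.0, 1.0]), R=np.array([[1.0]]))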

Understanding E-Learning Satisfaction in the Context of University Teachers

The present study was designed to test the influence of confirmed expectations, perceived usefulness and perceived competence on e-learning satisfaction among university teachers. A questionnaire was completed by 125 university teachers from 12 different universities in Norway. We found that 51% of the variance in university teachers' satisfaction with e-learning could be explained by the three proposed antecedents. Perceived usefulness seems to be the most important predictor of teachers' satisfaction with e-learning.

A New Approach for Predicting and Optimizing Weld Bead Geometry in GMAW

Gas Metal Arc Welding (GMAW) is an important joining process widely used in the metal fabrication industries. This paper addresses modeling and optimization of this technique using a set of experimental data and regression analysis. The experimental data have been used to assess the influence of GMAW process parameters on weld bead geometry. The process variables considered here are voltage (V), wire feed rate (F), torch angle (A), welding speed (S) and nozzle-to-plate distance (D); the process output characteristics are weld bead height, width and penetration. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters, and the adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. In the next stage, the proposed model is embedded into a Simulated Annealing (SA) algorithm to optimize the GMAW process parameters. The objective is to determine a suitable set of process parameters that can produce the desired bead geometry, considering the ranges of the process parameters. Computational results prove the effectiveness of the proposed model and optimization procedure.
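
The Python sketch below illustrates the final optimization step: a fitted regression model is searched by simulated annealing for parameter settings that reach a target response within given bounds. The regression coefficients, parameter bounds and target penetration are illustrative placeholders, not the paper's fitted model or data.

    import math
    import random

    # Hypothetical regression for weld penetration as a function of voltage V,
    # wire feed rate F, torch angle A, welding speed S and nozzle distance D.
    def penetration(p):
        V, F, A, S, D = p
        return 0.02 * V + 0.15 * F - 0.01 * A - 0.08 * S - 0.03 * D + 1.2

    BOUNDS = [(20, 32), (2, 10), (60, 110), (3, 12), (10, 22)]   # parameter ranges
    TARGET = 2.5                                                  # desired penetration (mm)

    def anneal(iters=5000, t0=1.0, cooling=0.999):
        """Minimize |predicted - target| with a basic simulated annealing loop."""
        random.seed(1)
        x = [random.uniform(lo, hi) for lo, hi in BOUNDS]
        best, best_cost = x[:], abs(penetration(x) - TARGET)
        cost, t = best_cost, t0
        for _ in range(iters):
            cand = [min(max(xi + random.gauss(0, 0.05 * (hi - lo)), lo), hi)
                    for xi, (lo, hi) in zip(x, BOUNDS)]
            c = abs(penetration(cand) - TARGET)
            if c < cost or random.random() < math.exp((cost - c) / t):
                x, cost = cand, c
                if c < best_cost:
                    best, best_cost = cand[:], c
            t *= cooling
        return best, best_cost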

Modeling and Optimization of Abrasive Waterjet Parameters using Regression Analysis

Abrasive waterjet is a novel machining process capable of processing a wide range of hard-to-machine materials. This research addresses modeling and optimization of the process parameters for this machining technique. To model the process, a set of experimental data has been used to evaluate the effects of various parameter settings in cutting 6063-T6 aluminum alloy. The process variables considered here are nozzle diameter, jet traverse rate, jet pressure and abrasive flow rate. Depth of cut, as one of the most important output characteristics, has been evaluated for different parameter settings. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters, and the adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. The pairwise effects of process parameter settings on the process response are also shown graphically. The proposed model is then embedded into a Simulated Annealing algorithm to optimize the process parameters. The optimization can be carried out for any desired value of depth of cut; the objective is to determine proper levels of the process parameters in order to obtain a certain depth of cut. Computational results demonstrate that the proposed solution procedure is quite effective in solving such multi-variable problems.

Implementation of Response Surface Methodology in a Small Brown Rice Peeling Machine: Part I

Response surface methodology (RSM) was employed to study the effects of two factors (rubber clearance and rotational speed in revolutions per minute) in a small brown rice peeling machine, giving an optimal broken-rice (BROKENS) yield of 19.02 (average of three repeats). The optimized settings derived from the RSM model were analyzed using regression analysis and analysis of variance (ANOVA). At a significance level of α = 0.05, the adjusted coefficient of determination R²(adj) was 97.35% and the standard deviation was 1.09513. The independent variables are the initial rubber clearance and the rotational speed; the investigated responses are the final rubber clearance and the rotational speed (RPM). The optimization is restricted to the designated factor ranges.

On Adaptive Optimization of Filter Performance Based on Markov Representation for Output Prediction Error

This paper addresses the problem of how one can improve the performance of a non-optimal filter. First, the theoretical question of a dynamical representation for a given time-correlated random process is studied. It is demonstrated that for a wide class of random processes having a canonical form there exists an equivalent dynamical system, in the sense that its output has the same covariance function. It is shown that the dynamical approach is more effective for simulating and estimating Markov and non-Markovian random processes and is computationally less demanding, especially as the dimension of the simulated processes increases. Numerical examples and estimation problems in low-dimensional systems are given to illustrate the advantages of the approach. A very useful application of the proposed approach is shown for the problem of state estimation in very high-dimensional systems: a modified filter for data assimilation in an oceanic numerical model is presented, which proves very efficient owing to the introduction of a simple Markovian structure for the output prediction error process and the adaptive tuning of some parameters of the Markov equation.
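
The simplest instance of such an equivalent dynamical system is the first-order Gauss-Markov (shaping) filter, whose output reproduces an exponential covariance R(tau) = sigma² exp(-alpha |tau|). The Python sketch below builds this discrete-time system and checks the covariance empirically; the parameter values are illustrative placeholders, and the paper's high-dimensional data-assimilation filter is of course not reproduced here.

    import numpy as np

    def gauss_markov(sigma=1.0, alpha=0.5, dt=0.1, n=100_000, seed=0):
        """Discrete shaping filter with output covariance sigma^2 * exp(-alpha |tau|).

        x_{k+1} = phi * x_k + w_k,  phi = exp(-alpha * dt),
        Var(w_k) = sigma^2 * (1 - phi^2), so the stationary variance stays sigma^2.
        """
        rng = np.random.default_rng(seed)
        phi = np.exp(-alpha * dt)
        q = sigma ** 2 * (1.0 - phi ** 2)
        x = np.empty(n)
        x[0] = rng.normal(0.0, sigma)
        w = rng.normal(0.0, np.sqrt(q), n - 1)
        for k in range(n - 1):
            x[k + 1] = phi * x[k] + w[k]
        return x

    # Empirical lag-10 covariance should approach sigma^2 * exp(-alpha * 10 * dt).
    x = gauss_markov()
    lag = 10
    print(np.mean(x[:-lag] * x[lag:]), np.exp(-0.5 * lag * 0.1))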

Temporal Change of Fractal Dimension of Explosion Earthquakes and Harmonic Tremors at Semeru Volcano, East Java, Indonesia, using Critical Exponent Method

Fractal analyses of successive explosion earthquakes and harmonic tremor recorded at Semeru volcano were carried out to investigate the dynamical system underlying their generating mechanisms. The explosive eruptions are accompanied by explosion earthquakes and followed by volcanic tremor, which is generated by the continuous emission of volcanic ash. The fractal dimension of successive explosion events and harmonic tremor was estimated by the Critical Exponent Method (CEM). It was found that the method yields a higher fractal dimension for explosion earthquakes, which gradually decreases during the occurrence of harmonic tremor; the variation in fractal dimension can be regarded as reflecting the complexity of the source mechanism.