The Effect of a Free-Trade Agreement upon Agricultural Imports

A free-trade agreement is found to increase Thailand's agricultural imports from New Zealand, despite the short span of time for which the agreement has been operational. The finding is supported by autoregressive estimates that correct for possible unit roots in the data. The agreement's effect upon imports is also estimated using an error-correction model of imports against gross domestic product.
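
As a rough illustration of the modeling approach described above, the sketch below sets up an Engle-Granger style error-correction regression of imports on GDP with a free-trade-agreement dummy. The series, coefficients and dummy timing are synthetic placeholders, not the Thailand-New Zealand data used in the paper.

```python
# Minimal sketch of an Engle-Granger style error-correction setup. The series
# are synthetic placeholders, not Thailand-New Zealand trade data.
import numpy as np

rng = np.random.default_rng(6)
T = 80
gdp = np.cumsum(rng.normal(0.5, 1.0, T)) + 100            # trending GDP series
fta = (np.arange(T) >= 60).astype(float)                  # agreement in force
imports = 0.6 * gdp + 3.0 * fta + rng.normal(0, 1.0, T)   # synthetic imports

# Step 1: long-run (cointegrating) regression, including the FTA dummy.
X_lr = np.column_stack([np.ones(T), gdp, fta])
b_lr, *_ = np.linalg.lstsq(X_lr, imports, rcond=None)
resid = imports - X_lr @ b_lr                             # disequilibrium term

# Step 2: short-run error-correction regression in first differences.
X_sr = np.column_stack([np.ones(T - 1), np.diff(gdp), resid[:-1]])
b_sr, *_ = np.linalg.lstsq(X_sr, np.diff(imports), rcond=None)
print("long-run FTA effect:", b_lr[2])
print("error-correction adjustment coefficient:", b_sr[2])
```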

A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder

In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies perceptual weighting to the wavelet transform coefficients prior to the SPIHT encoding stage, in order to reach a target bit rate with improved perceptual quality compared to the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates properties of the human visual system (HVS) and plays an important role in POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, the coder weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), used to achieve the perceptual decomposition weighting, and 3) the wavelet error sensitivity (WES), used to reduce perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique. Experimental results show that our coder achieves very good performance in terms of measured quality.
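
To make the weighting step concrete, the following is a minimal sketch (in Python, using PyWavelets) of scaling wavelet subbands by perceptual weights before handing them to a zerotree-style encoder. The wavelet, decomposition depth and per-subband weights are illustrative placeholders, not the CSF/WES values derived in the paper.

```python
# Minimal sketch: perceptual weighting of wavelet subbands before zerotree
# coding. The per-subband weights below are illustrative placeholders, not
# the CSF / masking values derived in the paper.
import numpy as np
import pywt

def perceptually_weight(image, weights, wavelet="bior4.4", levels=3):
    """Scale each wavelet subband by a perceptual weight before encoding."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    weighted = [coeffs[0] * weights.get(("LL", levels), 1.0)]
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        scale = weights.get(("detail", levels - lvl + 1), 1.0)
        weighted.append((cH * scale, cV * scale, cD * scale))
    return weighted

# Hypothetical weights: coarser subbands matter more perceptually.
weights = {("LL", 3): 1.0, ("detail", 3): 0.9, ("detail", 2): 0.6, ("detail", 1): 0.3}
image = np.random.rand(64, 64)
coeffs = perceptually_weight(image, weights)
# `coeffs` would then be passed to a SPIHT-style encoder; the decoder applies
# the inverse weights before pywt.waverec2 to reconstruct the image.
```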

Mechanical Quadrature Methods and Their Extrapolations for Solving First Kind Boundary Integral Equations of Anisotropic Darcy's Equation

Mechanical quadrature methods for solving the boundary integral equations of the anisotropic Darcy's equation with Dirichlet conditions in smooth domains are presented. By applying collectively compact theory, we prove the convergence and stability of the approximate solutions. The asymptotic expansions of the error show that the methods converge with order O(h^3), where h is the mesh size. Based on this analysis, extrapolation methods can be introduced to achieve a higher convergence rate of O(h^5). An a posteriori asymptotic error representation is derived in order to construct self-adaptive algorithms. Finally, numerical experiments show the efficiency of our methods.
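
The extrapolation step can be illustrated with a small sketch: if an approximation has the expansion u_h = u + c h^3 + O(h^5), then combining the results computed on meshes h and h/2 as (8 u_{h/2} - u_h)/7 cancels the h^3 term. The toy "solver" below is synthetic and serves only to show the order gain.

```python
# Minimal sketch of the Richardson-type extrapolation step: if an
# approximation satisfies u_h = u + c*h**3 + O(h**5), then combining the
# results on meshes h and h/2 cancels the h**3 term and yields O(h**5).
def extrapolate_h3_to_h5(u_h, u_h_half):
    return (8.0 * u_h_half - u_h) / 7.0

# Toy check with a synthetic "solver" whose error is exactly c*h**3 + d*h**5.
def fake_solver(h, u_exact=2.0, c=1.0, d=0.5):
    return u_exact + c * h**3 + d * h**5

h = 0.1
u1, u2 = fake_solver(h), fake_solver(h / 2)
print(abs(u1 - 2.0))                              # ~ h**3 error
print(abs(extrapolate_h3_to_h5(u1, u2) - 2.0))    # ~ h**5 error
```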

ML Detection with Symbol Estimation for Nonlinear Distortion of OFDM Signal

In this paper, a new signal detection technique is proposed for detecting orthogonal frequency-division multiplexing (OFDM) signals in the presence of nonlinear distortion. Although OFDM communication systems offer several advantages, one remaining problem is the nonlinear distortion generated by the high-power amplifier at the transmitter due to the large dynamic range of the OFDM signal. The proposed method is maximum-likelihood detection with symbol estimation. When training data are available, a neural network is used to learn the characteristics of the received signal and to estimate the new positions of the transmitted symbols, which are then provided to the maximum-likelihood detector. To evaluate system performance, the nonlinear distortion of a traveling-wave tube amplifier applied to the OFDM signal is considered. Simulation results of the bit-error-rate performance are obtained for 16-QAM OFDM systems.
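
The detection step can be sketched as follows: a reference set of distorted constellation points (in the paper, estimated by the trained neural network) is compared against each received sample, and the closest point is selected. The soft-limiter distortion and noise level below are toy placeholders, not the TWTA model used in the simulations.

```python
# Minimal sketch of ML symbol detection over an estimated constellation.
# In the paper the distorted reference points are estimated by a neural
# network from training data; here they are produced by a toy AM/AM-style
# compression purely for illustration.
import numpy as np

def qam16():
    levels = np.array([-3, -1, 1, 3])
    return np.array([x + 1j * y for x in levels for y in levels]) / np.sqrt(10)

def toy_distortion(s, sat=1.0):
    # Soft-limiter style amplitude compression (stand-in for a TWTA model).
    r = np.abs(s)
    return s * np.tanh(r / sat) / np.maximum(r, 1e-12)

def ml_detect(received, ref_points):
    # Pick the reference (distorted) constellation point closest in
    # Euclidean distance; return the index of the decided symbol.
    d = np.abs(received[:, None] - ref_points[None, :])
    return np.argmin(d, axis=1)

const = qam16()
ref = toy_distortion(const)              # would come from the NN estimator
tx = np.random.randint(0, 16, 1000)
rx = toy_distortion(const[tx]) + 0.02 * (np.random.randn(1000) + 1j * np.random.randn(1000))
decided = ml_detect(rx, ref)
print("symbol error rate:", np.mean(decided != tx))
```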

Joint Microstatistic Multiuser Detection and Cancellation of Nonlinear Distortion Effects for the Uplink of MC-CDMA Systems Using Golay Codes

The study in this paper underlines the importance of the correct joint selection of the spreading codes at the transmitter side and the detector at the receiver side for the uplink of multicarrier code division multiple access (MC-CDMA) in the presence of nonlinear distortion due to a high power amplifier (HPA). The bit error rate (BER) of the system for different spreading sequences (Walsh code, Gold code, orthogonal Gold code, Golay code and Zadoff-Chu code) and different kinds of receivers (the minimum mean-square error receiver (MMSE-MUD) and the microstatistic multi-user receiver (MSF-MUD)) is compared by means of simulations for an MC-CDMA transmission system. Finally, the results of the analysis show that the MSF-MUD in combination with Golay codes significantly outperforms the other tested spreading codes and receivers for all commonly used HPA models.
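
For reference, the sketch below generates a Golay complementary pair by the standard recursive construction and verifies the complementary autocorrelation property; it illustrates the code family only and is not claimed to reproduce the exact spreading-code set used in the simulations.

```python
# Minimal sketch: recursive construction of a Golay complementary pair of
# length 2**n, usable as candidate spreading sequences. This illustrates the
# code family only; it is not claimed to be the exact set used in the paper.
import numpy as np

def golay_pair(n):
    """Return a Golay complementary pair (a, b) of length 2**n."""
    a, b = np.array([1]), np.array([1])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_autocorr(x):
    n = len(x)
    return np.array([np.sum(x[:n - k] * x[k:]) for k in range(n)])

a, b = golay_pair(5)                 # length-32 sequences
s = aperiodic_autocorr(a) + aperiodic_autocorr(b)
# Complementary property: out-of-phase autocorrelations cancel.
print(s)                             # [64, 0, 0, ..., 0]
```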

An Efficient Approach to Mining Frequent Itemsets on Data Streams

The increasing importance of data streams arising in a wide range of advanced applications has led to extensive study of mining frequent patterns. Mining data streams poses many new challenges, amongst which are the one-scan nature, the unbounded memory requirement and the high arrival rate of data streams. In this paper, we propose a new approach for mining frequent itemsets on data streams. Our approach, SFIDS, builds on the FIDS algorithm. The main aim was to keep the advantages of the previous approach while resolving some of its drawbacks, and consequently to improve run time and memory consumption. Our approach has the following advantages: it uses a lattice-like data structure to keep frequent itemsets; it separates regions from each other by deleting common nodes, which decreases the search space, memory consumption and run time; and finally, considering the CPU constraint, when an increasing data arrival rate overloads the system, SFIDS automatically detects this situation and discards some of the unprocessed data. Based on a probabilistic technique, we guarantee that the error of the results is bounded by a user pre-specified threshold. Final results show that the SFIDS algorithm attains about a 50% run-time improvement over the FIDS approach.
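
The error-bounding idea can be illustrated independently of SFIDS itself. The sketch below is a lossy-counting-style counter over single items (not itemsets), kept only to show how approximate counts with a user-specified error bound eps are maintained in one pass over a stream.

```python
# Not the SFIDS algorithm itself: a lossy-counting-style sketch over single
# items, showing how approximate counts with error bounded by a user-supplied
# threshold eps can be maintained in one pass over a stream.
class LossyCounter:
    def __init__(self, eps):
        self.eps = eps
        self.width = int(1 / eps)               # bucket width
        self.n = 0                              # items seen so far
        self.counts = {}                        # item -> (count, max undercount)

    def add(self, item):
        self.n += 1
        bucket = (self.n - 1) // self.width + 1
        count, delta = self.counts.get(item, (0, bucket - 1))
        self.counts[item] = (count + 1, delta)
        if self.n % self.width == 0:            # prune infrequent entries
            self.counts = {k: (c, d) for k, (c, d) in self.counts.items()
                           if c + d > bucket}

    def frequent(self, support):
        # True count of any reported item is underestimated by at most eps*n.
        return [k for k, (c, d) in self.counts.items()
                if c >= (support - self.eps) * self.n]

lc = LossyCounter(eps=0.01)
for x in ["a", "b", "a", "c", "a", "b"] * 1000:
    lc.add(x)
print(lc.frequent(support=0.2))                 # ['a', 'b']
```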

Enhancing the Error-Correcting Performance of LDPC Codes through an Efficient Use of Decoding Iterations

The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed that of conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small blocklengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving as the number of iterations increases.
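
As a point of reference for the iteration budget being varied, the sketch below shows a plain hard-decision bit-flipping LDPC decoder whose maximum number of iterations can simply be raised; it is not the recovery algorithm of Soyjaudah and Catherine.

```python
# Minimal sketch (not the recovery algorithm of Soyjaudah and Catherine): a
# hard-decision bit-flipping LDPC decoder whose iteration budget can simply be
# raised, which is the quantity varied in the experiments described above.
import numpy as np

def bit_flip_decode(H, y, max_iter=2000):
    """Gallager bit-flipping decoding of hard-decision vector y (0/1)."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():                  # all parity checks satisfied
            break
        # Number of unsatisfied checks each bit participates in.
        unsat = H.T.dot(syndrome)
        worst = unsat.max()
        if worst == 0:
            break
        x[unsat == worst] ^= 1                  # flip the most-suspect bits
    return x

# Toy (7,4) Hamming-style parity-check matrix used only for illustration.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)               # all-zero codeword is valid
received = codeword.copy()
received[2] ^= 1                                # inject a single bit error
print(bit_flip_decode(H, received))             # recovers the all-zero word
```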

Comparison Analysis of Wald's and Bayes-Type Sequential Methods for Testing Hypotheses

A comparison analysis of Wald's and Bayes-type sequential methods for testing hypotheses is offered. The merits of the new sequential test are: universality, which consists in optimality (with respect to the given criteria) and uniformity of decision-making regions for any number of hypotheses; simplicity, convenience and uniformity of the algorithms of their realization; reliability of the obtained results; and the possibility of keeping the error probabilities at desired values. Computation results for concrete examples are given, which confirm the above-stated characteristics of the new method and characterize the considered methods relative to each other.
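
For context, Wald's classical test can be sketched compactly. The following implements the sequential probability ratio test (SPRT) for two simple hypotheses about a Bernoulli parameter; the hypothesized parameters and error probabilities are illustrative choices, not those of the paper.

```python
# Minimal sketch of Wald's sequential probability ratio test (SPRT) for two
# simple hypotheses about a Bernoulli parameter, the classical baseline the
# Bayes-type test is compared against. p0, p1, alpha and beta are illustrative.
import math, random

def sprt_bernoulli(samples, p0=0.3, p1=0.6, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)     # accept H1 above this
    lower = math.log(beta / (1 - alpha))     # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

random.seed(0)
data = (random.random() < 0.6 for _ in range(1000))   # data drawn under H1
print(sprt_bernoulli(data))                           # e.g. ('H1', <small n>)
```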

Convergence Analysis of a Prediction based Adaptive Equalizer for IIR Channels

This paper presents the convergence analysis of a prediction-based blind equalizer for IIR channels. The predictor parameters are estimated using the recursive least squares algorithm. It is shown that the prediction error converges almost surely (a.s.) to a scalar multiple of the unknown input symbol sequence. It is also proved that the convergence rate of the parameter estimation error is of the same order as that in the law of the iterated logarithm.
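
The estimator under analysis can be sketched as a standard recursive least squares (RLS) update for a linear predictor; the forgetting factor, predictor order and the toy AR(2) test signal below are placeholders and make no claim to match the exact setup analyzed in the paper.

```python
# Minimal sketch of the recursive least-squares (RLS) update for a linear
# predictor of order p; the forgetting factor and order are placeholders, and
# no claim is made about matching the exact setup analyzed in the paper.
import numpy as np

def rls_predictor(y, p=4, lam=0.999, delta=100.0):
    """Return the RLS estimate of predictor coefficients for the sequence y."""
    w = np.zeros(p)                    # predictor coefficients
    P = delta * np.eye(p)              # inverse correlation matrix
    for n in range(p, len(y)):
        phi = y[n - p:n][::-1]         # regressor: p most recent samples
        k = P @ phi / (lam + phi @ P @ phi)     # gain vector
        e = y[n] - w @ phi                      # a priori prediction error
        w = w + k * e
        P = (P - np.outer(k, phi @ P)) / lam
    return w

# Toy check: an AR(2) process should be predicted by roughly its own coefficients.
rng = np.random.default_rng(0)
y = np.zeros(5000)
for n in range(2, len(y)):
    y[n] = 0.5 * y[n - 1] - 0.3 * y[n - 2] + 0.1 * rng.standard_normal()
print(rls_predictor(y, p=2))           # approximately [0.5, -0.3]
```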

Visual Attention Analysis on Mutated Brand Name using Eye-Tracking: A Case Study

Brand names play a vital role in consumers' in-shop buying behavior, and mutated brand names may affect the sales of leading branded products. In the Indian market, there are many products with mutated brand names that are either orthographically or phonologically similar. Due to the presence of such products, Indian consumers very often become confused when buying regularly used items. The authors of the present paper attempt to demonstrate the relationship between reduced attention and false recognition of mutated brand names during a product selection process. To achieve this goal, a visual attention study was conducted on 15 male college students using an eye-tracker with a mutated brand name, and recognition errors were recorded using a questionnaire. Statistical analysis of the acquired data revealed that there was more false recognition of the mutated brand name when less attention was paid during selection of the favorite product. Moreover, eye tracking proved to be an effective tool for analyzing false recognition of brand name mutation.

Health Monitoring of Power Transformers by Dissolved Gas Analysis using Regression Methods and a Study of the Effect of Filtration on Oil

Economically, transformers constitute one of the largest investments in a power system. For this reason, transformer condition assessment and management is a high-priority task. If a transformer fails, it has a significant negative impact on revenue and service reliability. Monitoring the state of health of power transformers has traditionally been carried out using laboratory Dissolved Gas Analysis (DGA) tests performed at periodic intervals on oil samples collected from the transformers. DGA of transformer oil is the single best indicator of a transformer's overall condition and is a universal practice today, which started sometime in the 1960s. Failure can occur in a transformer for different reasons. Some failures can be limited or prevented by maintenance. Oil filtration is one of the methods to remove the dissolved gases and prevent the deterioration of the oil. In this paper, we analyze the DGA data using regression methods and predict the future gas concentration in the oil. We present a comparative study of different traditional regression methods and the errors generated by their predictions. With the help of these data we can deduce the health of the transformer by identifying the type of fault, whether it has already occurred or will occur in the future. Additionally, the effect of filtration on transformer health is highlighted by calculating the probability of failure of a transformer with and without oil filtration.
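
A minimal sketch of the prediction step is given below: a regression trend is fitted to a dissolved-gas time series and extrapolated to a future date, and the residual errors of competing fits are compared. The sampling dates and ppm values are synthetic placeholders, not measured DGA data.

```python
# Minimal sketch: fit a regression trend to a dissolved-gas time series and
# extrapolate the future concentration. The sampling dates and ppm values are
# synthetic placeholders, not measured DGA data.
import numpy as np

months = np.array([0, 3, 6, 9, 12, 15, 18])          # sampling times (months)
h2_ppm = np.array([22, 25, 31, 36, 44, 51, 60])      # hypothetical H2 levels

# Compare a linear and a quadratic trend by least squares.
lin = np.polyfit(months, h2_ppm, 1)
quad = np.polyfit(months, h2_ppm, 2)

future = 24                                          # months ahead of start
print("linear forecast at t=24:   ", np.polyval(lin, future))
print("quadratic forecast at t=24:", np.polyval(quad, future))

# Residual errors of each fit on the observed points, as in the comparative
# study of regression methods described above.
for name, coeffs in (("linear", lin), ("quadratic", quad)):
    resid = h2_ppm - np.polyval(coeffs, months)
    print(name, "RMSE:", np.sqrt(np.mean(resid ** 2)))
```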

Correlation of Viscosity in Nanofluids using Genetic Algorithm-neural Network (GA-NN)

An accurate and efficient artificial neural network (ANN) optimized by a genetic algorithm (GA) is developed for predicting the viscosity of nanofluids. The genetic algorithm is used to optimize the neural network parameters so as to minimize the error between the predicted viscosity and the experimental one. Experimental viscosity data for two nanofluids, Al2O3-H2O and CuO-H2O, from 278.15 to 343.15 K and volume fractions up to 15%, were taken from the literature. The results of this study reveal that the GA-NN model outperforms conventional neural networks in predicting the viscosity of nanofluids, with mean absolute relative errors of 1.22% and 1.77% for Al2O3-H2O and CuO-H2O, respectively. Furthermore, the results of this work have also been compared with other models. The findings demonstrate that the GA-NN model is an effective method for predicting the viscosity of nanofluids and has better accuracy and simplicity compared with the other models.
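
A minimal sketch of the GA-NN idea, under simplifying assumptions: a genetic algorithm searches the weight space of a small one-hidden-layer network to minimize the mean-squared error of a viscosity fit. The network size, GA settings and training data below are illustrative placeholders, not those of the paper.

```python
# Minimal sketch of a GA optimizing the weights of a tiny one-hidden-layer
# network to minimize the mean-squared error of a viscosity fit. The network
# size, GA settings and training data below are all illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: viscosity vs (temperature K, volume fraction).
X = np.column_stack([rng.uniform(278.15, 343.15, 200), rng.uniform(0, 0.15, 200)])
y = 1.5e-3 * np.exp(-0.01 * (X[:, 0] - 278.15)) * (1 + 10 * X[:, 1])

Xn = (X - X.mean(0)) / X.std(0)                      # normalize inputs
H = 5                                                # hidden units
n_params = 2 * H + H + H + 1                         # weights + biases

def predict(theta, Xn):
    W1 = theta[:2 * H].reshape(2, H)
    b1 = theta[2 * H:3 * H]
    w2 = theta[3 * H:4 * H]
    b2 = theta[-1]
    hid = np.tanh(Xn @ W1 + b1)
    return hid @ w2 + b2

def mse(theta):
    return np.mean((predict(theta, Xn) - y) ** 2)

# Simple GA: elitism, blend crossover, Gaussian mutation.
pop = rng.normal(0, 0.5, (60, n_params))
for gen in range(200):
    fitness = np.array([mse(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[:10]]            # keep the best 10
    children = []
    while len(children) < len(pop) - len(elite):
        p1, p2 = elite[rng.integers(10)], elite[rng.integers(10)]
        alpha = rng.random()
        child = alpha * p1 + (1 - alpha) * p2        # blend crossover
        child += rng.normal(0, 0.05, n_params)       # mutation
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[np.argmin([mse(ind) for ind in pop])]
print("best MSE:", mse(best))
```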

Ensembling Adaptively Constructed Polynomial Regression Models

The subset selection approach to polynomial regression model building assumes that the chosen fixed full set of predefined basis functions contains a subset that can describe the target relation sufficiently well. However, in most cases the necessary set of basis functions is not known and needs to be guessed – a potentially non-trivial (and long) trial and error process. In our research we consider a potentially more efficient approach – Adaptive Basis Function Construction (ABFC). It lets the model building method itself construct the basis functions necessary for creating a model of arbitrary complexity with adequate predictive performance. However, there are two issues that to some extent plague both subset selection and ABFC methods, especially when working with relatively small data samples: selection bias and selection instability. We try to correct these issues by model post-evaluation using cross-validation and by model ensembling. To evaluate the proposed method, we empirically compare it to ABFC methods without ensembling, to a widely used subset selection method, as well as to some other well-known regression modeling methods, using publicly available data sets.
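
A minimal sketch of the post-evaluation and ensembling idea (not ABFC itself) is shown below: cross-validation scores candidate polynomial models on a small sample, and the better-scoring models are averaged to reduce selection instability. The degree range and the 20% retention rule are arbitrary illustrative choices.

```python
# Minimal sketch (not ABFC itself): cross-validation scores candidate
# polynomial models built on a small sample, and the better-scoring models are
# ensembled by averaging their predictions to reduce selection instability.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 40)                      # small data sample
y = 1.0 + 0.5 * x - 0.8 * x**2 + 0.1 * rng.standard_normal(40)

def cv_score(degree, x, y, folds=5):
    idx = np.arange(len(x)) % folds             # simple fold assignment
    errs = []
    for f in range(folds):
        tr, te = idx != f, idx == f
        coeffs = np.polyfit(x[tr], y[tr], degree)
        errs.append(np.mean((np.polyval(coeffs, x[te]) - y[te]) ** 2))
    return np.mean(errs)

degrees = range(1, 8)
scores = {d: cv_score(d, x, y) for d in degrees}
# Keep the models whose CV error is within 20% of the best one (ad hoc rule).
best = min(scores.values())
kept = [d for d in degrees if scores[d] <= 1.2 * best]

def ensemble_predict(x_new):
    preds = [np.polyval(np.polyfit(x, y, d), x_new) for d in kept]
    return np.mean(preds, axis=0)

print("kept degrees:", kept)
print("prediction at x=1.5:", ensemble_predict(np.array([1.5])))
```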

Estimation of Real Power Transfer Allocation Using Intelligent Systems

This paper presents the application of artificial intelligence (AI) techniques, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), to estimate the real power transfer between generators and loads. Since these AI techniques adopt supervised learning, the modified nodal equation (MNE) method is first used to determine the real power contribution from each generator to the loads. The results of the MNE method and load flow information are then utilized to estimate the power transfer using the AI techniques. The 25-bus equivalent system of south Malaysia is used as a test system to illustrate the effectiveness of both AI methods compared to that of the MNE method. The mean squared errors of the ANN and ANFIS power transfer allocation estimates are 1.19E-05 and 2.97E-05, respectively. Furthermore, the ANN and ANFIS methods compute the generator contributions to loads within 20.99 ms and 39.37 ms, respectively, whereas the MNE method takes 360 ms for the same real power transfer allocation calculation.
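
The supervised-learning step can be sketched as follows, assuming scikit-learn is available: an MLP regressor is trained on load-flow-style features with MNE-derived power-transfer targets. The features and targets below are synthetic placeholders, not the 25-bus Malaysian system data.

```python
# Minimal sketch of the supervised-learning step: an MLP regressor is trained
# on load-flow-style features with MNE-derived power-transfer targets. The
# features and targets here are synthetic placeholders, not the 25-bus data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
# Hypothetical inputs: generator outputs, load demands, line flows.
X = rng.uniform(0.0, 1.0, (500, 6))
# Hypothetical MNE-style target: one generator's contribution to one load.
y = 0.4 * X[:, 0] + 0.2 * X[:, 3] - 0.1 * X[:, 5] + 0.01 * rng.standard_normal(500)

X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)
print("test MSE:", mean_squared_error(y_test, ann.predict(X_test)))
```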

Artificial Neural Network Approach for Short Term Load Forecasting for Illam Region

In this paper, the application of neural networks to the design of short-term load forecasting (STLF) systems for Illam state, located in the west of Iran, was explored. An important neural network architecture, the multi-layer perceptron (MLP), was used to model the STLF system. The MLP was trained and tested using three years (2004-2006) of data. The results show that the MLP network has the minimum forecasting error and can be considered a good method for modeling STLF systems.

Bootstrap and MLS Methods-based Individual Bioequivalence Assessment

Assessing bioequivalence is a one-sided hypothesis testing process. Bootstrap and modified large-sample (MLS) methods are considered for studying individual bioequivalence (IBE); the type I error and power of the hypothesis tests are simulated and compared with FDA (2001). The results show that the modified large-sample method is equivalent to the FDA (2001) method.
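
A heavily simplified sketch of the bootstrap side is given below: a percentile-bootstrap upper confidence bound is computed for a linearized bioequivalence-type criterion. The criterion, margin and data are placeholders and do not reproduce the FDA (2001) individual-bioequivalence procedure.

```python
# Heavily simplified sketch: a percentile-bootstrap upper confidence bound for
# a linearized bioequivalence-type criterion. The criterion below (squared
# mean difference plus variance difference) is a placeholder, not the FDA
# (2001) individual-bioequivalence procedure, and the data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
test = rng.normal(100, 12, 40)        # hypothetical test-formulation responses
ref = rng.normal(102, 10, 40)         # hypothetical reference responses

def criterion(t, r):
    return (t.mean() - r.mean()) ** 2 + t.var(ddof=1) - r.var(ddof=1)

def bootstrap_upper_bound(t, r, n_boot=2000, level=0.95):
    stats = []
    for _ in range(n_boot):
        ti = rng.choice(t, size=len(t), replace=True)
        ri = rng.choice(r, size=len(r), replace=True)
        stats.append(criterion(ti, ri))
    return np.quantile(stats, level)

ub = bootstrap_upper_bound(test, ref)
margin = 150.0                         # illustrative equivalence margin
print("upper bound:", ub, "-> conclude IBE" if ub < margin else "-> fail to conclude IBE")
```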

Average Current Estimation Technique for Reliability Analysis of Multiple Semiconductor Interconnects

Average current analysis, which checks the impact of current flow, is very important for guaranteeing the reliability of semiconductor systems. As semiconductor process technologies advance, coupling capacitances often become larger than self capacitances. In this paper, we propose an analytic technique for analyzing the average current on interconnects in multi-conductor structures. The proposed technique has been shown to yield acceptable errors compared to HSPICE results while providing computational efficiency.

Gene Selection Guided by Feature Interdependence

Cancers can normally be marked by a number of differentially expressed genes, which show enormous potential as biomarkers for a given disease. In recent years, cancer classification based on the investigation of gene expression profiles derived from high-throughput microarrays has been widely used. The selection of discriminative genes is therefore an essential preprocessing step in carcinogenesis studies. In this paper, we propose a novel gene selector using information-theoretic measures for biological discovery. This multivariate filter is a four-stage framework comprising analyses of feature relevance, feature interdependence, feature redundancy-dependence and subset rankings, and it has been examined on the colon cancer data set. Our experimental results show that the proposed method outperformed other information-theoretic filters in all aspects of classification error and classification performance.
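
A minimal sketch of the first (relevance) stage only: genes are ranked by the mutual information between their discretized expression values and the class labels. The expression matrix below is a random placeholder rather than the colon cancer data, and the later interdependence and redundancy stages are not shown.

```python
# Minimal sketch of the first (relevance) stage only: rank genes by the mutual
# information between discretized expression values and the class label. The
# expression matrix below is a random placeholder, not the colon cancer data,
# and the later interdependence/redundancy stages are not shown.
import numpy as np

def mutual_information(x_disc, y):
    """MI (in nats) between a discrete feature and discrete labels."""
    mi = 0.0
    for xv in np.unique(x_disc):
        for yv in np.unique(y):
            pxy = np.mean((x_disc == xv) & (y == yv))
            if pxy > 0:
                px, py = np.mean(x_disc == xv), np.mean(y == yv)
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(5)
n_samples, n_genes = 62, 200
expr = rng.standard_normal((n_samples, n_genes))
labels = rng.integers(0, 2, n_samples)
expr[:, 7] += 2.0 * labels                     # make gene 7 informative

# Discretize each gene into 3 bins (equal-width) before computing MI.
bins = 3
ranked = []
for g in range(n_genes):
    disc = np.digitize(expr[:, g], np.histogram_bin_edges(expr[:, g], bins)[1:-1])
    ranked.append((mutual_information(disc, labels), g))
print(sorted(ranked, reverse=True)[:5])        # gene 7 should rank at the top
```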

Brain Drain of Doctors: Causes and Consequences in Pakistan

Pakistani doctors (MBBS) are emigrating towards developed countries in search of professional advancement. This study aims to highlight the causes and consequences of doctors' brain drain from Pakistan. Primary data were collected from Mayo Hospital, Lahore, by interviewing doctors (n=100) through a systematic random sampling technique. It was found that various socio-economic and political conditions act as push and pull factors for the brain drain of doctors in Pakistan. A majority of doctors (83%) cited poor remuneration and the poor professional infrastructure of the health department as push factors for doctors' brain drain. 81% claimed that continuous political instability and threats of terrorism are responsible for the emigration of doctors, and 84% of respondents considered limited opportunities for further study responsible for their emigration. The brain drain of doctors is badly affecting the health sector's policies and programs, standard doctor-patient ratios, and the quality of health services.

Surface Flattening Assisted with 3D Mannequin Based On Minimum Energy

The topic of surface flattening plays a vital role in the field of computer-aided design and manufacture. Surface flattening enables the production of 2D patterns and can be used in design and manufacturing for developing a 3D surface onto a 2D platform, especially in fashion design. This study describes surface flattening based on minimum-energy methods according to the properties of different fabrics. Firstly, using the geometric features of a 3D surface, the less deformed area can be flattened onto a 2D platform by geodesics. Then, the strain energy accumulated in the mesh can be stably released by an approximate implicit method and a revised error function. In some cases, cutting the mesh to further release the energy is a common way to handle the situation and enhance the accuracy of the surface flattening, and this causes the obtained 2D pattern to naturally develop significant cracks. When this methodology is applied to a 3D mannequin constructed with feature lines, it enhances the level of computer-aided fashion design. Besides, when different fabrics are used in fashion design, it is necessary to revise the shape of the 2D pattern according to the properties of the fabric. With this model, the outline of the 2D patterns can be revised by distributing the strain energy, with different results according to different fabric properties. Finally, this research uses some common design cases to illustrate and verify the feasibility of this methodology.