Statistical Approach to Basis Function Truncation in Digital Interpolation Filters

In this paper an alternative analysis in the time domain is described, and the results of the interpolation process are presented by means of functions based on the rule of conditional mathematical expectation and the covariance function. A comparison between the interpolation error caused by low-order filters and by the classic truncated sinc(t) function is also presented. When fewer samples are used, low-order filters produce less error; as the number of samples increases, sinc(t)-type functions become the better alternative. Generally speaking, there is an optimal filter for each input signal, which depends on the filter length and the covariance function of the signal. A novel scheme of work for adaptive interpolation filters is also presented.
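As a rough illustration of the covariance-based view described above, the following sketch (not the paper's code; an exponential covariance and integer-spaced samples are assumed) computes minimum-mean-square-error interpolation weights from a covariance function and contrasts them with truncated-sinc weights.

```python
# Minimal sketch (assumed covariance model): MMSE interpolation weights derived
# from a covariance function, compared with a truncated-sinc interpolator.
import numpy as np

def mmse_weights(sample_times, t, cov):
    """Weights w minimizing E[(x(t) - w @ x(samples))^2] for a zero-mean
    stationary process with covariance function cov(tau)."""
    R = cov(sample_times[:, None] - sample_times[None, :])  # sample covariance matrix
    r = cov(t - sample_times)                                # cross-covariance vector
    return np.linalg.solve(R, r)

# Assumed example covariance: exponential, cov(tau) = exp(-|tau|)
cov = lambda tau: np.exp(-np.abs(tau))

samples = np.arange(-3.0, 4.0)          # 7 samples at integer instants
t = 0.4                                  # interpolation instant
w_mmse = mmse_weights(samples, t, cov)
w_sinc = np.sinc(t - samples)            # truncated-sinc weights for comparison
print(w_mmse, w_sinc)
```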

Cursor Position Estimation Model for Virtual Touch Screen Using Camera

A virtual touch screen using a camera is an ordinary screen that imitates a touch screen by taking a picture of an indicator, e.g., a finger, placed on the screen, converting the position of the indicator tip in the picture to a position on the screen, and moving the cursor to that position. In fact, the indicator does not touch the screen directly; it is separated from it by a cover at some distance. Despite this gap, if the eye-indicator-camera angle is small, mapping the indicator tip positions in the image to the corresponding cursor positions on the screen is not difficult and can be done with little error. However, the larger the angle, the bigger the mapping error becomes. This paper proposes a cursor position estimation model for a camera-based virtual touch screen that can eliminate this kind of error. The proposed model (i) moves an on-screen pilot cursor to the screen position that lies directly behind the indicator tip as seen from the camera, and then (ii) converts that pilot cursor position to the desired cursor position (the screen position seen from the user's eye through the indicator tip) by means of the bilinear transformation. Simulation results confirm the correctness of the cursor position estimated by the proposed model.
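To make the second step concrete, the sketch below fits a bilinear transformation from four hypothetical pilot-cursor/true-cursor correspondences and applies it to a new point; the calibration coordinates are invented for illustration only.

```python
# Minimal sketch (hypothetical calibration data): bilinear transformation mapping a
# pilot-cursor position (seen from the camera) to the desired cursor position
# (seen from the user's eye), fitted from four corner correspondences.
import numpy as np

def fit_bilinear(src, dst):
    """src, dst: 4x2 arrays of corresponding (x, y) points."""
    A = np.column_stack([np.ones(4), src[:, 0], src[:, 1], src[:, 0] * src[:, 1]])
    ax = np.linalg.solve(A, dst[:, 0])   # coefficients of x' = a0 + a1 x + a2 y + a3 xy
    ay = np.linalg.solve(A, dst[:, 1])   # coefficients of y'
    return ax, ay

def apply_bilinear(ax, ay, p):
    x, y = p
    basis = np.array([1.0, x, y, x * y])
    return basis @ ax, basis @ ay

# Hypothetical calibration: pilot-cursor corners vs. desired cursor positions
pilot = np.array([[0, 0], [1919, 0], [0, 1079], [1919, 1079]], float)
target = np.array([[12, 8], [1930, 5], [6, 1090], [1925, 1086]], float)
ax, ay = fit_bilinear(pilot, target)
print(apply_bilinear(ax, ay, (960, 540)))   # estimated cursor position
```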

Noise-Improved Signal Detection in Nonlinear Threshold Systems

We discuss signal detection through nonlinear threshold systems. The detection performance is assessed by the probability of error Per. We establish that: (1) when the signal is completely suprathreshold, noise always degrades signal detection, both in the single threshold system and in the parallel array of threshold devices; (2) when the signal is slightly subthreshold, noise degrades signal detection in the single threshold system, but in the parallel array noise can improve signal detection, i.e., stochastic resonance (SR) exists in the array; (3) when the signal is predominantly subthreshold, noise can always improve signal detection and SR exists both in the single threshold system and in the parallel array; (4) the array can improve signal detection by increasing the number of threshold devices. These results further extend the applicability of SR in signal detection.
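The following Monte-Carlo sketch (an assumed detector with a count-threshold decision rule, not the paper's exact setup) illustrates how moderate noise can lower the probability of error Per for a subthreshold signal in a parallel array of threshold devices.

```python
# Minimal sketch (assumed detector): probability of error Per for a subthreshold
# binary signal passed through a parallel array of N identical threshold devices
# with independent additive Gaussian noise.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

def fire_prob(s, theta, sigma):
    # probability that a device with threshold theta fires for input s + noise
    return 0.5 * erfc((theta - s) / (sigma * sqrt(2)))

def prob_error(A, theta, sigma, N, trials=20000):
    bits = rng.integers(0, 2, trials)                    # equiprobable binary signal
    s = np.where(bits == 1, A, -A)                       # subthreshold amplitudes +/- A
    counts = (s[:, None] + rng.normal(0, sigma, (trials, N)) > theta).sum(axis=1)
    # decide "1" if the firing count exceeds the midpoint of the two expected counts
    mid = 0.5 * N * (fire_prob(A, theta, sigma) + fire_prob(-A, theta, sigma))
    return np.mean((counts > mid).astype(int) != bits)

theta, A = 1.0, 0.5                                      # subthreshold signal: A < theta
for sigma in (0.2, 0.6, 1.0, 2.0):
    print(sigma, prob_error(A, theta, sigma, N=1), prob_error(A, theta, sigma, N=15))
```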

Image Transmission: A Case Study on Combined Scheme of LDPC-STBC in Asynchronous Cooperative MIMO Systems

This paper presents a novel scheme which is capable of reducing the error rate and improving the transmission performance in asynchronous cooperative MIMO systems. A case study of image transmission is used to demonstrate the efficiency of the scheme. The linear dispersion structure is employed to accommodate the cooperative wireless communication network with its dynamic topology, as well as to achieve higher throughput than conventional space–time codes based on orthogonal designs. An LDPC encoder free of girth-4 cycles and an STBC encoder with guard intervals are introduced. The experimental results show that the combined LDPC-STBC coder with guard intervals provides good error correction and BER performance in asynchronous cooperative communication. In the case study of image transmission, the results show that the image quality obtained with the combined scheme is much better than that obtained without it in asynchronous cooperative MIMO systems.
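As a hedged illustration of space-time block coding with guard intervals (the paper's linear dispersion code and its LDPC stage are not reproduced), the sketch below encodes QPSK symbols with an Alamouti-style scheme and appends zero guard intervals to each code block.

```python
# Minimal sketch (illustrative only): Alamouti-style STBC encoding with zero guard
# intervals appended to each block, one simple way to tolerate asynchronous arrivals
# from cooperating nodes. LDPC encoding is assumed to happen upstream.
import numpy as np

def stbc_alamouti_with_guard(symbols, guard_len=2):
    """symbols: 1-D complex array of even length. Returns a (2, T) array, one row
    per transmitting node, with zero guard intervals appended to each code block."""
    assert len(symbols) % 2 == 0
    rows = [[], []]
    for s1, s2 in symbols.reshape(-1, 2):
        rows[0] += [s1, -np.conj(s2)] + [0.0] * guard_len   # node / antenna 1
        rows[1] += [s2,  np.conj(s1)] + [0.0] * guard_len   # node / antenna 2
    return np.array(rows)

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
print(stbc_alamouti_with_guard(qpsk, guard_len=2))
```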

Reducing Power in Error Correcting Code using Genetic Algorithm

This paper proposes a method which reduces power consumption in single-error-correcting, double-error-detecting (SEC-DED) checker circuits that perform memory error correction. Power is minimized with little or no impact on area and delay, using the degrees of freedom available in selecting the parity check matrix of the error correcting code. A genetic algorithm is employed to solve the nonlinear power optimization problem. The method is applied to two commonly used SEC-DED codes: the standard Hamming code and the odd-column-weight Hsiao code. Experiments were performed to demonstrate the performance of the proposed method.
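The sketch below illustrates the optimization idea under an assumed cost model: the number of 1s in the data part of the parity check matrix serves as a crude proxy for XOR-tree power, and a simple evolutionary loop searches over distinct odd-weight columns (the Hsiao-style condition). It is not the paper's gate-level power model or its full genetic algorithm.

```python
# Minimal sketch (assumed cost model): evolutionary search for a Hsiao-type SEC-DED
# parity-check matrix whose total number of 1s (a proxy for XOR-tree power) is low.
import numpy as np
from itertools import combinations

r, k = 8, 64                         # check bits and data bits, e.g. a (72, 64) code
# Candidate data columns: distinct odd-weight r-bit vectors (Hsiao-style condition)
candidates = []
for w in (3, 5, 7):
    for idx in combinations(range(r), w):
        col = np.zeros(r, dtype=int)
        col[list(idx)] = 1
        candidates.append(col)
candidates = np.array(candidates)

rng = np.random.default_rng(1)

def fitness(sel):
    # fewer 1s in the data part of H -> fewer XOR inputs -> lower power (crude proxy)
    return int(candidates[sel].sum())

pop = [rng.choice(len(candidates), size=k, replace=False) for _ in range(40)]
for _ in range(200):                 # simple evolutionary loop (mutation only)
    pop.sort(key=fitness)
    child = pop[0].copy()
    child[rng.integers(k)] = rng.integers(len(candidates))
    if len(set(child)) == k:         # keep columns distinct (needed for SEC)
        pop[-1] = child
pop.sort(key=fitness)
print("ones in the data part of H:", fitness(pop[0]))
```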

Union is Strength in Lossy Image Compression

In this work, we present a comparison between different techniques of image compression. First, the image is divided into blocks, which are organized according to a certain scan. Later, several compression techniques are applied, combined or alone. Such techniques include wavelets (Haar's basis) and the Karhunen-Loève Transform. Simulations show that the combined versions are the best, with lower Mean Squared Error (MSE), higher Peak Signal to Noise Ratio (PSNR) and better image quality, even in the presence of noise.
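For reference, the two quality metrics quoted above can be computed as follows for an 8-bit image (synthetic data are used for the demonstration).

```python
# Minimal sketch: the quality metrics cited above (MSE and PSNR) for an 8-bit image.
import numpy as np

def mse(original, reconstructed):
    return np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)

def psnr(original, reconstructed, peak=255.0):
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

img = np.random.default_rng(0).integers(0, 256, (64, 64))
noisy = np.clip(img + np.random.default_rng(1).normal(0, 5, img.shape), 0, 255)
print(mse(img, noisy), psnr(img, noisy))
```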

Quantifying and Adjusting the Effects of Publication Bias in Continuous Meta-Analysis

This study uses simulated meta-analysis to assess the effects of publication bias on meta-analysis estimates and to evaluate the efficacy of the trim and fill method in adjusting for these biases. The estimated effect sizes and the standard error were evaluated in terms of statistical bias and coverage probability. The results demonstrate that if publication bias is not adjusted for, it can lead to up to 40% bias in the treatment effect estimates. Utilization of the trim and fill method can reduce the bias in the overall estimate by more than half. The method is optimal in the presence of moderate underlying bias but has minimal effect in the presence of low and severe publication bias. Additionally, the trim and fill method improves the coverage probability by more than half when subjected to the same level of publication bias as the unadjusted data. The method, however, tends to produce false positive results and incorrectly adjusts the data for publication bias up to 45% of the time. Nonetheless, the bias introduced into the estimates by this adjustment is minimal.
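The simplified simulation below (not the study's design) shows how suppressing non-significant studies biases a fixed-effect pooled estimate and degrades the coverage probability; the trim and fill adjustment itself is not implemented here.

```python
# Minimal sketch (simplified model): effect of publication bias on a fixed-effect
# meta-analytic estimate. Studies with p >= 0.05 are suppressed; bias and coverage
# are computed from the surviving studies.
import numpy as np

rng = np.random.default_rng(0)
true_effect, n_studies, n_sims = 0.2, 30, 2000
hits, biases = 0, []
for _ in range(n_sims):
    se = rng.uniform(0.1, 0.4, n_studies)              # per-study standard errors
    est = rng.normal(true_effect, se)                  # observed study effects
    published = np.abs(est / se) > 1.96                # crude selection rule
    if published.sum() < 2:
        continue
    w = 1.0 / se[published] ** 2                       # inverse-variance weights
    pooled = np.sum(w * est[published]) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    biases.append(pooled - true_effect)
    hits += abs(pooled - true_effect) < 1.96 * pooled_se
print("mean bias:", np.mean(biases), "coverage:", hits / len(biases))
```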

Error Analysis of Nonconventional Electrical Moisture-meter under Simplified Conditions

An electrical apparatus for measuring moisture content was developed in our laboratory; it exploits the dependence of electrical properties on the water content of the studied material. Error analysis of the apparatus was performed by measuring different volumes of water in a simplified specimen, i.e., a hollow plexiglass block, in order to avoid as many side effects as possible. The obtained data were processed using both basic and advanced statistics, and the results were compared with each other. The influence of water content on the accuracy of the measured data was studied, as well as the influence of variations in the apparatus' arrangement and in the actual measurement procedure. The overall coefficient of variation was 4%. No trend was found in the dependence of the error on water content. Comparison with current surveys led to the conclusion that the studied apparatus can be used for indirect measurement of water content in porous materials, with a predictable error and under known conditions. Experiments with actual porous materials are not included here but are currently under investigation.

Application of a New Approach with Two Networks, Slow and Fast, to the Asynchronous Machine

In this paper, we propose a new modular approach, called neuroglial, consisting of two neural networks, one slow and one fast, which emulates a recently discovered biological reality. The implementation is based on complex multi-time-scale systems; validation is performed on the model of the asynchronous machine. We apply the geometric approach based on Gerschgorin circles for the decoupling of fast and slow variables, and the method of singular perturbations for the development of reduced models. This new architecture allows for smaller networks with less complexity and better performance in terms of mean square error and convergence than the single-network model.
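A minimal sketch of the decoupling step, using a hypothetical linearized state matrix rather than the asynchronous machine model: Gerschgorin discs locate the eigenvalue clusters, and a crude magnitude threshold splits the states into slow and fast groups.

```python
# Minimal sketch (hypothetical state matrix): Gerschgorin discs used to separate
# slow and fast state variables, in the spirit of the decoupling step described above.
import numpy as np

def gershgorin_discs(A):
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return centers, radii

# Hypothetical linearized model with two time scales
A = np.array([[ -1.0,   0.2,   0.1],
              [  0.3, -50.0,   2.0],
              [  0.1,   1.5, -80.0]])
centers, radii = gershgorin_discs(A)
slow = np.abs(centers) + radii < 10.0        # crude split by disc location
print("discs:", list(zip(centers, radii)))
print("slow variables:", np.where(slow)[0], "fast variables:", np.where(~slow)[0])
```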

WDM-Based Storage Area Network (SAN) for Disaster Recovery Operations

This paper proposes a Wavelength Division Multiplexing (WDM) technology based Storage Area Network (SAN) for all types of disaster recovery operations. It considers recovery when all paths in the network fail, as well as failure of the main SAN site and of all backup sites, caused by natural disasters such as earthquakes, fires and floods, power outages, and terrorist attacks, since SANs were initially designed to work within distance-limited environments [2]. The paper also presents a NEW PATH algorithm for handling path failures. Simulation results and analysis are presented for the proposed architecture, with performance considerations.

Joint Adaptive Block Matching Search (JABMS) Algorithm

In this paper a new Joint Adaptive Block Matching Search (JABMS) algorithm is proposed to generate the motion vector and search for the best-matching macroblock by classifying the motion vector movement based on the prediction error. The Diamond Search (DS) algorithm achieves high estimation accuracy when the motion vector is small, while the Adaptive Rood Pattern Search (ARPS) algorithm can handle large motion vectors but is not very accurate. The proposed JABMS algorithm, which can handle both small and large motions, gives improved estimation accuracy, and its computational cost is 15.2 times lower than that of the Exhaustive Search (ES) algorithm and 1.3 times lower than that of the Diamond Search algorithm.
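For context, the basic operation that ES, DS, ARPS and the proposed JABMS all build on is sum-of-absolute-differences (SAD) block matching; the sketch below implements the exhaustive-search baseline on synthetic frames.

```python
# Minimal sketch: SAD block matching over a small search window, the basic operation
# that DS, ARPS, and the proposed JABMS refine (exhaustive-search baseline).
import numpy as np

def best_motion_vector(ref, cur, top, left, block=16, search=7):
    """Exhaustive-search motion vector for one macroblock of the current frame."""
    target = cur[top:top + block, left:left + block].astype(int)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + block, x:x + block].astype(int) - target).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
cur = np.roll(ref, (2, 3), axis=(0, 1))      # synthetic frame with known motion
print(best_motion_vector(ref, cur, 16, 16))
```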

A Sub Pixel Resolution Method

One of the main limitations on the resolution of optical instruments is the size of the sensor's pixels. In this paper we introduce a new sub-pixel resolution algorithm to enhance the resolution of images. The method is based on the analysis of multiple images that are rapidly recorded during the fine relative motion between the image and the CCD pixel array. It is shown that applying this method to a sample noise-free image enhances the resolution with an error on the order of 10^-14.
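A minimal sketch of the underlying idea under idealized assumptions (known sub-pixel offsets, noise-free sampling, no blur): several low-resolution frames are interleaved onto a finer grid by shift-and-add.

```python
# Minimal sketch (idealized, noise-free): shift-and-add reconstruction from several
# low-resolution frames taken at known sub-pixel offsets, the general idea behind
# multi-image sub-pixel resolution enhancement.
import numpy as np

def shift_and_add(frames, offsets, factor):
    """frames: list of HxW arrays; offsets: (dy, dx) offsets in high-resolution
    pixels; factor: upsampling factor."""
    H, W = frames[0].shape
    hi = np.zeros((H * factor, W * factor))
    count = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, offsets):
        hi[dy::factor, dx::factor] += frame
        count[dy::factor, dx::factor] += 1
    return hi / np.maximum(count, 1)

factor = 2
rng = np.random.default_rng(0)
truth = rng.random((32, 32))                      # hypothetical high-resolution scene
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [truth[dy::factor, dx::factor] for dy, dx in offsets]   # ideal sampling
recon = shift_and_add(frames, offsets, factor)
print("max abs error:", np.abs(recon - truth).max())
```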

Principal Component Regression in Noninvasive Pineapple Soluble Solids Content Assessment Based On Shortwave Near Infrared Spectrum

Principal component regression (PCR) is a combination of principal component analysis (PCA) and multiple linear regression (MLR). The objective of this paper is to review the use of PCR in shortwave near infrared (SWNIR) (750-1000 nm) spectral analysis. The idea of PCR is explained mathematically and implemented in the non-destructive assessment of the soluble solids content (SSC) of pineapple based on SWNIR spectral data. PCR achieved satisfactory results in this application, with a root mean squared error of calibration (RMSEC) of 0.7611 °Brix, a coefficient of determination (R2) of 0.5865 and a root mean squared error of cross-validation (RMSECV) of 0.8323 °Brix using 14 principal components (PCs).
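A compact sketch of PCR as described above, run on synthetic stand-in data instead of the SWNIR spectra: PCA via the SVD of the centered spectra, followed by least squares on the retained scores (14 components, following the abstract).

```python
# Minimal sketch (synthetic data in place of the SWNIR spectra): principal component
# regression = PCA on the predictors followed by least squares on the retained scores.
import numpy as np

def pcr_fit(X, y, n_components):
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T                 # loadings
    T = Xc @ P                              # scores
    b = np.linalg.lstsq(T, y - y_mean, rcond=None)[0]
    return P, b, x_mean, y_mean

def pcr_predict(model, X):
    P, b, x_mean, y_mean = model
    return (X - x_mean) @ P @ b + y_mean

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 251))             # stand-in for 750-1000 nm spectra
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.1, 100)   # stand-in for SSC (°Brix)
model = pcr_fit(X, y, n_components=14)
rmsec = np.sqrt(np.mean((pcr_predict(model, X) - y) ** 2))
print("RMSEC on synthetic data:", rmsec)
```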

Active Vibration Control of Flexible Beam using Differential Evolution Optimisation

This paper presents the development of active vibration control using a direct adaptive controller to suppress the vibration of a flexible beam system. The controller is realized in a linear parametric form. A differential evolution optimisation algorithm is used to tune the controller with a single objective function that minimizes the mean square error of the observed vibration signal. Furthermore, an alternative approach is developed to systematically search for the best controller model structure together with its parameter values. The performance of the control scheme is presented and analysed in both the time and frequency domains. Simulation results demonstrate that the proposed scheme is able to suppress the unwanted vibration effectively.
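The sketch below mirrors the single-objective setup on a toy second-order plant (not the flexible beam model): differential evolution tunes a two-parameter linear controller so that the mean square error of the vibration signal is minimized.

```python
# Minimal sketch (toy second-order plant, not the flexible beam model): differential
# evolution tuning a linear parametric controller to minimize the mean square error
# of the simulated vibration signal.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
disturbance = rng.normal(size=500)                    # stand-in vibration excitation

def vibration_mse(theta):
    t1, t2 = theta
    y = np.zeros(500)
    for k in range(2, 500):
        u = t1 * y[k - 1] + t2 * y[k - 2]             # linear parametric controller
        y[k] = 1.6 * y[k - 1] - 0.8 * y[k - 2] + disturbance[k] - u
        y[k] = np.clip(y[k], -1e6, 1e6)               # guard against unstable trials
    return np.mean(y ** 2)                            # single objective: MSE of vibration

result = differential_evolution(vibration_mse, bounds=[(-3, 3), (-3, 3)],
                                seed=1, maxiter=50, tol=1e-6)
print("optimal controller parameters:", result.x, "MSE:", result.fun)
```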

Optimal Combination for Modal Pushover Analysis by Using Genetic Algorithm

In order to consider the effects of the higher modes in pushover analysis, several multi-modal pushover procedures have been presented in recent years. In these methods the responses of the considered modes are combined by the square-root-of-sum-of-squares (SRSS) rule, although the application of elastic modal combination rules in the inelastic phase is no longer valid. In this research the feasibility of defining an efficient alternative combination method is investigated. Two steel moment-frame buildings, denoted SAC-9 and SAC-20, under ten earthquake records are considered. The nonlinear responses of the structures are estimated by the direct algebraic combination of the weighted responses of the separate modes. The weight of each mode is defined so that the resulting combined response has minimum error with respect to the nonlinear time history analysis. The genetic algorithm (GA) is used to minimize the error and optimize the weight factors. The optimal factors obtained for each mode in the different cases are compared to find unique, appropriate weight factors for each mode across all cases.
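As a hedged illustration on synthetic modal responses, the sketch below compares the SRSS rule with a weighted algebraic combination whose weights minimize the error to a stand-in reference response; differential evolution is used here in place of the paper's genetic algorithm.

```python
# Minimal sketch (synthetic modal responses): SRSS combination versus a weighted
# algebraic combination whose weights minimize the error to a stand-in "nonlinear
# time history" reference. Differential evolution stands in for the GA.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_stories, n_modes = 20, 3
modal = rng.normal(size=(n_modes, n_stories))          # per-mode pushover responses (stand-in)
reference = (1.1 * modal[0] + 0.4 * modal[1] - 0.2 * modal[2]
             + rng.normal(0, 0.05, n_stories))         # stand-in for the NLTH response

srss = np.sqrt(np.sum(modal ** 2, axis=0))             # classic SRSS combination

def combination_error(weights):
    combined = weights @ modal                         # weighted algebraic combination
    return np.sqrt(np.mean((combined - reference) ** 2))

result = differential_evolution(combination_error, bounds=[(-2, 2)] * n_modes, seed=1)
print("SRSS error:     ", np.sqrt(np.mean((srss - reference) ** 2)))
print("optimized error:", result.fun, "weights:", result.x)
```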

Smartphones for In-home Diagnostics in Telemedicine

Many contemporary telemedical applications rely on regular consultations over the phone or via video conferencing, which consumes valuable resources such as the doctors' time. Some applications or treatments allow automated diagnostics on the patient side which notify the doctors only when a significant worsening of the patient's condition is measured. Such programs can save valuable resources, but an important implementation issue is how to ensure effective and cheap diagnostics on the patient side. First, dedicated diagnostic devices on the patient side are expensive, and second, they need to be user-friendly to encourage the patient's cooperation and to reduce errors in usage, which may introduce noise into the diagnostic data. This article proposes the use of modern smartphones and various built-in or attachable sensors as universal diagnostic devices applicable in a wider range of telemedical programs and demonstrates their application on a case study: a program for schizophrenic relapse prevention.

A Comparison of Artificial Neural Networks for Prediction of Suspended Sediment Discharge in River - A Case Study in Malaysia

Prediction of the highly nonlinear behavior of suspended sediment flow in rivers is of prime importance in the field of water resources engineering. In this study the predictive performance of two Artificial Neural Networks (ANNs), namely the Radial Basis Function (RBF) network and the Multi Layer Feed Forward (MLFF) network, is compared. Time series data of daily suspended sediment discharge and water discharge at the Pari River were used for training and testing the networks. A number of statistical parameters, i.e. root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and coefficient of determination (R2), were used for performance evaluation of the models. Both models produced satisfactory results and showed good agreement between the predicted and observed data. The RBF network model provided slightly better results than the MLFF network model in predicting suspended sediment discharge.
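The four evaluation statistics used above can be computed as follows (hypothetical observed and predicted discharge values are used for the demonstration).

```python
# Minimal sketch: the evaluation statistics cited above (RMSE, MAE, coefficient of
# efficiency CE, and coefficient of determination R2) for observed vs. predicted series.
import numpy as np

def evaluation_stats(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    err = predicted - observed
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    ce = 1.0 - np.sum(err ** 2) / np.sum((observed - observed.mean()) ** 2)  # Nash-Sutcliffe
    r2 = np.corrcoef(observed, predicted)[0, 1] ** 2
    return rmse, mae, ce, r2

obs = np.array([120.0, 95.0, 210.0, 310.0, 150.0])     # hypothetical sediment discharge
pred = np.array([110.0, 100.0, 220.0, 290.0, 160.0])
print(evaluation_stats(obs, pred))
```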

Automatic 2D/2D Registration using Multiresolution Pyramid based Mutual Information in Image Guided Radiation Therapy

Medical image registration is a key technology in image guided radiation therapy (IGRT) systems. Building on previous work on our IGRT prototype with a biorthogonal x-ray imaging system, this paper describes a method for 2D/2D rigid-body registration using multiresolution pyramid based mutual information. Three key steps are involved in the method: first, four 2D images are obtained, including two x-ray projection images and two digitally reconstructed radiographs (DRRs), as the input for the registration; second, each pair of corresponding x-ray and DRR images is matched using multiresolution pyramid based mutual information within the ITK registration framework; third, the final couch offset is obtained through a coordinate transformation by combining the translations acquired from the two image pairs. A simulation example of a parotid gland tumor case and a clinical example of an anthropomorphic head phantom were employed in the verification tests. In addition, the influence of different CT slice thicknesses was tested. The simulation results showed that the positioning errors were 0.068±0.070, 0.072±0.098 and 0.154±0.176 mm along the lateral, longitudinal and vertical axes, respectively. The clinical test indicated that the positioning errors of the planned isocenter were 0.066, 0.07 and 2.06 mm on average with a CT slice thickness of 2.5 mm. It can be concluded that our method, with its verified accuracy and robustness, can be effectively used in IGRT systems for patient setup.
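The similarity measure at the core of the registration, mutual information between an x-ray image and a DRR, can be estimated from the joint histogram as in the sketch below (random images stand in for the projections; the multiresolution pyramid and the ITK optimizer are not reproduced).

```python
# Minimal sketch: mutual information between an x-ray image and a DRR computed from
# their joint histogram, the similarity measure used at each pyramid level.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

rng = np.random.default_rng(0)
xray = rng.random((128, 128))
drr_aligned = xray + rng.normal(0, 0.05, xray.shape)       # well-aligned DRR (high MI)
drr_shifted = np.roll(drr_aligned, 10, axis=1)             # misaligned DRR (lower MI)
print(mutual_information(xray, drr_aligned), mutual_information(xray, drr_shifted))
```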

Predicting Oil Content of Fresh Palm Fruit Using Transmission-Mode Ultrasonic Technique

In this paper, an ultrasonic technique is proposed to predict the oil content of fresh palm fruit. This is accomplished by measuring the attenuation in ultrasonic transmission mode. Several palm fruit samples with oil content known from Soxhlet extraction (ISO9001:2008) were tested with our ultrasonic measurement. Amplitude attenuation data for all palm samples were collected. Feedforward Neural Networks (FNNs) are applied to predict the oil content of the samples. The Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) of the FNN model for predicting the oil content percentage are 7.6186 and 5.2287, respectively, with a correlation coefficient (R) of 0.9193.

Input Variable Selection for RBFN-based Electric Utility's CO2 Emissions Forecasting

This study investigates the performance of radial basis function networks (RBFN) in forecasting the monthly CO2 emissions of an electric power utility. We also propose a method for input variable selection. The method is based on identifying the general relationships between groups of input candidates and the output. The effect of each input on the forecasting error is examined by removing all inputs except the variable under investigation from its group, recalculating the network's parameters and performing the forecast. Finally, the new forecasting error is compared with that of the reference model. Eight input variables were identified as the most relevant, significantly fewer than the 30 input variables of our reference model. The simulation results demonstrate that the model with the 8 inputs selected using the method introduced in this study performs as accurately as the reference model, while also being the most parsimonious.
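The sketch below illustrates the screening idea on synthetic data with a deliberately simplified RBFN (random centers, least-squares output weights): each candidate input is kept alone from its group, the network is refit, and the resulting error is compared with the full reference model.

```python
# Minimal sketch (synthetic data, simplified RBFN): for each candidate input, keep
# only that variable from its group, refit the model, and compare the forecasting
# error with the full reference model.
import numpy as np

rng = np.random.default_rng(0)

def rbfn_error(X_train, y_train, X_test, y_test, n_centers=20, width=2.0):
    centers = X_train[rng.choice(len(X_train), n_centers, replace=False)]
    phi = lambda X: np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
                           / (2 * width ** 2))
    w = np.linalg.lstsq(phi(X_train), y_train, rcond=None)[0]
    return np.sqrt(np.mean((phi(X_test) @ w - y_test) ** 2))

# Synthetic stand-in: 6 candidate inputs in one group, only inputs 0 and 3 matter
X = rng.normal(size=(300, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 3] + rng.normal(0, 0.1, 300)
train, test = slice(0, 200), slice(200, 300)
reference = rbfn_error(X[train], y[train], X[test], y[test])
for i in range(6):
    keep = [i]                                   # drop all other inputs from the group
    err = rbfn_error(X[train][:, keep], y[train], X[test][:, keep], y[test])
    print(f"input {i}: error {err:.3f}  (reference {reference:.3f})")
```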