Overhead Estimation over Capacity of Mobile WiMAX

The IEEE 802.16 standard, which has emerged as a Broadband Wireless Access (BWA) technology, promises to deliver high data rates over large areas to a large number of subscribers in the near future. This paper analyzes the effect of overheads on the downlink (DL) capacity of the orthogonal frequency division multiple access (OFDMA)-based IEEE 802.16e mobile WiMAX system, with and without overheads. The analysis focuses in particular on the impact of Adaptive Modulation and Coding (AMC) and derives an algorithm to determine the maximum number of subscribers that each WiMAX sector can support. An analytical study of the WiMAX propagation channel using the COST-231 Hata model is presented. Numerical results, obtained by simulating the algorithm in MATLAB for different multi-user parameters, are presented and discussed.
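As a concrete reference for the propagation analysis, the Python sketch below evaluates the standard COST-231 Hata median path loss formula for urban areas; the frequency, antenna heights and distance in the example are illustrative assumptions, not values taken from the paper.

```python
import math

def cost231_hata_path_loss(f_mhz, d_km, h_base_m, h_mobile_m, metropolitan=False):
    """COST-231 Hata median path loss (dB).

    Nominally valid for f = 1500-2000 MHz, h_base = 30-200 m,
    h_mobile = 1-10 m, d = 1-20 km.
    """
    # Mobile antenna correction factor for small/medium cities
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(f_mhz) - 0.8)
    # Clutter correction: 3 dB for dense metropolitan areas, 0 dB otherwise
    c = 3.0 if metropolitan else 0.0
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km) + c)

# Illustrative values only (not taken from the paper)
print(cost231_hata_path_loss(f_mhz=2000, d_km=2.0, h_base_m=30, h_mobile_m=1.5))
```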

Use of Agricultural Waste for the Removal of Nickel Ions from Aqueous Solutions: Equilibrium and Kinetics Studies

The potential of an economically cheap, cellulose-containing natural material, rice husk, was assessed for nickel adsorption from aqueous solutions. The effects of pH, contact time, sorbent dose, initial metal ion concentration and temperature on the uptake of nickel were studied in a batch process. The removal of nickel depended on the physico-chemical characteristics of the adsorbent, the adsorbate concentration and the other studied process parameters. The sorption data were correlated with the Langmuir, Freundlich and Dubinin-Radushkevich (D-R) adsorption models, and the Freundlich and Langmuir isotherms were found to fit the data well. Maximum nickel removal was observed at pH 6.0. The efficiency of rice husk for nickel removal was 51.8% for dilute solutions at an adsorbent dose of 20 g L-1. FTIR, SEM and EDAX analyses were recorded before and after adsorption to explore the number and position of the functional groups available for nickel binding on the studied adsorbent, as well as changes in the surface morphology and elemental constitution of the adsorbent. The pseudo-second-order model describes the nickel sorption kinetics most effectively. Reusability of the adsorbent was examined by desorption, in which HCl eluted 78.93% of the nickel. The results reveal that nickel is considerably adsorbed on rice husk and that this could be an economical method for the removal of nickel from aqueous solutions.
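As a minimal illustration of how the cited isotherm and kinetic models are typically fitted, the Python sketch below applies nonlinear least squares to hypothetical equilibrium and contact-time data; the numbers are placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g) -- not from the paper
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([1.1, 2.3, 3.8, 5.6, 7.2, 8.1])

def langmuir(Ce, qm, KL):
    """Langmuir isotherm: qe = qm*KL*Ce / (1 + KL*Ce)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

def pseudo_second_order(t, qe_eq, k2):
    """Pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe_eq ** 2 * t / (1.0 + k2 * qe_eq * t)

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[10.0, 0.05])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0])
print(f"Langmuir:   qm = {qm:.2f} mg/g, KL = {KL:.3f} L/mg")
print(f"Freundlich: KF = {KF:.2f}, n = {n:.2f}")

# Hypothetical contact-time data for the pseudo-second-order fit
t = np.array([5, 10, 20, 40, 60, 120], dtype=float)   # min
qt = np.array([2.1, 3.4, 4.8, 5.9, 6.4, 6.9])          # mg/g
(qe_k, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=[7.0, 0.01])
print(f"Pseudo-second-order: qe = {qe_k:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```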

Group Contribution Parameters for Nonrandom Lattice Fluid Equation of State involving COSMO-RS

Group contribution based models are widely used in industrial applications for its convenience and flexibility. Although a number of group contribution models have been proposed, there were certain limitations inherent to those models. Models based on group contribution excess Gibbs free energy are limited to low pressures and models based on equation of state (EOS) cannot properly describe highly nonideal mixtures including acids without introducing additional modification such as chemical theory. In the present study new a new approach derived from quantum chemistry have been used to calculate necessary EOS group interaction parameters. The COSMO-RS method, based on quantum mechanics, provides a reliable tool for fluid phase thermodynamics. Benefits of the group contribution EOS are the consistent extension to hydrogen-bonded mixtures and the capability to predict polymer-solvent equilibria up to high pressures. The authors are confident that with a sufficient parameter matrix the performance of the lattice EOS can be improved significantly.

Improving Air Temperature Prediction with Artificial Neural Networks

The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. Previous work established that the Ward-style artificial neural network (ANN) is a suitable tool for developing such models. The current research focused on developing ANN models with reduced average prediction error by increasing the number of distinct observations used in training, adding additional input terms that describe the date of an observation, increasing the duration of prior weather data included in each observation, and reexamining the number of hidden nodes used in the network. Models were created to predict air temperature at hourly intervals from one to 12 hours ahead. Each ANN model, consisting of a network architecture and set of associated parameters, was evaluated by instantiating and training 30 networks and calculating the mean absolute error (MAE) of the resulting networks for some set of input patterns. The inclusion of seasonal input terms, up to 24 hours of prior weather information, and a larger number of processing nodes were some of the improvements that reduced average prediction error compared to previous research across all horizons. For example, the four-hour MAE of 1.40°C was 0.20°C, or 12.5%, less than that of the previous model. Prediction MAEs eight and 12 hours ahead improved by 0.17°C and 0.16°C, respectively, improvements of 7.4% and 5.9% over the existing model at these horizons. Networks instantiating the same model but with different initial random weights often led to different prediction errors. These results strongly suggest that ANN model developers should consider instantiating and training multiple networks with different initial weights to establish preferred model parameters.
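The evaluation protocol described above (instantiate a model many times with different initial random weights and average the MAE) can be sketched as follows; scikit-learn's MLPRegressor is a generic stand-in for the Ward-style network, and the data are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

def evaluate_model(X_train, y_train, X_test, y_test, hidden_nodes, n_instances=30):
    """Train n_instances networks that differ only in their initial random
    weights and return the mean and spread of their test MAEs."""
    maes = []
    for seed in range(n_instances):
        net = MLPRegressor(hidden_layer_sizes=(hidden_nodes,),
                           max_iter=2000, random_state=seed)
        net.fit(X_train, y_train)
        maes.append(mean_absolute_error(y_test, net.predict(X_test)))
    return np.mean(maes), np.std(maes)

# Placeholder data: prior-weather inputs and the temperature some hours ahead
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))                 # e.g. 24 h of prior observations
y = X[:, :6].sum(axis=1) + rng.normal(scale=0.5, size=500)
mean_mae, std_mae = evaluate_model(X[:400], y[:400], X[400:], y[400:], hidden_nodes=40)
print(f"MAE over 30 instantiations: {mean_mae:.2f} +/- {std_mae:.2f}")
```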

Extent of Highway Capacity Loss Due to Rainfall

Traffic flow in adverse weather conditions has been investigated in this study for general, weekday and weekend traffic. The empirical evidence strongly supports the view that rainfall affects macroscopic traffic flow parameters. Data generated from a basic highway section along J5 in Johor Bahru, Malaysia were synchronized with 161 rain events over a period of three months. This revealed reductions in speed of 4.90%, 6.60% and 11.32% for light, moderate and heavy rain conditions respectively. The corresponding capacity reductions in the three rainfall regimes are 1.08% for light rain, 6.27% for moderate rain and 29.25% for heavy rain. For weekday traffic, speed drops of 8.1% and 16.05% were observed for light and heavy rain conditions, while speed increased by 12.6% under moderate rain. The capacity drops for weekday traffic are 4.40% for light rain, 9.77% for moderate rain and 45.90% for heavy rain. Weekend traffic showed speed differences between the dry condition and the three rainy conditions of 6.70% for light rain, 8.90% for moderate rain and 13.10% for heavy rain. The capacity changes computed for weekend traffic were 0.20% in light rain, 13.90% in moderate rain and 16.70% in heavy rain. No traffic instabilities were observed throughout the observation period, and the capacities reported for each rain condition were below the no-rain capacity. Rainfall has a substantial impact on traffic flow, and this may have implications for shock wave propagation.

Puff Noise Detection and Cancellation for Robust Speech Recognition

In this paper, an algorithm for detecting and attenuating puff noises, which are frequently generated in mobile environments, is proposed. As a baseline system, a puff detection system is designed based on a Gaussian Mixture Model (GMM), and 39th-order Mel-Frequency Cepstral Coefficients (MFCCs) are extracted as feature parameters. To improve the detection performance, effective acoustic features for puff detection are proposed. In addition, detected puff intervals are attenuated by high-pass filtering. The speech recognition rate was measured for evaluation, and a confusion matrix and ROC curve were used to confirm the validity of the proposed system.
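A minimal sketch of a GMM-based detection baseline of this kind, assuming labeled training segments of puff noise and clean speech are available; librosa and scikit-learn are used here, the audio file names are placeholders, and the 39-dimensional feature (13 MFCCs plus deltas and delta-deltas) is an assumed composition.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_39(y, sr):
    """13 static MFCCs plus delta and delta-delta, one frame per row."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    feats = np.vstack([mfcc, librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])
    return feats.T

# Hypothetical training audio: puff-only and speech-only segments (placeholder paths)
puff_y, sr = librosa.load("puff_train.wav", sr=16000)
speech_y, _ = librosa.load("speech_train.wav", sr=16000)

gmm_puff = GaussianMixture(n_components=8, covariance_type="diag").fit(mfcc_39(puff_y, sr))
gmm_speech = GaussianMixture(n_components=8, covariance_type="diag").fit(mfcc_39(speech_y, sr))

# Frame-level detection on a test utterance: the model with higher log-likelihood wins.
# Flagged intervals would then be attenuated, e.g. by high-pass filtering.
test_y, _ = librosa.load("test_utterance.wav", sr=16000)
feats = mfcc_39(test_y, sr)
is_puff = gmm_puff.score_samples(feats) > gmm_speech.score_samples(feats)
print(f"{is_puff.mean():.1%} of frames flagged as puff noise")
```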

A New Performance Characterization of Transient Analysis Method

This paper proposes a new performance characterization for the test strategy for second-order filters known as the Transient Analysis Method (TRAM). We evaluate the ability of the addressed test strategy to detect deviation faults under simultaneous statistical fluctuation of the non-faulty parameters. For this purpose, we use Monte Carlo simulations and a fault model that considers as faulty only one component of the filter under test, while the other components adopt random values (within their tolerance bands) obtained from their statistical distributions. The new data reported here show, for the filters under study, the presence of hard-to-test components and relatively low fault coverage values for small deviation faults. These results suggest that the fault coverage value obtained using only nominal values for the non-faulty components (the traditional evaluation of TRAM) seems to be a poor predictor of the test performance.
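The Monte Carlo evaluation can be organized as in the following sketch. The filter topology, its test observables and the acceptance limits are illustrative assumptions standing in for the actual TRAM measurement; only the sampling scheme, where one component is deviated while the others fluctuate within tolerance, reflects the described procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nominal component values of the second-order filter under test (placeholders)
nominal = {"R1": 10e3, "R2": 10e3, "C1": 10e-9, "C2": 10e-9}
tolerance = 0.05      # 5% tolerance band, modeled as Gaussian with 3*sigma = 5%
test_limit = 0.10     # assumed +/-10% acceptance window on the test observables

def observables(c):
    """Natural frequency and quality factor of an assumed unity-gain
    Sallen-Key-like section; stands in for the TRAM step-response measurement."""
    prod = c["R1"] * c["R2"] * c["C1"] * c["C2"]
    w0 = 1.0 / np.sqrt(prod)
    q = np.sqrt(prod) / ((c["R1"] + c["R2"]) * c["C2"])
    return w0, q

W0_NOM, Q_NOM = observables(nominal)

def test_passes(c):
    w0, q = observables(c)
    return abs(w0 / W0_NOM - 1) < test_limit and abs(q / Q_NOM - 1) < test_limit

def fault_coverage(faulty_name, deviation, n_runs=2000):
    """Fraction of runs in which a deviation fault on one component is detected
    while the non-faulty components fluctuate within their tolerance band."""
    detected = 0
    for _ in range(n_runs):
        c = {k: v * (1 + rng.normal(scale=tolerance / 3)) for k, v in nominal.items()}
        c[faulty_name] = nominal[faulty_name] * (1 + deviation)
        if not test_passes(c):          # a failed test means the fault is detected
            detected += 1
    return detected / n_runs

print(fault_coverage("C1", deviation=0.20))   # e.g. a +20% deviation fault on C1
```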

Predicting Protein Function using Decision Tree

The drug discovery process starts with protein identification, because proteins are responsible for many functions required for the maintenance of life. Protein identification further requires determination of protein function. The proposed method develops a classifier for human protein function prediction. The model uses a decision tree for the classification process. The protein function is predicted on the basis of matched sequence-derived features for each protein function. The research work includes the development of a tool that determines sequence-derived features by analyzing different parameters; the remaining sequence-derived features are determined using various web-based tools.
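A minimal sketch of the classification step, assuming a table of sequence-derived features has already been computed per protein; the features, labels and tree settings are placeholders, with scikit-learn's DecisionTreeClassifier standing in for the tree learner.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Placeholder feature matrix: each row is a protein described by sequence-derived
# features (e.g. length, molecular weight, hydrophobic fraction, pI, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
# Placeholder function classes derived from the features so the tree can learn
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], bins=[-0.5, 0.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, tree.predict(X_te)))
```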

Plastic Flow through Taper Dies: A Three-dimensional Analysis

The plastic flow of metal in the extrusion process is an important factor in controlling the mechanical properties of extruded products. It is, however, difficult to predict the metal flow in three-dimensional extrusion of sections due to the involvement of re-entrant corners. The present study aims to find an upper bound solution for the extrusion of triangular sections from round billets through taper dies. A discontinuous kinematically admissible velocity field (KAVF) is proposed. From the proposed KAVF, the upper bound solution for the non-dimensional extrusion pressure is determined with respect to the chosen process parameters. The theoretical results are compared with experimental results to check the validity of the proposed velocity field. An extrusion setup was designed and fabricated for this purpose, and all extrusions were carried out using circular billets. Experiments were carried out with commercially available lead at room temperature.

Design of Genetic-Algorithm Based Robust Power System Stabilizer

This paper presents a systematic approach for the design of a power system stabilizer (PSS) using a genetic algorithm (GA) and investigates the robustness of the GA-based PSS (GPSS). The proposed approach employs a GA search for the optimal setting of the PSS parameters. The performance of the proposed GPSS is tested under small and large disturbances and under varying loading conditions and system parameters. The eigenvalue analysis and nonlinear simulation results show the effectiveness of the GPSS in damping out system oscillations. It is found that the dynamic performance with the GPSS is improved over that of a conventionally tuned PSS across a wide range of operating conditions.
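The GA search loop itself can be sketched as below; the PSS parameter bounds are illustrative, and the fitness function is a smooth toy objective standing in for the eigenvalue-based damping criterion used in the paper, so only the genetic operators (selection, crossover, mutation) are meant literally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameter bounds for [K, T1, T2] of a single lead-lag PSS (illustrative only)
bounds = np.array([[0.1, 50.0], [0.01, 1.0], [0.01, 1.0]])

def fitness(params):
    """Stand-in objective. In the paper's setting this would run the eigenvalue
    analysis of the linearized system and return, e.g., the minimum damping
    ratio; a smooth toy function with a known optimum is used so the sketch runs."""
    target = np.array([20.0, 0.5, 0.05])
    return -np.sum(((params - target) / (bounds[:, 1] - bounds[:, 0])) ** 2)

def ga_search(pop_size=40, generations=60, p_mut=0.1):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, 3))
    for _ in range(generations):
        fit = np.array([fitness(p) for p in pop])
        # Tournament selection of parents
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Arithmetic crossover between consecutive parents
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped to the bounds
        mutate = rng.uniform(size=children.shape) < p_mut
        children += mutate * rng.normal(scale=0.05, size=children.shape) * (bounds[:, 1] - bounds[:, 0])
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])
    return pop[np.argmax([fitness(p) for p in pop])]

print("Best [K, T1, T2]:", ga_search())
```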

Image Modeling Using Gibbs-Markov Random Field and Support Vector Machines Algorithm

This paper introduces a novel approach to estimate the clique potentials of Gibbs-Markov random field (GMRF) models using the Support Vector Machines (SVM) algorithm and Mean Field (MF) theory. The proposed approach is based on modeling the potential function associated with each clique shape of the GMRF model as a Gaussian-shaped kernel. In turn, the energy function of the GMRF takes the form of a weighted sum of Gaussian kernels. This formulation of the GMRF model motivates the use of the SVM algorithm, with Mean Field theory applied during learning, to estimate the energy function. The approach has been tested on synthetic texture images and is shown to provide satisfactory results in retrieving the synthesizing parameters.
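To make the energy formulation concrete, the sketch below evaluates an energy of the described form, a weighted sum of Gaussian-shaped potentials over pairwise cliques of an image; the clique shapes (horizontal and vertical pairs), weights and kernel width are illustrative assumptions, and the SVM/mean-field learning step is not reproduced.

```python
import numpy as np

def gaussian_potential(clique_vals, sigma):
    """Gaussian-shaped pair-clique potential: exp(-(x_i - x_j)^2 / (2*sigma^2))."""
    diff = clique_vals[..., 0] - clique_vals[..., 1]
    return np.exp(-diff ** 2 / (2.0 * sigma ** 2))

def gmrf_energy(img, weights=(1.0, 1.0), sigma=0.2):
    """Energy as a weighted sum of Gaussian kernels over horizontal and
    vertical pair cliques (a two-clique-shape illustration)."""
    img = img.astype(float)
    horiz = np.stack([img[:, :-1], img[:, 1:]], axis=-1)
    vert = np.stack([img[:-1, :], img[1:, :]], axis=-1)
    e_h = gaussian_potential(horiz, sigma).sum()
    e_v = gaussian_potential(vert, sigma).sum()
    return weights[0] * e_h + weights[1] * e_v

rng = np.random.default_rng(0)
texture = rng.random((64, 64))          # placeholder for a synthetic texture image
print("Energy:", gmrf_energy(texture))
```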

Orthogonal Array Application and Response Surface Method Approach for Optimal Product Values: An Application for Oil Blending Process

This paper presents a methodical approach for designing and optimizing process parameters in oil blending industries. Twenty-seven replicated experiments were conducted for the production of A-Z crown super oil (SAE20W/50), employing an L9 orthogonal array to establish the process response parameters. A power law model was fitted to the experimental data, and the obtained model was optimized using the central composite design (CCD) of response surface methodology (RSM). A quadratic model was found to be significant for the production of A-Z crown super oil. In the course of analyzing the batch productions of A-Z crown super oil, the study identified and specified four new lubricant formulations that conform to the ISO oil standard: L1: KV = 21.8293 cSt, BS200 = 9430.00 litres, Ad102 = 11024.00 litres, PVI = 2520 litres; L2: KV = 22.513 cSt, BS200 = 12430.00 litres, Ad102 = 11024.00 litres, PVI = 2520 litres; L3: KV = 22.1671 cSt, BS200 = 9430.00 litres, Ad102 = 10481.00 litres, PVI = 2520 litres; L4: KV = 22.8605 cSt, BS200 = 12430.00 litres, Ad102 = 10481.00 litres, PVI = 2520 litres. The analysis of variance showed that the quadratic model is significant for kinematic viscosity, while the R-squared statistic of 0.99936 showed that the variation in kinematic viscosity is explained by its relationship with the control factors. The study therefore arrived at appropriate blending proportions of lubricant base oil and additives and recommends an optimal kinematic viscosity of A-Z crown super oil (SAE20W/50) of 22.86 cSt.
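The quadratic response-surface fit can be outlined as follows; the coded factor levels and the response values are synthetic placeholders, and scikit-learn's PolynomialFeatures/LinearRegression stand in for the CCD/RSM software used in the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Placeholder design: 27 runs with coded levels (-1..+1) of three control factors
# (e.g. BS200, Ad102 and PVI volumes) and a synthetic kinematic viscosity response.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(27, 3))
y = (22.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 0] * X[:, 1]
     - 0.4 * X[:, 2] ** 2 + rng.normal(scale=0.05, size=27))

# Full quadratic response surface: linear, interaction and squared terms
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)
print("R^2 of the quadratic fit:", round(model.score(X, y), 5))
print("Predicted KV at the centre point:", round(float(model.predict(np.zeros((1, 3)))[0]), 3))
```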

Mathematical Approach towards Fault Detection and Isolation of Linear Dynamical Systems

The main objective of this work is to provide fault detection and isolation based on Markov parameters for residual generation and a neural network for fault classification. The diagnostic approach is accomplished in two steps. In step 1, the system is identified using a series of input/output variables through an identification algorithm. In step 2, the fault is diagnosed by comparing the Markov parameters of the faulty and non-faulty systems. An artificial neural network, trained using predetermined faulty conditions, serves to classify the unknown fault. In step 1, the identification is done by first formulating a Hankel matrix from the input/output variables and then decomposing the matrix via the singular value decomposition technique. For identifying the system online, a sliding window approach is adopted, wherein a window slides over a subset of 'n' input/output variables. The faults are introduced at arbitrary instants and the identification is carried out online. Fault residuals are extracted by comparing the first five Markov parameters of the faulty and non-faulty systems. The proposed diagnostic approach is illustrated on benchmark problems with encouraging results.
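A simplified sketch of the residual-generation idea: the first five Markov parameters (impulse-response coefficients) are estimated here by a least-squares FIR fit over a window of input/output data, and the residual is the difference between the nominal and faulty estimates. This is a stand-in for the paper's Hankel-matrix/SVD identification, and the plant below is a placeholder.

```python
import numpy as np
from scipy.linalg import lstsq
from scipy.signal import lfilter

def markov_parameters(u, y, n_params=5):
    """Least-squares FIR estimate of the first n_params Markov parameters from
    an input/output window (a simplified stand-in for Hankel/SVD identification)."""
    rows = [u[k - n_params + 1:k + 1][::-1] for k in range(n_params - 1, len(u))]
    Phi = np.array(rows)                        # regressor built from past inputs
    theta, *_ = lstsq(Phi, y[n_params - 1:])
    return theta

rng = np.random.default_rng(0)
u = rng.normal(size=400)

# Placeholder plant and a "faulty" version with a changed coefficient
y_nominal = lfilter([0.0, 0.5, 0.3, 0.1], [1.0], u)
y_faulty = lfilter([0.0, 0.5, 0.1, 0.1], [1.0], u)

residual = markov_parameters(u, y_nominal) - markov_parameters(u, y_faulty)
print("Markov-parameter residual:", np.round(residual, 3))
```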

An Ontology for Spatial Relevant Objects in a Location-aware System: Case Study: A Tourist Guide System

Location-aware computing is a type of pervasive computing that utilizes the user's location as a dominant factor for providing urban services and application-related usages. One of the important urban services is navigation instruction for wayfinders in a city, especially when the user is a tourist. The services presented to tourists should provide adapted, location-aware instructions. To achieve this goal, the main challenge is to find spatially relevant objects and location-dependent information. The aim of this paper is the development of a reusable location-aware model to handle spatial relevancy parameters in urban location-aware systems. To this end, we utilize an ontology as an approach that can manage spatial relevancy by defining a generic model. Our contribution is the introduction of an ontological model based on the principles of directed interval algebra. Indeed, it is assumed that the basic elements of our ontology are the spatial intervals for the user and his/her related contexts; the relationships between them model the spatial relevancy parameters. The implementation language for the model is OWL, the Web Ontology Language. The achieved results show that our proposed location-aware model and the application adaptation strategies provide appropriate services for the user.

Calculation of Reorder Point Level under Stochastic Parameters: A Case Study in Healthcare Area

We consider a single-echelon, single-item inventory system in which both demand and lead time are stochastic. A continuous review policy is used to control the inventory system. The objective is to calculate the reorder point level under stochastic parameters. A case study from a Neonatal Intensive Care Unit is presented.
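For reference, a common closed-form expression for the reorder point when demand per period and lead time are both stochastic and independent is ROP = mu_D*mu_L + z*sqrt(mu_L*sigma_D^2 + mu_D^2*sigma_L^2); the sketch below evaluates it for illustrative values that are not the case-study data.

```python
from math import sqrt
from scipy.stats import norm

def reorder_point(mu_d, sigma_d, mu_l, sigma_l, service_level):
    """Reorder point for independent stochastic demand and lead time,
    under a continuous review policy."""
    z = norm.ppf(service_level)
    sigma_dl = sqrt(mu_l * sigma_d ** 2 + mu_d ** 2 * sigma_l ** 2)
    return mu_d * mu_l + z * sigma_dl

# Illustrative values only (not the case-study data):
# daily demand ~ N(12, 3^2) units, lead time ~ N(4, 1^2) days, 95% service level
print(round(reorder_point(mu_d=12, sigma_d=3, mu_l=4, sigma_l=1, service_level=0.95), 1))
```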

Robustness of Hybrid Learning Acceleration Feedback Control Scheme in Flexible Manipulators

This paper describes a practical approach to the design and development of a hybrid learning with acceleration feedback control (HLC) scheme for input tracking and end-point vibration suppression of flexible manipulator systems. Initially, a collocated proportional-derivative (PD) control scheme using hub-angle and hub-velocity feedback is developed for control of the rigid-body motion of the system. This is then extended to a hybrid control scheme combining the collocated PD control with iterative learning control and acceleration feedback, in which genetic algorithms (GAs) are used to optimize the learning parameters. Experimental results for the response of the manipulator under the control schemes are presented in the time and frequency domains. The performance of the HLC is assessed in terms of input tracking, level of vibration reduction at the resonance modes, and robustness with various payloads.
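The control structure can be illustrated with a minimal discrete-time sketch: a collocated PD loop plus a P-type iterative learning update of a feedforward signal between trials. The second-order plant is a placeholder for the manipulator's hub dynamics, and the gains (which the paper tunes with GAs) are set by hand here.

```python
import numpy as np
from scipy.signal import tf2ss, cont2discrete

dt, n_steps, n_trials = 0.002, 1500, 10
t = np.arange(n_steps) * dt
ref = np.where(t < 1.0, t, 1.0)                 # ramp-then-hold hub-angle reference

# Placeholder plant: a lightly damped second-order mode, discretized with ZOH
A, B, C, D = tf2ss([25.0], [1.0, 1.0, 25.0])
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt)

kp, kd, gamma = 5.0, 0.5, 1.0                   # hand-set PD and learning gains
u_ff = np.zeros(n_steps)                        # feedforward learned across trials

for trial in range(n_trials):
    x = np.zeros((Ad.shape[0], 1))
    e = np.zeros(n_steps)
    for k in range(n_steps):
        y = (Cd @ x).item()
        e[k] = ref[k] - y
        de = (e[k] - e[k - 1]) / dt if k > 0 else 0.0
        u = kp * e[k] + kd * de + u_ff[k]       # collocated PD + learned feedforward
        x = Ad @ x + Bd * u
    u_ff = u_ff + gamma * e                     # P-type iterative learning update
    print(f"trial {trial}: RMS tracking error = {np.sqrt(np.mean(e ** 2)):.4f}")
```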

Solar Cell Parameters Estimation Using Simulated Annealing Algorithm

This paper presents a Simulated Annealing based approach to estimate solar cell model parameters. A single-diode solar cell model is used in this study to validate the outcomes of the proposed approach. The developed technique is used to estimate the model parameters, namely the generated photocurrent, saturation current, series resistance, shunt resistance and ideality factor, that govern the current-voltage relationship of a solar cell. A practical case study is used to test and verify the consistency and accuracy of the estimated parameters of the single-diode solar cell model. A comparative study among different parameter estimation techniques is presented to show the effectiveness of the developed approach.
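A sketch of the estimation setup under the single-diode model I = Iph - I0*[exp((V + I*Rs)/(n*Vt)) - 1] - (V + I*Rs)/Rsh, solved explicitly with the Lambert W function; SciPy's dual_annealing (a generalized simulated annealing) is used as a stand-in for the paper's simulated annealing routine, and the "measured" I-V data and parameter bounds are placeholders.

```python
import numpy as np
from scipy.optimize import dual_annealing
from scipy.special import lambertw

VT = 0.02569                                    # thermal voltage at ~25 degC (V)

def single_diode_current(v, iph, i0, rs, rsh, n):
    """Explicit Lambert-W solution of the implicit single-diode equation."""
    a, c = n * VT, 1.0 + rs / rsh
    w = lambertw(rs * i0 / (c * a) * np.exp((v + rs * (iph + i0)) / (c * a))).real
    return (iph + i0 - v / rsh) / c - (a / rs) * w

# Placeholder "measured" I-V data generated from known parameters plus noise
true_params = (3.0, 1e-9, 0.05, 50.0, 1.3)      # Iph, I0, Rs, Rsh, n
v_meas = np.linspace(0.0, 0.6, 25)
rng = np.random.default_rng(0)
i_meas = single_diode_current(v_meas, *true_params) + rng.normal(scale=1e-3, size=v_meas.size)

def rmse(p):
    return np.sqrt(np.mean((single_diode_current(v_meas, *p) - i_meas) ** 2))

bounds = [(1.0, 5.0), (1e-12, 1e-6), (1e-3, 0.5), (10.0, 500.0), (1.0, 2.0)]
result = dual_annealing(rmse, bounds, seed=1, maxiter=300)
print("Estimated [Iph, I0, Rs, Rsh, n]:", result.x)
```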

A Testbed for the Experiments Performed in Missing Value Treatments

The occurrence of missing values in databases is a serious problem for data mining tasks, degrading data quality and the accuracy of analyses. In this context, the area lacks standardization in experiments on missing value treatment, which makes it difficult to compare evaluations across different studies owing to the absence of common parameters. This paper proposes a testbed intended to facilitate the implementation of experiments and to provide unbiased parameters, using available datasets and suitable performance metrics, in order to streamline the evaluation and comparison of state-of-the-art missing value treatments.
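The kind of controlled comparison such a testbed enables can be sketched as follows: missingness is injected into a complete dataset at a chosen rate, several treatments are applied, and a common metric (here, imputation RMSE on the removed entries) is reported. The dataset, missingness rate and treatments are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
X_full = load_iris().data                       # a complete reference dataset

# Inject missing values completely at random at a fixed rate
rate = 0.2
mask = rng.random(X_full.shape) < rate
X_missing = X_full.copy()
X_missing[mask] = np.nan

treatments = {
    "mean": SimpleImputer(strategy="mean"),
    "median": SimpleImputer(strategy="median"),
    "kNN (k=5)": KNNImputer(n_neighbors=5),
}

# Common metric: RMSE on the artificially removed entries only
for name, imputer in treatments.items():
    X_imputed = imputer.fit_transform(X_missing)
    rmse = np.sqrt(np.mean((X_imputed[mask] - X_full[mask]) ** 2))
    print(f"{name:10s} imputation RMSE = {rmse:.3f}")
```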

Investigation of Transmission Line Overvoltages and their Reduction Approach

The two significant overvoltages in power systems, switching overvoltages and lightning overvoltages, are investigated in this paper. Firstly, the effect of various power system parameters on line energization overvoltages is evaluated by simulation in ATP. The dominant parameters include line parameters, short-circuit impedance and circuit breaker parameters. Solutions to reduce switching overvoltages are reviewed, and controlled closing using switchsync controllers is proposed as a suitable method. This paper also investigates lightning overvoltages at the overhead-cable transition. These simulations are performed in PSCAD/EMTDC. Surge arresters are applied at both ends of the cable to fulfil insulation coordination requirements. The maximum amplitude of the overvoltages inside the cable, which should be of great concern in insulation coordination studies, is surveyed.

Automatic Sleep Stage Scoring with Wavelet Packets Based on Single EEG Recording

Sleep stage scoring is the process of classifying the sleep stage that the subject is in. Sleep is classified into two states based on a constellation of physiological parameters: non-rapid eye movement (NREM) and rapid eye movement (REM) sleep. NREM sleep is further classified into four stages (1-4). These states and the state of wakefulness are distinguished from each other based on brain activity. In this work, a classification method for automated sleep stage scoring based on a single EEG recording, using wavelet packet decomposition, was implemented. Thirty-two polysomnographic recordings from the MIT-BIH database were used for training and validation of the proposed method. A single EEG recording was extracted and smoothed using a Savitzky-Golay filter. Wavelet packet decomposition up to the fourth level, based on a 20th-order Daubechies filter, was used to extract features from the EEG signal, and a vector of 54 features was formed. It was reduced to 25 features using the gain ratio method and fed into a regression tree classifier. The regression trees were trained using 67% of the available records; the records for training were selected based on cross-validation of the records. The remaining records were used for testing the classifier. The overall correct rate of the proposed method was found to be around 75%, which is acceptable compared to the techniques in the literature.
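A sketch of the feature-extraction stage, using SciPy's Savitzky-Golay filter and PyWavelets' wavelet packet decomposition (Daubechies db20, four levels); the EEG epoch is a synthetic placeholder, and the 54-feature vector, gain-ratio selection and regression-tree classifier are not reproduced.

```python
import numpy as np
import pywt
from scipy.signal import savgol_filter

def wavelet_packet_features(eeg_epoch):
    """Smooth a single-channel EEG epoch and return the log energy of each
    terminal node of a 4-level db20 wavelet packet decomposition."""
    smoothed = savgol_filter(eeg_epoch, window_length=11, polyorder=3)
    wp = pywt.WaveletPacket(data=smoothed, wavelet="db20", mode="symmetric", maxlevel=4)
    nodes = wp.get_level(4, order="freq")       # 16 terminal sub-bands
    return np.array([np.log(np.sum(node.data ** 2) + 1e-12) for node in nodes])

# Placeholder EEG epoch: 30 s at 100 Hz of synthetic data
rng = np.random.default_rng(0)
epoch = rng.normal(size=3000)
features = wavelet_packet_features(epoch)
print(features.shape, features[:4])
```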