Towards a Load Balancing Framework for an SMS-Based Service Invocation Environment

The drastic increase in the usage of SMS technology has led service providers to seek solutions that enable users of mobile devices to access services through SMS. This has resulted in proposals for SMS-based service invocation in service-oriented environments. However, the dynamic nature of service-oriented environments, coupled with sudden load peaks generated by service requests, poses performance challenges to infrastructures supporting SMS-based service invocation. To address this problem we adopt load balancing techniques. A load balancing model with adaptive load balancing and load monitoring mechanisms as its key constructs is proposed. This model then led to the realization of the Least Loaded Load Balancing Framework (LLLBF). Evaluation of LLLBF, benchmarked against a round robin (RR) scheme using a queuing approach, showed that LLLBF outperformed RR in terms of response time and throughput. However, LLLBF achieved these better results at the cost of higher processing power.
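A minimal sketch, not the paper's implementation, contrasting round-robin dispatch with a least-loaded policy over a set of worker nodes; the node names and the "pending request count" load metric are illustrative assumptions.

# Contrast between round-robin and least-loaded dispatch (illustrative only).
from itertools import cycle

class Node:
    def __init__(self, name):
        self.name = name
        self.pending = 0          # hypothetical load metric: queued requests

    def assign(self, request):
        self.pending += 1         # a load monitor would decrement on completion

nodes = [Node("worker-1"), Node("worker-2"), Node("worker-3")]
rr = cycle(nodes)

def dispatch_round_robin(request):
    node = next(rr)               # ignores current load
    node.assign(request)
    return node

def dispatch_least_loaded(request):
    node = min(nodes, key=lambda n: n.pending)   # adaptive: pick the lightest node
    node.assign(request)
    return node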

An Approach to Image Extraction and Accurate Skin Detection from Web Pages

This paper proposes a system to extract images from web pages and then detect the skin color regions of these images. As part of the proposed system, we built a toolbar named the Filter Tool Bar (FTB) using the BandObject control, by modifying Pavel Zolnikov's implementation, and we use the Yahoo! SDK API provided by the Yahoo! team, which also supports image search. In the proposed system we introduce three new methods for extracting images from web pages (after loading the web page using the proposed FTB, before physically loading the web page from localhost, and before loading the web page from any server). These methods overcome the drawback of the regular-expression method for extracting images suggested by Ilan Assayag. The second part of the proposed system is concerned with detecting the skin color regions of the extracted images. We studied two well-known skin color detection techniques: the first is based on the RGB color space and the second on the YUV and YIQ color spaces. We modified the second technique, using the saturation parameter, to overcome its failure on images with complex backgrounds and obtain more accurate skin detection results. The efficiency of the proposed system in extracting images before and after loading the web page from localhost or any server is evaluated in terms of the number of extracted images. Finally, the results of comparing the two skin detection techniques in terms of the number of pixels detected are presented.
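A hedged sketch of an RGB-rule skin classifier of the kind the first technique describes; the specific thresholds below are the commonly cited ones from the literature and may differ from the paper's exact values.

# RGB-rule skin mask (illustrative thresholds, not necessarily the paper's).
import numpy as np

def skin_mask_rgb(image):
    """image: H x W x 3 uint8 array in RGB order; returns a boolean skin mask."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    return (
        (r > 95) & (g > 40) & (b > 20)
        & (image.max(axis=-1).astype(int) - image.min(axis=-1).astype(int) > 15)
        & (np.abs(r - g) > 15) & (r > g) & (r > b)
    )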

Optimum Cascaded Design for Speech Enhancement Using Kalman Filter

Speech enhancement is the process of eliminating noise and increasing the quality of a speech signal that is contaminated with noise and other kinds of distortion. This paper develops an optimum cascaded system for speech enhancement. The aim is attained without discarding relevant speech information and without excessive computational or time complexity. The LMS algorithm, spectral subtraction and the Kalman filter are deployed as the main de-noising algorithms in this work. Since each of these algorithms has its own shortcomings, cascaded systems are designed in different combinations and evaluated by qualitative (listening) and quantitative (SNR) tests.
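A minimal sketch of the quantitative (SNR) test mentioned above: the global SNR of an enhanced signal against the clean reference. The function and variable names are assumptions.

# Global SNR in dB between a clean reference and an enhanced signal.
import numpy as np

def snr_db(clean, enhanced):
    clean = np.asarray(clean, dtype=float)
    enhanced = np.asarray(enhanced, dtype=float)
    residual = clean - enhanced                  # remaining distortion
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(residual ** 2))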

A New Model for Question Answering Systems

Most Question Answering (QA) systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems; if it does not work properly, it causes problems for the other modules. The answer processing module, moreover, is an emerging topic in Question Answering, since these systems are often required to rank and validate candidate answers, and techniques aimed at finding short, precise answers are often based on semantic classification. This paper discusses a new model for question answering that improves the two main modules, question processing and answer processing. Question processing rests on two important components. The first is question classification, which specifies the types of the question and the expected answer. The second is reformulation, which converts the user's question into a form that the QA system can understand in a specific domain. The answer processing module consists of candidate answer filtering and candidate answer ordering components, together with a validation section for interacting with the user; this makes the system better suited to finding exact answers. We describe the question and answer processing modules and model, implement and evaluate the system, which was built in two versions. Results show that Version No. 1 gave correct answers to 70% of the questions (30 correct answers to 50 asked questions) and Version No. 2 to 94% (47 correct answers to 50 asked questions).
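A structural sketch only, not the authors' code: the two improved modules expressed as a pipeline skeleton. Class and method names are illustrative assumptions.

# Skeleton of the question and answer processing modules (illustrative).
class QuestionProcessor:
    def classify(self, question):
        """Return the expected question/answer type, e.g. 'person' or 'date'."""
        ...

    def reformulate(self, question, domain):
        """Convert the user's question into a domain-specific form the system understands."""
        ...

class AnswerProcessor:
    def filter_candidates(self, candidates, answer_type):
        """Drop candidates that do not match the expected answer type."""
        ...

    def order_candidates(self, candidates):
        """Rank the remaining candidates, best first."""
        ...

    def validate(self, best_candidate, user_feedback):
        """Interact with the user to confirm or reject the proposed answer."""
        ...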

Exploiting Two Intelligent Models to Predict Water Level: A Field Study of Urmia Lake, Iran

Water level forecasting using records of past time series is important in water resources engineering and management. For example, water level affects groundwater tables in low-lying coastal areas, as well as the hydrological regimes of some coastal rivers, so a reliable prediction of sea-level variations is required in coastal engineering and hydrologic studies. During the past two decades, approaches based on Genetic Programming (GP) and Artificial Neural Networks (ANN) have been developed. In the present study, GP is used to forecast daily water level variations for a set of time intervals using observed water levels. Measurements from a single tide gauge at Urmia Lake, Northwest Iran, were used to train and validate the GP approach for the period from January 1997 to July 2008. Two statistics, the root mean square error and the correlation coefficient, are used to verify the model by comparing its results with the corresponding outputs of an Artificial Neural Network model. The results show that both artificial intelligence methodologies perform satisfactorily and can be considered as alternatives to conventional harmonic analysis.
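A sketch of the two verification statistics named above, computed with NumPy; the array names are assumptions.

# Root mean square error and correlation coefficient between observed and
# predicted water levels.
import numpy as np

def rmse(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((observed - predicted) ** 2))

def correlation(observed, predicted):
    return np.corrcoef(observed, predicted)[0, 1]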

Swarm Intelligence based Optimal Linear Phase FIR High Pass Filter Design using Particle Swarm Optimization with Constriction Factor and Inertia Weight Approach

In this paper, an optimal design of a linear phase digital high pass finite impulse response (FIR) filter using Particle Swarm Optimization with Constriction Factor and Inertia Weight Approach (PSO-CFIWA) is presented. In the design process, the filter length, pass band and stop band frequencies, and feasible pass band and stop band ripple sizes are specified. FIR filter design is a multi-modal optimization problem, and conventional gradient-based optimization techniques are not efficient for it. Given the filter specification to be realized, the PSO-CFIWA algorithm generates a set of optimal filter coefficients that tries to meet the ideal frequency response characteristic. In this paper, optimal FIR high pass filters of different orders are designed for the given problem. The simulation results are compared with those obtained by well-accepted algorithms such as the Parks-McClellan (PM) algorithm and the genetic algorithm (GA). The results show that the proposed filter design approach using PSO-CFIWA outperforms PM and GA, not only in the accuracy of the designed filter but also in convergence speed and solution quality.
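A hedged sketch of the particle update at the core of a PSO variant with both a constriction factor and an inertia weight; the parameter values are conventional choices, and the paper's FIR fitness function is not reproduced here.

# One PSO-CFIWA-style update step for a vector of FIR filter coefficients.
import numpy as np

c1, c2 = 2.05, 2.05
phi = c1 + c2                                                 # must exceed 4
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))    # constriction factor

def update_particle(x, v, pbest, gbest, w, rng):
    """x: coefficient vector, v: velocity, w: inertia weight, rng: np.random.Generator."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = chi * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    return x + v_new, v_new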

Multi-matrix Real-coded Genetic Algorithm for Minimising Total Costs in Logistics Chain Network

The importance of supply chain and logistics management has been widely recognised. Effective management of the supply chain can reduce costs and lead times and improve responsiveness to changing customer demands. This paper proposes a multi-matrix real-coded Genetic Algorithm (MRGA) based optimisation tool that minimises total costs associated with supply chain logistics. Owing to the finite capacity constraints of all parties within the chain, a Genetic Algorithm (GA) often produces infeasible chromosomes during the initialisation and evolution processes. In the proposed algorithm, a chromosome initialisation procedure and crossover and mutation operations that always guarantee feasible solutions are embedded. The proposed algorithm was tested using three sizes of benchmarking datasets of logistic chain networks, which are typical of those faced by most global manufacturing companies. A half fractional factorial design was carried out to investigate the influence of alternative crossover and mutation operators by varying GA parameters. The analysis of experimental results suggests that the quality of solutions obtained is sensitive to the ways in which the genetic parameters and operators are set.
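An illustrative sketch of one way to initialise a feasible chromosome under capacity constraints, not the authors' exact MRGA encoding: each customer's demand is split across suppliers without exceeding any supplier's remaining capacity, so no repair step is needed afterwards.

# Random feasible allocation of demands to capacitated suppliers (illustrative).
import random

def feasible_allocation(demands, capacities, rng=random):
    remaining = list(capacities)
    chromosome = [[0.0] * len(capacities) for _ in demands]   # allocation matrix
    for i, demand in enumerate(demands):
        left = demand
        suppliers = list(range(len(capacities)))
        rng.shuffle(suppliers)
        for j in suppliers:
            take = min(left, remaining[j])
            chromosome[i][j] = take
            remaining[j] -= take
            left -= take
            if left == 0:
                break
        assert left == 0, "total capacity must cover total demand"
    return chromosome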

Novel Rao-Blackwellized Particle Filter for Mobile Robot SLAM Using Monocular Vision

This paper presents a novel Rao-Blackwellized particle filter (RBPF) for mobile robot simultaneous localization and mapping (SLAM) using monocular vision. The particle filter is combined with an unscented Kalman filter (UKF) to extend the path posterior by sampling new poses that integrate the current observation, which drastically reduces the uncertainty about the robot pose. Landmark position estimation and update are also implemented through the UKF. Furthermore, the number of resampling steps is determined adaptively, which greatly reduces the particle depletion problem, and evolution strategies (ES) are introduced to avoid particle impoverishment. The 3D natural point landmarks are constructed from matching Scale Invariant Feature Transform (SIFT) feature pairs, and the matching of high-dimensional SIFT features is implemented with a KD-tree at a time cost of O(log2 N). Experimental results on a real robot in our indoor environment show the advantages of our methods over previous approaches.
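A sketch of the adaptive resampling test commonly used in RBPF-SLAM: resample only when the effective sample size drops below a threshold. The N/2 threshold is a conventional choice, not necessarily the paper's.

# Effective-sample-size test for adaptive resampling of particle weights.
import numpy as np

def needs_resampling(weights, threshold_ratio=0.5):
    w = np.asarray(weights, float)
    w = w / w.sum()                      # normalised importance weights
    n_eff = 1.0 / np.sum(w ** 2)         # effective number of particles
    return n_eff < threshold_ratio * len(w)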

Model Order Reduction of Linear Time Variant High Speed VLSI Interconnects using Frequency Shift Technique

Accurate modeling of high speed RLC interconnects has become a necessity for addressing signal integrity issues in current VLSI design. To accurately model a dispersive system of interconnects at higher frequencies, a full-wave analysis is required. However, conventional circuit simulation of interconnects with full-wave models is extremely CPU-intensive. We present an algorithm for reducing large VLSI circuits to much smaller ones with similar input-output behavior. A key feature of our method, called the Frequency Shift Technique, is that it is capable of reducing linear time-varying systems. This enables it to capture frequency-translation and sampling behavior, important in communication subsystems such as mixers, RF components and switched-capacitor filters. Reduction is obtained by projecting the original system, described by linear differential equations, onto a lower-dimensional subspace. Experiments carried out using the Cadence Design Simulator indicate that the proposed technique achieves a greater percentage reduction with less CPU time than other model order reduction techniques in the literature. We also present applications to RF circuit subsystems, obtaining size reductions and evaluation speedups of orders of magnitude with insignificant loss of accuracy.
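A generic sketch of projection-based model order reduction for a linear system x' = A x + B u, y = C x, using an orthonormal basis obtained from state snapshots; this illustrates the projection idea only and is not the paper's frequency-shift construction for time-varying systems.

# Projection of a linear state-space model onto a reduced basis (illustrative).
import numpy as np

def reduce_model(A, B, C, snapshots, r):
    """snapshots: n x m matrix of representative states; r: reduced order."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = U[:, :r]                         # orthonormal basis for the dominant state directions
    A_r = V.T @ A @ V                    # reduced dynamics
    B_r = V.T @ B
    C_r = C @ V
    return A_r, B_r, C_r, V              # the full state is approximated by V @ z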

Flexible Sensor Array with Programmable Measurement System

This study is concerned with pH solution detection using a 2 × 4 flexible sensor array based on a plastic polyethylene terephthalate (PET) substrate coated with a conductive layer and a ruthenium dioxide (RuO2) sensitive membrane using screen-printing and RF sputtering technologies. For data analysis, we also prepared a dynamic measurement system for acquiring the response voltage and analyzing the characteristics of the working electrodes (WEs), such as sensitivity and linearity. An array measurement system, based on digital signal processing (DSP), was designed to acquire the original signal from the sensor array; the DSP converts the unstable acquired data into a direct current (DC) output using a digital filter. With the measurement and analysis system designed in our laboratory, the sensor array achieved a satisfactory yield of 62.5%.
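A sketch of how sensitivity and linearity of a working electrode are typically quantified from the acquired response voltages: a least-squares line of voltage versus pH gives the sensitivity (slope, in mV/pH) and the correlation coefficient gives the linearity. The function and argument names are assumptions.

# Sensitivity and linearity of a pH electrode from calibration data.
import numpy as np

def sensitivity_and_linearity(ph_values, voltages_mv):
    slope, intercept = np.polyfit(ph_values, voltages_mv, 1)
    linearity = np.corrcoef(ph_values, voltages_mv)[0, 1]
    return slope, linearity              # mV/pH, correlation coefficient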

A Study of Neuro-Fuzzy Inference System for Gross Domestic Product Growth Forecasting

In this paper we present an Adaptive Neuro-Fuzzy Inference System (ANFIS), with lagged values of the dependent variable as inputs, for the prediction of the Gross Domestic Product (GDP) growth rate in six countries. We compare the results with those of an Autoregressive (AR) model. We conclude that the forecasting performance of the neuro-fuzzy system in the out-of-sample period is superior and that it can be a very useful alternative tool for national statistical services and the banking and finance industry.
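A sketch of the shared input construction: both the ANFIS and the AR baseline are fed lagged values of the GDP growth series. The lag order p is an assumption, and the ANFIS itself (membership functions, rule base, training) is omitted.

# Lagged-input matrix and a simple least-squares AR(p) baseline (illustrative).
import numpy as np

def lagged_matrix(series, p):
    """Rows: [y_{t-1}, ..., y_{t-p}]; target: y_t."""
    y = np.asarray(series, float)
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    target = y[p:]
    return X, target

def ar_fitted_values(series, p):
    """In-sample one-step-ahead fits of an AR(p) model estimated by least squares."""
    X, target = lagged_matrix(series, p)
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, target, rcond=None)
    return X1 @ coef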

Physico-chemical Treatment of Tar-Containing Wastewater Generated from Biomass Gasification Plants

Treatment of tar-containing wastewater is necessary for the successful operation of biomass gasification plants (BGPs). In the present study, tar-containing wastewater was treated using lime and alum for the removal of inorganics, followed by adsorption on powdered activated carbon (PAC) for the removal of organics. Lime-alum experiments were performed in a jar apparatus and activated carbon studies were performed in an orbital shaker. At optimum concentrations, both lime and alum individually proved capable of removing color, total suspended solids (TSS) and total dissolved solids (TDS), but in both cases pH adjustment had to be carried out after treatment. The combination of lime and alum at a dose ratio of 0.8:0.8 g/L was found to be optimum for the removal of inorganics. The removal efficiencies achieved at optimum concentrations were 78.6, 62.0, 62.5 and 52.8% for color, alkalinity, TSS and TDS, respectively. The major advantages of the lime-alum combination were that no pH adjustment was required before or after treatment and that the sludge settled well. Coagulation-precipitation followed by adsorption on PAC resulted in 92.3% chemical oxygen demand (COD) removal and 100% phenol removal at equilibrium. Ammonia removal efficiency was found to be 11.7% during coagulation-flocculation and 36.2% during adsorption on PAC. Adsorption of organics on PAC in terms of COD and phenol followed the Freundlich isotherm with Kf = 0.55 and 18.47 mg/g and n = 1.01 and 1.45, respectively. This technology may prove to be one of the fastest and most techno-economically feasible methods for the treatment of tar-containing wastewater generated from BGPs.
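For reference, the Freundlich isotherm cited above in the form commonly fitted to such adsorption data; the symbols q_e (amount adsorbed per unit mass of PAC, mg/g) and C_e (equilibrium concentration, mg/l) are the standard notation and an assumption here, not taken from the paper.

q_e = K_f \, C_e^{1/n}

with the reported fits K_f = 0.55 mg/g, n = 1.01 for COD and K_f = 18.47 mg/g, n = 1.45 for phenol.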

Adaptive Non-linear Filtering Technique for Image Restoration

Removing noise from processed images is very important, and noise should be removed in such a way that important image information is preserved. A decision-based nonlinear algorithm for the elimination of band lines, drop lines, marks, band loss and impulses in images is presented in this paper. The algorithm performs two simultaneous operations, namely detection of corrupted pixels and evaluation of new values to replace them. Removal of these artifacts is achieved without damaging edges and details. However, the restricted window size renders the median operation less effective whenever noise is excessive; in that case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio Improvement (SNRI), Percentage of Noise Attenuated (PONA), and Percentage of Spoiled Pixels (POSP), compared with standard algorithms already in use, and the improved performance of the proposed algorithm is demonstrated. The advantage of the proposed algorithm is that a single algorithm can replace the several independent algorithms otherwise required for the removal of different artifacts.
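A hedged sketch of the decision-based idea described above, not the authors' exact detector: a pixel flagged as corrupted (here, a saturated impulse at 0 or 255) is replaced by the median of the uncorrupted neighbours in a small window; if too few uncorrupted neighbours remain, the filter falls back to the window mean.

# Decision-based median filter with a mean-filter fallback (illustrative).
import numpy as np

def decision_based_filter(img, win=3, min_good=3):
    img = np.asarray(img, float)
    out = img.copy()
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    corrupted = (img <= 0) | (img >= 255)          # assumed impulse detector
    for i, j in zip(*np.nonzero(corrupted)):
        window = padded[i:i + win, j:j + win].ravel()
        good = window[(window > 0) & (window < 255)]
        out[i, j] = np.median(good) if good.size >= min_good else window.mean()
    return out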

Dynamic Optimization of Industrial Servomechanisms using Motion Laws Based On Bezier Curves

The motion planning procedure described in this paper has been developed to eliminate or reduce the residual vibrations of electromechanical positioning systems, without increasing the motion time (usually imposed by production requirements) or introducing extra time for vibration damping. The proposed technique is based on a suitable choice of the motion law assigned to the servomotor that drives the mechanism. The reference profile is defined by a Bezier curve, whose shape can be easily changed by modifying a few numerical parameters. By means of an optimization technique, these parameters can be modified without altering the continuity conditions imposed on the displacement and its time derivatives at the initial and final time instants.
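A minimal sketch of evaluating a Bezier motion profile from its control points via Bernstein polynomials; the control points (the numerical parameters tuned by the optimization) are placeholders, not values from the paper.

# Bezier curve evaluation over normalised time u in [0, 1].
from math import comb

def bezier(control_points, u):
    """Evaluate a 1-D Bezier curve at u in [0, 1]."""
    n = len(control_points) - 1
    return sum(comb(n, k) * (1 - u) ** (n - k) * u ** k * p
               for k, p in enumerate(control_points))

# Example: a displacement law s(u) sampled along normalised time.
profile = [bezier([0.0, 0.0, 0.2, 0.8, 1.0, 1.0], t / 100) for t in range(101)]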

Studies on Race Car Aerodynamics at Wing in Ground Effect

Numerical studies of race car aerodynamics with a wing in ground effect have been carried out using a steady, 3D, double-precision, pressure-based solver with the standard k-epsilon turbulence model. Through various parametric studies we observed that, at a particular speed and ground clearance of the wings, a favorably high negative lift was obtained at a particular angle of attack for all the physical models considered in this paper. If the ratio of ground clearance height to chord length (h/c) is too small, the boundary layers developing on either side (the ground and the lower surface of the wing) can interact, leading to an altered variation of the aerodynamic characteristics of the wing in ground effect. Therefore a suitable ground clearance must be predicted throughout the race for better performance of the race car, which depends on the coupled effects of the topography, the wing orientation with respect to the ground, the incoming flow features and/or the race car speed. We conclude that, for the design of high-performance, high-speed race cars, adjustable wings capable of altering the ground clearance and the angle of attack are the best design option for racing safely at variable speeds.

A Method under Uncertain Information for the Selection of Students in Interdisciplinary Studies

We present a method for the selection of students in interdisciplinary studies based on the hybrid averaging operator. We assume that the available information in the problem is uncertain, so it is necessary to use interval numbers. We therefore suggest a new type of hybrid aggregation called the uncertain induced generalized hybrid averaging (UIGHA) operator, an aggregation operator that considers the weighted average (WA) and the ordered weighted averaging (OWA) operator in the same formulation. We are thus able to consider the degree of optimism of the decision maker and grades of importance in the same approach. By using interval numbers, we can represent the information through the best and worst possible results, so the decision maker gets a more complete view of the decision problem. We develop an illustrative example of the proposed scheme for the selection of students in interdisciplinary studies and see that the UIGHA operator gives a more complete representation of the selection problem; the decision maker is then able to consider a wide range of alternatives depending on his interests. We also outline other potential applications of the UIGHA operator in educational selection problems involving different types of resources, such as students and professors.
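A hedged sketch of one common hybrid-averaging formulation that mixes a weighted average (importance weights omega) with an OWA reordering (position weights w), applied component-wise to interval arguments [lo, hi]. This illustrates the idea only; the induced and generalized extensions that define the full UIGHA operator are omitted, and the example numbers are placeholders.

# Hybrid average of interval-valued scores (illustrative formulation).
def hybrid_average_intervals(intervals, omega, w):
    n = len(intervals)
    # scale each interval by n * omega_i (weighted-average part)
    scaled = [(n * o * lo, n * o * hi) for (lo, hi), o in zip(intervals, omega)]
    # reorder by interval midpoint, largest first (OWA part)
    ordered = sorted(scaled, key=lambda ab: (ab[0] + ab[1]) / 2, reverse=True)
    lo = sum(wj * a for wj, (a, _) in zip(w, ordered))
    hi = sum(wj * b for wj, (_, b) in zip(w, ordered))
    return lo, hi

# Example: three students scored as intervals, equal importance, optimistic OWA weights.
print(hybrid_average_intervals([(60, 70), (55, 80), (75, 85)],
                               omega=[1/3, 1/3, 1/3], w=[0.5, 0.3, 0.2]))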

A New Version of Unscented Kalman Filter

This paper presents a new algorithm which yields a nonlinear state estimator called the iterated unscented Kalman filter. This state estimator makes use of both statistical and analytical linearization techniques in different parts of the filtering process. It outperforms three other nonlinear state estimators, the unscented Kalman filter (UKF), the extended Kalman filter (EKF) and the iterated extended Kalman filter (IEKF), when there is severe nonlinearity in the system equation and less nonlinearity in the measurement equation. The algorithm's performance is verified through simulation results.
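A sketch of the unscented transform that both the UKF and its iterated variant build on: generating 2n+1 sigma points and their weights from a mean and covariance. The scaling parameters are the usual textbook choices, not necessarily the paper's.

# Sigma points and weights of the scaled unscented transform.
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)                   # matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))            # mean weights
    wc = wm.copy()                                            # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha ** 2 + beta)
    return np.array(pts), wm, wc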

Pakistan Sign Language Recognition Using Statistical Template Matching

Sign language recognition has been a topic of research since the first data glove was developed. Many researchers have attempted to recognize sign language through various techniques; however, none of them has ventured into the area of Pakistan Sign Language (PSL). The Boltay Haath project aims at recognizing PSL gestures using Statistical Template Matching. The primary input device is the DataGlove5 developed by 5DT. Alternative approaches use camera-based recognition, which, being sensitive to environmental changes, is not always a good choice. This paper explains the use of Statistical Template Matching for gesture recognition in Boltay Haath. The system recognizes one-handed alphabet signs from PSL.
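An illustrative sketch of statistical template matching for glove-based sign recognition: each sign's template is the per-sensor mean and spread of its training samples, and a new glove reading is assigned to the closest template. This shows the general idea only, not the Boltay Haath internals; all names are assumptions.

# Build per-sign templates and classify a glove reading by nearest template.
import numpy as np

def build_templates(samples_by_sign):
    """samples_by_sign: dict sign -> array of shape (num_samples, num_sensors)."""
    return {sign: (s.mean(axis=0), s.std(axis=0) + 1e-6)
            for sign, s in samples_by_sign.items()}

def recognise(reading, templates):
    def distance(sign):
        mean, std = templates[sign]
        return np.sum(((reading - mean) / std) ** 2)   # variance-normalised distance
    return min(templates, key=distance)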

Hydrolytic Properties of Ellagic Acid in Commercial Pomegranate Juices

Pomegranate and pomegranate juices (PJs) have attracted great attention for their health benefits in recent years. As there is increasing interest in the potential health benefits of ellagic acid, it is of great interest to evaluate alterations in the ellagic acid concentration of commercial PJs. The purpose of this study is to analyze the total phenolic, free and total ellagic acid content of six commercial PJs sold in Turkish markets using HPLC. The results showed that some commercial PJs had markedly high total phenolic and ellagic acid content. The total phenolic content of the commercial PJs ranges from 796.71 to 4608.91 mg GAE/l, and the free ellagic acid content ranges from 27.64 to 111.78 mg/l. Samples were hydrolyzed with concentrated HCl at 93 °C for 2 and 24 hours, and the influences of temperature and time on hydrolysis were investigated. Thermal processing for pasteurization increased ellagic acid via ellagitannin hydrolysis.

Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise

In order to enhance the contrast in regions where pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, producing pixels that are too bright or too dark, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram to a limited extent determined by its mean and variance. The final image is obtained as the weighted sum of the equalized images produced by the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since these regions are equalized separately. The paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images with various contrasts, and the results are compared with conventional approaches to show its superiority.
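A hedged sketch of the sub-histogram idea: split the intensity range at chosen segmentation points and equalize each sub-histogram only within its own bounds, so pixels cannot be pushed far outside their original brightness band. The fixed split points below are placeholders; the paper's mean/variance-based limits and weighted recombination are not reproduced here.

# Per-sub-range histogram equalization of an 8-bit grayscale image (illustrative).
import numpy as np

def sub_histogram_equalize(img, split_points=(85, 170)):
    img = np.asarray(img)
    out = np.zeros_like(img)
    bounds = [0, *split_points, 256]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        hist, _ = np.histogram(img[mask], bins=hi - lo, range=(lo, hi))
        cdf = hist.cumsum() / hist.sum()
        out[mask] = lo + np.round(cdf[img[mask] - lo] * (hi - lo - 1)).astype(img.dtype)
    return out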