Abstract: One of the determinants of a firm's prosperity is the customers' perceived service quality and satisfaction. While service quality is wide in scope and consists of various dimensions, these dimensions may differ in their relative importance in shaping customers' overall satisfaction with service quality. Identifying the relative rank of the different dimensions of service quality is important because it can help managers find out which service dimensions have a greater effect on customers' overall satisfaction. Such insight in turn leads to more effective resource allocation and, ultimately, to higher levels of customer satisfaction. Despite its criticality, this issue has not received enough attention so far. Therefore, using a sample of 240 bank customers in Iran, an artificial neural network is developed to address this gap in the literature. Since customers' evaluation of service quality is a subjective process, artificial neural networks, as a brain metaphor, may have the potential to model such a complicated process. Proposing a neural network able to predict customers' overall satisfaction with service quality with a promising level of accuracy is the first contribution of this study. In addition, prioritizing the service quality dimensions by their effect on customers' overall satisfaction, using sensitivity analysis of the neural network, is the second important finding of this paper.
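The sensitivity analysis used to rank input dimensions can be illustrated with a simple perturbation scheme: each input is nudged up and down and the change in the model's output is measured. A minimal sketch follows; the single tanh unit, its weights, and the input values are hypothetical stand-ins for a trained network, not the study's model.

```python
import math

def predict(x):
    """Toy stand-in for a trained network: one tanh unit whose weights
    (hypothetical) make dimension 1 the most influential input."""
    w = [0.2, 0.9, 0.4]
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)))

def sensitivities(model, x, eps=1e-4):
    """Central-difference sensitivity of the output to each input."""
    scores = []
    for i in range(len(x)):
        up, dn = list(x), list(x)
        up[i] += eps
        dn[i] -= eps
        scores.append(abs(model(up) - model(dn)) / (2 * eps))
    return scores

x = [0.5, 0.5, 0.5]
scores = sensitivities(predict, x)
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
print(ranking)  # most influential dimension first
```

Ranking the dimensions by these scores is what lets a manager read off which service quality dimension moves overall satisfaction the most.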
Abstract: Since the pioneering work of Zadeh, fuzzy set theory has been applied to a myriad of areas. Song and Chissom introduced the concept of fuzzy time series and applied some methods to the enrollments of the University of Alabama. In recent years, a number of techniques have been proposed for forecasting based on fuzzy set theory methods. These methods have either used enrollment numbers or differences of enrollments as the universe of discourse. We propose using the year to year percentage change as the universe of discourse. In this communication, the approach of Jilani, Burney, and Ardil is modified by using the year to year percentage change as the universe of discourse. We use enrollment figures for the University of Alabama to illustrate our proposed method. The proposed method results in better forecasting accuracy than existing models.
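The percentage-change universe of discourse described above can be sketched in a few lines: compute the year-to-year percentage changes, partition their range into equal intervals, and fuzzify each change by interval membership. The enrollment figures and the three-interval partition below are illustrative, not the paper's actual setup.

```python
def pct_changes(series):
    """Year-to-year percentage change: 100 * (y_t - y_{t-1}) / y_{t-1}."""
    return [100.0 * (b - a) / a for a, b in zip(series, series[1:])]

def partition(changes, n_intervals):
    """Split the universe of discourse [min, max] into equal intervals."""
    lo, hi = min(changes), max(changes)
    width = (hi - lo) / n_intervals
    return [(lo + i * width, lo + (i + 1) * width) for i in range(n_intervals)]

def fuzzify(value, intervals):
    """Index of the fuzzy set (interval) a percentage change falls into."""
    for i, (lo, hi) in enumerate(intervals):
        if lo <= value <= hi:
            return i
    return len(intervals) - 1   # guard against float round-off at the top end

enrollments = [13055, 13563, 13867, 14696]   # illustrative figures
changes = pct_changes(enrollments)
labels = [fuzzify(c, partition(changes, 3)) for c in changes]
print(labels)  # each change mapped to one of three fuzzy sets
```

A full forecaster would then build fuzzy logical relationships between consecutive labels and defuzzify to recover an enrollment figure.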
Abstract: Much work has been done on predicting the fault proneness of software systems. However, the severity of the faults matters more than the number of faults in the developed system, since major faults matter most to a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. A neuro-fuzzy-based predictor model is applied to NASA's public-domain defect dataset, coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them; CFS is therefore used to select the metrics most highly correlated with the level of severity of faults. The results are compared with the prediction results of Logistic Model Trees (LMT), earlier quoted as the best technique in [17], and are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy-based model provides relatively better prediction accuracy than the other models and can therefore be used to model the level of impact of faults in function-based systems.
Abstract: In this paper we propose a robust environmental sound classification approach based on spectrogram features derived from log-Gabor filters. This approach includes two methods. In the first method, the spectrograms are passed through an appropriate log-Gabor filter bank, and the outputs are averaged and then subjected to an optimal feature selection procedure based on a mutual information criterion. The second method applies the same steps, but only to three patches extracted from each spectrogram.
To investigate the accuracy of the proposed methods, we conduct experiments using a large database containing 10 environmental sound classes. The classification results, obtained with multiclass Support Vector Machines, show that the second method is the most efficient, with an average classification accuracy of 89.62%.
Abstract: The information revealed by derivatives can help to
better characterize digital near-end crosstalk signatures with the
ultimate goal of identifying the specific aggressor signal.
Unfortunately, derivatives tend to be very sensitive to even low
levels of noise. In this work we approximated the derivatives of both
quiet and noisy digital signals using a wavelet-based technique. The
results are presented for Gaussian digital edges, IBIS Model digital
edges, and digital edges in oscilloscope data captured from an actual
printed circuit board. Tradeoffs between accuracy and noise
immunity are presented. The results show that the wavelet technique
can produce first derivative approximations that are accurate to
within 5% or better, even under noisy conditions. The wavelet
technique can be used to calculate the derivative of a digital signal
edge when conventional methods fail.
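The wavelet technique itself is not detailed in the abstract; as a hedged illustration of the same idea (estimating a derivative while suppressing noise by convolving with a smooth, compactly supported kernel), here is a derivative-of-Gaussian filter. The kernel width and radius are arbitrary choices, not values from the paper.

```python
import math

def gaussian_deriv_kernel(sigma, radius):
    """Samples of the first derivative of a Gaussian; convolving a signal
    with this kernel estimates a smoothed first derivative."""
    return [-(t / sigma**2) * math.exp(-t * t / (2 * sigma**2))
            for t in range(-radius, radius + 1)]

def smoothed_derivative(signal, sigma=2.0, radius=6):
    """Noise-tolerant derivative estimate for the interior samples."""
    offsets = list(range(-radius, radius + 1))
    kernel = gaussian_deriv_kernel(sigma, radius)
    # Normalize so that a unit-slope ramp yields derivative exactly 1.
    norm = sum(-t * k for t, k in zip(offsets, kernel))
    out = []
    for i in range(radius, len(signal) - radius):
        acc = sum(k * signal[i - t] for t, k in zip(offsets, kernel))
        out.append(acc / norm)
    return out

ramp = [0.5 * i for i in range(30)]   # slope 0.5, no noise
print(smoothed_derivative(ramp)[:3])
```

On a noisy edge, the Gaussian smoothing built into the kernel plays the same role as the scale parameter of a wavelet: wider kernels trade edge-localization accuracy for noise immunity.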
Abstract: The nature of consumer products makes future demand difficult to forecast, and forecast accuracy significantly affects the overall performance of the supply chain system. In this study, two data mining methods, artificial neural networks (ANN) and support vector machines (SVM), were used to predict the demand for consumer products. The training data were the actual demand for six different products from a consumer product company in Thailand. The results indicated that SVM had better forecast quality (in terms of MAPE) than ANN in every product category. Moreover, another important finding was that the gap in MAPE between the two methods was significantly larger when the data were highly correlated.
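The MAPE used to compare the two forecasters is simply the mean of the absolute percentage errors; the demand and forecast figures below are hypothetical.

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

actual   = [100.0, 200.0, 400.0]   # hypothetical demand
forecast = [110.0, 190.0, 380.0]   # hypothetical model output
print(round(mape(actual, forecast), 2))  # prints 6.67
```

Because each error is scaled by the actual demand, MAPE lets forecast quality be compared across products with very different volumes, which is why it suits the six-product comparison here.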
Abstract: A simple method for the simultaneous determination of hippuric acid and benzoic acid in urine using reversed-phase high performance liquid chromatography is described. Chromatography was performed on a Nova-Pak C18 (3.9 x 150 mm) column with a mobile phase of methanol : water : acetic acid (20:80:0.2) and UV detection at 254 nm. The calibration curve was linear over the concentration range of 0.125 to 6.0 mg/ml for both hippuric acid and benzoic acid. The recovery, accuracy and coefficient of variation were 104.54%, 0.2% and 0.2%, respectively, for hippuric acid, and 98.48%, 1.25% and 0.60%, respectively, for benzoic acid. The detection limit of this method was 0.01 ng/l for hippuric acid and 0.06 ng/l for benzoic acid. The method has been applied to the analysis of urine samples from suspected toluene abusers or glue sniffers among secondary school students in Johor Bahru.
Abstract: This paper presents a new algorithm for pilot-based channel estimation in OFDM systems for the new generation of high-data-rate communication systems. In orthogonal frequency division multiplexing (OFDM) systems over fast-varying fading channels, channel estimation and tracking is generally carried out by transmitting known pilot symbols at given positions of the frequency-time grid. We propose an improved algorithm based on calculating the mean and the variance of adjacent pilot signals for a specific distribution of the pilots in the OFDM frequency-time grid, and then deriving all the unknown channel coefficients from the mean and variance equations. Simulation results show that the performance of the OFDM system improves as the channel length increases, since the accuracy of the estimated channel increases under this low-complexity algorithm. Moreover, fewer pilot signals need to be inserted into the OFDM signal, which increases the throughput of the OFDM system compared with other pilot distributions such as comb-type and block-type channel estimation.
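The paper's mean/variance algorithm is not spelled out in the abstract, but the conventional baseline it improves on, least-squares estimation at the pilot subcarriers followed by interpolation across the data subcarriers, can be sketched as follows. The pilot positions and symbol values are hypothetical.

```python
def ls_pilot_estimates(received, sent, pilot_idx):
    """Least-squares channel estimate H_p = Y_p / X_p at each pilot."""
    return {k: received[k] / sent[k] for k in pilot_idx}

def interpolate(h_pilots, n_subcarriers):
    """Linearly interpolate channel coefficients between pilot subcarriers."""
    idx = sorted(h_pilots)
    h = []
    for k in range(n_subcarriers):
        lo = max((i for i in idx if i <= k), default=idx[0])
        hi = min((i for i in idx if i >= k), default=idx[-1])
        if lo == hi:
            h.append(h_pilots[lo])
        else:
            t = (k - lo) / (hi - lo)
            h.append((1 - t) * h_pilots[lo] + t * h_pilots[hi])
    return h

received = {0: 1 + 0j, 4: 3 + 0j}   # hypothetical pilot observations
sent     = {0: 1 + 0j, 4: 1 + 0j}   # known pilot symbols
h = interpolate(ls_pilot_estimates(received, sent, [0, 4]), 5)
print(h)
```

Any scheme of this family trades pilot density against estimation accuracy; the abstract's claim is that exploiting statistics of adjacent pilots lets the same accuracy be reached with fewer pilots.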
Abstract: Discovering new biological knowledge from high-throughput biological data is a major challenge to bioinformatics today. To address this challenge, we developed a new approach for protein classification. Proteins that are evolutionarily, and thereby functionally, related are said to belong to the same classification. Identifying protein classifications is of fundamental importance for documenting the diversity of the known protein universe. It also provides a means to determine the functional roles of newly discovered protein sequences. Our goal is to predict the functional classification of novel protein sequences based on a set of features extracted from each protein sequence. The proposed technique uses datasets extracted from the Structural Classification of Proteins (SCOP) database. A set of spectral domain features based on the Fast Fourier Transform (FFT) is used. The proposed classifier uses a multilayer back propagation (MLBP) neural network for protein classification. The maximum classification accuracy is about 91% when applying the classifier to the full four levels of the SCOP database, but it reaches a maximum of 96% when the classification is limited to the family level. The classification results reveal that the spectral domain contains information that can be used for classification with high accuracy. In addition, the results emphasize that sequence similarity measures are of great importance, especially at the family level.
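One common way to obtain FFT-based spectral features from a protein sequence, not necessarily the exact feature set used here, is to map each residue to a numeric physicochemical property and take the magnitudes of the low-frequency DFT coefficients. The sketch below uses Kyte-Doolittle hydropathy values for a small subset of amino acids.

```python
import cmath

# Kyte-Doolittle hydropathy values for a subset of amino acids;
# a full feature extractor would cover all twenty residues.
HYDRO = {'A': 1.8, 'C': 2.5, 'D': -3.5, 'G': -0.4, 'K': -3.9, 'L': 3.8}

def dft_magnitudes(signal, n_features):
    """Magnitudes of the first n_features DFT coefficients."""
    N = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n, x in enumerate(signal)))
            for k in range(n_features)]

def spectral_features(sequence, n_features=4):
    """Encode a sequence numerically, then take low-frequency |DFT| terms."""
    signal = [HYDRO.get(aa, 0.0) for aa in sequence]
    return dft_magnitudes(signal, n_features)

feats = spectral_features("ACDGKLAC")
print([round(f, 3) for f in feats])
```

Magnitude spectra are invariant to cyclic shifts of the encoded signal, which is one reason spectral features can capture family-level similarity better than position-by-position comparison.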
Abstract: Chronic hepatitis B can evolve into cirrhosis and liver cancer. Interferon is the only effective treatment, for carefully selected patients, but it is very expensive. Some of the selection criteria are based on liver biopsy, an invasive, costly and painful medical procedure. Therefore, developing efficient non-invasive selection systems could benefit patients and also save money. We investigated the possibility of creating intelligent systems to assist the Interferon therapeutic decision, mainly by predicting the results of the biopsy with acceptable accuracy. We applied knowledge discovery to integrated medical data: imaging, clinical, and laboratory data. The resulting intelligent systems, tested on 500 patients with chronic hepatitis B and based on C5.0 decision trees and boosting, predict the results of the liver biopsy with 100% accuracy. Moreover, by integrating the other patient selection criteria, they offer non-invasive support for the correct Interferon therapeutic decision. To the best of our knowledge, these decision systems outperform all similar systems published in the literature, and offer a realistic opportunity to replace liver biopsy in this medical context.
Abstract: This work addresses the problem of multisensor data fusion under non-Gaussian channel noise. Advanced M-estimates are known to be a robust solution, at the cost of some accuracy. To improve estimation accuracy while maintaining equivalent robustness, a two-stage robust fusion algorithm is proposed: preliminary rejection of outliers followed by an optimal linear fusion. Numerical experiments show that the proposed algorithm is equivalent to the M-estimates when the local estimates are uncorrelated and significantly outperforms the M-estimates when the local estimates are correlated.
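The two-stage idea can be sketched directly: reject local estimates that deviate too far from the median (a median-absolute-deviation test, chosen here for illustration), then combine the survivors with inverse-variance weights, which is the optimal linear fusion for uncorrelated estimates. The sensor values, variances, and threshold factor below are hypothetical.

```python
def fuse(estimates, variances, k=3.0):
    """Two-stage fusion: reject outliers against the median, then fuse
    the survivors with inverse-variance (optimal linear) weights."""
    med = sorted(estimates)[len(estimates) // 2]
    mad = sorted(abs(e - med) for e in estimates)[len(estimates) // 2]
    thresh = k * (mad if mad > 0 else 1e-12)
    kept = [(e, v) for e, v in zip(estimates, variances)
            if abs(e - med) <= thresh]
    weights = [1.0 / v for _, v in kept]
    return sum(w * e for w, (e, _) in zip(weights, kept)) / sum(weights)

est = [10.1, 9.9, 10.0, 42.0]   # hypothetical local estimates; last is an outlier
var = [1.0, 1.0, 1.0, 1.0]
print(fuse(est, var))
```

The preliminary rejection is what keeps the gross outlier from corrupting the linear stage, while the linear stage recovers the accuracy that a pure M-estimate trades away.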
Abstract: The goal of this paper is to develop a model that integrates pricing and advertisement for short-life-cycle products, such as branded fashion clothing. To achieve this goal, we apply the concept of dynamic pricing. There are two classes of advertisements: for the brand (regardless of product) and for a particular product. Advertising the brand affects the demand and price of all the products, so the model considers all these products in relation to each other. We develop two different methods to integrate both types of advertisement with pricing. The first is developed within the framework of dynamic programming; however, due to the complexity of the model, this method is not applicable to large problems. We therefore develop a second method, a hierarchical approach, which is capable of handling real-world problems. Finally, we show the accuracy of this method both theoretically and by simulation.
Abstract: Retinal vascularity assessment plays an important role in the diagnosis of ophthalmic pathologies. The use of digital images for this purpose makes a computerized approach possible and has motivated the development of many methods for automated vascular tree segmentation. Metrics based on contingency tables for binary classification have been widely used for evaluating the performance of these algorithms and, specifically, accuracy has mostly been used as the measure of global performance in this field. However, this metric matches human perception poorly and has other notable deficiencies. Here, a new similarity function for measuring the quality of retinal vessel segmentations is proposed, based on characterizing the vascular tree as a connected structure with a measurable area and length. Tests indicate that this new approach behaves better than the current one. More generally, this concept of measuring descriptive properties may be used to design functions that more successfully measure the segmentation quality of other complex structures.
Abstract: In this paper, two matrix iterative methods are presented to solve the matrix equation $A_1X_1B_1 + A_2X_2B_2 + \cdots + A_lX_lB_l = C$, the minimum residual problem $\min_{X_i \in BR^{n_i \times n_i}} \left\| \sum_{i=1}^{l} A_iX_iB_i - C \right\|_F$, and the matrix nearness problem $\min_{[X_1, X_2, \ldots, X_l] \in S_E} \left\| [X_1, X_2, \ldots, X_l] - [\tilde{X}_1, \tilde{X}_2, \ldots, \tilde{X}_l] \right\|_F$, where $BR^{n_i \times n_i}$ is the set of bisymmetric matrices and $S_E$ is the solution set of the above matrix equation or minimum residual problem. These matrix iterative methods have a faster convergence rate and higher accuracy than earlier methods. Paige's algorithms are used as the frame methods for deriving these matrix iterative methods. A numerical example illustrates the efficiency of the new methods.
Abstract: Convex hull algorithms have been extensively studied in the literature, principally because of their wide range of applications in different areas. This article presents an efficient algorithm that constructs an approximate convex hull from a set of n points in the plane in O(n + k) time, where k is the approximation error control parameter. The proposed algorithm is suitable for applications that prefer to trade some accuracy for reduced computation time, such as animation and interaction in computer graphics, where rapid, real-time rendering is indispensable.
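The O(n + k) approximate hull idea can be illustrated with the classic strip-based reduction in the style of Bentley, Faust, and Preparata; the sketch below is an illustration of that family, not the article's own algorithm. Points are bucketed into k vertical strips, only the extreme-y points of each strip plus the global x-extremes are kept, and an exact hull is run on the O(k) survivors.

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def monotone_chain(pts):
    """Exact convex hull (Andrew's monotone chain), counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def approx_hull(pts, k):
    """Bucket points into k vertical strips, keep each strip's y-extremes
    and the global x-extremes, then hull only those survivors."""
    xmin = min(p[0] for p in pts)
    xmax = max(p[0] for p in pts)
    width = (xmax - xmin) / k or 1.0
    strips = {}
    for p in pts:
        i = min(int((p[0] - xmin) / width), k - 1)
        lo, hi = strips.get(i, (p, p))
        strips[i] = (p if p[1] < lo[1] else lo, p if p[1] > hi[1] else hi)
    survivors = [q for pair in strips.values() for q in pair]
    survivors += [min(pts), max(pts)]
    return monotone_chain(survivors)

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
print(approx_hull(pts, 4))
```

The bucketing pass is O(n) and the sort inside `monotone_chain` runs over only O(k) survivors, so larger k tightens the hull at the cost of more work, which is the accuracy/time trade the abstract describes.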
Abstract: Attitude determination (AD) of a spacecraft using the phase measurements of the Global Navigation Satellite System (GNSS) is an active area of research. Various attitude determination algorithms have been developed over the years for spacecraft using different sensors, but the last two decades have witnessed a phenomenal increase in research on GPS receivers as stand-alone sensors for determining the attitude of a satellite from the phase measurements of GNSS signals. GNSS-based attitude determination algorithms have been tested in many real missions. The problem of AD using GNSS phase measurements has two important parts: ambiguity resolution and attitude computation. Ambiguity resolution is the most widely addressed topic in the literature on implementing AD algorithms with GNSS phase measurements, since it is key to achieving millimeter-level accuracy. This paper broadly reviews the different techniques for resolving the integer ambiguities encountered in AD using GNSS phase measurements.
Abstract: This paper presents a finite point method based on directional derivatives for the diffusion equation on 2D scattered points. To discretize the diffusion operator at a given point, a six-point stencil is derived by employing explicit numerical formulae for directional derivatives; that is, for the point under consideration, only five neighboring points are involved, the smallest number needed to discretize the diffusion operator with first-order accuracy. A method for selecting the neighbor point set is proposed that satisfies the solvability condition of the numerical derivatives. Numerical examples demonstrate the good performance of the proposed method.
Abstract: The purpose of this paper is to detect humans in images. It proposes a method for extracting human body feature descriptors consisting of projected edge component series. The feature descriptor can express the appearance and shape of humans through the local and global distribution of edges. Our method is evaluated with a linear SVM classifier on the DaimlerChrysler pedestrian dataset and tested with various sub-region sizes. The results show that the accuracy of the proposed method is similar to that of the Histogram of Oriented Gradients (HOG) feature descriptor, while its feature extraction process is simpler and faster than existing methods.
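The exact projection scheme is not specified in the abstract; a minimal interpretation of "projected edge component series" is to project an edge map onto its rows and columns and concatenate the two profiles. The tiny edge map below is illustrative only.

```python
def edge_projections(edge_map):
    """Row and column sums of an edge map; concatenated, these
    projections form a simple appearance/shape descriptor."""
    rows = [sum(r) for r in edge_map]
    cols = [sum(c) for c in zip(*edge_map)]
    return rows + cols

edges = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]   # hypothetical binary edge map of a small sub-region
print(edge_projections(edges))
```

Computing such projections per sub-region captures local edge distribution, while concatenating them across the detection window captures the global distribution; either way the cost is a single pass over the pixels, which is why projection descriptors are cheap compared with orientation histograms.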
Abstract: In order to develop forest management strategies for tropical forest in Malaysia, surveying forest resources and monitoring the forest area affected by logging activities are essential. Tremendous effort has gone into the classification of land cover related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technology such as GIS. Indeed, classification is a compulsory step in any remote sensing research. The main objective of this paper is therefore to assess the classification accuracy of a forest map classified from Landsat TM data using different numbers of reference data (200 and 388 reference points). The comparison was made between an observation approach (200 reference points) and a combined interpretation-and-observation approach (388 reference points). Five land cover classes, namely primary forest, logged-over forest, water bodies, bare land, and agricultural crop/mixed horticulture, can be identified by their differences in spectral wavelength. Results showed that the overall accuracy from 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. However, when the 200 reference points were increased to 388 in the confusion matrix, the accuracy improved slightly from 83.5% to 89.17%, and the kappa statistic increased from 0.7502459 to 0.8026135. The accuracy of this classification suggests that the strategy for selecting training areas, the interpretation approaches, and the number of reference data used are important for producing better classification results.
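The kappa statistic quoted above corrects the overall accuracy for chance agreement; it can be computed from the confusion matrix as follows. The 2x2 matrix below is hypothetical, not the paper's five-class matrix.

```python
def kappa(matrix):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: classified classes)."""
    n = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(len(matrix))) / n
    expected = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
                   for i in range(len(matrix))) / (n * n)
    return (observed - expected) / (1 - expected)

m = [[45, 5],
     [10, 40]]   # hypothetical 2-class confusion matrix
print(round(kappa(m), 4))  # overall accuracy is 0.85; prints 0.7
```

Because the expected-agreement term depends on the row and column totals, adding reference points changes kappa even when the diagonal proportion barely moves, which is why the paper reports both measures.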
Abstract: The sensitivity of orifice plate metering to disturbed
flow (either asymmetric or swirling) is a subject of great concern to
flow meter users and manufacturers. The distortions caused by pipe
fittings and pipe installations upstream of the orifice plate are major
sources of this type of non-standard flows. These distortions can alter
the accuracy of metering to an unacceptable degree. In this work, a
multi-scale object known as metal foam has been used to generate a
predetermined turbulent flow upstream of the orifice plate. The
experimental results showed that the combination of an orifice plate
and metal foam flow conditioner is broadly insensitive to upstream
disturbances. This metal foam demonstrated a good performance in
terms of removing swirl and producing a repeatable flow profile
within a short distance downstream of the device. The results of using a combination of a metal foam flow conditioner and an orifice plate under non-standard flow conditions, including swirling and asymmetric flow, show that this package can preserve metering accuracy up to the level required by the standards.