Abstract: This study discusses the stumbling blocks stifling the
adoption of GPS technology in the public sector of Pakistan. It was carried out to describe the value of GPS technology and its adoption in various public sector organisations in Pakistan. The sample size was 200; public sector personnel aged over 29 years were surveyed. The collected data were analysed quantitatively in SPSS, using regression analysis, correlation and cross-tabulation to determine the strength of the relationships between key variables. The findings indicate that the main hurdles to GPS adoption in the public sector of Pakistan are a lack of awareness of GPS among the general public and among stakeholders in particular, a lack of government initiative in promoting new technologies, the unavailability of GPS infrastructure in Pakistan, and restrictions on map availability for security reasons.
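As a rough, non-authoritative illustration of the reported SPSS workflow (regression, correlation and cross-tabulation), the sketch below uses pandas and statsmodels; the file name and variable names (awareness, govt_initiative, infrastructure, adoption) are hypothetical stand-ins for the survey items, not taken from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# assumed file of 200 Likert-scaled survey responses (hypothetical columns)
df = pd.read_csv("gps_survey.csv")

# cross tabulation of awareness level against reported adoption
print(pd.crosstab(df["awareness"], df["adoption"]))

# Pearson correlations between the key variables
print(df[["awareness", "govt_initiative", "infrastructure", "adoption"]].corr())

# multiple regression: strength of each barrier's relationship with adoption
model = smf.ols("adoption ~ awareness + govt_initiative + infrastructure", data=df).fit()
print(model.summary())
```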
Abstract: An algorithm for learning an overcomplete dictionary
using a Cauchy mixture model for sparse decomposition of an underdetermined
mixing system is introduced. The mixture density
function is derived from a ratio sample of the observed mixture
signals where 1) there are at least two but not necessarily more
mixture signals observed, 2) the source signals are statistically
independent and 3) the sources are sparse. The basis vectors of the
dictionary are learned via the optimization of the location parameters
of the Cauchy mixture components, which is shown to be more
accurate and robust than the conventional data mining methods
usually employed for this task. Using a well-known sparse decomposition algorithm, we extract three speech signals from two mixtures based on the estimated dictionary. Further tests with additive Gaussian noise demonstrate the proposed algorithm's robustness to outliers.
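A minimal sketch of the kind of procedure described above, under simplifying assumptions (two mixtures, three sparse sources, equal-weight Cauchy components, and a generic Nelder-Mead optimizer rather than the authors' algorithm): the ratio of the two observed mixtures is modelled as a Cauchy mixture whose fitted location parameters yield the dictionary columns.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import cauchy

def neg_log_lik(params, r, k):
    locs, scale = params[:k], np.exp(params[k])
    # equal-weight Cauchy mixture density evaluated at every ratio sample
    dens = np.mean([cauchy.pdf(r, loc=m, scale=scale) for m in locs], axis=0)
    return -np.sum(np.log(dens + 1e-12))

def learn_dictionary(x1, x2, k=3):
    r = x2 / (x1 + 1e-12)                  # ratio samples of the two mixtures
    r = r[np.abs(r) < 50]                  # trim extreme ratios for stability
    init = np.concatenate([np.linspace(-2.0, 2.0, k), [np.log(0.1)]])
    res = minimize(neg_log_lik, init, args=(r, k), method="Nelder-Mead",
                   options={"maxiter": 5000})
    locs = np.sort(res.x[:k])              # each location estimates a2j / a1j
    D = np.vstack([np.ones(k), locs])      # basis vectors [1, a2j/a1j]
    return D / np.linalg.norm(D, axis=0)   # 2 x k dictionary with unit columns

# demo with synthetic sparse (Laplacian) sources and a known 2 x 3 mixing matrix
rng = np.random.default_rng(0)
S = rng.laplace(scale=0.1, size=(3, 5000))
A = np.array([[np.cos(a), np.sin(a)] for a in (0.3, 1.0, 1.4)]).T
X = A @ S
print(learn_dictionary(X[0], X[1], k=3))   # estimated 2 x 3 dictionary
```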
Abstract: In recent decades, researchers worldwide have devoted effort to developing machine-based seismic Early Warning systems, aiming to reduce human losses and economic damage. The processing time of seismic waveforms must be reduced in order to increase the time interval available for the activation of safety measures. This paper proposes a Data Mining model able to estimate the dangerousness of an ongoing seismic event quickly and correctly. Several thousand seismic recordings of Japanese and Italian earthquakes were analyzed, and a model was obtained by means of a Bayesian Network (BN); to reduce the decision time, it was tested on only the first recordings of seismic events, and the test results were very satisfactory.
The model was integrated into an Early Warning System prototype able to collect and process data from a seismic sensor network, estimate the dangerousness of the ongoing earthquake and promptly decide whether to activate the warning.
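A hedged sketch of the decision step only, not the authors' Bayesian Network: a naive Bayes stand-in classifies event dangerousness from features assumed to be computable from the first seconds of the P-wave (peak displacement and predominant period); the data below are synthetic, not the Japanese or Italian recordings.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
pd_feat = rng.lognormal(mean=-2.0, sigma=1.0, size=n)   # early peak displacement (cm)
tau_c = rng.lognormal(mean=0.0, sigma=0.5, size=n)      # predominant period (s)
# synthetic label: larger early amplitude and period tend to indicate danger
dangerous = (np.log(pd_feat) + np.log(tau_c) + rng.normal(0, 0.8, n)) > -1.0

X = np.column_stack([np.log(pd_feat), np.log(tau_c)])
clf = GaussianNB()
print("CV accuracy:", cross_val_score(clf, X, dangerous, cv=5).mean())

clf.fit(X, dangerous)
# decision for a running event, available a few seconds after the P-wave onset
print("P(dangerous):", clf.predict_proba([[np.log(0.5), np.log(1.8)]])[0, 1])
```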
Abstract: Optical networks are high capacity networks that meet
the rapidly growing demand for bandwidth in the terrestrial
telecommunications industry. This paper studies and evaluates single-mode and multimode fiber transmission over varying distances, focusing on their performance in a LAN environment. This is achieved by observing the pulse spreading and attenuation in the optical spectrum and eye diagram obtained using the OptSim simulator. The behavior of the two fiber types over different transmission distances is studied, evaluated and compared.
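A back-of-the-envelope sketch, not an OptSim simulation: it compares the pulse spreading of a step-index multimode fiber (modal dispersion) with that of a single-mode fiber (chromatic dispersion) over LAN-scale distances; all fiber parameters below are illustrative assumptions.

```python
import numpy as np

c = 3e8                     # speed of light, m/s
n1, delta = 1.48, 0.01      # multimode core index and relative index difference (assumed)
D = 17e-12 / (1e-9 * 1e3)   # single-mode chromatic dispersion: 17 ps/(nm*km) in SI
dlambda = 1e-9              # source spectral width: 1 nm (assumed)

for L in (100.0, 500.0, 2000.0):            # link length in metres
    dt_multi = n1 * delta * L / c           # modal pulse spread (s)
    dt_single = D * dlambda * L             # chromatic pulse spread (s)
    print(f"L={L:6.0f} m  multimode {dt_multi*1e9:7.3f} ns  "
          f"single-mode {dt_single*1e12:7.4f} ps")
```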
Abstract: To simulate heating systems in buildings, a research-oriented computer code has been developed at Sharif University of Technology in Iran, in which the climate, the existing heating equipment in buildings, consumer behavior and their interactions are considered when simulating energy consumption of conventional systems such as heaters, radiators and fan-coils. To validate the computer code, the available data of five buildings were used and the computed energy consumption was compared with the consumption estimated from monthly bills. The initial heating system was then replaced by an alternative system and the effect of this change on energy consumption was observed. In this way, the effect of changing heating equipment on energy consumption was investigated in different climates. Replacing heaters with radiators yields energy savings of up to 50% in all climates, and replacing radiators with fan-coils decreases energy consumption in climates with cold, dry winters.
Abstract: The Mahin area is part of the Tarom-Hashtjin zone, located west of Qazvin province in northwestern Iran. This zone hosts many copper and base-metal ore deposits, so identifying high-potential localities in the area is essential. The objective of this research is to find hydrothermal alteration zones by remote sensing methods and to identify the best processing technique for Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER)
data. Different methods such as band ratio, Principal Component
Analysis (PCA), Minimum Noise Fraction (MNF) and Least Square
Fit (LS-Fit) were used for mapping hydrothermal alteration zones.
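A small sketch of two of the listed methods, band ratio and PCA, applied to an ASTER-like reflectance cube with NumPy; the specific ratio (band 4 / band 6 as an alteration proxy) and the toy data are assumptions rather than the paper's recipe, and MNF and LS-Fit are not reproduced here.

```python
import numpy as np

def band_ratio(cube, num, den):
    """cube: (rows, cols, bands) reflectance array; returns the ratio image."""
    return cube[..., num] / (cube[..., den] + 1e-6)

def pca(cube, n_components=3):
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]      # strongest components first
    return (X @ vecs[:, order]).reshape(rows, cols, n_components)

# toy cube standing in for a calibrated ASTER scene (9 VNIR/SWIR bands)
cube = np.random.default_rng(2).random((200, 200, 9))
ratio_img = band_ratio(cube, num=3, den=5)   # 0-based indexing: band 4 / band 6
pcs = pca(cube, n_components=3)
print(ratio_img.shape, pcs.shape)
```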
Abstract: This research presents the development of simulation
modeling for WIP management in semiconductor fabrication.
Manufacturing simulation modeling is needed for productivity optimization analysis because of the complexity of the process flows: more than 35 percent of processing steps are re-entrant, with lots revisiting the same equipment more than 15 times. Furthermore, semiconductor fabrication must produce a high product mix, with total processing steps varying from 300 to 800 and cycle times between 30 and 70 days. Besides this complexity, the high cost of wafers, which can erode the company's profit margin when due dates are missed, is another motivation to explore simulation modeling for such analyses. In this paper, the simulation model is developed using the existing commercial software platform AutoSched AP, with customized integration with the Manufacturing Execution System (MES) and Advanced Productivity Family (APF) for the data collection used to configure the model parameters and data sources. Model parameters such as per-step cycle times, equipment performance, handling times and operator efficiency are collected through this customization. Once the parameters are validated, a few further customizations are made to ensure the model executes correctly. The accuracy of the simulation model is validated against the actual daily output of all equipment; the comparison of simulated and actual results achieved 95 percent accuracy over 30 days. The model was later used to perform various what-if analyses to understand impacts on cycle time and overall output. With this simulation model, a complex manufacturing environment such as a semiconductor fabrication plant (fab) now has an alternative means of validating the impact of any new requirements.
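A deliberately simplified sketch of the re-entrant flow idea using SimPy, not an AutoSched AP/MES/APF model: lots revisit a single shared tool group 15 times and the resulting cycle times can be inspected; the tool count, process times and lot release rate are assumed values.

```python
import simpy, random

RE_ENTRIES = 15          # visits of a lot to the same tool group
PROCESS_TIME = 2.0       # hours per visit (assumed)
cycle_times = []

def lot(env, tool):
    start = env.now
    for _ in range(RE_ENTRIES):
        with tool.request() as req:       # queue for the shared equipment
            yield req
            yield env.timeout(random.expovariate(1.0 / PROCESS_TIME))
    cycle_times.append(env.now - start)   # lot cycle time in hours

def release_lots(env, tool, interval=2.0):
    while True:
        env.process(lot(env, tool))
        yield env.timeout(interval)       # lot start rate (assumed)

env = simpy.Environment()
tool_group = simpy.Resource(env, capacity=20)   # 20 identical tools (assumed)
env.process(release_lots(env, tool_group))
env.run(until=24 * 30)                          # simulate 30 days
print(len(cycle_times), "lots completed; avg cycle time",
      sum(cycle_times) / max(len(cycle_times), 1), "h")
```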
Abstract: Traffic flow in adverse weather conditions has been investigated in this study for general, weekday and weekend traffic. The empirical evidence strongly supports the view that rainfall affects macroscopic traffic flow parameters. Data generated from a basic highway section along J5 in Johor Bahru, Malaysia were synchronized with 161 rain events over a period of three months. This revealed reductions in speed of 4.90%, 6.60% and 11.32% for light, moderate and heavy rain conditions respectively. The corresponding capacity reductions in the three rainfall regimes are 1.08% for light rain, 6.27% for moderate rain and 29.25% for heavy rain. In the weekday traffic, speed drops of 8.1% and 16.05% were observed for light and heavy rain conditions, while the speed in the moderate rain condition increased by 12.6%. The capacity drops for weekday traffic are 4.40% for light rain, 9.77% for moderate rain and 45.90% for heavy rain. The weekend traffic showed speed differences between the dry condition and the three rainy conditions of 6.70% for light rain, 8.90% for moderate rain and 13.10% for heavy rain. The capacity changes computed for the weekend traffic were 0.20% in light rain, 13.90% in moderate rain and 16.70% in heavy rain. No traffic instabilities were observed throughout the observation period, and the capacities reported for each rain condition were below the no-rain capacity. Rainfall thus has a substantial impact on traffic flow, and this may have implications for shock wave propagation.
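A minimal sketch of the comparison with hypothetical column names (speed, flow, rain_class) and an assumed data file: traffic records synchronized with rain events are grouped by rain intensity, and the percentage change relative to the dry baseline is computed, mirroring the reductions reported above.

```python
import pandas as pd

# assumed file of synchronized traffic/rain records: speed (km/h), flow (veh/h), rain_class
df = pd.read_csv("j5_traffic_rain.csv")

base = df[df["rain_class"] == "dry"][["speed", "flow"]].mean()
summary = (
    df.groupby("rain_class")[["speed", "flow"]].mean()
      .sub(base)                 # difference from the dry baseline
      .div(base) * 100           # percentage change
      .round(2)
)
print(summary.loc[["light", "moderate", "heavy"]])
```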
Abstract: This paper presents a dominant color descriptor
technique for medical image retrieval. The medical image system
collects images and stores them in a medical database. The purpose of the dominant color descriptor (DCD) technique is to retrieve medical images and to display images similar to a query image. First, the technique searches for and retrieves a medical image based on a keyword entered by the user. Once an image is found, the system designates it as the query image. The DCD technique then calculates the dominant colors of this query image, and the system searches the database again for medical images whose dominant colors match those of the query image. Finally, the system displays the images similar to the query image to the user. A simple application has been developed and tested using the dominant color descriptor. Experimental results indicate that this technique is effective and can be used for medical image retrieval.
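A hedged sketch of a dominant-color retrieval step, not necessarily the exact DCD used in the paper: k-means clustering of the query image's pixels gives a dominant-color palette, and database images are ranked by palette distance; the images below are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(image, k=4):
    """image: (H, W, 3) uint8 array; returns k RGB centroids sorted by cluster size."""
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    order = np.argsort(-np.bincount(km.labels_, minlength=k))
    return km.cluster_centers_[order]

def palette_distance(p1, p2):
    # simple pairing of same-rank dominant colors; a full DCD would also
    # weight by the colors' percentages and allow cross matching
    return float(np.linalg.norm(p1 - p2, axis=1).mean())

rng = np.random.default_rng(3)
query = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
database = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(5)]

qp = dominant_colors(query)
ranked = sorted(range(len(database)),
                key=lambda i: palette_distance(qp, dominant_colors(database[i])))
print("most similar database images first:", ranked)
```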
Abstract: After-sales activities are nowadays acknowledged as a relevant source of revenue, profit and competitive advantage in most manufacturing industries. Top and middle management, therefore, should focus on the definition of a structured business performance measurement system for the after-sales business, yet such systems are largely lacking. The paper aims at filling this gap and presents an integrated methodology for after-sales network performance measurement, together with an empirical application to automotive case companies and their official service networks. This is the first study that presents an integrated multivariate approach for the total assessment and improvement of after-sales services.
Abstract: In this paper we compare the accuracy of data mining methods for classifying students in order to predict a student's class grade. Such predictions are useful for identifying weak students and assisting management in taking remedial measures at an early stage, so that students graduate with at least a second class upper degree. First we examine the accuracy of single classifiers on our data set and choose the best one, and then ensemble it with a weak classifier using a simple voting method. The results show that combining different classifiers outperforms single classifiers for predicting student performance.
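A minimal sketch of the described procedure with an assumed data file and feature layout: several single classifiers are scored with cross-validation, and the best one (here assumed to be the random forest) is combined with a weak learner through simple hard voting.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("students.csv")              # assumed file of coursework marks
X, y = df.drop(columns=["class_grade"]), df["class_grade"]

singles = {
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
    "nb": GaussianNB(),
}
for name, clf in singles.items():
    print(name, cross_val_score(clf, X, y, cv=10).mean())

# ensemble the best single classifier (assume the forest) with a weak learner
voting = VotingClassifier(
    estimators=[("forest", singles["forest"]), ("nb", singles["nb"])],
    voting="hard",
)
print("voting", cross_val_score(voting, X, y, cv=10).mean())
```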
Abstract: The goal of this paper is to segment countries based on the value of exports from Iran over the 14 years ending in 2005. To measure the dissimilarity among the export baskets of different countries, we define the Dissimilarity Export Basket (DEB) function and use this distance function in the K-means algorithm. The DEB function is defined based on the concepts of association rules and the value of exported commodity groups. A clustering quality function and the clusters' intraclass inertia are defined to, respectively, determine the optimum number of clusters and compare the performance of DEB against the Euclidean distance. We also study the effect of importance weights in the DEB function on clustering quality. Finally, once segmentation is completed, a designated RFM model is used to analyze the relative profitability of each cluster.
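Since K-means proper requires a mean under the chosen metric, the sketch below uses a k-medoids style loop as a stand-in to show how a custom dissimilarity can drive the clustering and how intraclass inertia can be computed; the deb_placeholder function is not the paper's association-rule-based DEB, and the data are random export-share vectors.

```python
import numpy as np

def deb_placeholder(a, b):
    # stand-in dissimilarity between two export-share vectors (not the real DEB)
    return 0.5 * np.abs(a - b).sum()

def k_medoids(X, k, dist, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.array([[dist(X[i], X[j]) for j in range(n)] for i in range(n)])
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if len(members):
                # new medoid = member minimizing total dissimilarity to its cluster
                new[j] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return labels, medoids

def intraclass_inertia(X, labels, medoids, dist):
    return sum(dist(X[i], X[medoids[labels[i]]]) for i in range(len(X)))

# toy data: each row is a country's share of export value over 10 commodity groups
X = np.random.default_rng(4).dirichlet(np.ones(10), size=60)
labels, medoids = k_medoids(X, k=4, dist=deb_placeholder)
print("intraclass inertia:", intraclass_inertia(X, labels, medoids, deb_placeholder))
```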
Abstract: This contribution presents an innovative platform that integrates intelligent agents and evolutionary
computation techniques in legacy e-learning environments. It
introduces the design and development of a scalable and
interoperable integration platform supporting:
I) various assessment agents for e-learning environments,
II) a specific resource retrieval agent for the provision of
additional information from Internet sources matching the
needs and profile of the specific user and
III) a genetic algorithm designed to extract efficient information
(classifying rules) based on the students' answering input
data.
The agents are implemented in order to provide intelligent
assessment services based on computational intelligence techniques
such as Bayesian Networks and Genetic Algorithms.
The proposed Genetic Algorithm (GA) is used to extract efficient information (classifying rules) from the students' answering input data. The idea of using a GA to fulfil this difficult task came from the fact that GAs have been widely used in applications involving the classification of unknown data.
The use of new and emerging technologies such as web services allows the provided services to be integrated into any web-based legacy e-learning environment.
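A hedged sketch of the GA component only, not the platform's implementation: each individual is a candidate classifying rule encoded as a binary mask over quiz questions, fitness is the rule's accuracy on (synthetic) student answer data, and standard tournament selection, one-point crossover and bit-flip mutation are applied.

```python
import numpy as np

rng = np.random.default_rng(5)
N_STUDENTS, N_QUESTIONS = 200, 20
answers = rng.integers(0, 2, (N_STUDENTS, N_QUESTIONS))      # 1 = correct answer
passed = answers[:, :8].mean(axis=1) > 0.5                   # synthetic ground truth

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    # rule: predict "pass" if most of the masked questions were answered correctly
    pred = answers[:, mask.astype(bool)].mean(axis=1) > 0.5
    return (pred == passed).mean()

def evolve(pop_size=40, generations=60, p_mut=0.05):
    pop = rng.integers(0, 2, (pop_size, N_QUESTIONS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # tournament selection of parents
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # one-point crossover between consecutive parents
        cut = rng.integers(1, N_QUESTIONS, pop_size)
        children = np.array([np.concatenate([parents[i, :cut[i]],
                                             parents[(i + 1) % pop_size, cut[i]:]])
                             for i in range(pop_size)])
        # bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        pop = np.where(flips, 1 - children, children)
    best = max(pop, key=fitness)
    return best, fitness(best)

rule, acc = evolve()
print("selected questions:", np.flatnonzero(rule), "accuracy:", round(acc, 3))
```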
Abstract: An evolutionary method whose selection and recombination operations are based on generalization error-bounds of the support vector machine (SVM) can select a subset of potentially informative genes for an SVM classifier very efficiently [7]. In this paper, we use the derivative of the error-bound (a first-order criterion) to select and recombine gene features in the evolutionary process, and compare the performance of the derivative of the error-bound with the error-bound itself (the zero-order criterion). We also investigate several error-bounds and their derivatives to compare their performance and find the best criterion for gene selection and classification. We use 7 cancer-related human gene expression datasets to evaluate the performance of the zero-order and first-order criteria of the error-bounds. Although both criteria follow the same strategy in theory, the experimental results identify the best criterion for microarray gene expression data.
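A sketch of the two criteria under simplifying assumptions, not the paper's evolutionary search: the zero-order criterion is taken here as a radius-margin style bound R^2 * ||w||^2 of a linear SVM (with the ball radius approximated by the largest distance to the data centroid), and the first-order criterion is its numerical derivative with respect to a per-gene scaling factor; the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

def radius_margin_bound(X, y, C=1.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w_sq = float(np.sum(clf.coef_ ** 2))                 # 1 / margin^2
    R = np.max(np.linalg.norm(X - X.mean(axis=0), axis=1))
    return R ** 2 * w_sq

def bound_gradient(X, y, eps=1e-3):
    """First-order criterion: d(bound)/d(scale_g) for each gene g (finite differences)."""
    base = radius_margin_bound(X, y)
    grads = np.zeros(X.shape[1])
    for g in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, g] *= (1.0 + eps)
        grads[g] = (radius_margin_bound(Xp, y) - base) / eps
    return grads

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 10))                 # 60 samples, 10 "genes"
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 60) > 0).astype(int)

print("zero-order bound:", round(radius_margin_bound(X, y), 3))
print("genes ranked by |gradient|:", np.argsort(-np.abs(bound_gradient(X, y)))[:3])
```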
Abstract: To compress 2-D images while improving bit error performance and also enhancing the images, a new scheme, called the Iterative Cellular-Turbo System (IC-TS), is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into consideration the neighborhood relationships of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a Turbo decoder. In IC-TS, there is an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel's neighborhood. The scheme yields highly satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS: less memory storage is used and the data rate is increased up to N-1 times by simply choosing a smaller number of bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system is a promising approach for 2-D image transmission, recovery of noisy signals and image compression.
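A small sketch of the bit-plane step only (the Turbo coding, AWGN channel and ICIPA equalization stages are not reproduced): an 8-bit image is split into N = 8 bit planes and re-assembled, and dropping the least significant planes illustrates the compression-by-bit-slicing idea.

```python
import numpy as np

N = 8
img = np.random.default_rng(7).integers(0, 256, (64, 64), dtype=np.uint8)

# decompose into N bit planes (plane 0 = LSB, plane N-1 = MSB)
planes = [((img >> b) & 1).astype(np.uint8) for b in range(N)]

# lossless re-assembly from all planes
recon = sum((planes[b].astype(np.uint16) << b) for b in range(N)).astype(np.uint8)
assert np.array_equal(recon, img)

# keep only the 4 most significant planes: coarser image, but fewer bits to send
coarse = sum((planes[b].astype(np.uint16) << b) for b in range(4, N)).astype(np.uint8)
print("max error with 4 planes dropped:",
      int(np.max(np.abs(img.astype(int) - coarse.astype(int)))))
```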
Abstract: In this paper, the detection of a fault in the Global Positioning System (GPS) measurements is addressed. The class of faults considered is a bias in the GPS pseudorange measurements, modeled as an unknown constant. The fault could be the result of a receiver fault or a signal fault such as multipath error. A bias bank is constructed based on a set of possible fault hypotheses. Initially, every bias in the bank is assigned an equal probability of occurrence. Subsequently, as the measurements are processed, the probability of occurrence of each bias is sequentially updated. The fault whose probability approaches unity is declared as the current fault in the GPS measurement. The residual formed from the GPS and Inertial Measurement Unit (IMU) measurements is used to update the probability of each fault. Results are presented to show the performance of the proposed algorithm.
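A minimal sketch of the bias-bank update under simplifying assumptions (a scalar residual with known Gaussian noise): the candidate biases start with equal prior probabilities, and each GPS/IMU residual sequentially re-weights them until one hypothesis dominates.

```python
import numpy as np
from scipy.stats import norm

bias_bank = np.array([0.0, 5.0, 10.0, 20.0])     # candidate pseudorange biases (m), assumed
prob = np.full(len(bias_bank), 1.0 / len(bias_bank))
sigma = 3.0                                      # residual noise std (assumed, m)

rng = np.random.default_rng(8)
true_bias = 10.0
for _ in range(50):                              # one residual per measurement epoch
    residual = true_bias + rng.normal(0.0, sigma)
    likelihood = norm.pdf(residual, loc=bias_bank, scale=sigma)
    prob = prob * likelihood
    prob /= prob.sum()                           # renormalize posterior over hypotheses

declared = bias_bank[np.argmax(prob)]
print("posterior:", np.round(prob, 4), "-> declared fault bias:", declared, "m")
```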
Abstract: Scene interpretation systems need to match (often ambiguous)
low-level input data to concepts from a high-level ontology.
In many domains, these decisions are uncertain and benefit greatly
from proper context. This paper demonstrates the use of decision
trees for estimating class probabilities for regions described by feature
vectors, and shows how context can be introduced in order to improve
the matching performance.
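A short sketch of the two ingredients: a decision tree provides per-region class probabilities from feature vectors, and a hypothetical context prior over classes re-weights them; the features, labels and prior below are illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 4))                       # low-level region feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)             # toy labels: 0 = "road", 1 = "building"

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10).fit(X, y)
p = tree.predict_proba(rng.normal(size=(1, 4)))[0]  # class probabilities for a new region

context_prior = np.array([0.8, 0.2])                # e.g. a scene dominated by roads
posterior = p * context_prior
posterior /= posterior.sum()
print("tree probabilities:", np.round(p, 3), "with context:", np.round(posterior, 3))
```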
Abstract: In the literature, there are metrics for identifying the quality of reusable components, but a framework that makes use of these metrics to precisely predict the reusability of software components is still to be worked out. If these reusability metrics are identified in the design phase, or even in the coding phase, they can help reduce rework by improving the quality of reuse of the software component and hence improve productivity through a probable increase in the reuse level. Since the CK metric suite is the most widely used set of metrics for extracting the structural features of object-oriented (OO) software, this study uses a tuned CK metric suite, i.e. WMC, DIT, NOC, CBO and LCOM, for the structural analysis of OO-based software components. An algorithm is proposed in which the tuned metric values of an OO software component are given as inputs to a K-Means clustering system, and a decision tree is built using 10-fold cross-validation of the data to evaluate the component in terms of a linguistic reusability value. The developed reusability model has produced high-precision results as desired.
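A hedged sketch of the proposed flow with synthetic stand-in data: K-Means groups components by their CK metric values (WMC, DIT, NOC, CBO, LCOM), the clusters are mapped to linguistic reusability labels by an assumed ordering, and a decision tree is then evaluated with 10-fold cross-validation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
# rows = OO components, columns = WMC, DIT, NOC, CBO, LCOM (synthetic values)
X = rng.integers(1, 50, size=(150, 5)).astype(float)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
# order clusters by mean complexity/coupling and name them (assumed mapping)
order = np.argsort(km.cluster_centers_.mean(axis=1))
names = {order[0]: "high reusability", order[1]: "medium", order[2]: "low reusability"}
y = np.array([names[c] for c in km.labels_])

tree = DecisionTreeClassifier(random_state=0)
print("10-fold accuracy:", cross_val_score(tree, X, y, cv=10).mean().round(3))
```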
Abstract: This paper presents and evaluates a new classification method that aims to improve classifier performance and speed up the training process. The proposed approach, called labeled classification, seeks to improve the convergence of the BP (backpropagation) algorithm through the addition of an extra feature (a label) to all training examples. To classify a new example, tests are carried out for each label. The simplicity of implementation is the main advantage of this approach, because no modifications are required in the training algorithms; therefore, it can be used with other acceleration and stabilization techniques. In this work, two models of labeled classification are proposed: the LMLP (Labeled Multi Layered Perceptron) and the LNFC (Labeled Neuro Fuzzy Classifier). These models are tested on the Iris, wine, texture and human thigh databases to evaluate their performance.
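A hedged sketch of the labeled-classification idea with an off-the-shelf MLP standing in for the LMLP: the candidate label is appended as an extra input feature, the network learns to score (example, label) pairs, and at test time every label is tried and the best-scoring one is returned; the network size and training settings are assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
labels = np.unique(y)

# build (example, candidate-label) pairs with a binary "is this the right label?" target
Xa = np.vstack([np.column_stack([X, np.full(len(X), lab)]) for lab in labels])
ya = np.concatenate([(y == lab).astype(int) for lab in labels])

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xa, ya)

def predict(x):
    # test the example once per label and keep the label with the highest score
    scores = [net.predict_proba([np.append(x, lab)])[0, 1] for lab in labels]
    return labels[int(np.argmax(scores))]

preds = np.array([predict(x) for x in X])
print("training-set accuracy of the labeled MLP:", (preds == y).mean().round(3))
```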
Abstract: Multicarrier transmission systems such as Orthogonal Frequency Division Multiplexing (OFDM) are a promising technique for high-bit-rate transmission in wireless communication systems.
OFDM is a spectrally efficient modulation technique that can achieve
high speed data transmission over multipath fading channels without
the need for powerful equalization techniques. However, the price paid for this high spectral efficiency and less intensive equalization is low power efficiency. OFDM signals are very sensitive to nonlinear effects due to their high Peak-to-Average Power Ratio (PAPR), which leads to power inefficiency in the RF section of the transmitter. This paper investigates the effect of PAPR reduction on the performance parameters of a multicarrier communication system. The performance parameters considered are the power consumption of the Power Amplifier (PA) and the Digital-to-Analog Converter (DAC), the power amplifier efficiency, the SNR of the DAC and the BER performance of the system.
From our analysis it is found that, irrespective of the PAPR reduction technique employed, the power consumption of the PA and DAC decreases and the power amplifier efficiency increases as the PAPR is reduced. Moreover, it is shown that for a given BER performance the required Input Back-Off (IBO) decreases with the reduction in PAPR.
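A minimal sketch of the quantity under discussion: the PAPR of baseband OFDM symbols built from random QPSK data via an IFFT; the subcarrier count and oversampling factor are illustrative, not the analysed system parameters.

```python
import numpy as np

N_SC, OVERSAMPLE, N_SYMBOLS = 64, 4, 10000
rng = np.random.default_rng(11)

papr_db = np.empty(N_SYMBOLS)
for i in range(N_SYMBOLS):
    qpsk = (rng.choice([-1, 1], N_SC) + 1j * rng.choice([-1, 1], N_SC)) / np.sqrt(2)
    spectrum = np.zeros(N_SC * OVERSAMPLE, dtype=complex)
    spectrum[:N_SC] = qpsk                    # zero-padded for an oversampled time signal
    x = np.fft.ifft(spectrum)
    papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
    papr_db[i] = 10 * np.log10(papr)

print("mean PAPR: %.2f dB, 99.9th percentile: %.2f dB"
      % (papr_db.mean(), np.percentile(papr_db, 99.9)))
# a lower PAPR lets the PA operate with less input back-off (IBO), which is
# what drives the power-consumption and efficiency gains discussed above
```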