Abstract: Computing the facility location problem simultaneously for every location in a country is not easy. This paper describes how the problem can be solved using cluster computing: a parallel algorithm based on local search with the single-swap method is designed to solve the problem on clusters. The parallel implementation uses the portable Message Passing Interface (MPI) on a Microsoft Windows Compute Cluster. This paper presents the local-search algorithm with the single-swap method and the implementation, using MPI on a cluster, of a system that decides which facilities to open. For large datasets, the process of calculating a reasonable cost for a facility becomes time-consuming. The results show that parallel computation of the facility location problem on a cluster achieves speedup and scales well as the problem size increases.
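The single-swap neighborhood described above can be sketched serially; the paper's contribution is distributing this search over cluster nodes with MPI, which is omitted here. The facility opening costs and client-to-facility distances below are made-up toy data, a minimal sketch rather than the paper's implementation.

```python
import itertools

def total_cost(open_facilities, open_cost, dist):
    """Facility opening costs plus each client's distance to its nearest open facility."""
    cost = sum(open_cost[f] for f in open_facilities)
    for client_dists in dist:  # dist[client][facility]
        cost += min(client_dists[f] for f in open_facilities)
    return cost

def single_swap_local_search(open_cost, dist, initial):
    """Repeatedly swap one open facility for one closed facility while the cost improves."""
    facilities = set(range(len(open_cost)))
    current = set(initial)
    improved = True
    while improved:
        improved = False
        for out_f, in_f in itertools.product(list(current), facilities - current):
            candidate = (current - {out_f}) | {in_f}
            if total_cost(candidate, open_cost, dist) < total_cost(current, open_cost, dist):
                current = candidate
                improved = True
                break
    return current, total_cost(current, open_cost, dist)
```

A parallel version would evaluate the candidate swaps of each pass on different MPI ranks and reduce to the best improving move.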
Abstract: Designing, implementing, and debugging concurrency
control algorithms in a real system is a complex, tedious, and error-prone
process. Further, understanding concurrency control
algorithms and distributed computations is itself a difficult task.
Visualization can help with both of these problems. Thus, we have
developed an exploratory environment in which people can prototype
and test various versions of concurrency control algorithms, study
and debug distributed computations, and view performance statistics
of distributed systems. In this paper, we describe the exploratory
environment and show how it can be used to explore concurrency
control algorithms for the interactive steering of distributed
computations.
Abstract: Unified Speech and Audio Coding (USAC), the latest MPEG standard for unified speech and audio coding, uses a speech/audio classification algorithm to distinguish speech and audio segments of the input signal. Owing to a shortcoming of this system, introducing a well-designed orchestra/percussion classification and modifying the subsequent processing can greatly increase the quality of the recovered audio. This paper proposes an orchestra/percussion classification algorithm for the USAC system that extracts only 3 scales of Mel-Frequency Cepstral Coefficients (MFCCs) rather than the traditional 13, and uses an Iterative Dichotomiser 3 (ID3) decision tree rather than more complex learning methods; the proposed algorithm therefore has lower computational complexity than most existing algorithms. Considering that frequent switching of attributes may degrade the recovered audio signal, this paper also designs a modified subsequent process that helps the whole classification system reach an accuracy as high as 97%, comparable to the 99% of classical approaches.
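ID3 grows its decision tree by repeatedly splitting on the attribute with the highest information gain. A minimal sketch of that selection criterion, assuming the MFCC features have been discretized into nominal values; the feature tuples and labels below are illustrative only.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(samples, labels, attr):
    """Entropy reduction from splitting the samples on attribute index attr (ID3's criterion)."""
    n = len(samples)
    split = {}
    for x, y in zip(samples, labels):
        split.setdefault(x[attr], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder
```

ID3 would pick the attribute maximizing this gain at each node and recurse on the resulting partitions.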
Abstract: Terminal localization for indoor Wireless Local Area
Networks (WLANs) is critical for the deployment of location-aware
computing inside of buildings. A major challenge is obtaining high
localization accuracy in the presence of fluctuations of the received signal
strength (RSS) measurements caused by multipath fading. This paper
focuses on reducing the effect of the distance-varying noise by spatial
filtering of the measured RSS. Two different survey point geometries
are tested with the noise reduction technique: survey points arranged
in sets of clusters and survey points uniformly distributed over the
network area. The results show that the location accuracy improves
by 16% when the filter is used and by 18% when the filter is applied
to a clustered survey set as opposed to a straight-line survey set.
The estimated locations are within 2 m of the true location, which
indicates that clustering the survey points provides better localization
accuracy due to superior noise removal.
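The abstract does not spell out the filter's exact form, so as an illustration, a simple neighborhood-mean spatial filter can be sketched: each survey point's RSS is averaged with that of nearby points, which is where a clustered survey geometry helps, since every point has close neighbors to average with. Point names, values, and the neighbor map are hypothetical.

```python
def spatial_filter(rss_by_point, neighbors):
    """Replace each survey point's RSS with the mean over itself and its spatial neighbors."""
    filtered = {}
    for point, rss in rss_by_point.items():
        group = [rss] + [rss_by_point[n] for n in neighbors.get(point, [])]
        filtered[point] = sum(group) / len(group)
    return filtered
```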
Abstract: A Web-services-based grid infrastructure is evolving and will be readily available in the near future. In this approach, Web services are inherited (encapsulated) into the same existing class of Grid services; in practice there is not much difference between the existing Web and grid infrastructures, and Grid services emerged as stateful Web services. In this paper, we present the key components of a Web-services-based grid and show how resource discovery is performed on such a grid, considering resource discovery a critical service to be provided by any type of grid.
Abstract: This paper proposes a new approach to offering a private cloud service in HPC clusters. In particular, our approach relies on automatically scheduling users' customized environment requests as normal jobs in the batch system. After a virtualization request job finishes, its guest operating system is dismissed so that the compute nodes are released again for computing. We present initial work on the innovative integration of an HPC batch system and virtualization tools that aims at their coexistence with the minimal interference required by a traditional HPC cluster. Given the design of the initial infrastructure, the proposed effort has the potential to positively impact this synergy model. The experimental results show that the goal of provisioning a customized cluster environment can indeed be fulfilled using virtual machines, and that efficiency can be improved with proper setup and arrangement.
Abstract: Multi-site damage (MSD) has been a challenge for aircraft, civil, and power plant structures. In real life, components are subjected to cracking at many vulnerable locations, such as bolt holes, yet the presence of multiple cracks is often not accounted for. Unlike components with a single crack, the behavior of such components is difficult to predict: when two cracks approach one another, their stress fields influence each other and produce an enhancing or shielding effect depending on the positions of the cracks. In the present study, numerical fracture analyses have been conducted using a code developed from the modified virtual crack closure integral (MVCCI) technique, together with the finite element analysis (FEA) software ABAQUS, to compute the stress intensity factor (SIF) of plates with multiple cracks. Various parametric studies have been carried out, and the results have been compared with the literature wherever available and also with solutions obtained using ABAQUS. From these extensive numerical studies, expressions for SIF have been obtained for collinear cracks and non-aligned cracks.
Abstract: In a wireless communication system, a predistorter (PD) is often employed to alleviate nonlinear distortions due to operating a power amplifier near saturation, thereby improving system performance and reducing interference to adjacent channels. This paper presents a new adaptive polynomial digital predistorter (DPD). The proposed DPD uses Coordinate Rotation Digital Computing (CORDIC) processors and performs the PD process with a pipelined architecture; it is simpler and faster than a conventional adaptive polynomial DPD. The performance of the proposed DPD is verified by MATLAB simulation.
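CORDIC evaluates vector rotations with one shift-and-add step per iteration, which is what makes it attractive for a pipelined hardware DPD. A floating-point sketch of rotation-mode CORDIC computing sine and cosine (a hardware implementation would use fixed-point shifts instead of multiplications by powers of two):

```python
import math

def cordic_sin_cos(theta, iterations=24):
    """Compute (cos(theta), sin(theta)) for |theta| <= pi/2 via rotation-mode CORDIC."""
    # Gain K = prod(1/sqrt(1 + 2^-2i)), used to rescale the final vector.
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)        # atan table would be precomputed in hardware
    return x * k, y * k
```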
Abstract: In this paper, we consider the effect of the initial
sample size on the performance of a sequential approach used in
selecting a good enough simulated system, when the number
of alternatives is very large. We implement the sequential approach
on an M/M/1 queuing system under several parameter settings, with
different choices of the initial sample size, to explore the impact on
the performance of this approach. The results show that the choice
of the initial sample size does affect the performance of our selection
approach.
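Sequential selection procedures of this kind begin by drawing an initial sample of replications from each alternative before sampling further. A minimal sketch of one M/M/1 replication via the Lindley recurrence, with the number of simulated customers standing in for the sample-size budget; the parameter values used in the test are illustrative, not the paper's settings.

```python
import random

def mm1_mean_wait(arrival_rate, service_rate, n_customers, seed=0):
    """Estimate the mean waiting time in an M/M/1 queue via the Lindley recurrence."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        interarrival = rng.expovariate(arrival_rate)
        service = rng.expovariate(service_rate)
        # Lindley: next customer's wait = max(0, previous wait + service - interarrival)
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n_customers
```

A selection procedure would call this once per alternative with the initial sample size, then allocate further replications to the most promising alternatives.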
Abstract: With the rapid development in the field of life
sciences and the flooding of genomic information, the need for faster
and scalable searching methods has become urgent. One of the
approaches that have been investigated is indexing. Indexing methods
have been categorized into three categories: length-based
index algorithms, transformation-based algorithms, and mixed
techniques-based algorithms. In this research, we focused on the
transformation based methods. We embedded the N-gram method
into the transformation-based method to build an inverted index
table. We then applied the parallel methods to speed up the index
building time and to reduce the overall retrieval time when querying
the genomic database. Our experiments show that the N-gram
transformation algorithm is an economical solution, saving both time
and space: the size of the index is smaller than the size of the dataset
when the N-gram size is 5 or 6. The results of the parallel N-gram
transformation algorithm indicate that the use of parallel programming
with large datasets is promising and can be improved further.
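The N-gram inverted index described above can be sketched serially (the paper's parallel build is omitted): each length-n substring maps to the set of sequences containing it, and a query intersects the posting sets of its own n-grams. The DNA strings below are toy data.

```python
def build_ngram_index(sequences, n=5):
    """Map each length-n substring to the ids of the sequences containing it."""
    index = {}
    for seq_id, seq in enumerate(sequences):
        for i in range(len(seq) - n + 1):
            index.setdefault(seq[i:i + n], set()).add(seq_id)
    return index

def query(index, pattern, n=5):
    """Candidate sequences: those containing every n-gram of the query pattern."""
    grams = [pattern[i:i + n] for i in range(len(pattern) - n + 1)]
    if not grams:
        return set()
    result = index.get(grams[0], set()).copy()
    for g in grams[1:]:
        result &= index.get(g, set())
    return result
```

A parallel build would partition the sequences across workers and merge the partial posting sets.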
Abstract: Business process modeling has become an accepted
means for designing and describing business operations. Consistency
of business process models, i.e., the absence of modeling
faults, is therefore of utmost importance to organizations. This paper presents
a concept and subsequent implementation for detecting faults in
business process models and for computing a measure of their
consistency. It incorporates not only syntactic consistency but also
semantic consistency, i.e., consistency regarding the meaning of
model elements from a business perspective.
Abstract: In this paper, different approaches to solve the
forward kinematics of a three DOF actuator redundant hydraulic
parallel manipulator are presented. In contrast to serial
manipulators, the forward kinematic map of parallel manipulators
involves highly coupled nonlinear equations that are almost
impossible to solve analytically. The proposed methods use
neural network identification with different structures to solve the
problem. The accuracy of the results of each method is analyzed in
detail, and their advantages and disadvantages in
computing the forward kinematic map of the given mechanism are
discussed. It is concluded that ANFIS presents the best
performance compared to MLP, RBF and PNN networks in this
particular application.
Abstract: With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation is applied to either an individual element or a set of consecutive elements in a Web document and results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced component platform, called SMC, enabling the development of mobility applications and services according to a channel model based on the principles of Service-Oriented Architecture (SOA). It then describes the potential for integration with the Semantic Web through a novel framework of external semantic annotation that prescribes a scheme for representing semantic markup files and a way of associating Web documents with these external annotations. The role of semantic annotation in this framework is to describe the contents of the individual documents themselves, assuring the preservation of the semantics during the process of adapting content rendering. Semantic Web content adaptation is a way of adding value to Web contents and facilitates their repurposing (enhanced browsing, Web Services location and access, etc.).
Abstract: This paper is motivated by the aspect of uncertainty in
financial decision making, and how artificial intelligence and soft
computing, with their uncertainty-reducing aspects, can be used for
algorithmic trading applications that trade at high frequency.
This paper presents an optimized high frequency trading system that
has been combined with various moving averages to produce a hybrid
system that outperforms trading systems that rely solely on moving
averages. The paper optimizes an adaptive neuro-fuzzy inference
system that takes both the price and its moving average as input,
learns to predict price movements from training data consisting of
intraday data, dynamically switches between the best performing
moving averages, and performs decision making of when to buy or
sell a certain currency in high frequency.
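The proposed hybrid switches dynamically between moving averages via ANFIS; as a baseline for comparison, a plain moving-average crossover rule of the kind it outperforms can be sketched. The window lengths and prices below are illustrative, not the paper's settings.

```python
def moving_average(prices, window):
    """Simple moving average over the trailing window; None until enough data."""
    return [None if i + 1 < window else sum(prices[i + 1 - window:i + 1]) / window
            for i in range(len(prices))]

def crossover_signal(prices, fast=3, slow=5):
    """'buy' when the fast MA crosses above the slow MA, 'sell' on the reverse cross."""
    fast_ma, slow_ma = moving_average(prices, fast), moving_average(prices, slow)
    signals = []
    for i in range(1, len(prices)):
        if None in (fast_ma[i - 1], slow_ma[i - 1]):
            signals.append(None)
        elif fast_ma[i - 1] <= slow_ma[i - 1] and fast_ma[i] > slow_ma[i]:
            signals.append('buy')
        elif fast_ma[i - 1] >= slow_ma[i - 1] and fast_ma[i] < slow_ma[i]:
            signals.append('sell')
        else:
            signals.append(None)
    return signals
```

The hybrid system replaces this fixed rule with an ANFIS that learns when each moving average is most informative.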
Abstract: Quantum computation using qubits made of two-component Bose-Einstein condensates (BECs) is analyzed. We construct a general framework for quantum algorithms to be executed using the collective states of the BECs. The use of BECs allows for an increase of energy scales via bosonic enhancement, resulting in two-qubit gate operations that can be performed in a time reduced by a factor of N, where N is the number of bosons per qubit. We illustrate the scheme by an application to Deutsch's and Grover's algorithms, and discuss possible experimental implementations. Decoherence effects are analyzed both under general conditions and for the proposed experimental implementation.
Abstract: A general stochastic spatial MIMO channel model is
proposed for evaluating various MIMO techniques in this paper. It can
generate MIMO channels complying with various MIMO
configurations such as smart antenna, spatial diversity and spatial
multiplexing. The modeling method produces the stochastic fading
involving delay spread, Doppler spread, DOA (direction of arrival),
AS (angle spread), and PAS (power azimuth spectrum) of the scatterers,
as well as antenna spacing and wavelength. The model can be applied
flexibly, with low computational complexity, in research on various MIMO techniques.
Abstract: The join dependency provides the basis for obtaining
lossless join decomposition in a classical relational schema. The
existence of a join dependency shows that the tables always
represent the correct data after being joined. Since the classical
relational databases cannot handle imprecise data, they were
extended to fuzzy relational databases so that uncertain, ambiguous,
imprecise and partially known information can also be stored in
databases in a formal way. However, like classical databases,
fuzzy relational databases also undergo decomposition during
normalization, and the issue of joining the decomposed fuzzy relations
remains open. Our effort in the present paper is to emphasize this
issue. We define fuzzy join dependency in the
framework of type-1 fuzzy relational databases and type-2 fuzzy
relational databases using the concept of fuzzy equality which is
defined using fuzzy functions. We use the fuzzy equi-join operator
for computing the fuzzy equality of two attribute values. We also
discuss the dependency preservation property on execution of this
fuzzy equi-join and derive the necessary condition for the fuzzy
functional dependencies to be preserved on joining the decomposed
fuzzy relations. We also derive the conditions for fuzzy join
dependency to exist in context of both type-1 and type-2 fuzzy
relational databases. We find that, unlike in classical relational
databases, even the existence of a trivial join dependency does not
ensure lossless join decomposition in type-2 fuzzy relational
databases. Finally, we derive the conditions for the fuzzy equality to
be non-zero and for an attribute to qualify as a fuzzy key.
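The paper defines fuzzy equality via fuzzy functions; as a simplified stand-in, a linear membership function over the difference of attribute values can illustrate the fuzzy equi-join. The tolerance, threshold, and relations below are hypothetical, not the paper's definitions.

```python
def fuzzy_equal(a, b, tolerance=2.0):
    """Graded equality: 1 when the values match, decaying linearly to 0 at the tolerance."""
    return max(0.0, 1.0 - abs(a - b) / tolerance)

def fuzzy_equi_join(r1, r2, attr1, attr2, threshold=0.5):
    """Pair tuples whose attribute values are fuzzily equal above the threshold."""
    result = []
    for t1 in r1:
        for t2 in r2:
            mu = fuzzy_equal(t1[attr1], t2[attr2])
            if mu >= threshold:
                result.append((t1, t2, mu))   # keep the membership degree with each pair
    return result
```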
Abstract: In this research paper we present a control
architecture for robotic arm movement and trajectory planning using
Fuzzy Logic (FL) and Genetic Algorithms (GAs). The architecture is
used to compensate for uncertainties such as movement, friction, and
settling time in robotic arm movement. The genetic algorithms and
fuzzy logic are used to meet the objective of optimal control of the
robotic arm's movement. The proposed technique represents a
general model for redundant structures and may be extended to other
structures. Results show optimal angular movement of the joints as a result
of the evolutionary process. This technique has an edge over other
techniques owing to its minimal mathematical complexity.
Abstract: The aim of this research is to use artificial neural network computing technology to estimate the net heating value (NHV) of crude oil from its properties. The approach is based on training a neural network simulator, using back-propagation as the learning algorithm, on a predefined range of analytically generated well test responses. A network with 8 neurons in one hidden layer was selected, and its predictions are in good agreement with experimental data.
Abstract: The deep and radical social reforms of the nineties of the
last century in many Eastern European countries caused changes in
the field of Information Technology (IT). Inefficient information
technologies were rapidly replaced with forefront IT solutions; for example,
Eastern European countries now have a high penetration of
high-quality, high-speed Internet. The authors have taken part in the
introduction of those changes at Latvia's leading IT research
institute. Based on their experience, the authors offer in this paper an
IT-services-based model for analyzing the mentioned change and
development processes in the higher education and research fields,
i.e., for the development of research e-infrastructure. Compared to
international practice, such services were developed in Eastern Europe
in an untraditional way, which enabled swift and positive
technological changes.