Abstract: In this research, a particle swarm optimization (PSO) algorithm is proposed for the no-wait flowshop scheduling problem with sequence-dependent setup times and weighted earliness-tardiness penalties as the criterion (Fm | no-wait, SDST | Σ(α_j E_j + β_j T_j)). The smallest position value (SPV) rule is applied to convert the continuous position vectors of particles in PSO into job permutations. A timing algorithm is developed to find the optimal schedule and calculate the objective function value of a given sequence in the PSO algorithm. Two different neighborhood structures are applied to improve the solution quality of the PSO algorithm. The first one is based on variable neighborhood search (VNS) and the second one is a simple one with a fixed structure. In order to compare the performance of the two neighborhood structures, random test problems are generated and solved by both approaches. Computational results show that the VNS-based algorithm has better performance than the other one, especially for large-sized problems.
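As an illustration of the SPV mapping described above, the following minimal Python sketch (not part of the original work; the position values are arbitrary) converts a continuous particle position into a job permutation by sorting:

    import numpy as np

    # Smallest Position Value (SPV) rule: the job with the smallest position
    # value is scheduled first, the next smallest second, and so on.
    def spv_permutation(position):
        """Map a continuous PSO position vector to a job permutation."""
        return np.argsort(position)        # job indices in ascending value order

    # Illustrative particle position for a 5-job instance (values are arbitrary).
    position = np.array([1.8, -0.7, 2.3, 0.1, -1.4])
    print(spv_permutation(position))       # -> [4 1 3 0 2], i.e. job 4 first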
Abstract: Loop detectors report traffic characteristics in real
time. They are at the core of the traffic control process. Intuitively,
one would expect that as density of detection increases, so would
the quality of estimates derived from detector data. However, as
detector deployment increases, the associated operating and
maintenance cost increases. Thus, traffic agencies often need to
decide where to add new detectors and which detectors should
continue receiving maintenance, given their resource constraints.
This paper evaluates the effect of detector spacing on freeway
travel time estimation. A freeway section (Interstate-15) in the Salt Lake City metropolitan region is examined. The research reveals
that travel time accuracy does not necessarily deteriorate with
increased detector spacing. Rather, the actual location of detectors
has far greater influence on the quality of travel time estimates.
The study presents an innovative computational approach that
delivers optimal detector locations through a process that relies on
a Genetic Algorithm formulation.
Abstract: The PRAF family of proteins is a plant-specific family of proteins with a distinct domain architecture and various unique sequence/structure traits. We have carried out an extensive search of the Arabidopsis genome using an automated pipeline and manual methods to verify previously known and identify previously unknown instances of PRAF proteins, characterize their sequences, and build 3D structures of their individual domains. Integrating the sequence, structure and the limited experimental details known for each of these proteins and their domains, we present a comprehensive characterization of the different domains in these proteins and their variant properties.
Abstract: Due to the increasing penetration of wind energy, it is
necessary to possess design tools that are able to simulate the impact
of these installations in utility grids. In order to provide a net
contribution to this issue, a detailed wind park model has been developed and is briefly presented. However, the computational costs associated with the use of such a detailed model to describe the behavior of a wind park composed of a considerable number of units may render its practical application very difficult. To overcome this problem, integral manifold theory has been applied to reduce the order of the detailed wind park model and thereby create the conditions for the development of a dynamic equivalent that is able to retain the relevant dynamics with respect to the existing a.c. system. In this paper, the integral manifold method is introduced for order reduction. Simulation results show that the integral manifold method fits the detailed model results with higher precision than the singular perturbation method.
Abstract: One of the most used assumptions in logic programming
and deductive databases is the so-called Closed World Assumption
(CWA), according to which the atoms that cannot be inferred
from the programs are considered to be false (i.e. a pessimistic
assumption). One of the most successful semantics of conventional
logic programs based on the CWA is the well-founded semantics.
However, the CWA is not applicable in all circumstances when
information is handled. That is, the well-founded semantics, if
conventionally defined, would behave inadequately in different cases.
The solution we adopt in this paper is to extend the well-founded
semantics in order for it to be based also on other assumptions. The
basis of (default) negative information in the well-founded semantics
is given by the so-called unfounded sets. We extend this concept
by considering optimistic, pessimistic, skeptical and paraconsistent
assumptions, used to complete missing information from a program.
Our semantics, called the extended well-founded semantics, also expresses imperfect information considered to be missing/incomplete,
uncertain and/or inconsistent, by using bilattices as multivalued
logics. We provide a method of computing the extended well-founded
semantics and show that Kripke-Kleene semantics is captured by
considering a skeptical assumption. We also show that the complexity of computing our semantics is polynomial.
Abstract: In this paper, an efficient incomplete factorization preconditioner is proposed for the Least Mean Squares (LMS) adaptive filter. The proposed preconditioner is approximated from a priori knowledge of the factors of the input correlation matrix with an incomplete strategy, motivated by the sparsity pattern of the upper triangular factor in the QRD-RLS algorithm. The convergence properties of the IPLMS algorithm are comparable with those of the transform domain LMS (TDLMS) algorithm. Simulation results show the efficiency and robustness of the proposed algorithm with reduced computational complexity.
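For orientation, a generic preconditioned-LMS sketch is given below; it is not the paper's IPLMS construction (the incomplete factorization of the correlation-matrix factors is not reproduced), and the preconditioner, filter order and signals are placeholder assumptions:

    import numpy as np

    def preconditioned_lms(x, d, P, mu=0.01, order=8):
        """Generic preconditioned LMS: w <- w + mu * (P @ u) * e,
        where P approximates the inverse of the input correlation matrix."""
        w = np.zeros(order)
        y = np.zeros(len(x))
        for n in range(order, len(x)):
            u = x[n - order + 1:n + 1][::-1]   # current regressor, newest sample first
            y[n] = w @ u                       # filter output
            e = d[n] - y[n]                    # error against the desired signal
            w += mu * (P @ u) * e              # preconditioned gradient step
        return w, y

    # Illustrative use with an identity preconditioner (reduces to plain LMS).
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000)
    d = np.convolve(x, [0.5, -0.3, 0.1], mode="full")[:1000]   # unknown system
    w, y = preconditioned_lms(x, d, P=np.eye(8))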
Abstract: The design of a gravity dam is performed through an iterative process involving a preliminary layout of the structure followed by a stability and stress analysis. This study presents a method to determine the optimal top width of a gravity dam with a genetic algorithm. To solve the optimization task (minimize the cost of the
dam), an optimization routine based on genetic algorithms (GAs) was
implemented into an Excel spreadsheet. It was found to perform well
and GA parameters were optimized in a parametric study. Using the
parameters found in the parametric study, the top width of gravity
dam optimization was performed and compared to a gradient-based
optimization method (classic method). The results of the two methods were in close agreement. In the optimum dam cross-section, the ratio of dam base to dam height is approximately 0.85, and the ratio of dam top width to dam height is approximately 0.13. The computerized methodology may help compute the optimal top width for a wide range of gravity dam heights.
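To illustrate the kind of GA search used, here is a minimal real-coded GA sketch for a one-variable design search; the cost function is a stand-in, not the study's dam cost model, and the population size, rates and bounds are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)

    def cost(top_width):
        # Placeholder cost surrogate; the real study evaluates the cross-section
        # cost subject to stability and stress checks in a spreadsheet.
        return (top_width - 6.5) ** 2 + 10.0

    def ga_minimize(lo=1.0, hi=15.0, pop_size=30, generations=50,
                    crossover_rate=0.8, mutation_rate=0.1):
        pop = rng.uniform(lo, hi, pop_size)
        for _ in range(generations):
            fit = np.array([cost(w) for w in pop])
            # binary tournament selection
            parents = np.array([pop[min(rng.integers(0, pop_size, 2), key=lambda i: fit[i])]
                                for _ in range(pop_size)])
            # arithmetic crossover between consecutive parents
            children = parents.copy()
            for i in range(0, pop_size - 1, 2):
                if rng.random() < crossover_rate:
                    a = rng.random()
                    children[i] = a * parents[i] + (1 - a) * parents[i + 1]
                    children[i + 1] = a * parents[i + 1] + (1 - a) * parents[i]
            # Gaussian mutation, clipped back into the design bounds
            mutate = rng.random(pop_size) < mutation_rate
            children[mutate] += rng.normal(0.0, 0.5, mutate.sum())
            pop = np.clip(children, lo, hi)
        return pop[np.argmin([cost(w) for w in pop])]

    print(ga_minimize())   # converges near the surrogate optimum of 6.5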
Abstract: Cell formation is the first step in the design of cellular
manufacturing systems. In this study, a general purpose
computational scheme employing a hybrid tabu search algorithm as
the core is proposed to solve the cell formation problem and its
variants. In the proposed scheme, considerable flexibility is left to the users. The core solution-searching algorithm embedded in the scheme can easily be replaced by other meta-heuristic algorithms, such as simulated annealing or genetic algorithms, based on the characteristics of the problems to be solved or the preferences the users might have. In addition, several counters are designed to control the timing of conducting intensified and diversified solution-searching strategies interactively.
Abstract: Three new algorithms, based on minimization of the autocorrelation of transmitted symbols and the SLM approach, which are computationally less demanding, have been proposed. In the first algorithm, the autocorrelation of the complex data sequence is minimized to a value of 1, which results in a reduction of the peak-to-average power ratio (PAPR). The second algorithm generates multiple random sequences from the sequence generated in the first algorithm with the same autocorrelation value, i.e. 1. Out of these, the sequence with the minimum PAPR is transmitted. The third algorithm is an extension of the second algorithm and requires minimum side information to be transmitted. Multiple sequences are generated by modifying a fixed number of complex numbers in an OFDM data sequence using only one factor. The multiple sequences represent the same data sequence, and the one giving the minimum PAPR is transmitted. Simulation results for a 256-subcarrier OFDM system show that a significant reduction in PAPR is achieved using the proposed algorithms.
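As a point of reference, the sketch below shows the generic SLM selection step (generate several candidates for the same data and transmit the one with the lowest PAPR); the autocorrelation-minimization and side-information aspects of the three proposed algorithms are not reproduced, and all parameters are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    N, U = 256, 8                        # subcarriers, number of candidate sequences

    def papr_db(x):
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)   # QPSK symbols

    # SLM: multiply the frequency-domain data by U random phase sequences,
    # take the IFFT of each, and keep the candidate with the lowest PAPR.
    phases = np.exp(1j * 2 * np.pi * rng.random((U, N)))
    candidates = np.fft.ifft(data * phases, axis=1)
    paprs = np.array([papr_db(c) for c in candidates])
    best = np.argmin(paprs)
    print(f"best candidate {best}: PAPR = {paprs[best]:.2f} dB")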
Abstract: Classical Bose-Chaudhuri-Hocquenghem (BCH) codes C that contain their dual codes can be used to construct quantum stabilizer codes; this chapter studies the properties of such codes. It has been shown that a BCH code of length n which contains its dual code satisfies a bound on the weight of any non-zero codeword in C, and the converse is also true. One significant difficulty in quantum communication and computation is to protect information-carrying quantum states against undesired interactions with the environment. To address this difficulty, many good quantum error-correcting codes have been derived as binary stabilizer codes. We were able to shed more light on the structure of dual-containing BCH codes. These results make it possible to determine the parameters of quantum BCH codes in terms of the weights of non-zero dual codewords.
Abstract: Support vector machines (SVMs) are considered to be
the best machine learning algorithms for minimizing the predictive
probability of misclassification. However, their drawback is that for
large data sets the computation of the optimal decision boundary is a
time-consuming function of the size of the training set. Hence several
methods have been proposed to speed up the SVM algorithm. Here
three methods used to speed up the computation of the SVM
classifiers are compared experimentally using a musical genre
classification problem. The simplest method pre-selects a random
sample of the data before the application of the SVM algorithm. Two
additional methods use proximity graphs to pre-select data that are
near the decision boundary. One uses k-Nearest Neighbor graphs and
the other Relative Neighborhood Graphs to accomplish the task.
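A sketch of the proximity-based pre-selection idea is given below, using scikit-learn and a synthetic dataset; keeping points whose k nearest neighbours include an opposite-class point is an assumed criterion standing in for the k-NN-graph and relative-neighborhood-graph editing rules compared in the paper:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neighbors import NearestNeighbors
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

    # Keep only points whose k nearest neighbours contain at least one point of
    # another class; such points tend to lie near the decision boundary.
    k = 10
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                      # idx[:, 0] is the point itself
    near_boundary = (y[idx[:, 1:]] != y[:, None]).any(axis=1)

    svm_full = SVC(kernel="rbf").fit(X, y)                          # full training set
    svm_pre = SVC(kernel="rbf").fit(X[near_boundary], y[near_boundary])  # reduced set
    print(near_boundary.sum(), "of", len(X), "points kept for training")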
Abstract: A novel methodology has been used to design an
evaporator coil of a refrigerant. The methodology follows a complete Computer Aided Design / Computer Aided Engineering approach, by means of a Computational Fluid Dynamics / Finite Element Analysis model which is executed many times for the thermal-fluid exploration of several design configurations by a commercial optimizer. Hence the design is carried out automatically by parallel computations, with an optimization package taking the decisions rather than the design engineer. The engineer instead takes decisions regarding the physical settings and initialization of the computational models to employ, the number and extent of the geometrical parameters of the coil fins, and the optimization tools to be employed. The final design of the coil geometry was found to be better than the initial design.
Abstract: The term interactive education refers to the multidisciplinary aspects of distance education that follow contemporary means around a common basis with different functional requirements. The aim of this paper is to reflect on the new techniques in education together with new methods and inventions. These methods are better supported by interactivity. The integration of interactive facilities into education through distance learning is not a new concept, but the usage of these methods is only now being adapted to design education. In this paper, the general approach of this method is presented and, after the analysis of different samples, the advantages and disadvantages of these approaches are identified. The method of this paper is to evaluate the related samples and then analyze the main hypothesis. The main focus is on the formation processes of this form of education. Technological developments in education should be filtered according to the necessities of design education, and the structure of the system can then be formed or renewed. The conclusion indicates that interactive education in design captures not only technical and computational intelligence aspects but also aesthetic and artistic approaches coming together around the same purpose.
Abstract: The Power Spectral Density (PSD) computed by taking the Fourier transform of the auto-correlation function (Wiener-Khintchine theorem) gives better results for noisy data than the Periodogram approach. However, the computational complexity of the Wiener-Khintchine approach is higher than that of the Periodogram approach. For the computation of the short-time Fourier transform (STFT), this problem becomes even more prominent, since the PSD must be computed after every shift of the analysis window. In this paper, a recursive version of the Wiener-Khintchine theorem is derived using the sliding DFT approach for the computation of the STFT. The computational complexity of the proposed recursive Wiener-Khintchine algorithm, for a window size of N, is O(N).
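The sliding DFT recursion that the proposed algorithm builds on can be sketched as follows; this illustrative Python code updates all N bins in O(N) per new sample and forms a periodogram-style PSD per window, rather than the paper's recursive autocorrelation-based (Wiener-Khintchine) update:

    import numpy as np

    def sliding_dft_psd(x, N=64):
        """Update an N-point DFT one sample at a time (O(N) per shift) and
        return the PSD of each full window as |X_k|^2 / N."""
        twiddle = np.exp(2j * np.pi * np.arange(N) / N)
        X = np.fft.fft(x[:N])                     # DFT of the first window
        psds = [np.abs(X) ** 2 / N]
        for n in range(N, len(x)):
            # sample x[n] enters the window, x[n - N] leaves it
            X = (X + x[n] - x[n - N]) * twiddle
            psds.append(np.abs(X) ** 2 / N)
        return np.array(psds)

    x = np.cos(2 * np.pi * 0.1 * np.arange(512)) + 0.1 * np.random.randn(512)
    print(sliding_dft_psd(x).shape)               # one PSD per window position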
Abstract: In the MPEG and H.26x standards, motion estimation is used to eliminate temporal redundancy. Given that the
motion estimation stage is very complex in terms of computational
effort, a hardware implementation on a re-configurable circuit is
crucial for the requirements of different real-time multimedia applications. In this paper, we present a hardware architecture for motion estimation based on the "Full Search Block Matching" (FSBM) algorithm. This architecture provides minimum latency, maximum throughput and full utilization of hardware resources such as embedded memory blocks, combining both pipelining and parallel processing techniques. Our design is described in VHDL, verified by simulation and implemented in a Stratix II EP2S130F1020C4 FPGA circuit. The experimental results show that the optimum operating clock frequency of the proposed design is 89 MHz, which achieves 160 Mpixels/s.
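For reference, a software model of the FSBM criterion (not the VHDL architecture) can be sketched as follows; the block size, search range and test frames are illustrative assumptions:

    import numpy as np

    def full_search(current, reference, bx, by, block=16, search=7):
        """Exhaustively test every candidate displacement within +/-search pixels
        and return the motion vector with the minimum sum of absolute differences."""
        cur = current[by:by + block, bx:bx + block].astype(int)
        best_mv, best_sad = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + block > reference.shape[0] or x + block > reference.shape[1]:
                    continue                      # candidate block falls outside the frame
                ref = reference[y:y + block, x:x + block].astype(int)
                sad = np.abs(cur - ref).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dx, dy)
        return best_mv, best_sad

    # Illustrative frames: the reference is the current frame shifted by (3, -2).
    rng = np.random.default_rng(0)
    cur_frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    ref_frame = np.roll(cur_frame, shift=(-2, 3), axis=(0, 1))
    print(full_search(cur_frame, ref_frame, bx=24, by=24))   # -> ((3, -2), 0)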
Abstract: In this contribution, a newly developed e-learning environment is presented, which incorporates Intelligent Agents and Computational Intelligence Techniques. The new e-learning environment consists of three parts: the E-learning platform Front-End, the Student Questioner Reasoning and the Student Model Agent. These parts are geographically distributed across dispersed computer servers, with the main focus on the design and development of these subsystems through the use of new and emerging technologies. The parts are interconnected in an interoperable way, using web services for the integration of the subsystems, in order to enhance the user modelling procedure and achieve the goals of the learning process.
Abstract: In this work, a Modified Functional Link Artificial Neural Network (M-FLANN) is proposed, which is simpler than a Multilayer Perceptron (MLP) and improves upon the universal approximation capability of the Functional Link Artificial Neural Network (FLANN). The MLP and its variants, the Direct Linear Feedthrough Artificial Neural Network (DLFANN), FLANN and M-FLANN, have been implemented to model a simulated Water Bath System and a Continually Stirred Tank Heater (CSTH). Their
convergence speed and generalization ability have been compared.
The networks have been tested for their interpolation and
extrapolation capability using noise-free and noisy data. The results
show that M-FLANN, which is computationally cheap, performs better and has greater generalization ability than the other networks considered in this work.
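To indicate the type of network involved, the sketch below implements a basic trigonometric FLANN trained with an LMS-style rule; it does not reproduce the M-FLANN modification, the DLFANN variant, or the plant models used in the paper:

    import numpy as np

    def expand(x, harmonics=2):
        """Trigonometric functional expansion of a scalar input in [-1, 1]."""
        feats = [1.0, x]
        for h in range(1, harmonics + 1):
            feats += [np.sin(h * np.pi * x), np.cos(h * np.pi * x)]
        return np.array(feats)

    def train_flann(xs, ys, lr=0.05, epochs=200, harmonics=2):
        w = np.zeros(2 + 2 * harmonics)
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                phi = expand(x, harmonics)
                e = y - w @ phi                  # output error
                w += lr * e * phi                # LMS-style weight update
        return w

    # Illustrative static nonlinearity to be modelled.
    xs = np.linspace(-1, 1, 200)
    ys = 0.6 * xs ** 3 + 0.3 * np.sin(np.pi * xs)
    w = train_flann(xs, ys)
    pred = np.array([w @ expand(x) for x in xs])
    print("RMSE:", np.sqrt(np.mean((pred - ys) ** 2)))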
Abstract: In this paper, a semi-fragile watermarking scheme is proposed for color image authentication. In this scheme, the color image is first transformed from the RGB to the YST color space, which is suitable for watermarking color media. Each channel is divided into 4×4 non-overlapping blocks, and each of its 2×2 sub-blocks is selected. The embedding space is created by setting the two LSBs of the selected sub-block to zero; this space holds the authentication and recovery information. For verification, authentication and parity bits, denoted by 'a' and 'p', are computed for each 2×2 sub-block. For recovery, the intensity mean of each 2×2 sub-block is computed and encoded into six to eight bits, depending on the channel selected. The size of the sub-block is important for correct localization and fast computation. For watermark distribution, a 2D Torus Automorphism is implemented using a private key to obtain a secure mapping of blocks. The perceptibility of the watermarked image is quite reasonable, both subjectively and objectively. Our scheme is oblivious, correctly localizes the tampering and is able to recover the original work with probability near one.
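The block-scattering step can be illustrated with a standard 2D torus automorphism; the key value, grid size and the use of a single iteration are placeholder assumptions, and the embedding and recovery steps of the scheme are not reproduced:

    import numpy as np

    def torus_map(x, y, k, N):
        """One iteration of the 2D torus automorphism
        [x', y'] = [[1, 1], [k, k+1]] [x, y]  (mod N); det = 1, so it is invertible."""
        return (x + y) % N, (k * x + (k + 1) * y) % N

    # Map every block coordinate of an N x N block grid using a private key k.
    N, k = 16, 5                                   # 16x16 blocks, key value (placeholder)
    mapping = {(x, y): torus_map(x, y, k, N) for x in range(N) for y in range(N)}
    # Sanity check: the mapping is a permutation of the block grid.
    assert len(set(mapping.values())) == N * N
    print(mapping[(3, 7)])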
Abstract: The analytical prediction of the decay heat resulting from the fast-neutron fission of actinides was initiated under project 10-MAT1134-3, funded by King Abdulaziz City of Science and Technology (KASCT), Long-Term Comprehensive National Plan for Science, Technology and Innovations, managed by a team from King Abdulaziz University (KAU), Saudi Arabia, and supervised by Argonne National Laboratory (ANL), which has collaborated with KAU's team to assist in the computational analysis. In this paper, the numerical solution of the coupled linear differential equations that describe the decays and buildups of the minor fission products (MFA) has been used to predict the total decay heat and its components from the fast-neutron fission of 235U and 239Pu. The reliability of the present approach is illustrated via systematic comparisons with the measurements reported by the University of Tokyo for the YAYOI reactor.
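The kind of coupled decay/buildup system involved can be sketched as a linear ODE dN/dt = A N solved with a matrix exponential; the chain, decay constants and energies below are placeholders, not the evaluated fission-product data used in the study:

    import numpy as np
    from scipy.linalg import expm

    # Placeholder three-member decay chain: 1 -> 2 -> 3 (stable).
    lam = np.array([1e-2, 5e-3, 0.0])       # decay constants (1/s), illustrative
    E = np.array([0.8, 1.2, 0.0])           # mean energy per decay (MeV), illustrative
    A = np.array([[-lam[0],      0.0, 0.0],
                  [ lam[0], -lam[1], 0.0],
                  [ 0.0,     lam[1], 0.0]])  # buildup of each daughter from its parent

    N0 = np.array([1e20, 0.0, 0.0])          # initial inventory after a fission pulse

    def decay_heat(t):
        """Decay power P(t) = sum_i lambda_i * N_i(t) * E_i, with N(t) = expm(A t) N0."""
        N = expm(A * t) @ N0
        return np.sum(lam * N * E)            # MeV/s

    for t in (1.0, 10.0, 100.0, 1000.0):
        print(f"t = {t:7.1f} s   decay heat = {decay_heat(t):.3e} MeV/s")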
Abstract: The finite-difference time-domain (FDTD) method is
one of the most widely used computational methods in
electromagnetics. This paper describes the design of two-dimensional
(2D) FDTD simulation software for transverse magnetic (TM)
polarization using Berenger's split-field perfectly matched layer
(PML) formulation. The software is developed using the MATLAB programming language. Numerical examples validate the software.
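A minimal 2D TM (Ez, Hx, Hy) FDTD update loop is sketched below in Python/NumPy for illustration; the published software is written in MATLAB and uses Berenger's split-field PML, which is omitted here (a simply truncated grid is used instead), and the grid size, time steps and source are illustrative:

    import numpy as np

    c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
    nx, ny, nt = 200, 200, 300
    dx = dy = 1e-3
    dt = dx / (c0 * np.sqrt(2))                    # 2D Courant stability limit

    Ez = np.zeros((nx, ny))
    Hx = np.zeros((nx, ny - 1))
    Hy = np.zeros((nx - 1, ny))

    for n in range(nt):
        # Update magnetic fields from the curl of Ez (leapfrog half-step).
        Hx -= dt / (mu0 * dy) * (Ez[:, 1:] - Ez[:, :-1])
        Hy += dt / (mu0 * dx) * (Ez[1:, :] - Ez[:-1, :])
        # Update Ez at interior nodes from the curl of H.
        Ez[1:-1, 1:-1] += dt / eps0 * (
            (Hy[1:, 1:-1] - Hy[:-1, 1:-1]) / dx -
            (Hx[1:-1, 1:] - Hx[1:-1, :-1]) / dy)
        # Soft source: a Gaussian pulse injected at the grid centre.
        Ez[nx // 2, ny // 2] += np.exp(-((n - 60) / 20.0) ** 2)

    print("peak |Ez| after", nt, "steps:", np.abs(Ez).max())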