Abstract: A procedure commonly used in the Job Shop Scheduling Problem (JSSP) to evaluate the neighborhood functions employed by non-deterministic algorithms is the calculation of the critical path in a digraph. This paper presents an experimental study of the computational cost of calculating the critical path in solutions of large JSSP instances. The results indicate that if the critical path is used to generate neighborhoods in the meta-heuristics applied to the JSSP, the computational cost is high, despite the fact that calculating the critical path in any digraph has polynomial complexity.
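As a point of reference for the cost being measured, the standard polynomial-time critical-path (longest-path) computation on a weighted DAG can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the node indices and arc weights in the usage example are invented:

```python
# Longest (critical) path in a weighted DAG via topological ordering.
from collections import defaultdict, deque

def critical_path(n, edges):
    """n: number of nodes (0..n-1); edges: list of (u, v, w) arcs."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    dist = [0] * n          # longest distance reaching each node
    pred = [None] * n       # predecessor on the critical path
    q = deque(i for i in range(n) if indeg[i] == 0)
    while q:
        u = q.popleft()
        for v, w in adj[u]:
            if dist[u] + w > dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    # Recover the path ending at the node with maximum distance.
    end = max(range(n), key=dist.__getitem__)
    path, node = [], end
    while node is not None:
        path.append(node)
        node = pred[node]
    return dist[end], path[::-1]
```

For example, `critical_path(4, [(0, 1, 3), (0, 2, 2), (1, 3, 4), (2, 3, 6)])` returns the critical length 8 along the path 0 → 2 → 3. Each arc is relaxed exactly once, which is the polynomial behaviour the abstract contrasts with the high aggregate cost of repeating this inside a neighborhood-generation loop.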
Abstract: This paper investigates the application of the Particle Swarm Optimization (PSO) technique for the coordinated design of a Power System Stabilizer (PSS) and a Thyristor Controlled Series Compensator (TCSC)-based controller to enhance power system stability. The design problem of the PSS and TCSC-based controllers is formulated as a time-domain optimization problem, and the PSO algorithm is employed to search for optimal controller parameters. By minimizing a time-domain objective function involving the deviation in the oscillatory rotor speed of the generator, the stability performance of the system is improved. To compare the capabilities of the PSS and the TCSC-based controller, both are first designed independently and then in a coordinated manner. The proposed controllers are tested on a weakly connected power system. Eigenvalue analysis and non-linear simulation results are presented to show the effectiveness of the coordinated design approach over individual designs. The simulation results show that the proposed controllers are effective in damping low-frequency oscillations resulting from various small disturbances, such as changes in the mechanical power input and the reference voltage setting.
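The PSO search loop used to tune the controller parameters can be sketched generically as follows; the sphere objective in the usage example is a stand-in for the paper's time-domain speed-deviation criterion, and all swarm constants and bounds are illustrative assumptions:

```python
# Minimal PSO sketch: minimize `objective` over a box-bounded search space.
import random

def pso(objective, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `objective` would run a time-domain simulation of the disturbed system and return an integral of the rotor-speed deviation; here a simple quadratic, e.g. `pso(lambda x: sum((xi - 1.0) ** 2 for xi in x), dim=2, bounds=(-5.0, 5.0))`, suffices to exercise the loop.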
Abstract: Workflow Management Systems (WfMS) allow organizations to streamline and automate business processes and reengineer their structure. One important requirement for this type of system is the management and computation of the Quality of Service (QoS) of processes and workflows. Currently, a range of Web process and workflow languages exist, and each language can be characterized by the set of patterns it supports. Developing and implementing a suitable, generic algorithm to compute the QoS of processes that have been designed using different languages is a difficult task, because some patterns are specific to particular process languages and new patterns may be introduced in future versions of a language. In this paper, we describe an adaptive algorithm implemented to cope with these two problems. The algorithm is called adaptive because it can be changed dynamically as the patterns of a process language change.
Abstract: A novel idea presented in this paper is to combine
multihop routing with single-frequency networks (SFNs) for a
broadcasting scenario. An SFN is a set of multiple nodes that transmit
the same data simultaneously, resulting in transmitter macrodiversity.
Two of the most important performance factors of multihop
networks, node reachability and routing robustness, are analyzed.
Simulation results show that our proposed SFN-D routing algorithm
improves the node reachability by 37 percentage points as compared
to non-SFN multihop routing. It shows a diversity gain of 3.7 dB,
meaning that 3.7 dB lower transmission powers are required for the
same reachability. Even better results are possible for larger
networks. If an important node becomes inactive, this algorithm can
find new routes that a non-SFN scheme would not be able to find.
Thus, two of the major problems in multihopping are addressed: achieving robust routing and improving node reachability or reducing transmission power.
Abstract: In the Equivalent Transformation (ET) computation
model, a program is constructed by the successive accumulation of
ET rules. A meta-computation method by which correct ET rules are generated has been proposed. Although the method covers a broad range in the generation of ET rules, not all important ET rules are necessarily generated. More ET rules can be generated by supplementing the method with generation procedures specialized for important ET rules. A Specialization-by-Equation (Speq) rule is one of those important rules. A Speq rule describes a procedure in which two variables included in an atom conjunction are equalized due to predicate constraints. In this paper, we propose an algorithm that systematically and recursively generates Speq rules and discuss its effectiveness in the synthesis of ET programs. A Speq rule is generated based on the proof of a logical formula consisting of a given atom set and a disequality. The proof is carried out by utilizing some ET rules, and the rules ultimately obtained are used in generating Speq rules.
Abstract: Microstrip lines, widely used for good reason, are
broadband in frequency and provide circuits that are compact and
light in weight. They are generally economical to produce since they
are readily adaptable to hybrid and monolithic integrated circuit (IC)
fabrication technologies at RF and microwave frequencies. Although the existing EM simulation models used for the synthesis and analysis of microstrip lines are reasonably accurate, they are computationally intensive and time-consuming. Neural networks have recently gained attention as fast and flexible vehicles for microwave modeling, simulation and optimization. After learning and abstracting from microwave data through a process called training, neural network models are used during microwave design to provide instant answers to the task learned. This paper presents simple and accurate ANN models for the synthesis and analysis of microstrip lines that compute the characteristic parameters and the physical dimensions, respectively, more accurately for the required design specifications.
Abstract: Psoriasis is a chronic inflammatory skin condition which affects 2-3% of the population around the world. The Psoriasis Area and Severity Index (PASI) is the gold standard for assessing psoriasis severity as well as treatment efficacy. Although a gold standard, PASI is rarely used because it is tedious and complex. In practice, the PASI score is determined subjectively by dermatologists; therefore, inter- and intra-rater variations in assessment can occur even among expert dermatologists. This research develops an algorithm to assess psoriasis lesions for PASI scoring objectively. The focus of this research is thickness assessment, one of the four PASI parameters beside area, erythema and scaliness. Psoriasis lesion thickness is measured by averaging the total elevation from the lesion base to the lesion surface. Thickness values of 122 3D images taken from 39 patients are grouped into 4 PASI thickness scores using K-means clustering. Validation of the lesion base construction is performed using twelve body curvature models and shows good results, with a coefficient of determination (R2) equal to 1.
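The grouping of one-dimensional thickness values into four PASI score clusters can be sketched with a plain K-means loop; this is a minimal illustration with invented sample values, and the initialization below (centers spread over the sorted data, assuming at least k values) is an assumption, not the paper's procedure:

```python
# One-dimensional K-means: group thickness values into k score clusters.
def kmeans_1d(values, k=4, iters=100):
    step = max(1, len(values) // k)
    centers = sorted(values)[::step][:k]   # spread the initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest center.
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[i].append(v)
        # Recompute each center as the mean of its cluster.
        new = [sum(c) / len(c) if c else centers[j]
               for j, c in enumerate(clusters)]
        if new == centers:                 # converged
            break
        centers = new
    return centers, clusters
```

For example, `kmeans_1d([0.1, 0.2, 1.0, 1.1, 2.0, 2.1, 3.0, 3.1], k=4)` separates the four natural groups, whose sorted centers would then be mapped to PASI thickness scores.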
Abstract: This paper presents an implementation of an object tracking system for video sequences. Object tracking is an important task in many vision applications. Video analysis involves two main steps: detection of interesting moving objects and tracking of such objects from frame to frame. Most tracking algorithms use pre-specified methods for preprocessing. In our work, we have implemented several object tracking algorithms (Meanshift, Camshift, Kalman filter) with different preprocessing methods, and we have evaluated the performance of these algorithms on different video sequences. The results obtained show good performance according to the degree of applicability and the evaluation criteria.
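The predict/update cycle of the Kalman filter tracker mentioned above can be sketched for a one-dimensional constant-velocity model; all noise parameters here are illustrative assumptions, not the paper's settings, and a real tracker would use a 2-D state over image coordinates:

```python
# 1-D constant-velocity Kalman filter: smooth a sequence of noisy positions.
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.5):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance (assumed)
    R = np.array([[r]])                     # measurement noise covariance (assumed)
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # Predict the state forward one frame.
        x = F @ x
        P = F @ P @ F.T + Q
        # Correct with the new measurement.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```

Running it on a noiseless ramp such as `kalman_track([0.0, 1.0, 2.0, 3.0, 4.0])` shows the estimate locking onto both the position and the implied velocity within a few frames.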
Abstract: This work proposes an approach to address automatic
text summarization. This approach is a trainable summarizer, which
takes into account several features, including sentence position,
positive keyword, negative keyword, sentence centrality, sentence
resemblance to the title, sentence inclusion of named entities, sentence
inclusion of numerical data, sentence relative length, Bushy path of
the sentence and aggregated similarity for each sentence to generate
summaries. First we investigate the effect of each sentence feature on
the summarization task. Then we use a score function over all features to train genetic algorithm (GA) and mathematical regression (MR) models and obtain a suitable combination of feature weights. The performance of the proposed approach is measured at several compression rates on a data corpus composed of 100 English religious articles. The results of the proposed approach are promising.
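The linear combination of feature scores used to rank sentences can be sketched as follows; the feature names, weights and compression rate in the usage example are illustrative stand-ins for the ten features and the GA/MR-trained weights described above:

```python
# Rank sentences by a weighted sum of normalized feature scores and keep the
# top fraction (the compression rate), preserving original sentence order.
def sentence_score(features, weights):
    """features, weights: dicts mapping feature name -> value in [0, 1]."""
    return sum(weights[name] * value for name, value in features.items())

def summarize(sentences, weights, compression=0.3):
    """sentences: list of (text, features) pairs."""
    ranked = sorted(range(len(sentences)),
                    key=lambda i: sentence_score(sentences[i][1], weights),
                    reverse=True)
    keep = set(ranked[:max(1, int(len(sentences) * compression))])
    return [sentences[i][0] for i in range(len(sentences)) if i in keep]
```

With two toy features, `summarize([("a", {"position": 1.0, "keyword": 0.0}), ("b", {"position": 0.1, "keyword": 0.1}), ("c", {"position": 0.9, "keyword": 0.9}), ("d", {"position": 0.0, "keyword": 0.0})], {"position": 0.5, "keyword": 0.5}, compression=0.5)` keeps the two highest-scoring sentences in document order.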
Abstract: The author presents the results of a study conducted to identify criteria for an efficient information system (IS) realization with service-oriented architecture (SOA) and proposes a ranking method to evaluate SOA information systems using a set of architecture quality criteria before the systems are implemented. The method is used to compare 7 SOA projects, and a ranking of the projects by SOA efficiency is provided. The choice of an SOA realization project depends on the following criteria categories: IS internal work and organization; SOA policies, guidelines and change management; processes and business services readiness; and risk management and mitigation. The last criteria category was analyzed on the basis of project statistics.
Abstract: In this paper an effective context-based lossless coding technique is presented. Three principal and a few auxiliary contexts are defined. The predictor adaptation technique is an improved CoBALP algorithm, denoted CoBALP+. A cumulative predictor error combining 8 bias estimators is calculated. It is shown experimentally that the new technique is indeed time-effective: it outperforms well-known methods of reasonable time complexity and is inferior only to extremely computationally complex ones.
Abstract: A new approach based on Turk and Pentland's eigenface method is adopted in this paper. It was found that the probability density function of the distance between the projection vector of the input face image and the average projection vector of the subject in the face database follows a Rayleigh distribution. In order to decrease the false acceptance rate and increase the recognition rate, the input face image is recognized using two thresholds: an acceptance threshold and a rejection threshold. We also find that the values of the two thresholds approach each other as the number of trials increases. During training, in order to reduce the number of trials, the projection vectors for each subject are averaged. Recognition experiments using the proposed algorithm show that the recognition rate reaches 92.875%, while the average number of judgments is only 2.56.
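The two-threshold decision rule described above can be sketched as follows; the threshold values in the example are illustrative, not the trained values from the paper:

```python
# Two-threshold recognition decision on the projection-vector distance:
# close enough -> accept; far enough -> reject; otherwise ask for another trial.
def decide(distance, t_accept, t_reject):
    if distance <= t_accept:
        return "accept"
    if distance >= t_reject:
        return "reject"
    return "retry"   # ambiguous region: acquire another image / trial
```

As the number of trials grows, `t_accept` and `t_reject` approach each other, shrinking the `"retry"` band, which is why the average number of judgments stays small (2.56 in the reported experiments).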
Abstract: Text document categorization involves large amounts of data and features. The high dimensionality of the feature space is troublesome and can affect classification performance. Therefore, feature selection is considered one of the crucial parts of text document categorization: selecting the best features to represent documents reduces the dimensionality of the feature space and hence increases performance. Many approaches have been implemented by various researchers to overcome this problem. This paper proposes a novel hybrid approach to feature selection in text document categorization based on Ant Colony Optimization (ACO) and Information Gain (IG). We also present state-of-the-art algorithms by several other researchers.
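The Information Gain criterion used to rank candidate features can be sketched as follows; the toy documents and class labels in the usage example are invented for illustration:

```python
# Information Gain of a term: entropy of the class labels minus the
# conditional entropy given the term's presence/absence in each document.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(docs, labels, term):
    """docs: list of token sets; labels: class label per document."""
    with_t = [l for d, l in zip(docs, labels) if term in d]
    without = [l for d, l in zip(docs, labels) if term not in d]
    n = len(labels)
    cond = ((len(with_t) / n) * entropy(with_t)
            + (len(without) / n) * entropy(without))
    return entropy(labels) - cond
```

A term that perfectly splits the classes (e.g. "ball" appearing in all and only the sport documents of a two-class toy corpus) attains the maximum gain of 1 bit; in the hybrid scheme, such IG scores would seed or bias the ACO search over feature subsets.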
Abstract: This paper presents the development of a wavelet-based algorithm for distinguishing between magnetizing inrush currents and power system fault currents that is adequate, reliable, fast and computationally efficient. The proposed
technique consists of a preprocessing unit based on discrete wavelet
transform (DWT) in combination with an artificial neural network
(ANN) for detecting and classifying fault currents. The DWT acts as
an extractor of distinctive features in the input signals at the relay
location. This information is then fed into an ANN for classifying
fault and magnetizing inrush conditions. A 220/55/55 V, 50 Hz laboratory transformer connected to a 380 V power system was simulated using ATP-EMTP. The DWT was implemented in Matlab, and a Coiflet mother wavelet was used to analyze primary currents and generate training data. The simulation results presented clearly show that the proposed technique can accurately discriminate between magnetizing inrush and fault currents in transformer protection.
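The kind of decomposition performed by the preprocessing unit can be illustrated with a one-level Haar DWT; note this is a simplifying assumption for the sketch, since the paper uses a Coiflet mother wavelet implemented in Matlab:

```python
# One-level Haar DWT: split a signal into a smooth approximation and a
# detail band. The detail coefficients highlight the sharp transients that
# help separate fault currents from magnetizing inrush.
import numpy as np

def haar_dwt(signal):
    """Return (approximation, detail) coefficients for an even-length signal."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-pass: the smooth trend
    d = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-pass: local transients
    return a, d
```

Because the transform is orthogonal, the signal energy is preserved across the two bands (sum of squared coefficients equals the sum of squared samples), so the detail-band energy is a well-defined feature to feed into the ANN classifier.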
Abstract: In this paper, we propose a Connect6 solver which
adopts a hybrid approach based on a tree-search algorithm and image
processing techniques. The solver must deal with the complicated
computation and provide high performance in order to make real-time
decisions. The proposed approach enables the solver to be
implemented on a single Spartan-6 XC6SLX45 FPGA produced by
XILINX without using any external devices. The compact
implementation is achieved through image processing techniques to
optimize a tree-search algorithm of the Connect6 game. The tree
search is widely used in computer games, and an optimal search yields the best move in every turn of a computer game. Thus, many tree-search algorithms, such as the Minimax algorithm and other artificial intelligence approaches, have been proposed in this field. However, there is one fundamental problem in this area: the computation time increases rapidly with the growth of the game tree. In a hardware implementation, the larger the game tree, the bigger the circuit, because of its highly parallel computation characteristics. This paper therefore aims to reduce the size of the Connect6 game tree using image processing techniques and the position's symmetric property. The
proposed solver is composed of four computational modules: a
two-dimensional checkmate strategy checker, a template matching
module, a skilful-line predictor, and a next-move selector. These
modules work together in selecting next moves from candidates, and the total amount of circuitry is small. The details of
the hardware design for an FPGA implementation are described and
the performance of this design is also shown in this paper.
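The core minimax recursion that the solver optimizes can be sketched generically as follows; the game-interface functions (`moves`, `apply_move`, `evaluate`) are illustrative placeholders, not the Connect6-specific hardware modules:

```python
# Generic minimax over a game tree: the maximizing player picks the child
# with the highest value, the minimizing player the lowest, down to `depth`.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    ms = moves(state)
    if depth == 0 or not ms:                 # leaf: static evaluation
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in ms:
            value, _ = minimax(apply_move(state, m), depth - 1, False,
                               moves, apply_move, evaluate)
            if value > best:
                best, best_move = value, m
    else:
        best = float("inf")
        for m in ms:
            value, _ = minimax(apply_move(state, m), depth - 1, True,
                               moves, apply_move, evaluate)
            if value < best:
                best, best_move = value, m
    return best, best_move
```

Even on a toy "add 1 or 2 to a counter" game this recursion visits the whole tree, which is exactly the exponential blow-up, and hence circuit growth, that the abstract's image-processing pruning and symmetry reduction are designed to contain.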
Abstract: A direct adaptive controller for a class of unknown nonlinear discrete-time systems is presented in this article. The proposed controller is constructed from a fuzzy rules emulated network (FREN). With its simple structure, human knowledge about the plant is transferred into if-then rules for setting up the network. The adjustable parameters inside the FREN are tuned by a learning mechanism with a time-varying step size, or learning rate. The variation of the learning rate is introduced by the main theorem to improve system performance and stabilization. Furthermore, the boundedness of the adjustable parameters is guaranteed through the on-line learning and the properties of the membership functions. The theoretical findings are validated by some illustrative examples.
Abstract: All-to-all personalized communication, also known as complete exchange, is one of the densest communication patterns in parallel computing. In this paper, we propose new indirect algorithms for complete exchange on all-port rings and tori. The new algorithms fully utilize all communication links and transmit messages along shortest paths to achieve the theoretical lower bounds on message transmission, which have not been achieved by other existing indirect algorithms. For a 2D r × c ( r % c ) all-port torus, the algorithm has optimal transmission cost and O(c) message startup cost. In addition, the proposed algorithms accommodate non-power-of-two tori, where the number of nodes in each dimension need not be a power of two or a square. Finally, the algorithms are conceptually simple and symmetrical for every message and every node, so that they can be easily implemented and achieve the optimum in practice.
Abstract: In this paper, a new learning algorithm based on a
hybrid metaheuristic integrating Differential Evolution (DE) and
Reduced Variable Neighborhood Search (RVNS) is introduced to train
the classification method PROAFTN. To apply PROAFTN, values of
several parameters need to be determined prior to classification. These
parameters include boundaries of intervals and relative weights for
each attribute. Based on these requirements, the hybrid approach,
named DEPRO-RVNS, is presented in this study. A major problem when applying DE to some classification problems is the premature convergence of some individuals to local optima. To eliminate this shortcoming and to improve the exploration and exploitation capabilities of DE, such individuals are iteratively re-explored using RVNS. Based on the generated results on
both training and testing data, it is shown that the performance of
PROAFTN is significantly improved. Furthermore, the experimental
study shows that DEPRO-RVNS outperforms well-known machine
learning classifiers in a variety of problems.
Abstract: Morphogenesis is the process that underpins the self-organised development and regeneration of biological systems. The ability to mimic morphogenesis in artificial systems has great potential for many engineering applications, including the production of biological tissue, the design of robust electronic systems and the coordination of parallel computing. Previous attempts to mimic these complex dynamics within artificial systems have relied upon evolutionary algorithms, which have limited their size and complexity. This paper presents some insight into the underlying dynamics of morphogenesis and then shows how to design, without the assistance of evolutionary algorithms, cellular architectures that converge to complex patterns.
Abstract: Medical image segmentation based on image smoothing followed by edge detection assumes a great degree of importance in the field of image processing. In this regard, this paper proposes a novel algorithm for medical image segmentation based on vigorous smoothing, by identifying the type of noise, followed by edge detection, which promises to be a boon for medical image diagnosis. The main objective of this algorithm is to take a particular medical image as input, preprocess it to remove the noise content by employing a suitable filter after identifying the type of noise, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative or impulsive by analysis of local histograms, and the image is denoised by employing a Median, Gaussian or Frost filter accordingly. Second, edge detection of the filtered medical image is carried out using the Canny edge detection technique. The third part is the segmentation of the edge-detected medical image by the method of Normalized Cut eigenvectors. The method is validated through experiments on real images. The proposed algorithm has been simulated on the MATLAB platform. The results obtained from the simulation show that the proposed algorithm is very effective and can deal with low-quality or marginally vague images with high spatial redundancy, low contrast and substantial noise, and has potential for practical use in medical image diagnosis.
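The first stage of the pipeline, identifying the dominant noise type before choosing a filter, can be sketched as follows; the statistics and thresholds here are illustrative assumptions, not the paper's exact local-histogram analysis:

```python
# Classify the dominant noise type of a grayscale image so the matching
# filter can be chosen: median (impulsive), Frost (multiplicative),
# or Gaussian (additive). Thresholds are illustrative assumptions.
import numpy as np

def classify_noise(img):
    """img: 2-D float array with values in [0, 1]."""
    # Salt-and-pepper (impulsive) noise shows up as extreme-valued pixels.
    extreme = np.mean((img < 0.02) | (img > 0.98))
    if extreme > 0.05:
        return "impulsive"        # -> Median filter
    # Multiplicative (speckle) noise: variance grows with intensity, so
    # the bright half of the histogram is much noisier than the dark half.
    flat = img.ravel()
    med = np.median(flat)
    bright = flat[flat > med]
    dark = flat[flat <= med]
    if np.var(bright) > 4 * np.var(dark) + 1e-12:
        return "multiplicative"   # -> Frost filter
    return "additive"             # -> Gaussian filter
```

For instance, a flat image peppered with saturated pixels is labeled impulsive, symmetric zero-mean noise is labeled additive, and intensity-proportional noise is labeled multiplicative; the chosen filter's output would then be passed to the Canny detector and the Normalized Cut segmentation stages.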