Abstract: Nowadays, many organizations use systems that
support business processes in whole or in part. However, in some
application domains, such as software development and health care
processes, a normative Process Aware System (PAS) is not suitable,
because flexible support is needed to respond rapidly to new
process models. On the other hand, a flexible Process Aware System
may be vulnerable to undesirable and fraudulent executions, which
imposes a tradeoff between flexibility and security. To balance
this tradeoff, this paper presents a genetic-based anomaly detection
model for the logs of Process Aware Systems. An anomalous trace is
detected by discovering an appropriate process model with genetic
process mining and flagging the traces that do not fit that model as
anomalous; therefore, when used in a PAS, this model is an automated
solution that supports the coexistence of flexibility and security.
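As a hedged sketch of this fit-based detection step (not the paper's genetic miner, which evolves full process models), a discovered model can be approximated by the set of direct-follows transitions seen in normal traces, and a trace flagged as anomalous when it uses a transition outside that set:

```python
# Illustrative sketch: the "model" is the set of allowed direct-follows
# transitions mined from normal traces; a trace is anomalous if it contains
# a transition the model does not allow.

def direct_follows(trace):
    """Pairs of consecutive activities in a trace."""
    return {(a, b) for a, b in zip(trace, trace[1:])}

def discover_model(training_log):
    """Approximate a mined model as the union of observed transitions."""
    model = set()
    for trace in training_log:
        model |= direct_follows(trace)
    return model

def is_anomalous(trace, model):
    """A trace fits the model only if every transition is allowed."""
    return not direct_follows(trace) <= model

normal_log = [["register", "check", "approve", "archive"],
              ["register", "check", "reject", "archive"]]
model = discover_model(normal_log)
print(is_anomalous(["register", "check", "approve", "archive"], model))  # False
print(is_anomalous(["register", "approve", "archive"], model))           # True
```

The genetic miner in the paper searches for a model that fits the log well; this sketch only shows the final conformance check.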
Abstract: Genetic algorithm (GA) based solution techniques
are well suited to optimization because of their ability to perform a
simultaneous multidimensional search. Many GA variants have been
tried in the past to solve optimal power flow (OPF), one of the
nonlinear problems of electric power systems. Issues such as the
convergence speed and accuracy of the optimal solution obtained
after a number of generations, and the handling of system constraints
in OPF, remain subjects of discussion. The results
obtained with GA-Fuzzy OPF on various power systems have shown
faster convergence and lower generation costs compared to other
approaches. This paper presents an enhanced GA-Fuzzy OPF (EGA-OPF)
that uses penalty factors to handle line flow constraints and load
bus voltage limits, both for the normal network and for a contingency
case with congestion. In addition to a crossover and mutation rate
adaptation scheme, which adapts the crossover and mutation probabilities
in each generation based on the fitness values of previous generations, a
block swap operator is also incorporated in the proposed EGA-OPF. The
line flow limits and load bus voltage magnitude limits are handled by
incorporating line overflow and load voltage penalty factors,
respectively, in each chromosome's fitness function. The effects of
different penalty factor settings are also analyzed under the contingent
state.
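The penalty-factor idea can be sketched as follows; the penalty weights and limits below are illustrative assumptions, not the paper's settings:

```python
# Hedged sketch: constraint violations (line overflow, load-bus voltage
# limits) are added to each chromosome's generation cost so that infeasible
# solutions receive a worse fitness. Weights k_flow/k_volt are illustrative.

def penalized_cost(gen_cost, line_flows, line_limits, voltages,
                   v_min=0.95, v_max=1.05, k_flow=1000.0, k_volt=1000.0):
    # line overflow penalty: amount by which each flow exceeds its limit
    flow_pen = sum(max(0.0, abs(f) - lim)
                   for f, lim in zip(line_flows, line_limits))
    # load voltage penalty: distance outside the allowed voltage band
    volt_pen = sum(max(0.0, v_min - v) + max(0.0, v - v_max)
                   for v in voltages)
    return gen_cost + k_flow * flow_pen + k_volt * volt_pen

# Feasible solution: no penalty, fitness equals the raw cost.
print(penalized_cost(500.0, [0.8, 0.9], [1.0, 1.0], [1.0, 0.98]))   # 500.0
# Overloaded line and low bus voltage: the cost is penalized (about 730).
print(penalized_cost(480.0, [1.2, 0.9], [1.0, 1.0], [0.90, 0.98]))
```

In a GA, the negated (or inverted) penalized cost would serve as the chromosome fitness, steering the search away from constraint violations.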
Abstract: In this paper, a nonlinear acoustic echo cancellation
(AEC) system is proposed, in which third-order Volterra filtering is
used together with a variable step-size Gauss-Seidel pseudo affine
projection (VSSGS-PAP) algorithm. In particular, the proposed
nonlinear AEC system is developed by considering a double-talk
situation with near-end signal variation. Simulation results
demonstrate that the proposed approach yields better nonlinear AEC
performance than conventional approaches.
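A hedged sketch of the nonlinear model such a system builds on: a third-order Volterra filter whose output mixes linear, quadratic and cubic products of recent input samples. The kernel values below are purely illustrative; in the paper they would be identified adaptively by the VSSGS-PAP algorithm.

```python
# Third-order Volterra filter with a short delay buffer. Kernels are given
# as a dict mapping index tuples into the buffer to coefficients:
# (k,) -> first order, (i, j) -> second order, (i, j, k) -> third order.
import math

def volterra3(x, kernels, memory=2):
    y = []
    for n in range(len(x)):
        # delay buffer [x[n], x[n-1], ...], zero-padded at the start
        buf = [x[n - k] if n - k >= 0 else 0.0 for k in range(memory)]
        y.append(sum(h * math.prod(buf[i] for i in idx)
                     for idx, h in kernels.items()))
    return y

# illustrative kernels (not identified from data)
kernels = {(0,): 0.8, (1,): -0.2, (0, 0): 0.1, (0, 0, 0): 0.05}
print(volterra3([1.0, 0.0, 1.0], kernels))  # approximately [0.95, -0.2, 0.95]
```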
Abstract: The weighting exponent m is called the fuzzifier; it
influences the clustering performance of fuzzy c-means
(FCM), and m ∈ [1.5, 2.5] is suggested by Pal and Bezdek [13]. In this
paper, we discuss the robustness properties of FCM and show that the
parameter m influences the robustness of FCM. According
to our analysis, a large m value makes FCM more
robust to noise and outliers. However, if m is larger than the theoretical
upper bound proposed by Yu et al. [14], the sample mean becomes
the unique optimizer. Here, we suggest implementing the FCM
algorithm with m ∈ [1.5, 4], provided m remains smaller
than the theoretical upper bound.
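A minimal one-dimensional FCM sketch shows where the fuzzifier m enters the update equations (memberships decay with distance as a power of 2/(m-1), so larger m flattens memberships and down-weights outliers). The data and initialization are illustrative:

```python
# Minimal two-cluster fuzzy c-means on 1-D data, exposing the fuzzifier m.

def fcm2(data, m=2.0, iters=50):
    centers = [min(data), max(data)]  # spread initialization, two clusters
    for _ in range(iters):
        # membership u_ik proportional to d_ik^(-2/(m-1)), normalized
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]
            inv = [dd ** (-2.0 / (m - 1.0)) for dd in d]
            s = sum(inv)
            u.append([w / s for w in inv])
        # center update: mean weighted by u^m
        centers = [sum(u[i][k] ** m * data[i] for i in range(len(data))) /
                   sum(u[i][k] ** m for i in range(len(data)))
                   for k in range(2)]
    return sorted(centers)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(fcm2(data, m=2.0))  # centers close to the two group means, 0.1 and 5.1
```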
Abstract: In the past few years, the use of wireless sensor networks (WSNs) has increased considerably in applications such as intrusion detection, forest fire detection, disaster management and the battlefield. Sensor nodes are generally battery-operated, low-cost devices. The key challenge in the design and operation of WSNs is to prolong the network lifetime by reducing the energy consumption of the sensor nodes. Node clustering is one of the most promising techniques for energy conservation. This paper presents a novel clustering algorithm that maximizes the network lifetime by reducing the number of communications among sensor nodes. The approach also includes a new distributed cluster formation technique that enables the self-organization of a large number of nodes, an algorithm for maintaining a constant number of clusters by prior selection of cluster heads, and rotation of the cluster head role to evenly distribute the energy load among all sensor nodes.
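A hedged sketch of the cluster-head rotation idea (not the paper's exact protocol): heads are elected by residual energy each round, so the costly head role circulates and the energy load spreads evenly while the number of clusters stays constant:

```python
# Illustrative cluster-head rotation: the highest-energy nodes serve as
# heads each round; being a head drains more energy than being a member,
# so the role rotates automatically as energies deplete.

def elect_heads(nodes, n_clusters):
    """Pick the n_clusters nodes with the highest residual energy."""
    ranked = sorted(nodes, key=lambda n: n["energy"], reverse=True)
    return [n["id"] for n in ranked[:n_clusters]]

def run_round(nodes, heads, head_cost=5.0, member_cost=1.0):
    """Deduct the per-round energy cost of each node's role."""
    for n in nodes:
        n["energy"] -= head_cost if n["id"] in heads else member_cost

nodes = [{"id": i, "energy": 100.0} for i in range(6)]
history = []
for _ in range(3):
    heads = elect_heads(nodes, n_clusters=2)
    history.append(heads)
    run_round(nodes, heads)
print(history)  # [[0, 1], [2, 3], [4, 5]] -- the head role rotates
```

The energy costs are made-up constants; a real protocol would also weigh distance to the base station and residual energy thresholds.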
Abstract: This paper presents the design of source encoding
calculator software that applies two famous algorithms in the
field of information theory: the Shannon-Fano and Huffman
schemes. The design makes it easy to apply the algorithms without
resorting to a cumbersome, tedious and error-prone manual
mechanism for encoding signals during transmission. The
work describes the design of the software, how it works, a comparison
with related works, its efficiency, its usefulness in the field of
information technology studies, and the future prospects of the
software for engineers, students, technicians and the like. The designed
"Encodia" software has been developed, tested and found to meet the
intended requirements. It is expected that this application will help
students and teaching staff with their day-to-day information theory
related tasks. Work is ongoing to modify the tool so that it can
also be more intensively useful in research activities on source coding.
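As a small illustration of one of the two schemes such a calculator implements, Huffman coding greedily merges the two least probable symbols until a single prefix-free code tree remains:

```python
# Huffman coding via a min-heap: repeatedly merge the two least probable
# entries; each merge prepends '0' to one subtree's codes and '1' to the
# other's. The tie-breaking counter keeps heap comparisons on numbers.
import heapq

def huffman_codes(freq):
    """Map each symbol to a prefix-free binary code from its frequency."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

codes = huffman_codes({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10})
print(codes)  # code lengths 1, 2, 3, 3 -- frequent symbols get short codes
```

The expected code length here is 0.5·1 + 0.25·2 + 0.15·3 + 0.10·3 = 1.75 bits per symbol; Shannon-Fano would instead split the sorted symbol list top-down.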
Abstract: The goal of steganography is to avoid drawing
suspicion to the transmission of a hidden message. If suspicion is
raised, steganography may fail. The success of steganography
depends on the secrecy of the action. If steganography is detected,
the system will fail, but data security then depends on the robustness
of the applied algorithm. In this paper, we propose a novel plausible
deniability scheme in steganography that uses a diversionary message
encrypted with a DES-based algorithm. We then compress the
secret message and encrypt it with the receiver's public key along with
the stego key, and embed both messages in a carrier using an
embedding algorithm. We demonstrate how this method
supports plausible deniability and is robust against steganalysis.
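As a hedged illustration of the embedding step (the abstract does not fix a particular embedding algorithm), a simple LSB scheme writes message bits into the least significant bits of carrier samples; the diversionary and true messages could be embedded at different offsets under different stego keys:

```python
# Illustrative LSB embedding: each message bit replaces the least
# significant bit of one carrier sample, changing it by at most 1.

def embed(carrier, bits, offset=0):
    out = list(carrier)
    for i, b in enumerate(bits):
        out[offset + i] = (out[offset + i] & ~1) | b
    return out

def extract(stego, length, offset=0):
    return [stego[offset + i] & 1 for i in range(length)]

carrier = [200, 113, 54, 77, 90, 21, 160, 33]
bits = [1, 0, 1, 1]                # in the paper these would be ciphertext bits
stego = embed(carrier, bits)
print(extract(stego, 4))           # [1, 0, 1, 1]
```

A stego key could select the offsets pseudo-randomly, so that revealing one key exposes only the diversionary message.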
Abstract: An accurate optimal design of laminated composite
structures may present considerable difficulties due to the complexity
and multi-modality of the functional design space. The Big Bang
– Big Crunch (BB-BC) optimization method is a relatively new
technique and has already proved to be a valuable tool for structural
optimization. In the present study the exceptional efficiency of the
method is demonstrated by an example of the lay-up optimization
of multilayered anisotropic cylinders based on a three-dimensional
elasticity solution. It is shown that, due to its simplicity and speed,
BB-BC is much more efficient for this class of problems than
genetic algorithms.
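A hedged one-dimensional sketch of the BB-BC loop (illustrative, not the paper's lay-up formulation): each "crunch" contracts the population to its fitness-weighted center of mass, and each "bang" scatters new candidates around it with a spread that shrinks over the iterations:

```python
# Big Bang - Big Crunch on a toy 1-D minimization problem.
import random

def bb_bc(f, lo, hi, pop=30, iters=40, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    for k in range(1, iters + 1):
        # crunch: center of mass weighted by 1/f (better points weigh more)
        w = [1.0 / (f(x) + 1e-12) for x in xs]
        center = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        # bang: scatter new points around the center, spread shrinking with k
        spread = (hi - lo) / k
        xs = [min(hi, max(lo, center + rng.gauss(0, 1) * spread / 2))
              for _ in range(pop)]
        xs[0] = center  # keep the center itself (simple elitism)
    return min(xs, key=f)

best = bb_bc(lambda x: (x - 3.0) ** 2, lo=-10, hi=10)
print(best)  # close to the minimizer at 3.0
```

The simplicity is the point of the method: no crossover, mutation or selection operators, just a weighted mean and a shrinking random scatter.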
Abstract: Nowadays, multimedia data is transmitted and
processed in compressed format. Due to the decoding procedure and the
filtering required for edge detection, the feature extraction process of the
MPEG-7 Edge Histogram Descriptor is time-consuming as well as
computationally expensive. To improve the efficiency of compressed
image retrieval, this paper proposes a new edge histogram generation
algorithm in the DCT domain. Using the edge information
provided by only two AC coefficients of the DCT, we can obtain
edge directions and strengths directly in the DCT domain. The
experimental results demonstrate that our system performs well
in terms of retrieval efficiency and effectiveness.
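A hedged sketch of the underlying observation: in an 8x8 DCT block, the AC(0,1) coefficient responds to horizontal intensity change (a vertical edge) and AC(1,0) to vertical change (a horizontal edge), so an edge direction and strength can be read off the two coefficients without decoding the block. The coefficient values below are illustrative:

```python
# Edge strength and orientation from the two lowest AC coefficients of a
# DCT block, treating (AC01, AC10) as a gradient-like vector.
import math

def edge_from_ac(ac01, ac10):
    strength = math.hypot(ac01, ac10)
    angle = math.degrees(math.atan2(ac10, ac01))  # 0 deg = vertical edge
    return strength, angle

s, a = edge_from_ac(ac01=40.0, ac10=0.0)
print(round(s, 1), round(a, 1))   # 40.0 0.0  -- strong vertical edge
s, a = edge_from_ac(ac01=0.0, ac10=-25.0)
print(round(s, 1), round(a, 1))   # 25.0 -90.0 -- horizontal edge
```

Quantizing the angle into a few direction bins and accumulating strengths per image region would yield an edge histogram comparable to the MPEG-7 descriptor.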
Abstract: Fossil fuel-fired power plants dominate electric
power generation in Taiwan and are also the major contributor of
greenhouse gases (GHG). CO2 is the most important greenhouse
gas causing global warming. This paper examines the relationship
between carbon trading for GHG reduction and the power generation
expansion planning (GEP) problem of an electric utility. The
Particle Swarm Optimization (PSO) algorithm is presented to deal
with the generation expansion planning strategy of a utility with
independent power providers (IPPs). The utility has to take both the
IPPs' participation and the environmental impact into account when a new
generation unit is considered for expansion on the supply side.
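A minimal sketch of the PSO update the paper applies, shown on a toy one-dimensional cost (the real GEP problem optimizes discrete expansion decisions under cost and emission constraints):

```python
# Standard PSO: each particle is pulled toward its personal best and the
# swarm's global best, with an inertia weight w damping the velocity.
import random

def pso(f, lo, hi, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest = x[:]
    gbest = min(x, key=f)
    for _ in range(iters):
        for i in range(n):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=f)
    return gbest

# toy cost with a minimum at 4.0 (a stand-in for a planning objective)
best = pso(lambda c: (c - 4.0) ** 2 + 1.0, lo=0.0, hi=10.0)
print(best)  # close to 4.0
```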
Abstract: Cross layer optimization based on utility functions has
recently been studied extensively, and numerous types of
utility functions have been examined in the corresponding literature.
However, a major drawback is that most utility functions take a fixed
mathematical form or are based on simple combining, which cannot
fully exploit the available information. In this paper, we formulate a
framework for cross layer optimization based on Adaptively Weighted
Utility Functions (AWUF) for fairness balancing in OFDMA networks.
Under this framework, a two-step allocation algorithm is
provided as a sub-optimal solution, whose control parameters can be
updated in real time to accommodate instantaneous QoS constraints.
The simulation results show that the proposed algorithm achieves
high throughput while balancing fairness among multiple users.
Abstract: In this paper we present a Feed-Forward Neural
Network Autoregressive (FFNN-AR) model, trained with genetic
algorithm optimization, to predict the gross domestic product
growth of six countries. Specifically, we propose a kind of weighted
regression, usable for econometric purposes, in which the
initial inputs are multiplied by the neural network's final optimum
weights from the input-hidden layer of the training process. The
forecasts are compared with those of the ordinary autoregressive
model, and we conclude that the proposed regression's forecasting
results significantly outperform those of the autoregressive model.
Moreover, this technique can also be applied to Autoregressive Moving
Average models, with and without exogenous inputs, and the
genetic algorithm training process can be replaced by the error
back-propagation algorithm.
Abstract: This paper describes a newly designed decentralized
nonlinear control strategy for a robot manipulator. The strategy is based
on nonlinear state feedback theory and a decentralized concept, and is
developed to overcome the drawbacks of previous works that relied on
complicated intelligent control and costly sensors. The control
methodology is derived in the sense of the Lyapunov
theorem, so that the stability of the control system is guaranteed. The
decentralized algorithm does not require the angle and velocity
information of the other joints. Each individual joint controller is
implemented on a digital processor placed near its actuator, which makes
good dynamics and modularity achievable. Computer simulations have been
conducted to validate the effectiveness of the proposed control scheme
under possible uncertainties and different reference
trajectories. The merit of the proposed control system is shown by
comparison with a classical control system.
Abstract: Blind signatures were introduced by Chaum. In this
scheme, a signer can "sign" a document without knowing the
document's content. This is particularly important in electronic voting.
CryptO-0N2 is an electronic voting protocol developed from
CryptO-0N. During its development, this protocol was not
furnished with a blind signature requirement, so the choices of
voters could be determined by the counting center. This paper
presents an implementation of blind signatures using the RSA algorithm.
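The underlying RSA blind signature (Chaum's textbook scheme) can be sketched with toy parameters; real deployments need large keys and proper padding:

```python
# Textbook RSA blind signature with toy numbers. The voter blinds the
# message m with a random r before sending it; the signer signs without
# ever seeing m, and the voter removes the blinding afterwards.
import math

p, q = 61, 53
n = p * q                     # 3233
e, d = 17, 2753               # toy RSA key pair: d*e = 1 mod (p-1)(q-1)
m = 42                        # the ballot (message) to be signed
r = 99                        # blinding factor, must satisfy gcd(r, n) = 1
assert math.gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n        # voter -> signer: m stays hidden
blind_sig = pow(blinded, d, n)          # signer signs the blinded value
sig = (blind_sig * pow(r, -1, n)) % n   # voter removes the blinding

print(sig == pow(m, d, n))              # True: a valid signature on m itself
print(pow(sig, e, n) == m)              # True: verifies with the public key
```

The identity behind the unblinding step is (m·r^e)^d = m^d·r^(ed) = m^d·r (mod n), so multiplying by r⁻¹ leaves exactly the signature on m. (`pow(r, -1, n)` requires Python 3.8+.)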
Abstract: Many real-world data sets have a very high dimensional feature space. Most clustering techniques use the distance or similarity between objects as a measure to build clusters. But in high dimensional spaces, distances between points become relatively uniform. In such cases, density-based approaches may give better results. Subspace clustering algorithms automatically identify lower dimensional subspaces of the higher dimensional feature space in which clusters exist. In this paper, we propose a new clustering algorithm, ISC (Intelligent Subspace Clustering), which tries to overcome three major limitations of the existing state-of-the-art techniques. First, ISC determines input parameters such as the ε-distance at the various levels of subspace clustering, which helps in finding meaningful clusters; a uniform-parameter approach is not suitable for different kinds of databases. Second, ISC implements dynamic and adaptive determination of meaningful clustering parameters based on a hierarchical filtering approach. The third and most important feature of ISC is its capacity for incremental learning and the dynamic inclusion and exclusion of subspaces, which leads to better cluster formation.
Abstract: Mining frequent tree patterns has many useful
applications in XML mining, bioinformatics, network routing, etc.
Most frequent subtree mining algorithms (e.g. FREQT,
TreeMiner and CMTreeMiner) use the anti-monotone property in the
candidate subtree generation phase. However, none of these
algorithms has verified the correctness of this property on tree
structured data. In this research it is shown that anti-monotonicity
does not generally hold when weighted support is used in tree pattern
discovery. As a result, tree mining algorithms that are based on this
property may miss some of the valid frequent subtree
patterns in a collection of trees. In this paper, we investigate the
correctness of the anti-monotone property for the problem of weighted
frequent subtree mining. In addition, we propose W3-Miner, a new
algorithm for the full extraction of frequent subtrees. The experimental
results confirm that W3-Miner finds frequent subtrees that the
previously proposed algorithms are not able to discover.
Abstract: Checkpointing is one of the techniques commonly used to provide fault tolerance in distributed systems, so that the system can operate even if one or more components have failed. However, mobile computing systems are constrained by low bandwidth, mobility, lack of stable storage, frequent disconnections and limited battery life. Hence, checkpointing protocols with fewer synchronization messages and fewer checkpoints are preferred in mobile environments. There are two different, though not orthogonal, approaches to checkpointing mobile computing systems: time-based and index-based. Our protocol is a fusion of these two approaches, though not the first of its kind. In the present exposition, an index-based checkpointing protocol has been developed, which uses time to indirectly coordinate the creation of consistent global checkpoints for mobile computing systems. The proposed algorithm is non-blocking, adaptive, and does not use any control messages. Compared to other contemporary checkpointing algorithms, it is computationally more efficient because it takes fewer checkpoints and does not need to compute dependency relationships. A brief account of important and relevant works in both fields, time-based and index-based, is also included in the presentation.
Abstract: We present a new algorithm for nonlinear dimensionality reduction that consistently uses global information, and that enables understanding the intrinsic geometry of non-convex manifolds. Compared to methods that consider only local information, our method appears to be more robust to noise. Unlike most methods that incorporate global information, the proposed approach automatically handles non-convexity of the data manifold. We demonstrate the performance of our algorithm and compare it to state-of-the-art methods on synthetic as well as real data.
Abstract: Many extensions have been made to the classic multi-layer perceptron (MLP) model. A notable number of them have been designed to hasten the learning process without considering the quality of generalization. This paper proposes a new MLP extension based on exploiting the topology of the network's input layer. Experimental results show that the extended model improves generalization capability in certain cases. The new model requires additional computational resources compared to the classic model; nevertheless, the loss in efficiency is not regarded as significant.
Abstract: This paper describes a new parallel sorting algorithm,
based on Odd-Even Mergesort, called division and
concurrent mixes. The main idea of the algorithm is that
each processor uses a sequential algorithm to sort a part of the
vector, and the processors then work in pairs to merge
two of these sorted sections into a larger one, also
sorted; after several iterations, the vector is completely
sorted. The paper describes the implementation of the new
algorithm in a message passing environment (MPI). It also
compares the experimental results obtained with the sequential
quicksort algorithm and with parallel MPI implementations of
quicksort and bitonic sort. The comparison
was carried out on an 8-processor GNU/Linux cluster built from
single-processor PCs.
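The scheme described above can be simulated sequentially as a sketch; "processors" are list indices, and each merging round stands in for one pair-wise MPI exchange:

```python
# Sequential simulation of divide-sort-merge: each "processor" sorts its own
# section, then pairs of neighbouring sections are merged into larger sorted
# ones until a single sorted vector remains.
import heapq

def merge_pair(a, b):
    """The step done by a processor pair: merge two sorted sections."""
    return list(heapq.merge(a, b))

def parallel_mergesort(vector, n_procs=8):
    chunk = -(-len(vector) // n_procs)  # ceiling division for section size
    parts = [sorted(vector[i:i + chunk])
             for i in range(0, len(vector), chunk)]
    while len(parts) > 1:               # each round halves the section count
        parts = [merge_pair(parts[i], parts[i + 1]) if i + 1 < len(parts)
                 else parts[i]
                 for i in range(0, len(parts), 2)]
    return parts[0]

data = [9, 1, 8, 2, 7, 3, 6, 4, 5, 0, 11, 10]
print(parallel_mergesort(data))  # [0, 1, 2, ..., 11]
```

With p processors and n elements this does O((n/p)·log(n/p)) local work plus log p merge rounds, which is the source of the speedup the paper measures against sequential quicksort.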