Abstract: In this paper, two versions of an iterative loopless algorithm for the classical Towers of Hanoi problem with O(1) storage complexity and O(2^n) time complexity are presented. Based on this algorithm, the number of moves on each peg, together with their directions, is formulated.
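To illustrate the flavor of such loopless O(1)-storage schemes, the sketch below uses a well-known bit-manipulation formulation (not necessarily either of the versions presented in the paper): the source and destination pegs of the k-th move are computed directly from the bits of k.

```python
def hanoi_moves(n):
    """Yield (src, dst) pegs for the 2**n - 1 moves of an n-disk tower.

    Loopless O(1)-storage rule: for move k (1-based),
    src = (k & (k - 1)) % 3 and dst = ((k | (k - 1)) + 1) % 3.
    """
    for k in range(1, 2 ** n):
        yield (k & (k - 1)) % 3, ((k | (k - 1)) + 1) % 3

def solve(n):
    """Simulate the moves, checking each one is legal, and return the pegs."""
    pegs = [list(range(n, 0, -1)), [], []]
    for src, dst in hanoi_moves(n):
        disk = pegs[src].pop()
        assert not pegs[dst] or pegs[dst][-1] > disk  # never big on small
        pegs[dst].append(disk)
    return pegs
```

With this numbering the complete tower ends on peg 2 when n is odd and on peg 1 when n is even.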
Abstract: This paper investigates the problem of sampling from transactional data streams. We introduce CFISDS, a content-based sampling algorithm that works on a landmark window model of data streams and preserves a more informative sample in the sample space. The algorithm, which is based on closed frequent itemset mining, first initializes a concept lattice from the initial data and then updates the lattice structure using an incremental mechanism that inserts, updates, and deletes nodes in the concept lattice in a batch manner. The presented algorithm extracts the final samples on user demand. Experimental results show the accuracy of CFISDS on synthetic and real datasets, although CFISDS is not faster than existing sampling algorithms such as Z and DSS.
Abstract: The Generalized Center String (GCS) problem generalizes the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the longest center string must be found without knowing in advance which sequences may not contain any motif in a particular biological gene process. GCS can be solved by frequent pattern mining techniques and is known to be fixed-parameter tractable with respect to the input sequence length and symbol set size. Efficient methods known as the Bpriori algorithms can solve GCS with reasonable time/space complexities; the Bpriori 2 and Bpriori 3-2 algorithms have been proposed to find center strings of any length together with the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint-Based Frequent Pattern mining (CBFP) technique, which integrates the ideas of constraint-based mining and FP-tree mining. The CBFP mining technique solves the GCS problem not only for center strings of any length, but also for the positions of all their mutated copies in the input sequences. It constructs a TRIE-like FP-tree to represent the mutated copies of center strings of any length, along with constraints to restrain the growth of the consensus tree. The complexity of the CBFP mining technique and of the Bpriori algorithm is analyzed in both the worst case and the average case, and the correctness of the algorithm is demonstrated by comparison with the Bpriori algorithm on artificial data.
Abstract: In a virtual organization, a Knowledge Discovery (KD) service involves distributed data resources and computing grid nodes. A computational grid is integrated with a data grid to form a Knowledge Grid, which implements the Apriori algorithm for mining association rules on a grid network. This paper describes the development of a parallel and distributed version of the Apriori algorithm on the Globus Toolkit using the Message Passing Interface extended with Grid Services (MPICH-G2). The Knowledge Grid is created on top of the data and computational grids to support decision making in real-time applications. A case study describes the design and implementation of local and global mining of frequent itemsets. Experiments were conducted on different configurations of the grid network, and the computation time was recorded for each operation. Analysis of the results for various grid configurations shows that the speedup in computation time is almost superlinear.
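A minimal sketch of the local/global mining split described above, with hypothetical helper names (`local_counts`, `global_frequent`): each grid node counts itemsets on its own partition, and a global step sums the counts and filters by the support threshold. In the actual system the summation would be an MPICH-G2 reduction across nodes, not a Python loop.

```python
from collections import Counter
from itertools import combinations

def local_counts(transactions, k):
    """Count all k-itemsets in one node's local partition of the data."""
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            counts[itemset] += 1
    return counts

def global_frequent(partitions, k, min_support):
    """Sum per-node counts (the 'global mining' reduce step) and keep
    itemsets whose total count meets the global support threshold."""
    total = Counter()
    for part in partitions:  # stands in for a cross-node reduction
        total.update(local_counts(part, k))
    return {s: c for s, c in total.items() if c >= min_support}
```

For example, two partitions holding the transactions {a,b,c},{a,b} and {a,b},{b,c} yield the single globally frequent pair (a,b) at support 3.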
Abstract: This paper presents a comparison between two Pulse
Width Modulation (PWM) algorithms applied to a three-level Neutral
Point Clamped (NPC) Voltage Source Inverter (VSI). The first
algorithm applied is the triangular-sinusoidal strategy; the second is
the Space Vector Pulse Width Modulation (SVPWM) strategy. In the
first part, we present the topology of the three-level NPC VSI. We then develop the two PWM strategies to control this converter. Finally, the experimental results are presented.
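A behavioral sketch of the triangular-sinusoidal strategy for one leg of a three-level inverter: two level-shifted triangular carriers are compared against a sinusoidal reference to select the output level (+1, 0, -1). The frequencies and modulation index below are illustrative assumptions, not values from the paper, and the real strategy drives the NPC hardware rather than a simulation.

```python
import math

def triangle(t, f):
    """Unit triangular carrier in [0, 1] at frequency f."""
    x = (t * f) % 1.0
    return 2 * x if x < 0.5 else 2 * (1 - x)

def npc_level(t, m=0.9, f_ref=50.0, f_carrier=2000.0):
    """Three-level leg output (-1, 0, +1) from two level-shifted carriers.

    The upper carrier spans [0, 1], the lower spans [-1, 0]; the reference
    m*sin(2*pi*f_ref*t) is compared against each to pick the level.
    """
    ref = m * math.sin(2 * math.pi * f_ref * t)
    upper = triangle(t, f_carrier)   # in [0, 1]
    lower = upper - 1.0              # in [-1, 0]
    if ref > upper:
        return 1
    if ref < lower:
        return -1
    return 0
```

Averaging the switched output over a fundamental period approximates the sinusoidal reference, which is the defining property of the carrier-based strategy.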
Abstract: Signature represents an individual characteristic of a
person which can be used for his / her validation. For such application
proper modeling is essential. Here we propose an offline signature
recognition and verification scheme based on the extraction of several features, including one hybrid set, from the input signature, which are then compared with the already trained forms. Feature points are classified using statistical parameters such as the mean and variance.
The scanned signature is normalized in slant using a very simple
algorithm with an intention to make the system robust which is
found to be very helpful. The slant correction is further aided by the
use of an Artificial Neural Network (ANN). The suggested scheme discriminates original signatures from simple and random forgeries. The primary objective is to reduce the two crucial parameters, the False Acceptance Rate (FAR) and the False Rejection Rate (FRR), with less training time, with the intention of making the system dynamic by using a cluster of ANNs forming a multiple classifier system.
Abstract: In this paper, we study the multi-scenario knapsack problem, a variant of the well-known NP-hard single knapsack problem. We investigate the use of an adaptive algorithm for solving the problem heuristically. The method combines two complementary phases: a size-reduction phase and a dynamic 2-opt procedure. First, the reduction phase applies a polynomial reduction strategy to reduce the problem size. Second, the adaptive search procedure is applied in order to attain a feasible solution. Finally, the performance of two versions of the proposed algorithm is evaluated on a set of randomly generated instances.
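A hedged sketch of the two-phase idea on the plain 0/1 knapsack (the paper's multi-scenario variant and its exact reduction rules are not reproduced here): a greedy construction supplies a feasible start, then a 2-opt-style swap pass exchanges a chosen item for a more valuable unchosen one whenever capacity permits.

```python
def greedy_fill(values, weights, cap):
    """Feasible start: take items by value/weight ratio while they fit."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i],
                   reverse=True)
    picked, load = [False] * len(values), 0
    for i in order:
        if load + weights[i] <= cap:
            picked[i], load = True, load + weights[i]
    return picked

def two_opt(values, weights, cap, picked):
    """Swap one chosen item for a more valuable unchosen item, repeatedly,
    until no feasible improving swap remains."""
    improved = True
    while improved:
        improved = False
        load = sum(w for w, p in zip(weights, picked) if p)
        for i in range(len(values)):
            if not picked[i]:
                continue
            for j in range(len(values)):
                if picked[j] or values[j] <= values[i]:
                    continue
                if load - weights[i] + weights[j] <= cap:
                    picked[i], picked[j] = False, True
                    improved = True
                    break
            if improved:
                break
    return picked
```

On the toy instance values (5, 9), weights (2, 6), capacity 6, the ratio-greedy start takes the first item, and one swap then reaches the optimal choice of the second item.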
Abstract: In this paper, based on steady-state models of Flexible
AC Transmission System (FACTS) devices, the sizing of static
synchronous series compensator (SSSC) controllers in a transmission network is formulated as an optimization problem. The objective of this
problem is to reduce the transmission losses in the network. The
optimization problem is solved using particle swarm optimization
(PSO) technique. The Newton-Raphson load flow algorithm is
modified to consider the insertion of the SSSC devices in the
network. A numerical example, illustrating the effectiveness of the
proposed algorithm, is introduced. In addition, a novel model of a 3-phase voltage source converter (VSC) that is suitable for series-connected FACTS controllers is introduced. The model is verified by simulation using the Power System Blockset (PSB) and Simulink software.
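A minimal generic PSO of the kind referenced above, with a toy sphere function standing in for the transmission-loss objective computed by the modified Newton-Raphson load flow; the inertia and acceleration coefficients are common defaults, not the paper's settings.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm: each particle tracks its best point, the
    swarm tracks a global best, and velocities blend both attractions."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.72, 1.49, 1.49   # common inertia/acceleration values
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `objective` would run a load-flow solution with the candidate SSSC sizes inserted and return the resulting transmission losses.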
Abstract: Crude oil blending is an important unit operation in the petroleum refining industry. A good model of the blending system is beneficial for supervising operation, predicting the quality of the exported petroleum, and realizing model-based optimal control. Since the blending cannot follow the ideal mixing rule in practice, we propose a static neural network to approximate the blending properties. Using the dead-zone approach, we propose a new robust learning algorithm and give a theoretical analysis. Real data from crude oil blending are used to illustrate the neural modeling approach.
Abstract: In this paper we propose a class of second derivative multistep methods for solving some well-known classes of Lane-Emden type equations, which are nonlinear ordinary differential equations on the semi-infinite domain. These methods, which have good stability and accuracy properties, are useful in dealing with stiff ODEs. We show the superiority of these methods by applying them to some famous Lane-Emden type equations.
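Not the proposed second-derivative multistep scheme, but a classical RK4 check on a famous member of this class: the index-1 Lane-Emden equation theta'' + (2/xi) theta' + theta = 0 has the exact solution theta(xi) = sin(xi)/xi, and the singularity at xi = 0 can be sidestepped by starting from the series expansion theta ~ 1 - xi^2/6.

```python
import math

def lane_emden_rk4(xi_end, h=1e-3):
    """Integrate theta'' + (2/xi) theta' + theta = 0 (Lane-Emden, index 1)
    with classical RK4, starting just off the singular point xi = 0 using
    the series values theta ~ 1 - xi^2/6, theta' ~ -xi/3."""
    def f(x, y, v):
        return v, -2.0 * v / x - y   # (theta', theta'')
    x0 = h
    y, v = 1.0 - x0 * x0 / 6.0, -x0 / 3.0
    steps = round((xi_end - x0) / h)
    for i in range(steps):
        x = x0 + i * h
        k1y, k1v = f(x, y, v)
        k2y, k2v = f(x + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = f(x + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y
```

At xi = 1 the integrator reproduces sin(1)/1 to high accuracy, giving a reference point against which any multistep scheme for this equation can be validated.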
Abstract: Self-organizing map (SOM) is a well-known data reduction technique used in data mining. Data visualization can reveal structure in data sets that is otherwise hard to detect from raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by SOMs, but they generally do not take into account the distribution of code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of a generic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOMs. The application of our method to unlabeled call data from a mobile phone operator demonstrates its feasibility. The PSO algorithm utilizes the U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method correspond well to boundary detection through visual inspection of code vectors and to the k-means algorithm.
Abstract: This paper presents an efficient algorithm for
optimization of radial distribution systems by a network
reconfiguration to balance feeder loads and eliminate overload
conditions. The system load-balancing index is used to determine the
loading conditions of the system and maximum system loading
capacity. The index value has to be minimum in the optimal network
reconfiguration of load balancing. A method based on the Tabu search algorithm is employed to search for the optimal network reconfiguration. The basic idea behind the search is to move from a current solution to its neighborhood, effectively utilizing a memory to provide an efficient search for optimality. It requires low computational effort and is able to find good-quality configurations. Simulation results are presented for a radial 69-bus system with distributed generation and capacitor placement. The study results
show that the optimal on/off patterns of the switches can be identified
to give the best network reconfiguration involving balancing of
feeder loads while respecting all the constraints.
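A generic tabu-search skeleton, offered as a hedged sketch: the paper's moves are switch on/off changes constrained to radial configurations and its objective is the load-balancing index, whereas this sketch flips bits of an unconstrained vector against a toy objective. The tabu memory and the aspiration criterion are the transferable parts.

```python
import random
from collections import deque

def tabu_search(objective, n_bits, iters=200, tenure=7, seed=3):
    """Generic tabu search over 0/1 switch vectors.

    Each step moves to the best neighbor (single bit flip) whose flipped
    position is not tabu, unless it beats the best solution found so far
    (aspiration). Recently flipped positions stay tabu for `tenure` steps.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_val = x[:], objective(x)
    tabu = deque(maxlen=tenure)
    for _ in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            val = objective(y)
            if i not in tabu or val < best_val:   # aspiration criterion
                candidates.append((val, i, y))
        val, i, x = min(candidates)
        tabu.append(i)
        if val < best_val:
            best, best_val = x[:], val
    return best, best_val
```

In the paper's setting the objective would evaluate the load-balancing index of the radial configuration encoded by the switch vector, rejecting non-radial patterns.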
Abstract: A data warehouse (DW) is a system that supports decision-making through querying. Queries to a DW are critical with regard to their complexity and length: they often access millions of tuples and involve joins between relations and aggregations. Materialized views can provide better performance for DW queries. However, these views have a maintenance cost, so materializing all views is not possible. An important challenge of the DW environment is materialized view selection, because we must balance the trade-off between query performance and view maintenance. Therefore, in this paper, we introduce a new approach aimed at solving this challenge based on Two-Phase Optimization (2PO), which is a combination of Simulated Annealing (SA) and Iterative Improvement (II), with the use of a Multiple View Processing Plan (MVPP). Our experiments show that 2PO outperforms the original algorithms in terms of query processing cost and view maintenance cost.
Abstract: Game theory can be used to analyze conflicting issues in the field of information hiding. In this paper, a 2-phase game is used to model the embedder-attacker system and to analyze the
limits of hiding capacity of embedding algorithms: the embedder
minimizes the expected damage and the attacker maximizes it. In the
system, the embedder first consumes its resource to build embedded
units (EU) and insert the secret information into EU. Then the attacker
distributes its resource evenly to the attacked EU. The expected equilibrium damage, which is the maximum damage from the attacker's point of view and the minimum from the embedder's, is evaluated for the case in which the attacker attacks a subset of all the EU. Furthermore, the optimal equilibrium capacity
of hiding information is calculated through the optimal number of EU
with the embedded secret information. Finally, illustrative examples
of the optimal equilibrium capacity are presented.
Abstract: This paper describes the pipeline architecture of
high-speed modified Booth multipliers. The proposed multiplier
circuits are based on the modified Booth algorithm and the pipeline technique, which are widely used to accelerate multiplication. In order to implement optimally pipelined
multipliers, many kinds of experiments have been conducted. The
speed of the multipliers is greatly improved by properly deciding the
number of pipeline stages and the positions for the pipeline registers to
be inserted. We described the proposed modified Booth multiplier
circuits in Verilog HDL and synthesized the gate-level circuits using a 0.13 µm standard cell library. The resultant multiplier circuits show better performance than comparable designs. Since the proposed multipliers operate
at GHz ranges, they can be used in the systems requiring very high
performance.
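A behavioral sketch (Python rather than Verilog) of radix-4 modified Booth recoding, the core of such multipliers: overlapping triples of multiplier bits are mapped to digits in {-2, -1, 0, +1, +2}, halving the number of partial products relative to a plain shift-and-add array.

```python
def booth_radix4_digits(b, nbits):
    """Recode an nbits-wide two's-complement multiplier into radix-4
    Booth digits d_i in {-2, -1, 0, +1, +2}, where
    d_i = b_{2i-1} + b_{2i} - 2*b_{2i+1} and b_{-1} = 0 (nbits even)."""
    u = b & ((1 << nbits) - 1)                          # two's-complement bits
    bits = [0] + [(u >> i) & 1 for i in range(nbits)]   # prepend b_{-1} = 0
    return [bits[2 * i] + bits[2 * i + 1] - 2 * bits[2 * i + 2]
            for i in range(nbits // 2)]

def booth_multiply(a, b, nbits=16):
    """Sum the shifted partial products a * d_i * 4**i."""
    return sum(d * a << (2 * i)
               for i, d in enumerate(booth_radix4_digits(b, nbits)))
```

Each digit d_i selects a trivially generated partial product (0, ±a, ±2a), which is what makes the hardware array small and fast; the pipeline registers discussed in the abstract would be inserted between the recoding, partial-product, and accumulation stages.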
Abstract: Matching algorithms have significant importance in
speaker recognition. Feature vectors of the unknown utterance are
compared to feature vectors of the modeled speakers as a last step in
speaker recognition. A similarity score is found for every model in
the speaker database. Depending on the type of speaker recognition,
these scores are used to determine the author of unknown speech
samples. For speaker verification, similarity score is tested against a
predefined threshold and either acceptance or rejection result is
obtained. In the case of speaker identification, the result depends on
whether the identification is open set or closed set. In closed set
identification, the model that yields the best similarity score is
accepted. In open set identification, the best score is tested against a
threshold, so there is one more possible output satisfying the
condition that the speaker is not one of the registered speakers in
existing database. This paper focuses on closed set speaker
identification using a modified version of a well-known matching algorithm. The modified algorithm shows better performance on the YOHO international speaker recognition database.
Abstract: Segmentation of a color image composed of different kinds of regions can be a hard problem, in particular computing exact texture fields and deciding the optimum number of segmentation areas when the image contains similar and/or non-stationary texture fields. A novel neighborhood-based segmentation approach is proposed. A genetic algorithm is used in the proposed segment-pass optimization process. In this pass, an energy function, which is defined based on Markov Random Fields, is minimized. In
this paper we use an adaptive threshold estimation method for image
thresholding in the wavelet domain based on the generalized
Gaussian distribution (GGD) modeling of subband coefficients. This method, called NormalShrink, is computationally efficient and adaptive because the parameters required for estimating the threshold depend on the subband data energy used in the pre-stage of segmentation. A quadtree is employed to implement the multiresolution framework, which enables the use of different strategies at different resolution levels and hence accelerates the computation. The experimental results using the proposed
segmentation approach are very encouraging.
Abstract: The efficient use of available licensed spectrum is
becoming more and more critical with increasing demand and usage
of the radio spectrum. This paper shows how the use of the spectrum, as well as dynamic spectrum management, can be handled effectively, and how spectrum allocation schemes in wireless communication systems can be implemented and used in the future; it is an attempt towards better utilization of the spectrum. This research focuses mainly on the decision-making process, under the assumption that the radio environment has already been sensed and
the QoS requirements for the application have been specified either
by the sensed radio environment or by the secondary user itself. We
identify and study the characteristic parameters of Cognitive Radio
and use Genetic Algorithm for spectrum allocation. Performance
evaluation is done using MATLAB toolboxes.
Abstract: An optimal control of Reverse Osmosis (RO) plant is
studied in this paper utilizing the auto tuning concept in conjunction
with a PID controller. A control scheme comprising an auto-tuning stochastic technique based on an improved Genetic Algorithm (GA) is proposed. For better evaluation of the process in the GA, a new objective function defined in terms of root mean square error is used. Also, in order to achieve better performance of the GA, greater purity and a longer period of random number generation are sought. The main improvement is made by replacing the uniform-distribution random number generator of the conventional GA technique with a newly designed hybrid random generator composed of a Cauchy distribution and a linear congruential generator, which provides independent and different random numbers at each individual step of the genetic operation. The performance of the newly proposed GA-tuned controller is compared with that of conventional ones via simulation.
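A sketch of the described hybrid generator under stated assumptions: the LCG constants below are illustrative (glibc-style), not the paper's, and the coupling shown is the standard inverse-CDF route, where the LCG's uniforms u are mapped to Cauchy variates via x = x0 + gamma * tan(pi * (u - 1/2)).

```python
import math

class HybridCauchyLCG:
    """Linear congruential generator whose uniform outputs are mapped
    through the inverse Cauchy CDF to produce heavy-tailed variates."""
    def __init__(self, seed=12345, x0=0.0, gamma=1.0):
        self.state = seed
        self.x0, self.gamma = x0, gamma
        # Illustrative glibc-style LCG constants, not the paper's.
        self.m, self.a, self.c = 2 ** 31, 1103515245, 12345

    def uniform(self):
        """Next LCG value scaled into [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

    def cauchy(self):
        """Cauchy variate via the inverse CDF of the uniform draw."""
        u = self.uniform()
        return self.x0 + self.gamma * math.tan(math.pi * (u - 0.5))
```

The heavy tails of the Cauchy distribution produce occasional large mutation jumps, which is one plausible reason such a generator can help a GA escape local optima.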
Abstract: The binary phase-only filter digital watermarking
embeds the phase information of the discrete Fourier transform of the
image into the corresponding magnitudes for better image authentication.
This paper proposes an approach to implementing watermark embedding by quantizing the magnitudes, discussing how to regulate the quantization steps based on the frequencies of the magnitude coefficients of the embedded watermark and how to embed the watermark with low-frequency quantization. Theoretical analysis and simulation results show that the flexibility, security, watermark imperceptibility and detection performance of the binary phase-only filter digital watermarking can be effectively improved with quantization-based watermark embedding, and that robustness against JPEG compression is also increased to some extent.
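A hedged sketch of quantization-based embedding in this spirit (the paper's step-regulation rules are not reproduced): a magnitude coefficient is snapped to one of two interleaved lattices of step delta according to the watermark bit, and detection returns the bit of the nearer lattice. Perturbations smaller than delta/4 cannot flip the decoded bit.

```python
def embed_bit(magnitude, bit, delta):
    """Quantize a DFT magnitude onto the lattice for `bit`:
    bit 0 -> multiples of delta, bit 1 -> odd multiples of delta/2."""
    dither = 0.0 if bit == 0 else delta / 2.0
    q = round((magnitude - dither) / delta) * delta + dither
    return max(q, dither)   # magnitudes must stay non-negative

def detect_bit(magnitude, delta):
    """Return the bit whose lattice is nearest to the received magnitude."""
    d0 = abs(magnitude - round(magnitude / delta) * delta)
    d1 = abs(magnitude - (round((magnitude - delta / 2) / delta) * delta
                          + delta / 2))
    return 0 if d0 <= d1 else 1
```

A larger delta improves robustness (e.g. against JPEG compression) at the cost of imperceptibility, which is the trade-off the step-regulation in the abstract is meant to manage.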