Abstract: Genetic algorithm (GA) based solution techniques are well
suited to optimization because of their ability to search many
dimensions simultaneously. Many GA variants have been tried in the
past to solve optimal power flow (OPF), one of the nonlinear problems
of electric power systems. Issues such as the convergence speed and
accuracy of the optimal solution obtained after a number of
generations, and the handling of system constraints in OPF, remain
subjects of discussion. The results obtained with GA-Fuzzy OPF on
various power systems have shown faster convergence and lower
generation costs compared to other approaches. This paper presents an
enhanced GA-Fuzzy OPF (EGA-OPF) that uses penalty factors to handle
line flow constraints and load bus voltage limits, both for the
normal network and for a contingency case with congestion. In
addition to a crossover and mutation rate adaptation scheme, which
adapts the crossover and mutation probabilities in each generation
based on the fitness values of previous generations, a block swap
operator is incorporated in the proposed EGA-OPF. The line flow
limits and load bus voltage magnitude limits are handled by
incorporating line overflow and load voltage penalty factors,
respectively, into each chromosome's fitness function. The effects of
different penalty factor settings are also analyzed under the
contingent state.
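The penalty-factor idea described above can be sketched as a fitness evaluation that adds quadratic penalties for line-overflow and load-voltage violations to the generation cost. The limits and penalty weights below are illustrative, not the paper's actual settings:

```python
def penalized_fitness(cost, v_mags, flows, flow_limits=None,
                      v_min=0.95, v_max=1.05, k_v=1000.0, k_s=1000.0):
    """Augment generation cost with quadratic penalties for load-bus
    voltage and line-flow limit violations (a GA would minimize this).
    All limits and weights here are illustrative values."""
    flow_limits = flow_limits or []
    # load-bus voltage violations (outside [v_min, v_max])
    p_v = sum(max(0.0, v_min - v, v - v_max) ** 2 for v in v_mags)
    # line-flow violations (above the thermal limit)
    p_s = sum(max(0.0, f - lim) ** 2 for f, lim in zip(flows, flow_limits))
    return cost + k_v * p_v + k_s * p_s
```

A chromosome that violates no constraint keeps its raw cost; violations inflate the objective in proportion to their severity.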
Abstract: In the past few years, the use of wireless sensor networks (WSNs) has increased considerably in applications such as intrusion detection, forest fire detection, disaster management and the battlefield. Sensor nodes are generally battery-operated, low-cost devices. The key challenge in the design and operation of WSNs is to prolong the network lifetime by reducing the energy consumption of the sensor nodes. Node clustering is one of the most promising techniques for energy conservation. This paper presents a novel clustering algorithm that maximizes the network lifetime by reducing the number of communications among sensor nodes. The approach also includes a new distributed cluster formation technique that enables self-organization of a large number of nodes, an algorithm for maintaining a constant number of clusters by prior selection of cluster heads, and rotation of the cluster-head role to evenly distribute the energy load among all sensor nodes.
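As a minimal sketch of the cluster-head rotation idea (a generic heuristic, not the paper's algorithm), each cluster can hand the head role to its highest-energy member every round:

```python
def rotate_cluster_heads(clusters, residual_energy):
    """For each cluster, pick the member with the highest residual
    energy as the next cluster head, spreading the energy load
    over time. `clusters` maps cluster id -> list of node ids."""
    return {cid: max(members, key=lambda n: residual_energy[n])
            for cid, members in clusters.items()}
```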
Abstract: This paper presents the design of source encoding
calculator software that applies two famous algorithms in the
field of information theory: the Shannon-Fano and Huffman
schemes. The design makes it easy to apply the algorithms without
going through a cumbersome, tedious and error-prone manual
encoding of the signals during transmission. The work describes
the design of the software, how it works, a comparison with
related works, its efficiency, its usefulness in the field of
information technology studies, and the future prospects of the
software for engineers, students, technicians and others alike.
The designed "Encodia" software has been developed, tested and
found to meet the intended requirements. It is expected that this
application will help students and teaching staff in their daily
information theory related tasks. Work is ongoing to modify this
tool so that it can also be more useful in research activities on
source coding.
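For illustration, the Huffman scheme mentioned above can be sketched in a few lines: repeatedly merge the two least-frequent subtrees, prefixing '0' and '1' to the codes on each side. This is the textbook construction, not Encodia's actual implementation:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table for the symbols in `text`."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate single-symbol alphabet
        return {next(iter(freq)): "0"}
    # heap entries: (weight, unique tiebreaker, {symbol: code-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]
```

More frequent symbols receive shorter codewords, and the resulting code is prefix-free by construction.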
Abstract: For about two decades, scientists have been developing
techniques for enhancing the quality of medical images using the
Fourier transform, the discrete wavelet transform (DWT), PDE
models, and so on. In this work, a Gabor wavelet on a hexagonally
sampled grid of the image is proposed. The method has optimal
approximation-theoretic performance for good image quality. Its
computational cost is considerably lower than that of similar
processing in the rectangular domain. Since X-ray images contain
light-scattered pixels, instead of a unique sigma, a sigma range of
0.5 to 3 is found to satisfy most image interpolation requirements
in terms of higher Peak Signal-to-Noise Ratio (PSNR), lower Mean
Squared Error (MSE) and better image quality when a windowing
technique is adopted.
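A Gabor wavelet is a Gaussian envelope modulating a sinusoidal carrier. A minimal real-valued kernel on the usual rectangular grid (the paper's contribution is the hexagonally sampled version), with illustrative parameter values, might look like:

```python
import numpy as np

def gabor_kernel(size=15, sigma=1.5, theta=0.0, lam=4.0, psi=0.0):
    """Real part of a 2D Gabor filter: a Gaussian envelope times a
    cosine carrier. size/sigma/lam values here are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates by the filter orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    return env * np.cos(2.0 * np.pi * xr / lam + psi)
```

Varying sigma over a range (as the abstract suggests, e.g. 0.5 to 3) trades localisation against frequency selectivity.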
Abstract: This study deals with a multi-criteria optimization
problem that has been transformed into a single-objective
optimization problem using a Response Surface Methodology (RSM),
Artificial Neural Network (ANN) and Grey Relational Analysis
(GRA) approach. Grey-RSM and Grey-ANN are hybrid techniques
that can be used for solving multi-criteria optimization problems.
This research has two main purposes:
1. To determine optimum and robust fiber dyeing process
conditions by using RSM and ANN based on GRA;
2. To obtain the most suitable model by comparing models
developed with different methodologies.
The design variables for the fiber dyeing process in textiles are
temperature, time, softener, anti-static agent, material quantity,
pH, retarder, and dispergator. The quality characteristics to be
evaluated are nominal color consistency of the fiber, maximum
strength of the fiber, and minimum color of the dyeing solution.
The GRA-RSM model with exact level values, the GRA-RSM model with
interval level values and the GRA-ANN model were compared with each
other based on the GRA output value and the mean square error (MSE)
of the outputs. As a result, the GRA-ANN model with interval values
appears suitable for reducing the variation of the dyeing process
with respect to the GRA output value of the model.
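The GRA step that collapses several quality characteristics into a single objective can be sketched as follows. The responses are assumed pre-normalised to [0, 1] with 1 as the ideal, and the distinguishing coefficient ζ = 0.5 is the conventional choice, not necessarily the paper's setting:

```python
def grey_relational_grades(rows, ideal, zeta=0.5):
    """Grey relational grade of each alternative (row of normalised
    responses) against an ideal reference sequence."""
    deltas = [[abs(v, ) if False else abs(v - i) for v, i in zip(row, ideal)]
              for row in rows]
    dmin = min(min(d) for d in deltas)
    dmax = max(max(d) for d in deltas)
    if dmax == 0.0:                      # every alternative is already ideal
        return [1.0] * len(rows)
    grades = []
    for d in deltas:
        # grey relational coefficient per response, then average
        coeffs = [(dmin + zeta * dmax) / (di + zeta * dmax) for di in d]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

The alternative with the largest grade is the one closest to the ideal across all criteria simultaneously.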
Abstract: This paper explores the implementation of adaptive
coding and modulation schemes for Multiple-Input Multiple-Output
Orthogonal Frequency Division Multiplexing (MIMO-OFDM) feedback
systems. Adaptive coding and modulation enables robust and
spectrally-efficient transmission over time-varying channels. The basic
premise is to estimate the channel at the receiver and feed this estimate
back to the transmitter, so that the transmission scheme can be
adapted relative to the channel characteristics. Two types of codebook
based channel feedback techniques are used in this work. The
long-term and short-term CSI at the transmitter are used for
efficient channel utilization. OFDM is a powerful technique employed
in communication systems suffering from frequency selectivity.
Combined with multiple antennas at the transmitter and receiver,
OFDM proves to be robust against delay spread. Moreover, it leads to
significant data rates with improved bit error performance compared
to links having only a single antenna at both the transmitter and
receiver. The coded modulation increases the effective transmit
power relative to uncoded variable-rate variable-power MQAM for the
MIMO-OFDM feedback system. Hence, the proposed arrangement becomes
an attractive approach to achieving enhanced spectral efficiency and
improved error rate performance for next-generation high-speed
wireless communication systems.
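The adaptive modulation principle, choosing a constellation size from the fed-back channel estimate, can be sketched as a simple SNR threshold table. The thresholds here are illustrative, not the paper's design values:

```python
def select_mqam(snr_db):
    """Map the estimated SNR (in dB) fed back from the receiver to an
    MQAM constellation size M. Threshold values are illustrative."""
    thresholds = [(22.0, 64), (16.0, 16), (10.0, 4)]  # (min SNR dB, M)
    for thr, m in thresholds:
        if snr_db >= thr:
            return m
    return 0  # channel too poor: defer transmission
```

Good channels carry more bits per symbol; poor channels fall back to robust low-order constellations or defer entirely.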
Abstract: Music segmentation is a key issue in music information
retrieval (MIR) as it provides an insight into the
internal structure of a composition. Structural information about
a composition can improve several tasks related to MIR such
as searching and browsing large music collections, visualizing
musical structure, lyric alignment, and music summarization.
The authors of this paper present the MTSSM framework, a
two-layer framework for the multi-track segmentation of symbolic
music. The strength of this framework lies in the combination of
existing methods for local track segmentation with the application
of global structure information spanning multiple tracks.
The first layer of the MTSSM uses various string matching
techniques to detect the best candidate segmentations for each
track of a multi-track composition independently. The second
layer combines all single-track results and determines the best
segmentation for each track with respect to the global structure of
the composition.
Abstract: There are several approaches to solving the
Quantitative Structure-Activity Relationship (QSAR) problem.
These approaches are based either on statistical methods or on
predictive data mining. Among the statistical methods, one should
consider regression analysis, pattern recognition (such as cluster
analysis, factor analysis and principal components analysis) or
partial least squares. Predictive data mining techniques use neural
networks, genetic programming, or neuro-fuzzy knowledge. These
approaches have low explanatory capability or none at all. This
paper attempts to establish a new approach to solving QSAR
problems using descriptive data mining. In this way, the
relationship between the chemical properties and the activity of a
substance can be comprehensibly modeled.
Abstract: The Reverse Monte Carlo (RMC) simulation is applied to the study of the aqueous electrolyte LiCl·6H2O. On the basis of the available experimental neutron scattering data, RMC computes pair radial distribution functions in order to explore the structural features of the system. The results obtained include some unrealistic features. To overcome this problem, we use Hybrid Reverse Monte Carlo (HRMC), which incorporates an energy constraint in addition to the constraints commonly derived from experimental data. Our results show good agreement between experimental and computed partial distribution functions (PDFs) as well as a significant improvement in the pair partial distribution curves. This kind of study can be considered a useful test of a given interaction model for conventional simulation techniques.
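The HRMC move-acceptance rule can be sketched as a Metropolis test that combines the change in the fit to the experimental data with the change in the energy of the assumed interaction model. The weights and units here are illustrative:

```python
import math
import random

def hrmc_accept(d_chi2, d_energy, kT=1.0, rng=random.random):
    """Hybrid RMC Metropolis test: accept a trial particle move based
    on both the change in the data misfit (chi^2 against experiment)
    and the change in potential energy of the interaction model.
    The 1/2 weight on chi^2 and kT scale are illustrative choices."""
    arg = -(d_chi2 / 2.0 + d_energy / kT)
    return arg >= 0 or rng() < math.exp(arg)
```

Moves that improve both the data fit and the energy are always accepted; moves that worsen the combined objective are accepted only with exponentially decaying probability, which is what suppresses the unphysical configurations plain RMC can produce.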
Abstract: Many real-world data sets have a very high dimensional feature space. Most clustering techniques use the distance or similarity between objects as a measure to build clusters. But in high dimensional spaces, distances between points become relatively uniform. In such cases, density-based approaches may give better results. Subspace clustering algorithms automatically identify lower dimensional subspaces of the higher dimensional feature space in which clusters exist. In this paper, we propose a new clustering algorithm, ISC (Intelligent Subspace Clustering), which tries to overcome three major limitations of existing state-of-the-art techniques. First, ISC determines input parameters such as the ε-distance at various levels of subspace clustering, which helps in finding meaningful clusters; a uniform-parameter approach is not suitable for different kinds of databases. Second, ISC implements dynamic and adaptive determination of meaningful clustering parameters based on a hierarchical filtering approach. The third and most important feature of ISC is its capacity for incremental learning and dynamic inclusion and exclusion of subspaces, which leads to better cluster formation.
Abstract: Checkpointing is one of the commonly used techniques for providing fault tolerance in distributed systems, so that the system can operate even if one or more components have failed. However, mobile computing systems are constrained by low bandwidth, mobility, lack of stable storage, frequent disconnections and limited battery life. Hence, checkpointing protocols with fewer synchronization messages and fewer checkpoints are preferred in mobile environments. There are two different, although not orthogonal, approaches to checkpointing mobile computing systems: time-based and index-based. Our protocol is a fusion of these two approaches, though not the first of its kind. In the present exposition, an index-based checkpointing protocol has been developed that uses time to indirectly coordinate the creation of consistent global checkpoints for mobile computing systems. The proposed algorithm is non-blocking, adaptive, and does not use any control messages. Compared to other contemporary checkpointing algorithms, it is computationally more efficient because it takes fewer checkpoints and does not need to compute dependency relationships. A brief account of important and relevant work in both fields, time-based and index-based, has also been included in the presentation.
Abstract: We describe a formal specification and verification of the Rabin public-key scheme in the formal proof system Isabelle/HOL. The idea is to combine the two views of cryptographic verification: the computational approach, relying on the vocabulary of probability theory and complexity theory, and the formal approach, based on ideas and techniques from logic and programming languages. The analysis presented uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness as well as security properties is to cope with the complexity of cryptographic proving. We reduce this complexity by exploring a lightweight formalization that enables both appropriate formal definitions and efficient formal proofs. This yields the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Consequently, we get reliable proofs with a minimal error rate while augmenting the database used. This provides a formal basis for further computer proof constructions in this area.
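The Rabin scheme being verified is itself small: encryption squares the message modulo n = p·q, and decryption recovers the four square roots via the Chinese Remainder Theorem when p ≡ q ≡ 3 (mod 4). A plain Python sketch (not the Isabelle/HOL formalization):

```python
def rabin_encrypt(m, n):
    """Rabin encryption: c = m^2 mod n, with public key n = p*q."""
    return (m * m) % n

def rabin_decrypt(c, p, q):
    """Return the four square roots of c modulo n = p*q, for secret
    primes p, q = 3 (mod 4), combined via the CRT. The genuine
    plaintext is one of the four (disambiguated by padding in
    practice)."""
    n = p * q
    mp = pow(c, (p + 1) // 4, p)   # square root of c mod p
    mq = pow(c, (q + 1) // 4, q)   # square root of c mod q
    yq = pow(q, -1, p)             # q^{-1} mod p
    yp = pow(p, -1, q)             # p^{-1} mod q
    r1 = (yq * q * mp + yp * p * mq) % n
    r3 = (yq * q * mp - yp * p * mq) % n
    return {r1, n - r1, r3, n - r3}
```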
Abstract: The myoelectric signal (MES) is one of the biosignals
used to help humans control equipment. Recent approaches to MES
classification for controlling prosthetic devices with pattern
recognition techniques have revealed two problems: first, the
classification performance of the system starts to degrade as the
number of motion classes to be classified increases; second, the
additional, complicated methods used to solve the first problem
increase the computational cost of a multifunction myoelectric
control system. In an effort to solve these problems and to achieve
a feasible design for real-time implementation with high overall
accuracy, this paper presents a new method for feature extraction in
MES recognition systems. The method extracts features by applying
the Wavelet Packet Transform (WPT) to the MES from multiple
channels, and then employs the Fuzzy c-means (FCM) algorithm to
generate a measure that judges the suitability of features for
classification. Finally, Principal Component Analysis (PCA) is
used to reduce the size of the data before computing the
classification accuracy with a multilayer perceptron neural network.
The proposed system produces powerful classification results (99%
accuracy) using only a small portion of the original feature set.
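The PCA dimensionality-reduction step used above is standard; a minimal sketch via SVD of the centred data matrix:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X (samples x features) onto the top-k
    principal components, i.e. the directions of maximal variance,
    obtained from the SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T   # samples x k scores
```

The reduced scores, rather than the full feature set, are then fed to the classifier, which is what keeps the computational cost of the control system low.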
Abstract: In this paper, we propose a new class of Volterra-series-based filters for image enhancement and restoration. Linear filters generally reduce noise but cause blurring at the edges. Some nonlinear filters based on median or rank operators deal only with impulse noise and fail to cancel the most common, Gaussian-distributed noise. A class of second-order Volterra filters is proposed to optimize the trade-off between noise removal and edge preservation. In this paper, we consider both Gaussian and mixed Gaussian-impulse noise to test the robustness of the filter. Image enhancement and restoration results using the proposed Volterra filter are found to be superior to those obtained with standard linear and nonlinear filters.
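To make the structure concrete, here is a truncated second-order Volterra filter in one dimension; the memory length and kernels are illustrative, and the paper itself works in two dimensions on images:

```python
def volterra2_filter(x, h1, h2):
    """Truncated second-order Volterra filter:
    y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i][j] x[n-i] x[n-j].
    The quadratic kernel h2 is what lets the filter adapt its
    smoothing to local signal structure (e.g. edges)."""
    N, M = len(x), len(h1)
    y = []
    for n in range(N):
        lin = sum(h1[i] * x[n - i] for i in range(M) if n - i >= 0)
        quad = sum(h2[i][j] * x[n - i] * x[n - j]
                   for i in range(M) for j in range(M)
                   if n - i >= 0 and n - j >= 0)
        y.append(lin + quad)
    return y
```

With a zero quadratic kernel the filter degenerates to an ordinary linear FIR filter, which is exactly the baseline the paper improves upon.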
Abstract: This paper proposes a new version of Particle
Swarm Optimization (PSO), namely Modified PSO (MPSO), for
model order formulation of Single Input Single Output (SISO) linear
time-invariant continuous systems. In general PSO, the movement of a
particle is governed by three behaviors, namely inertia, cognitive
and social. The cognitive behavior helps the particle remember its
previously visited best position. The Modified PSO technique splits
the cognitive behavior into two parts: the previously visited best
position and the previously visited worst position. This
modification helps the particle search the target very effectively.
The MPSO approach is proposed to formulate a reduced-order model of
the higher order system. The method is based on minimizing the error
between the transient responses of the original higher order model
and the reduced order model for a unit step input. The results
obtained are compared with earlier techniques to validate its ease
of computation. The proposed method is illustrated through a
numerical example from the literature.
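A sketch of the modification described above: the velocity update keeps the usual inertia and social terms but splits the cognitive term into attraction towards the particle's best-visited position and repulsion from its worst-visited position. The coefficients, bounds and test function below are illustrative, not the paper's:

```python
import random

def mpso_minimize(f, dim=2, n=20, iters=200, w=0.7,
                  c1=1.5, c2=1.5, c3=0.3, vmax=2.0, seed=1):
    """Modified-PSO sketch: cognitive memory holds both the best and
    the worst position each particle has visited; the worst repels."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pworst = [p[:] for p in pos]
    fbest = [f(p) for p in pos]
    fworst = fbest[:]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v = (w * vel[i][d]
                     + c1 * rng.random() * (pbest[i][d] - pos[i][d])    # attract to own best
                     + c2 * rng.random() * (gbest[d] - pos[i][d])       # social attraction
                     + c3 * rng.random() * (pos[i][d] - pworst[i][d]))  # repel from own worst
                vel[i][d] = max(-vmax, min(vmax, v))  # clamp velocity
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < fbest[i]:
                fbest[i], pbest[i] = fi, pos[i][:]
                if fi < f(gbest):
                    gbest = pos[i][:]
            if fi > fworst[i]:
                fworst[i], pworst[i] = fi, pos[i][:]
    return gbest, f(gbest)
```

In the paper the objective is the step-response error between the full-order and reduced-order models; here a simple sphere function stands in for it.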
Abstract: Sudoku is a kind of logic puzzle. Each puzzle consists
of a board of 9×9 cells, divided into nine 3×3 subblocks,
and a set of numbers from 1 to 9. The aim of the puzzle is to
fill in every cell of the board with a number from 1 to 9 such
that every row, every column, and every subblock contains each
number exactly once. Sudoku puzzles belong to the class of
combinatorial (NP-complete) problems. Sudoku puzzles can be solved
using a variety of techniques/algorithms such as genetic algorithms,
heuristics, integer programming, and so on. In this paper, we
propose a new approach to solving Sudoku by modelling the puzzles as
block-world problems. In block-world problems, there are a number of
boxes on a table with a particular order or arrangement. The
objective is to change this arrangement into a targeted arrangement
with the help of two types of robots. In this paper, we present
three models for Sudoku. We model Sudoku as parameterized
multi-agent systems. A parameterized multi-agent system is a
multi-agent system which consists of several uniform/similar agents,
where the number of agents in the system is stated as a parameter of
the system. We use the Temporal Logic of Actions (TLA) to formalize
our models.
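The three Sudoku constraints stated above, that each row, column and 3×3 subblock must contain 1..9 exactly once, can be checked directly for a completed board. This is a plain validity check, separate from the paper's block-world/TLA modelling:

```python
def is_valid_sudoku(board):
    """Check a completed 9x9 board (list of 9 lists of 9 ints):
    every row, column and 3x3 subblock must contain 1..9 once each."""
    digits = set(range(1, 10))
    rows = [set(r) for r in board]
    cols = [set(c) for c in zip(*board)]
    blocks = [set(board[br + i][bc + j] for i in range(3) for j in range(3))
              for br in range(0, 9, 3) for bc in range(0, 9, 3)]
    return all(group == digits for group in rows + cols + blocks)
```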
Abstract: Efficient retrieval of multimedia objects has gained enormous attention in recent years. A number of techniques have been suggested for the retrieval of textual information; however, relatively little has been proposed for the efficient retrieval of multimedia objects. In this paper we propose a generic architecture for context-aware retrieval of multimedia objects. The proposed framework combines the well-known approaches of text-based retrieval and context-aware retrieval into an architecture for accurate retrieval of multimedia data.
Abstract: In this work we present a new approach for automatic shot transition detection. Our approach is based on the analysis of the edges of Spatio-Temporal Video Slices (STVS) extracted from videos. The proposed approach efficiently detects both abrupt shot transitions (cuts) and gradual ones such as fade-in, fade-out and dissolve. Compared to other techniques, our method is distinguished by its high precision and speed. This performance is obtained by reducing the shot boundary detection problem to a simple 2D image partitioning problem.
Abstract: Over Current Relays (OCRs) and Directional Over Current Relays (DOCRs) are widely used for the protection of radial and ring sub-transmission systems and of distribution systems. Previous work formulates the DOCR coordination problem either as a Non-Linear Programming (NLP) problem for TDS and Ip, or as a Linear Programming (LP) problem for TDS; recently, a social-behavior-based technique (Particle Swarm Optimization) has been introduced to this work. In this paper, a Modified Particle Swarm Optimization (MPSO) technique is discussed for the optimal setting of DOCRs in power systems, treating the determination of the relays' Ip values as a non-linear programming problem and the TDS settings as a linear programming problem. The calculation of the Time Dial Setting (TDS) and the pickup current (Ip) setting of the relays is the core of the coordination study. The PSO technique is considered a realistic and powerful solution scheme for obtaining the global or quasi-global optimum in optimization problems.
Abstract: In this paper, novel statistical-sampling-based equalization techniques and CNN-based detection are proposed to increase the spectral efficiency of multiuser communication systems over fading channels. Multiuser communication combined with selective fading can result in interference that severely deteriorates the quality of service in wireless data transmission (e.g. CDMA in mobile communication). The paper introduces new equalization methods that combat interference by minimizing the Bit Error Rate (BER) as a function of the equalizer coefficients. This provides higher performance than traditional Minimum Mean Square Error equalization. Since calculating the BER as a function of the equalizer coefficients is of exponential complexity, statistical sampling methods are proposed to approximate the gradient, which yields fast equalization and performance superior to the traditional algorithms. Efficient estimation of the gradient is achieved by using stratified sampling and the Li-Silvester bounds. A simple mechanism is derived to identify the dominant samples in real time, for the sake of efficient estimation. The equalizer weights are adapted recursively by minimizing the estimated BER. The near-optimal performance of the new algorithms is demonstrated by extensive simulations. The paper also develops a Cellular Neural Network (CNN) based approach to detection. In this case, fast quadratic optimization is carried out by the CNN, whereas the task of the equalizer is to ensure the required template structure (sparseness) for the CNN. The performance of the method is also analyzed by simulations.