Abstract: The design of distributed systems involves dividing the system into partitions (or components) and then allocating these partitions to physical nodes. Several techniques have been proposed for both the partitioning and allocation processes. These existing techniques suffer from a number of limitations, including lack of support for replication. Replication is difficult to use effectively but has the potential to greatly improve the performance of a distributed system. This paper presents a new technique for allocating objects in order to improve performance in a distributed system that supports replication. The performance of the proposed technique is demonstrated and tested on an example system, and compared with the performance of an existing technique in order to demonstrate both the validity and superiority of the new technique when developing a distributed system that can utilise object replication.
Abstract: This paper proposes a new algebraic scheme for designing a PID controller for higher-order linear time-invariant continuous systems. A modified PSO (MPSO) based model order formulation technique is applied to obtain an effective formulated second-order system. A controller is tuned to meet the desired performance specification using the pole-zero cancellation method. The proposed PID controller is applied to both the higher-order system and the formulated second-order system. The closed-loop response is observed for the stabilization process and compared with that of a general PSO based formulated second-order system. The proposed method is illustrated through a numerical example from the literature.
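The tuning itself is the paper's contribution, but the controller structure is standard. A minimal sketch of a discrete PID loop driving a simple first-order plant, with illustrative gains rather than the MPSO-tuned values:

```python
# Minimal discrete PID controller sketch (gains are illustrative, not
# the paper's pole-zero-cancellation result).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a first-order plant y' = -y + u toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y = 0.0
for _ in range(2000):
    u = pid.step(1.0, y)
    y += (-y + u) * 0.01
```

The integral term removes steady-state error, so after the simulated 20 s the plant output settles at the setpoint.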
Abstract: Loss reduction initiatives in distribution systems have been activated due to the increasing cost of supplying electricity, the shortage of fuel and the ever-increasing cost of producing more power, and global warming concerns. These initiatives have been introduced to the utilities in the shape of incentives and penalties.
Recently, the electricity distribution companies in Oman have been
incentivized to reduce the distribution technical and non-technical
losses with an equal annual reduction rate for 6 years. In this paper,
different techniques for losses reduction in Mazoon Electricity
Company (MZEC) are addressed. In this company, a high number of substations and feeders were found to be non-compliant with the Distribution System Security Standard (DSSS). Therefore, 33 projects have been suggested to bring the 29 non-compliant substations and 28 feeders up to the planned criteria and into compliance with the DSSS. The largest part of MZEC's network (the South Batinah region) was modeled using the ETAP software package. The model has been
extended to implement the proposed projects and to examine their
effects on losses reduction. Simulation results have shown that the
implementation of these projects leads to a significant improvement
in voltage profile, and reduction in the active and the reactive power
losses. Finally, the economic analysis has revealed that the implementation of the proposed projects in MZEC leads to an annual saving of about US$ 5 million.
Abstract: Many researchers are working on information hiding techniques, using different ideas and areas to hide their secret data. This paper introduces a robust technique for hiding secret data in an image based on LSB insertion and RSA encryption. The key of the proposed technique is to encrypt the secret data. The encrypted data are then converted into a bit stream and divided into a number of segments. The cover image is divided into the same number of segments. Each segment of data is compared with each segment of the image to find the best-matching segment, creating a new random sequence of segments that is then inserted into the cover image. Experimental results show that the proposed technique has a high security level and produces better stego-image quality.
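The RSA encryption and best-match segment search are the paper's contribution; the underlying LSB insertion step, however, is standard and can be sketched as follows (a pure-Python byte list stands in for image pixels):

```python
def embed_lsb(pixels, bits):
    """Write each message bit into the least significant bit of a pixel byte."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b
    return out

def extract_lsb(pixels, n):
    """Read back the first n least significant bits."""
    return [p & 1 for p in pixels[:n]]

message_bits = [1, 0, 1, 1, 0, 0, 1, 0]
cover = [200, 201, 77, 78, 10, 11, 255, 0, 123, 99]
stego = embed_lsb(cover, message_bits)
assert extract_lsb(stego, len(message_bits)) == message_bits
# Each pixel changes by at most 1, keeping the stego image close to the cover.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```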
Abstract: Article presents the geometry and structure
reconstruction procedure of the aircraft model for flatter research
(based on the I22-IRYDA aircraft). For reconstruction the Reverse
Engineering techniques and advanced surface modeling CAD tools
are used. Authors discuss all stages of data acquisition process,
computation and analysis of measured data. For acquisition the three
dimensional structured light scanner was used. In the further sections,
details of reconstruction process are present. Geometry
reconstruction procedure transform measured input data (points
cloud) into the three dimensional parametric computer model
(NURBS solid model) which is compatible with CAD systems.
Parallel to the geometry of the aircraft, the internal structure
(structural model) are extracted and modeled. In last chapter the
evaluation of obtained models are discussed.
Abstract: From a set of shifted, blurred, and decimated images, super-resolution image reconstruction can recover a high-resolution image, and it has therefore become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem. Prior knowledge about the image can be incorporated to make the problem well-posed, which leads to various regularization methods. In current regularization methods, however, the regularization parameter is in some cases selected by experience, while other techniques incur too heavy a computational cost to compute the parameter. In this paper, we construct a new super-resolution algorithm by transforming the solution of the original linear system into the solution of the nonlinear matrix equation X + A*X⁻¹A = I, and propose an inverse iterative method.
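A common fixed-point scheme for equations of the form X + A*X⁻¹A = I iterates X_{k+1} = I - A*X_k⁻¹A. The sketch below is an assumption for illustration, not necessarily the paper's inverse iterative method, and uses real 2×2 matrices, for which A* reduces to the transpose:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.2, 0.1], [0.0, 0.2]]  # small-norm A, so the iteration contracts

# Fixed-point iteration X_{k+1} = I - A^T X_k^{-1} A, starting from X_0 = I.
X = I2
for _ in range(50):
    T = matmul(transpose(A), matmul(inv2(X), A))
    X = [[I2[i][j] - T[i][j] for j in range(2)] for i in range(2)]

# Residual of X + A^T X^{-1} A - I should be near zero at the fixed point.
R = matmul(transpose(A), matmul(inv2(X), A))
residual = max(abs(X[i][j] + R[i][j] - I2[i][j])
               for i in range(2) for j in range(2))
```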
Abstract: This paper solves the environmental/ economic dispatch
power system problem using the Non-dominated Sorting Genetic
Algorithm-II (NSGA-II) and its hybrid with a Convergence Accelerator
Operator (CAO), called the NSGA-II/CAO. These multiobjective
evolutionary algorithms were applied to the standard IEEE 30-bus
six-generator test system. Several optimization runs were carried out
on different cases of problem complexity. Different quality measures that compare the performance of the two solution techniques were considered. The results demonstrated that the inclusion of the CAO
in the original NSGA-II improves its convergence while preserving
the diversity properties of the solution set.
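The non-dominated sorting at the heart of NSGA-II can be sketched in a few lines. This is the simple repeated-filter version, not the fast sort of the actual algorithm or the CAO hybrid; the cost/emission points are toy values:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Group points into successive Pareto fronts (minimization)."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Toy (cost, emission) points, both objectives to be minimized.
pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
fronts = non_dominated_sort(pts)
assert fronts[0] == [(1, 5), (2, 3), (4, 1)]  # the Pareto-optimal trade-offs
```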
Abstract: Collaborative working environments for distance
education can be considered as a more generic form of contemporary
remote labs. At present, the majority of existing real laboratories are
not constructed to allow the involved participants to collaborate in
real time. To make this revolutionary learning environment possible
we must allow the different users to carry out an experiment
simultaneously. In recent times, multi-user environments have been successfully applied in many applications such as air traffic control
systems, team-oriented military systems, chat-text tools, multi-player
games etc. Thus, understanding the ideas and techniques behind these
systems could be of great importance in the contribution of ideas to
our e-learning environment for collaborative working. In this
investigation, collaborative working environments from theoretical
and practical perspectives are considered in order to build an
effective collaborative real laboratory, which allows two students or
more to conduct remote experiments at the same time as a team. In
order to achieve this goal, we have implemented a distributed system architecture, enabling students to obtain automated help from either a human tutor or a rule-based e-tutor.
Abstract: Partial discharge (PD) detection is an important
method to evaluate the insulation condition of metal-clad apparatus.
Non-intrusive sensors, which are easy to install and cause no interruption to operation, are preferred in onsite PD detection. However, such detection often lacks accuracy due to the interferences in PD signals. In this paper a novel PD extraction method that uses frequency analysis and entropy-based time-frequency (TF) analysis is introduced. The repetitive pulses from the converter are first removed via frequency analysis. Then, the relative entropy and relative peak frequency of
each pulse (i.e. time-indexed vector TF spectrum) are calculated and
all pulses with similar parameters are grouped. According to the
characteristics of non-intrusive sensor and the frequency distribution
of PDs, the pulses of PD and interferences are separated. Finally the
PD signal and interferences are recovered via inverse TF transform.
The de-noised result of noisy PD data demonstrates that the
combination of frequency and time-frequency techniques can
discriminate PDs from interferences with various frequency
distributions.
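The grouping step relies on relative entropy between pulse spectra. A minimal sketch with toy magnitude spectra (the TF analysis itself and the peak-frequency measure are omitted):

```python
import math

def normalize(spec):
    """Turn a magnitude spectrum into a probability-like distribution."""
    s = sum(spec)
    return [v / s for v in spec]

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two normalized spectra."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Toy spectra: two PD-like pulses share a shape, one interference differs.
pd1 = normalize([1, 8, 4, 1])
pd2 = normalize([1, 7, 5, 1])
noise = normalize([6, 1, 1, 6])

# Similar pulses have low relative entropy and would be grouped together.
assert relative_entropy(pd1, pd2) < relative_entropy(pd1, noise)
```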
Abstract: In digital signal processing, it is important to approximate multi-dimensional data by a method called rank reduction, in which the rank of the data is reduced from higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, an outer product expansion extended from SVD was proposed and implemented for multi-dimensional data, and it has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion has high computational complexity and lacks orthogonality between the expansion terms. Therefore we have proposed an alternative method, the Third-order Orthogonal Tensor Product Expansion (3-OTPE). 3-OTPE uses the power method instead of a nonlinear optimization method to decrease the computing time. At the same time, the group of L. De Lathauwer proposed the Higher-Order SVD (HOSVD), which is also developed as an SVD extension for multi-dimensional data. 3-OTPE and HOSVD are similar in the rank reduction of multi-dimensional data. Using these two methods we can obtain computation results respectively; some are the same while others are slightly different. In this paper, we compare 3-OTPE to HOSVD in calculation accuracy and computing time, and clarify the difference between these two methods.
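The power method that 3-OTPE builds on can be illustrated in the 2-dimensional (SVD) case: applying it to AᵀA yields the dominant singular value of A. This is the generic power method, not the 3-OTPE algorithm itself, and the matrix is a toy example:

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

# Dominant singular value of A via the power method on A^T A.
A = [[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]]
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]

v = [1.0, 0.0]
for _ in range(100):
    w = matvec(AtA, v)
    v = [x / norm(w) for x in w]     # converges to the top eigenvector of A^T A

Mv = matvec(AtA, v)
sigma = math.sqrt(sum(v[i] * Mv[i] for i in range(2)))  # top singular value
```

Truncating A to sigma times the corresponding outer product gives the best rank-1 approximation, which is the elementary step both 3-OTPE and HOSVD generalize to tensors.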
Abstract: Classification of video sequences based on their content is a vital process for adaptation techniques. It helps decide which adaptation technique best fits the resource reduction requested by the client. In this paper we use the principal feature analysis algorithm to select a reduced subset of video features. The main idea is to select only one feature from each class, based on the similarities between the features within that class. Our results show that, using this feature reduction technique, the source video features can be completely omitted from future classification of video sequences.
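The selection idea can be sketched with a simplified correlation-based stand-in: group features whose pairwise correlation is high and keep one representative per group. The actual principal feature analysis algorithm is PCA-based, and the feature names and values below are illustrative:

```python
def corr(x, y):
    """Pearson correlation of two equal-length feature value lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_representatives(features, threshold=0.9):
    """Greedily keep one feature per highly-correlated group."""
    selected = []
    for name, values in features:
        if all(abs(corr(values, v)) < threshold for _, v in selected):
            selected.append((name, values))
    return [name for name, _ in selected]

feats = [
    ("motion", [1.0, 2.0, 3.0, 4.0]),
    ("motion_scaled", [2.0, 4.0, 6.0, 8.0]),   # redundant copy of motion
    ("texture", [5.0, 1.0, 4.0, 2.0]),
]
assert select_representatives(feats) == ["motion", "texture"]
```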
Abstract: Recent enhancements in the field of computing have massively increased the use of web-based electronic documents. Current copyright protection laws are inadequate to prove ownership of electronic documents and do not provide strong features against copying and manipulating information from the web. This has opened many channels for securing information, and significant evolutions have been made in the area of information security.
Digital Watermarking has developed into a very dynamic area of
research and has addressed challenging issues for digital content.
Watermarking can be visible (logos or signatures) and invisible
(encoding and decoding). Many visible watermarking techniques
have been studied for text documents but there are very few for web
based text. XML files are used to trade information on the internet
and contain important information. In this paper, two invisible
watermarking techniques using Synonyms and Acronyms are
proposed for XML files to prove intellectual ownership and to achieve security. An analysis is made for different attacks, and the amount of capacity that can be embedded in the XML file is also measured. A comparative capacity analysis is also made for both methods. The system has been implemented using the C# language and all tests were carried out practically to obtain the results.
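The synonym-based embedding can be sketched as choosing between two spellings of a word to encode one bit. The synonym table and text below are hypothetical, and the XML handling and acronym variant are omitted:

```python
# Hypothetical synonym pairs: the first spelling encodes bit 0, the second bit 1.
SYNONYMS = {"big": "large", "quick": "fast", "buy": "purchase"}
REVERSE = {v: k for k, v in SYNONYMS.items()}

def embed(words, bits):
    """Replace carrier words with their synonym when the next bit is 1."""
    out, i = [], 0
    for w in words:
        if i < len(bits) and w in SYNONYMS:
            out.append(SYNONYMS[w] if bits[i] else w)
            i += 1
        else:
            out.append(w)
    return out

def extract(words):
    """Recover the bit stream from which spelling each carrier word uses."""
    bits = []
    for w in words:
        if w in SYNONYMS:
            bits.append(0)
        elif w in REVERSE:
            bits.append(1)
    return bits

text = "the big cat made a quick dash to buy fish".split()
marked = embed(text, [1, 0, 1])
assert extract(marked) == [1, 0, 1]
```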
Abstract: The purpose of planned islanding is to construct a
power island during system disturbances which are commonly
formed for maintenance purposes. However, in most cases island-mode operation is not allowed. Therefore, distributed generators (DGs) must sense an unplanned disconnection from the main grid. Passive techniques are the most commonly used methods for this purpose. However, they need improvement in order to identify the islanding condition reliably. In this paper an effective method for
identification of islanding condition based on phase space and neural
network techniques has been developed. The captured voltage
waveforms at the coupling points of DGs are processed to extract the
required features. For this purpose, a method known as the phase space technique is used. Based on the extracted features, two neural network configurations, namely a radial basis function network and a probabilistic neural network, are trained to recognize the waveform class.
According to the test result, the investigated technique can provide
satisfactory identification of the islanding condition in the
distribution system.
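The phase space technique amounts to delay embedding of the sampled voltage waveform. A minimal sketch, with illustrative embedding dimension and delay (the feature extraction and downstream neural networks are omitted):

```python
import math

def delay_embed(signal, dim=2, tau=3):
    """Map a sampled waveform into delay vectors (x_t, x_{t+tau}, ...)."""
    n = len(signal) - (dim - 1) * tau
    return [tuple(signal[t + k * tau] for k in range(dim)) for t in range(n)]

# Toy sampled voltage waveform: one sinusoid, 40 samples.
wave = [math.sin(2 * math.pi * t / 20) for t in range(40)]
points = delay_embed(wave, dim=2, tau=5)
assert len(points) == 35
assert points[0] == (wave[0], wave[5])
```

Each delay vector is one point of the reconstructed trajectory; islanding and normal waveforms trace different trajectories, from which the classifier features are derived.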
Abstract: The analysis needed to detect arrhythmias and life-threatening conditions is highly essential in today's world, and this analysis can be accomplished by advanced non-linear processing methods for accurate analysis of the complex signals of heartbeat dynamics. In this perspective, recent developments in the field of multiscale information content have led to the Microcanonical Multiscale Formalism (MMF). We show that this framework provides several signal analysis techniques that are especially adapted to the study of heartbeat dynamics. In this paper, we present first-hand results on whether the considered heartbeat dynamics signals have multiscale properties, by computing local predictability exponents (LPEs) and the Unpredictable Points Manifold (UPM), and thereby computing the singularity spectrum.
Abstract: Probabilistic techniques in computer programs are becoming more and more widely used. Therefore, there is great interest in the formal specification, verification, and development of probabilistic programs. In our work-in-progress project, we are attempting to build a constructive framework for developing probabilistic
programs formally. The main contribution of this paper
is to introduce an intermediate artifact of our work, a Z-based
formalism called PZ, by which one can build set theoretical models of
probabilistic programs. We propose to use a constructive set theory,
called CZ set theory, to interpret the specifications written in PZ.
Since CZ has an interpretation in Martin-Löf's theory of types, this
idea enables us to derive probabilistic programs from correctness
proofs of their PZ specifications.
Abstract: The design of a complete expansion that allows for
compact representation of certain relevant classes of signals is a
central problem in signal processing applications. Achieving such a
representation means knowing the signal features for the purpose of
denoising, classification, interpolation and forecasting. Multilayer Neural Networks are a relatively new class of techniques that are mathematically proven to approximate any continuous function arbitrarily well. Radial Basis Function Networks, which make use of the Gaussian activation function, are also shown to be universal approximators. In this age of ever-increasing digitization in the
storage, processing, analysis and communication of information,
there are numerous examples of applications where one needs to
construct a continuously defined function or numerical algorithm to
approximate, represent and reconstruct the given discrete data of a
signal. Often one wishes to manipulate the data in a way that requires information not included explicitly in the data, which is done through interpolation and/or extrapolation.
Tidal data are a perfect example of a time series, and many statistical techniques have been applied to tidal data analysis and representation. ANNs are a recent addition to these techniques. In the present paper we describe the time series representation capabilities of a special type of ANN, the Radial Basis Function network, and present the results of tidal data representation using RBF. Tidal data analysis and representation is one of the important requirements in marine science for forecasting.
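Exact RBF interpolation of a sampled series reduces to solving a linear system in the Gaussian kernel matrix. A self-contained sketch with toy tide samples; a real tidal series would be far longer, and the kernel width here is an arbitrary illustrative choice rather than a trained one:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_fit(xs, ys, width=1.0):
    """Fit Gaussian RBF weights so the interpolant passes through (xs, ys)."""
    phi = lambda r: math.exp(-(r / width) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = gauss_solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

# Toy "tide height" samples (hour, height).
hours = [0.0, 3.0, 6.0, 9.0, 12.0]
heights = [1.0, 2.5, 1.2, -0.8, 1.1]
f = rbf_fit(hours, heights, width=3.0)
# The interpolant reproduces the training samples exactly.
assert all(abs(f(h) - y) < 1e-6 for h, y in zip(hours, heights))
```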
Abstract: Diabetes is one of the most prevalent diseases worldwide, with an increasing number of complications, retinopathy being one of the most common. This paper describes how data mining and case-based reasoning were integrated to predict retinopathy prevalence among diabetes patients in Malaysia. The required knowledge base was built from literature reviews and interviews with medical experts. Data from a total of 140 diabetes patients were used to train the prediction system. A voting mechanism selects the best prediction results from the two techniques used. It has been successfully demonstrated that both data mining and case-based reasoning can be used for retinopathy prediction, with an improved accuracy of 85%.
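The abstract does not detail the voting mechanism; one plausible sketch is confidence-weighted selection between the two predictors. The labels and confidence values below are hypothetical, purely for illustration:

```python
def vote(pred_dm, pred_cbr):
    """Pick the prediction whose technique reports higher confidence;
    when the two techniques agree, return the shared label directly."""
    label_dm, conf_dm = pred_dm
    label_cbr, conf_cbr = pred_cbr
    if label_dm == label_cbr:
        return label_dm
    return label_dm if conf_dm >= conf_cbr else label_cbr

# Agreement case and disagreement case (hypothetical outputs).
assert vote(("retinopathy", 0.8), ("retinopathy", 0.6)) == "retinopathy"
assert vote(("retinopathy", 0.55), ("no retinopathy", 0.9)) == "no retinopathy"
```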
Abstract: During the past decade, pond aeration systems have
been developed which will sustain large quantities of fish and
invertebrate biomass. Dissolved Oxygen (DO) is considered to be
among the most important water quality parameters in fish culture.
Fishponds in aquaculture farms are usually located in remote areas
where grid lines are at far distance. Aeration of ponds is required to
prevent mortality and to intensify production, especially when feeding is practiced, and in warm regions. To increase pond
production it is necessary to control dissolved oxygen. Artificial
intelligence (AI) techniques are becoming useful as alternate
approaches to conventional techniques or as components of
integrated systems. They have been used to solve complicated
practical problems in various areas and are becoming more and more
popular nowadays. This paper presents a new design of a diffused aeration system using a fuel cell as the power source. A fuzzy logic control (FLC) technique is used to control the air flow rate from the blower to the air piping connected to the pond by adjusting the blower speed. MATLAB SIMULINK results show the high performance of the fuzzy logic controller.
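A minimal Sugeno-style FLC mapping dissolved-oxygen error to blower speed can be sketched as follows; the membership ranges, rule outputs, and units are illustrative assumptions, not the paper's tuned design:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def blower_speed(do_error):
    """Sugeno-style rules: the further dissolved oxygen falls below the
    setpoint (positive error), the faster the blower runs."""
    low  = tri(do_error, -4.0, -2.0, 0.0)   # DO above setpoint
    ok   = tri(do_error, -2.0,  0.0, 2.0)   # DO near setpoint
    high = tri(do_error,  0.0,  2.0, 4.0)   # DO below setpoint
    rules = [(low, 0.2), (ok, 0.5), (high, 0.9)]  # (firing strength, speed)
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5  # weighted-average defuzzification

assert blower_speed(2.0) > blower_speed(0.0) > blower_speed(-2.0)
```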
Abstract: Dual motor drives fed by a single inverter are purposely designed to reduce size and cost with respect to single motor drives fed by single inverters. Previous research on dual motor drives has focused only on modulation and averaging techniques. Only a few studies examine the performance of the drives with speed controllers other than the Proportional-Integral (PI) controller. This
paper presents a detailed comparative study on fuzzy rule-base
in Fuzzy Logic speed Controller (FLC) for Dual Permanent
Magnet Synchronous Motor (PMSM) drives. Two fuzzy speed
controllers which are standard and simplified fuzzy speed
controllers are designed and the results are compared and
evaluated. The standard fuzzy controller consists of 49 rules
while the proposed controller consists of 9 rules determined by
selecting the most dominant rules only. Both designs are
compared for wide range of speed and the robustness of both
controllers over load disturbance changes is tested to
demonstrate the effectiveness of the simplified/reduced rulebase.
Abstract: The amount of information being churned out by the field of biology has grown manifold and now requires the extensive use of computer techniques for its management. Protein sequence similarity, a predominant kind of information in this biological information sea, is key to detecting protein evolutionary relationships. Protein sequence similarity typically implies homology, which in turn may imply structural and functional similarities. In this work, we propose a learning method for detecting remote protein homology. The proposed method uses a transformation that converts a protein sequence into a fixed-dimensional representative feature vector. Each feature vector records the sensitivity of a protein sequence to a set of amino acid substrings generated from the protein sequences of interest. These features are then used in conjunction with support vector machines for the detection of remote protein homology. The proposed method is tested and evaluated on two different benchmark protein datasets and is able to deliver improvements over most of the existing homology detection methods.
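The fixed-dimensional transformation can be illustrated with simple k-mer counts. The paper's features record sensitivity to generated substrings; this count-based variant and the reduced 4-letter alphabet are simplifications for a runnable sketch (the full amino-acid alphabet has 20 letters):

```python
from itertools import product

def kmer_features(seq, alphabet="ACDE", k=2):
    """Fixed-dimensional feature vector: counts of each k-length substring
    over the alphabet, so every sequence maps to the same dimension."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = [0] * len(kmers)
    for i in range(len(seq) - k + 1):
        sub = seq[i:i + k]
        if sub in index:
            vec[index[sub]] += 1
    return vec

v = kmer_features("ACDEAC")
assert len(v) == 16   # 4^2 possible 2-mers -> fixed dimension
assert sum(v) == 5    # one count per sliding window position
```

Vectors of this kind, one per protein, are what a support vector machine would then consume for the homology classification.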