Abstract: This paper presents a novel approach to power transformer diagnostics. The approach identifies the exact location and extent of a fault in the transformer and helps to reduce the operating costs related to handling the faulty transformer, its disassembly, and repair. Its advantage is the possibility of simulating both a healthy transformer and all faults that can occur during operation, without disassembling the transformer, which is very expensive in practice. The approach is based on obtaining the frequency-dependent impedance of the transformer from sweep frequency response analysis measurements and on parametric 3D FE modeling of the fault in the transformer. The parameters of the 3D FE model are the position and the extent of the axial short circuit. By comparing the frequency-dependent impedances of the parametric models with the measured ones, the location and extent of the fault are identified. The approach was tested on a real transformer and showed close agreement between the real fault and the simulated one.
Abstract: Video watermarking is usually treated as the watermarking of a set of still images. In the frame-by-frame watermarking approach, each video frame is seen as a single watermarked image, so collusion attacks are more critical in video watermarking. If the same or a redundant watermark is embedded in every frame of the video, the watermark can be estimated and then removed by the watermark estimate remodulation (WER) attack. Conversely, if uncorrelated watermarks are used for the individual frames, these watermarks can be washed out by frame temporal filtering (FTF). A switching watermark system, the so-called SS-N system, has better performance against the WER and FTF attacks: for each frame, the watermark is randomly picked from a finite pool of watermark patterns. In this paper, the SS-N system is first surveyed, and then a new collusion attack on it is proposed, using a new algorithm for separating the video frames by watermark pattern. In this way, N sets are built, each containing the frames that carry the same watermark. Then, by applying the WER attack within each set, the N different watermark patterns are estimated and subsequently removed.
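A minimal sketch of the frame-separation idea behind the proposed attack, under toy assumptions: a flat host frame and additive ±1 spread-spectrum patterns, with all dimensions, the amplitude, and the seed below hypothetical. The real attack would operate on denoised residuals of actual video frames.

```python
import random

random.seed(0)
DIM, N, FRAMES, AMP = 64, 3, 30, 4.0   # all toy values
HOST = [128.0] * DIM                   # a flat stand-in for the host frame

# Pool of N spread-spectrum watermark patterns (+/-1 sequences).
patterns = [[random.choice((-1.0, 1.0)) for _ in range(DIM)] for _ in range(N)]

# SS-N embedding rule: each frame randomly carries one pattern from the pool.
labels = [random.randrange(N) for _ in range(FRAMES)]
frames = [[h + AMP * w for h, w in zip(HOST, patterns[k])] for k in labels]

def corr(a, b):
    return sum(x * y for x, y in zip(a, b)) / len(a)

# Attack step 1: separate the frames into N sets. The residual of each frame
# against the average frame is a crude per-frame watermark estimate; frames
# carrying the same pattern have positively correlated residuals.
mean = [sum(f[i] for f in frames) / FRAMES for i in range(DIM)]
residuals = [[f[i] - mean[i] for i in range(DIM)] for f in frames]

clusters = []                          # (prototype residual, frame indices)
for idx, res in enumerate(residuals):
    for proto, members in clusters:
        if corr(res, proto) > 0:       # same pattern -> positive correlation
            members.append(idx)
            break
    else:
        clusters.append((res, [idx]))

# Attack step 2 (WER per set): average the residuals inside each set to
# estimate that set's watermark, which could then be subtracted.
estimates = [[sum(residuals[i][d] for i in members) / len(members)
              for d in range(DIM)]
             for _, members in clusters]
```

With the toy parameters, residuals of same-pattern frames correlate strongly and cross-pattern residuals correlate negatively, so a zero threshold separates the sets cleanly.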
Abstract: Cheating on standardized tests has been a major concern, as it potentially undermines measurement precision. One major way to reduce cheating by collusion is to administer multiple forms of a test. Even with this approach, the potential for collusion remains quite large. A Latin-square treatment structure for distributing multiple forms is proposed to further reduce the collusion potential, and an index to measure the extent of that potential is also proposed. Finally, using a simple algorithm, various Latin squares were explored to find the structure that keeps the collusion potential to a minimum.
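As an illustration of the distribution scheme, a cyclic Latin square can be built and scored with a simple neighbour-based colluding-potential index. This is only a hedged sketch: the paper's actual index and its search over Latin squares may differ.

```python
def latin_square(n):
    """Cyclic Latin square: the form handed to the student in row i, seat j.
    Every form appears exactly once in each row and each column."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def collusion_index(square):
    """Toy colluding-potential index: count side-by-side neighbours
    who received the same form (lower is better)."""
    return sum(1
               for row in square
               for a, b in zip(row, row[1:])
               if a == b)

forms = latin_square(5)
```

By construction, no two horizontally adjacent students receive the same form, so the index is zero, while a single-form administration of the same room would score the maximum.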
Abstract: Morgan's refinement calculus (MRC) is one of the well-known methods that allow the formality present in a program specification to be carried all the way to code. Object-Z (OZ), on the other hand, is an extension of Z that adds support for classes and objects. There are a number of methods for obtaining code from OZ specifications, which can be categorized into refinement and animation methods. As far as we know, only one refinement method exists that refines OZ specifications into code; however, it does not have fine-grained refinement rules and thus cannot be automated. Existing animation methods, in turn, do not present their mapping rules formally and do not support the mapping of several important constructs of OZ, such as all cases of operation expressions and most of the constructs in the global paragraph. In this paper, with the aim of providing an automatic path from OZ specifications to code, we propose an approach that maps OZ specifications into their counterparts in MRC in order to use the fine-grained refinement rules of MRC. Having counterparts of our specifications in MRC, we can refine them into code automatically using MRC tools such as RED. Further advantages of our work are that the mapping rules are proposed formally, that the mapping covers all important constructs of Object-Z, and that dynamic instantiation of objects is considered, even though OZ itself does not provide this facility.
Abstract: In this paper, we consider the design of pulse shaping filters using orthogonal Hermite-Rodriguez basis functions. The pulse shaping filter design problem is formulated and solved as a quadratic programming problem with linear inequality constraints. Compared with existing approaches reported in the literature, the use of Hermite-Rodriguez functions offers an effective alternative for solving the constrained filter synthesis problem. This is demonstrated through a numerical example concerning the design of an equalization filter for a digital transmission channel.
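The basis itself is easy to reproduce numerically. The sketch below checks the orthonormality of the standard Gauss-Hermite functions, which are closely related to the Hermite-Rodriguez functions used above; it is not the paper's quadratic programming solver, and the integration limits and step count are arbitrary choices.

```python
import math

def hermite_poly(n, t):
    """Physicists' Hermite polynomial H_n(t) via the three-term recurrence
    H_{k+1}(t) = 2t H_k(t) - 2k H_{k-1}(t)."""
    h0, h1 = 1.0, 2.0 * t
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * t * h1 - 2.0 * k * h0
    return h1

def hermite_function(n, t):
    """Orthonormal Hermite function psi_n(t) = H_n(t) e^{-t^2/2} / norm,
    with norm^2 = 2^n n! sqrt(pi)."""
    norm = math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return hermite_poly(n, t) * math.exp(-t * t / 2.0) / norm

def inner(m, n, lo=-10.0, hi=10.0, steps=4000):
    """Trapezoidal approximation of the L2 inner product <psi_m, psi_n>."""
    dt = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        t = lo + i * dt
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * hermite_function(m, t) * hermite_function(n, t)
    return total * dt
```

Orthonormality of such a basis is what makes the quadratic-program formulation of the synthesis problem well conditioned.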
Abstract: Automatic reading of handwritten cheques is a computationally complex process, and it plays an important role in financial risk management. Machine vision and learning provide a viable solution to this problem. Research effort has mostly been focused on recognizing diverse pitches of cheques and demand drafts with an identical outline. However, most of these methods employ template matching to localize the pitches, and such schemes can fail when applied to the different types of outline maintained by each bank. In this paper, the so-called outline problem is resolved by a cheque information tree (CIT), which generalizes the localization method to extract active regions of entities. In addition, a weight-based density plot (WBDP) is used to isolate text entities and read complete pitches. Recognition is based on texture features using neural classifiers. The legal amount is subsequently recognized using both texture and perceptual features. A post-processing phase is invoked to detect incorrect readings by means of a Type-2 grammar using a Turing machine. The performance of the proposed system was evaluated using cheques and demand drafts of 22 different banks. The test data consist of a collection of 1540 leaves obtained from 10 different account holders from each bank. Results show that this approach can be deployed without significant design amendments.
Abstract: In this paper, we present an architecture for the implementation of a real-time stereoscopic image correction approach. The architecture is parallel and makes use of several memory blocks in which precalculated data relating to the cameras used for image acquisition are stored. The use of reduced images proves to be essential in the proposed approach, so the suggested architecture must also be able to carry out real-time reduction of the original images.
Abstract: The distressing flood scenarios that have occurred in recent years in the areas surrounding the Sarawak River have damaged properties and indirectly disrupted productive activities. This study aims to reconstruct a 100-year flood event that took place in this river basin. The Sarawak River sub-basin was chosen and modeled using a one-dimensional hydrodynamic modeling approach with InfoWorks River Simulation (RS), in combination with a Geographical Information System (GIS). This produces the hydraulic response of the river and its floodplains under extreme flooding conditions. With different parameters introduced to the model, correlations between observed and simulated data are between 79% and 87%. Using the best calibrated model, flood mitigation structures are imposed along the sub-basin, and analysis is performed on the model simulation results. The results show that the proposed retention ponds constructed along the sub-basin provide the most efficient flood reduction, at 34.18%.
Abstract: In this paper we propose a new criterion for solving the channel shortening problem in multi-carrier systems. In a discrete multitone receiver, a time-domain equalizer (TEQ) reduces intersymbol interference (ISI) by shortening the effective duration of the channel impulse response. The minimum mean square error (MMSE) method for TEQ design does not give satisfactory results. In [1], a new criterion was introduced for partially equalizing severe ISI channels to reduce the cyclic prefix overhead of the discrete multitone transceiver (DMT), assuming a fixed transmission bandwidth. Due to a specific constraint in that method, namely the unit norm constraint on the target impulse response (TIR), the freedom to choose the optimum TIR vector is reduced; better results can be obtained by avoiding this constraint. In this paper we reformulate the cost function proposed in [1] as the maximization of a determinant subject to a linear matrix inequality (LMI) and a quadratic constraint, and we solve the resulting optimization problem. The usefulness of the proposed method is demonstrated through simulations.
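To make "shortening the effective duration of the channel impulse response" concrete, the toy sketch below grid-searches a hypothetical 2-tap TEQ that maximizes the energy captured in a short window of the effective channel. The channel, tap grid, and window length are made up; the paper instead solves a determinant-maximization problem under LMI and quadratic constraints.

```python
def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def window_energy_ratio(h, nu):
    """Best ratio of the energy inside a length-nu window to total energy:
    a simple measure of how well h fits inside a cyclic prefix of length nu."""
    total = sum(x * x for x in h)
    best = max(sum(x * x for x in h[s:s + nu])
               for s in range(len(h) - nu + 1))
    return best / total

channel = [0.9 ** k for k in range(16)]   # a slowly decaying toy channel
nu = 4                                    # target (cyclic-prefix) length

best_w, best_ratio = None, window_energy_ratio(channel, nu)
for i in range(-10, 11):
    w1 = i / 10.0
    effective = convolve(channel, [1.0, w1])   # effective channel = h * w
    ratio = window_energy_ratio(effective, nu)
    if ratio > best_ratio:
        best_w, best_ratio = [1.0, w1], ratio
```

For this geometric channel, the search recovers the tap that cancels the decay, concentrating nearly all the effective channel's energy inside the window.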
Abstract: In this paper, a novel method for the recognition of musical instruments in polyphonic music is presented, using an embedded hidden Markov model (EHMM). An EHMM is a doubly embedded HMM structure in which each state of the external HMM is itself an independent HMM. Classification is carried out for two different internal HMM structures, with GMMs used as likelihood estimators for the internal HMMs. The results are compared to those achieved by an artificial neural network with two hidden layers. Good classification accuracies were achieved both for solo instrument performances and for instrument combinations, demonstrating that the new approach outperforms similar classification methods by exploiting the dynamics of the signal.
Abstract: This paper presents an approach for the daily optimal operation of distribution networks containing Distributed Generators (DGs). Because the DGs are privately owned, a cost-based compensation method is used to encourage them to generate active and reactive power. The objective function is the sum of the electrical energy generated by the DGs and by the substation (main) bus over the next day. A genetic algorithm is used to solve the optimal operation problem. The approach is tested on an IEEE 34-bus distribution feeder.
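A stripped-down genetic algorithm of the kind used for such a day-ahead operation problem can be sketched as follows. The chromosome is a vector of 24 hourly DG set-points; the cost function, load curve, limits, and seed are all made-up stand-ins for the actual network and compensation model.

```python
import random

random.seed(1)
HOURS, POP, GENS = 24, 30, 200
# Made-up hourly demand curve (higher during the day).
demand = [70.0 if 8 <= h < 20 else 50.0 for h in range(HOURS)]

def cost(schedule):
    """Toy objective standing in for the energy cost of the DGs plus the
    main bus: squared mismatch with demand plus a small generation term."""
    return sum((g - d) ** 2 + 0.1 * g for g, d in zip(schedule, demand))

def random_individual():
    return [random.uniform(0.0, 100.0) for _ in range(HOURS)]

def crossover(a, b):
    cut = random.randrange(1, HOURS)          # one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1, step=5.0):
    return [min(100.0, max(0.0, g + random.uniform(-step, step)))
            if random.random() < rate else g
            for g in ind]

population = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=cost)
    elite = population[: POP // 2]            # elitist truncation selection
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    population = elite + children
best = min(population, key=cost)
```

Elitism keeps the best-so-far schedule in the population, so the best cost is non-increasing across generations.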
Abstract: This paper analyzes the parameters of an intersection collision avoidance (ICA) system based on radar sensors. The parameters include the positioning errors, the repetition period of the radar sensor, the conditions for potential collisions of two cross-path vehicles, etc. These analyses provide the requirements, limitations, and specifications of the ICA system. They show that the positioning errors increase as the measured vehicle approaches the intersection. In addition, it is not necessary to mount the radar sensor at a higher position, since the positioning sensitivity worsens as the height of the radar sensor increases. A concept of safety buffer distances for the front and rear of the measured vehicle is also proposed, and the conditions for potential collisions of two cross-path vehicles are presented to facilitate the computation algorithm.
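One hedged way to express a cross-path potential-collision condition with front/rear safety buffers is as an overlap test on the times during which each vehicle, padded by its buffers, occupies the conflict zone. The constant-speed model and all numbers below are illustrative only, not the paper's exact conditions.

```python
def occupancy_window(distance, speed, length, front_buffer, rear_buffer):
    """(t_in, t_out): the interval during which the vehicle, padded by its
    front/rear safety buffers, occupies the conflict zone (constant speed)."""
    t_in = (distance - front_buffer) / speed
    t_out = (distance + length + rear_buffer) / speed
    return t_in, t_out

def potential_collision(vehicle1, vehicle2):
    """Flag a potential collision when the two occupancy windows overlap."""
    a1, b1 = occupancy_window(*vehicle1)
    a2, b2 = occupancy_window(*vehicle2)
    return max(a1, a2) <= min(b1, b2)

# (distance to conflict zone [m], speed [m/s], vehicle length [m],
#  front buffer [m], rear buffer [m]) -- all values illustrative.
car_a = (40.0, 10.0, 4.5, 3.0, 2.0)
car_b = (42.0, 10.5, 4.5, 3.0, 2.0)   # arrives at nearly the same time
far_b = (90.0, 10.5, 4.5, 3.0, 2.0)   # arrives much later
```

Enlarging the buffers widens the occupancy windows, making the test more conservative, which matches the role of the safety buffer distances described above.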
Abstract: An Artificial Neural Network (ANN) based modeling technique has been used to study the influence of different combinations of meteorological parameters on evaporation from a reservoir. The data set used is taken from an earlier reported study. Several input combinations were tried in order to determine the importance of the different input parameters in predicting evaporation. The prediction accuracy of the ANN has also been compared with that of linear regression, and the comparison demonstrated the superior performance of the ANN over the linear regression approach. The findings also revealed that all input parameters considered together, rather than individual parameters taken one at a time as reported in earlier studies, are required to predict evaporation. The highest correlation coefficient (0.960), along with the lowest root mean square error (0.865), was obtained with the input combination of air temperature, wind speed, sunshine hours, and mean relative humidity. A plot of the actual against the predicted values of evaporation suggests that, with all input parameters, most of the values lie within a scatter of ±15%. These findings indicate the usefulness of the ANN technique in predicting evaporation losses from reservoirs.
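The two reported evaluation metrics, the correlation coefficient and the root mean square error, can be computed as below. The observed and predicted evaporation values are hypothetical, for illustration only.

```python
import math

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def rmse(x, y):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

observed  = [3.1, 4.2, 5.0, 6.3, 7.1, 5.8]   # hypothetical evaporation (mm/day)
predicted = [3.0, 4.5, 4.8, 6.0, 7.4, 5.6]
```

A high correlation together with a low RMSE, as reported in the study, indicates predictions that track both the trend and the magnitude of the observations.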
Abstract: A data warehouse (DW) is a system whose value lies in supporting decision-making through querying. Queries to a DW are critical with regard to their complexity and length: they often access millions of tuples and involve joins between relations as well as aggregations. Materialized views can provide better performance for DW queries; however, these views incur a maintenance cost, so materializing all views is not possible. An important challenge in a DW environment is therefore materialized view selection, because a trade-off must be made between query performance and view maintenance cost. In this paper, we introduce a new approach aimed at solving this challenge, based on Two-Phase Optimization (2PO), a combination of Simulated Annealing (SA) and Iterative Improvement (II), together with the use of a Multiple View Processing Plan (MVPP). Our experiments show that our method provides a further improvement in terms of query processing cost and view maintenance cost.
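A toy sketch of the 2PO idea: run Iterative Improvement to reach a good local minimum, then a low-temperature Simulated Annealing pass starting from it. The bit-vector encoding and the per-view cost numbers below are made up; in the paper the costs would come from an MVPP.

```python
import math
import random

random.seed(2)
NV = 10  # number of candidate views
query_benefit = [random.uniform(5, 50) for _ in range(NV)]   # cost saved if materialized
maintenance   = [random.uniform(1, 40) for _ in range(NV)]   # upkeep cost if materialized

def total_cost(selected):
    """Pay maintenance for materialized views; pay the foregone query
    benefit for views left unmaterialized. Lower is better."""
    return sum(maintenance[i] if s else query_benefit[i]
               for i, s in enumerate(selected))

def flip(state, i):
    s = list(state)
    s[i] = 1 - s[i]
    return s

def iterative_improvement(state):
    """Phase 1: greedy descent to a local minimum."""
    improved = True
    while improved:
        improved = False
        for i in range(NV):
            cand = flip(state, i)
            if total_cost(cand) < total_cost(state):
                state, improved = cand, True
    return state

def simulated_annealing(state, temp=5.0, cooling=0.95, steps=300):
    """Phase 2: a low-temperature SA pass; the best state seen is kept."""
    best = state
    best_cost = cur_cost = total_cost(state)
    for _ in range(steps):
        cand = flip(state, random.randrange(NV))
        delta = total_cost(cand) - cur_cost
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
            state, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = state, cur_cost
        temp *= cooling
    return best

solution = simulated_annealing(iterative_improvement([0] * NV))
```

This toy cost is separable per view, so the two-phase search should reach the per-view optimum; a real MVPP cost model couples the views and makes the SA phase genuinely useful.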
Abstract: In this paper we discuss the effect of an unbounded particle interaction operator on particle growth, and we study how this can guide the choice of appropriate time steps for the numerical simulation. We also provide rigorous mathematical proofs showing that large particles become dominant with increasing time, while small particles contribute negligibly. We then discuss the efficiency of the algorithm by performing numerical simulation tests and by comparing the simulated solutions with some known analytic solutions of the Smoluchowski equation.
Abstract: In this paper we present a hybrid search algorithm for solving constraint satisfaction and optimization problems. The algorithm combines ideas from two basic approaches: complete and incomplete algorithms, also known as systematic search and local search algorithms. The different characteristics of systematic search and local search methods are complementary, so we have tried to combine the advantages of both approaches in the presented algorithm. Its major advantage is finding a sound partial solution for complicated problems whose complete solution cannot be found in a reasonable time. The algorithm's results are compared with those of other algorithms on the well-known n-queens problem.
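For reference, the n-queens benchmark mentioned above can be attacked with a plain min-conflicts local search, sketched below. This is not the paper's hybrid algorithm, which additionally interleaves systematic search; the restart budget and step limit are arbitrary choices.

```python
import random

random.seed(3)

def conflicts(cols, row, col):
    """Queens attacking square (row, col); the queen of `row` itself excluded."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts(n, max_steps=10000):
    """Local search: repair a random assignment by min-conflicts moves."""
    cols = [random.randrange(n) for _ in range(n)]   # one queen per row
    for _ in range(max_steps):
        bad = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not bad:
            return cols, True                        # complete solution
        r = random.choice(bad)
        scores = [conflicts(cols, r, c) for c in range(n)]
        least = min(scores)
        # move the queen in row r to a random minimum-conflict column
        cols[r] = random.choice([c for c, s in enumerate(scores) if s == least])
    return cols, False                               # sound partial answer only

def solve(n, restarts=50):
    for _ in range(restarts):
        cols, done = min_conflicts(n)
        if done:
            return cols
    return cols

solution = solve(8)
```

Note the parallel to the abstract's point: even when the step budget runs out, the search returns a sound partial assignment rather than nothing.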
Abstract: A new digital watermarking technique for images that
are sensitive to blocking artifacts is presented. Experimental results
show that the proposed MDCT based approach produces highly
imperceptible watermarked images and is robust to attacks such as
compression, noise, filtering and geometric transformations. The
proposed MDCT watermarking technique is applied to fingerprints
for ensuring security. The face image and demographic text data of
an individual are used as multiple watermarks. An AFIS was
used to quantitatively evaluate the matching performance of the
MDCT-based watermarked fingerprint. The high fingerprint
matching scores show that the MDCT approach is resilient to
blocking artifacts. The quality of the extracted face and extracted text
images was computed using two human visual system metrics and
the results show that the image quality was high.
Abstract: As the enormous amount of on-line text on the World-Wide Web grows, the development of methods for automatically summarizing this text becomes more important. The primary goal of this research is to create an efficient tool that is able to summarize large documents automatically. We propose an Evolving Connectionist System: an adaptive, incremental learning and knowledge representation system that evolves its structure and functionality. In this paper, we propose a novel approach to part-of-speech disambiguation using a recurrent neural network, a paradigm capable of dealing with sequential data. We observed that the connectionist approach to text summarization has a natural way of learning grammatical structures through experience. Experimental results show that our approach achieves acceptable performance.
Abstract: The third phase of the Web, the Semantic Web, requires many web pages to be annotated with metadata. Thus, a crucial question is where to acquire this metadata. In this paper we propose a semi-automatic method to annotate the texts of documents and web pages, which employs a quite comprehensive knowledge base to categorize instances with respect to an ontology. The approach is evaluated against manual annotations and against one of the most popular annotation tools, which works in the same way as ours. The approach is implemented in the .NET Framework and uses WordNet as its knowledge base, yielding an annotation tool for the Semantic Web.
Abstract: The complexity of today's software systems makes collaborative development necessary to accomplish tasks. Frameworks are necessary to allow developers to perform their tasks independently yet collaboratively. Similarity detection is one of the major issues to consider when developing such frameworks: it allows developers to mine existing repositories when developing their own views of a software artifact, and it is necessary for identifying the correspondences between views in order to merge them and check their consistency. Given the importance of the requirements specification stage in software development, this paper proposes a framework for the collaborative development of Object-Oriented formal specifications, along with a similarity detection approach to support the creation, merging, and consistency checking of specifications. The paper also explores the impact of using additional concepts on improving the matching results. Finally, the proposed approach is empirically evaluated.