Abstract: The design of a complete expansion that allows for
compact representation of certain relevant classes of signals is a
central problem in signal processing applications. Achieving such a
representation means knowing the signal features for the purpose of
denoising, classification, interpolation and forecasting. Multilayer
Neural Networks are a relatively new class of techniques that are
mathematically proven to approximate any continuous function
arbitrarily well. Radial Basis Function Networks, which make use of
Gaussian activation functions, have likewise been shown to be
universal approximators. In this age of ever-increasing digitization in the
storage, processing, analysis and communication of information,
there are numerous examples of applications where one needs to
construct a continuously defined function or numerical algorithm to
approximate, represent and reconstruct the given discrete data of a
signal. Often one wishes to manipulate the data in a way that
requires information not included explicitly in the data, which is
done through interpolation and/or extrapolation.
Tidal data are a classic example of a time series, and many
statistical techniques have been applied to tidal data analysis and
representation. ANN is a recent addition to these techniques. In the
present paper we describe the time series representation capabilities
of a special type of ANN, the Radial Basis Function network, and
present the results of tidal data representation using RBF. Tidal data
analysis and representation is one of the important requirements in
marine science for forecasting.
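As an illustration of the idea, a Gaussian RBF expansion can be fitted to a sampled signal by linear least squares. This is a generic sketch, not the paper's model; the tide-like test signal, centre spacing and basis width are illustrative assumptions.

```python
import numpy as np

def rbf_design(x, centers, width):
    # Matrix of Gaussian basis functions evaluated at the sample points x
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def rbf_fit(x, y, centers, width):
    # Least-squares weights of the Gaussian RBF expansion of y
    Phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# Hypothetical tide-like signal: a single dominant harmonic
t = np.linspace(0.0, 4.0 * np.pi, 200)
y = np.sin(t)
centers = np.linspace(0.0, 4.0 * np.pi, 30)
w = rbf_fit(t, y, centers, width=0.8)
y_hat = rbf_design(t, centers, 0.8) @ w
max_err = float(np.max(np.abs(y - y_hat)))
```

With centres spaced more densely than the basis width, the least-squares fit reconstructs the smooth signal essentially exactly.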
Abstract: Users of computer systems may often require the
private transfer of messages/communications between parties across
a network. Information warfare and the protection and dominance of
information in the military context is a prime example of an
application area in which the confidentiality of data needs to be
maintained. The safe transportation of critical data is therefore often
a vital requirement for many private communications. However,
unwanted interception/sniffing of communications is also a
possibility. An elementary stealthy transfer scheme is therefore
proposed by the authors. This scheme makes use of encoding,
splitting of a message and the use of a hashing algorithm to verify the
correctness of the reconstructed message. For this proof-of-concept
purpose, the authors have experimented with the random sending of
encoded parts of a message and their subsequent reconstruction, to
demonstrate how data can stealthily be transferred across a network
so as to prevent its obvious retrieval.
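The core of such a scheme can be sketched in a few lines: index-tagged parts are sent in random order, and the receiver verifies the reassembled message against a hash. The part size, the choice of SHA-256, and the in-memory "sending" are illustrative assumptions, not the authors' protocol.

```python
import hashlib
import random

def split_message(msg: bytes, n_parts: int):
    # Tag each part with its index so parts may travel in any order
    size = -(-len(msg) // n_parts)  # ceiling division
    return [(i, msg[i * size:(i + 1) * size]) for i in range(n_parts)]

def reassemble(parts):
    # Sort by index tag, then concatenate
    return b"".join(chunk for _, chunk in sorted(parts))

msg = b"critical communication payload"
digest = hashlib.sha256(msg).hexdigest()   # sent separately for verification
parts = split_message(msg, 4)
random.shuffle(parts)                      # stand-in for random network sending
rebuilt = reassemble(parts)
ok = hashlib.sha256(rebuilt).hexdigest() == digest
```
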
Abstract: This paper presents the determination of the proper
quality costs parameters which provide the optimum return. The
system dynamics simulation was applied. The simulation model was
constructed from real data from an electronic devices manufacturer in
Thailand. The Steepest Descent algorithm was employed for the
optimisation. The experimental results show that the company should
spend 850 and 10 Baht/day on prevention and appraisal activities,
respectively. This yields the minimum cumulative total quality cost,
258,000 Baht over twelve months. The effect of the step size used
while driving the variables toward the optimum was also investigated.
A smaller step size provided a better result at the cost of more
experimental runs. However, the difference in yield in this case is not
significant in practice. Therefore, the greater step size is
recommended, since the region of the optima can be reached more
easily and rapidly.
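The step-size effect described above can be reproduced on a toy problem. The quadratic cost surrogate and its minimum at (850, 10) are assumptions for illustration; they are not the paper's simulation model.

```python
import numpy as np

def steepest_descent(grad, x0, step, iters):
    # Fixed-step steepest descent: x <- x - step * grad(x)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Hypothetical quadratic cost with minimum at (prevention, appraisal) = (850, 10)
opt = np.array([850.0, 10.0])
grad = lambda x: 2.0 * (x - opt)
x_small = steepest_descent(grad, [0.0, 0.0], step=0.01, iters=500)
x_large = steepest_descent(grad, [0.0, 0.0], step=0.40, iters=500)
# The larger step reaches the optimum region in far fewer iterations,
# at the cost of coarser final accuracy on less benign cost surfaces
```
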
Abstract: This paper presents a systematic procedure for modelling and simulation of a power system installed with a power system stabilizer (PSS) and a flexible ac transmission system (FACTS)-based controller. For the design purpose, the model of the example power system, a single-machine infinite-bus power system installed with the proposed controllers, is developed in MATLAB/SIMULINK. In the developed model, the synchronous generator is represented by Model 1.1, which includes both the generator main field winding and the damper winding in the q-axis, so as to evaluate the impact of the PSS and FACTS-based controller on power system stability. The model can be used for teaching power system stability phenomena, and also for research work, especially for developing generator controllers using advanced technologies. Further, to avoid adverse interactions, the PSS and FACTS-based controller are simultaneously designed employing a genetic algorithm (GA). Non-linear simulation results are presented for the example power system under various disturbance conditions to validate the effectiveness of the proposed modelling and simultaneous design approach.
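A minimal real-coded genetic algorithm of the kind used for such simultaneous controller tuning can be sketched as follows. The toy objective and gain bounds are assumptions for illustration only; in the paper, the fitness would come from the SIMULINK power-system model.

```python
import random

def ga_minimize(fitness, bounds, pop_size=30, gens=60, sigma=0.1, seed=7):
    # Minimal real-coded GA: keep the better half, breed by blending + mutation
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 + rng.gauss(0.0, sigma) for x, y in zip(a, b)]
            child = [min(max(c, lo), hi) for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Hypothetical objective: drive two controller gains toward (2.0, 0.5)
best = ga_minimize(lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2,
                   bounds=[(0.0, 5.0), (0.0, 2.0)])
```

Because the elite half always survives, the best candidate found so far is never lost between generations.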
Abstract: In this study, a 3D combustion chamber was simulated
using FLUENT 6.32. The aim was to obtain detailed information on
the combustion characteristics and nitrogen oxides in the furnace,
and on the effect of oxygen enrichment on the combustion process.
Oxygen-enriched combustion is an effective way to reduce emissions.
This paper analyzes NO emission, including thermal NO and prompt NO.
The flow rate ratio of air to fuel is varied as 1.3, 3.2 and 5.1, and
the oxygen-enriched flow rates are 28, 54 and 68 lit/min. The 3D
Reynolds Averaged Navier Stokes (RANS) equations with the standard
k-ε turbulence model are solved together by the Fluent 6.32 software.
A first order upwind scheme is used to discretize the governing
equations, and the SIMPLE algorithm is used for pressure-velocity
coupling. Results show that for AF = 1.3, increasing the oxygen flow
rate of the lance reduces NO emissions. Moreover, in a fixed oxygen
enrichment condition, increasing the air to fuel ratio increases the
temperature peak, but not the NO emission rate. As a result, oxygen
enrichment can reduce the NO emission in this kind of furnace at low
air to fuel ratios.
Abstract: A new secure knapsack cryptosystem based on the
Merkle-Hellman public key cryptosystem will be proposed in this
paper. It is well known that when the density is low, a knapsack
cryptosystem becomes vulnerable to the low-density attack; the
density d of a secure knapsack cryptosystem must therefore be larger
than 0.9408. In this paper, we investigate a new Permutation
Combination Algorithm. By exploiting this algorithm, we propose a
novel knapsack public-key cryptosystem whose density d exceeds
0.9408, so that the low-density attack is avoided.
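The density referred to above is d = n / log2(max a_i) for public weights a_1, ..., a_n. A quick check of this quantity, on hypothetical weights rather than the proposed scheme's, is:

```python
import math

def knapsack_density(weights):
    # d = n / log2(max weight); the low-density attack applies when d < 0.9408
    return len(weights) / math.log2(max(weights))

# Hypothetical public keys (not the proposed scheme's):
low = [(1 << 200) + i for i in range(1, 101)]    # 100 weights of ~200 bits
high = [(1 << 40) + i for i in range(1, 101)]    # 100 weights of ~40 bits
d_low, d_high = knapsack_density(low), knapsack_density(high)
```

Here d_low is about 0.5 (vulnerable to lattice-based low-density attacks) while d_high is about 2.5, above the 0.9408 threshold.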
Abstract: XML files contain data in a well-formatted manner. Studying the format or semantics of the grammar helps in fast retrieval of the data. Many algorithms have been described for searching data in XML files. A number of approaches use data structures or are related to the structure of the document; in these cases the user must know the structure of the document. Information retrieval techniques using NLP, by contrast, are related to the content of the document; their results may be irrelevant or less successful and may take more time to obtain. This paper presents fast XML retrieval techniques using a new indexing technique and the concept of RXML. When indexing an XML document, the system takes into account both the document content and the document structure and assigns a value to each tag in the file. To query the system, a user is not constrained to a fixed query format.
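A minimal form of such tag-value indexing, combining structure (tag names) with content (element text), might look like this; the index layout and sample document are illustrative assumptions, not the paper's RXML scheme.

```python
import xml.etree.ElementTree as ET

def build_tag_index(xml_text):
    # Map each tag name to the list of text values stored under it
    index = {}
    for elem in ET.fromstring(xml_text).iter():
        if elem.text and elem.text.strip():
            index.setdefault(elem.tag, []).append(elem.text.strip())
    return index

doc = ("<library><book><title>Fast XML Retrieval</title>"
       "<year>2009</year></book></library>")
index = build_tag_index(doc)
# A query for a tag is now a dictionary lookup instead of a document scan
titles = index.get("title", [])
```
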
Abstract: This paper presents a forgetting factor scheme for variable step-size affine projection algorithms (APA). The proposed scheme uses a forgetting processed input matrix as the projection matrix of pseudo-inverse to estimate system deviation. This method introduces temporal weights into the projection matrix, which is typically a better model of the real error's behavior than homogeneous temporal weights. The regularization overcomes the ill-conditioning introduced by both the forgetting process and the increasing size of the input matrix. This algorithm is tested by independent trials with coloured input signals and various parameter combinations. Results show that the proposed algorithm is superior in terms of convergence rate and misadjustment compared to existing algorithms. As a special case, a variable step size NLMS with forgetting factor is also presented in this paper.
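For reference, a plain NLMS baseline of the kind the proposed variable step-size, forgetting-factor algorithms extend can be written as follows; the unknown system, step size and signal length are illustrative assumptions.

```python
import numpy as np

def nlms(x, d, order, mu=0.5, eps=1e-6):
    # Normalized LMS: adapt w so that w . [x[n], ..., x[n-order+1]] tracks d[n]
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ u
        w += mu * e * u / (u @ u + eps)    # normalization by input power
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.1])            # unknown FIR system to identify
d = np.convolve(x, h)[: len(x)]           # noiseless desired signal
w = nlms(x, d, order=3)
```

In the noiseless case the weights converge to the unknown impulse response; the paper's contribution concerns how step size and temporal weighting are chosen, not this basic recursion.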
Abstract: Digital watermarking is a way to provide secure multimedia data communication alongside its copyright protection role. The Spread Spectrum (SS) modulation principle is widely used in digital watermarking to achieve robustness of multimedia signals against various signal-processing operations. Several SS watermarking algorithms have been proposed for multimedia signals, but very few works have discussed the issues responsible for secure data communication and the improvement of its robustness. The current paper critically analyzes a few such factors, namely the properties of the spreading codes, a signal decomposition suitable for data embedding, the security provided by the key, the successive bit cancellation method applied at the decoder, which has a great impact on detection reliability, and the secure communication of a significant signal under the camouflage of insignificant signals. Based on this analysis, a robust SS watermarking scheme for secure data communication is proposed in the wavelet domain, and improvements in secure communication and robustness performance are reported through experimental results. The reported results also show improvement in the visual and statistical invisibility of the hidden data.
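At its core, SS watermarking adds a keyed pseudo-noise sequence and detects by correlation. A minimal time-domain sketch (not the proposed wavelet-domain scheme; the signal length, key and embedding strength are assumptions) is:

```python
import numpy as np

def pn_sequence(key, n):
    # Keyed +/-1 pseudo-noise spreading code
    return np.random.default_rng(key).choice([-1.0, 1.0], size=n)

def ss_embed(host, bit, key, alpha=0.1):
    # Add +alpha*PN for bit 1, -alpha*PN for bit 0
    return host + alpha * (1.0 if bit else -1.0) * pn_sequence(key, len(host))

def ss_detect(watermarked, key):
    # The sign of the correlation with the keyed PN recovers the bit
    return int(watermarked @ pn_sequence(key, len(watermarked)) > 0.0)

host = np.random.default_rng(1).standard_normal(4096)
wm = ss_embed(host, bit=1, key=42, alpha=0.1)
recovered = ss_detect(wm, key=42)
```

The correlation of the host with the PN sequence has standard deviation of order sqrt(N), while the embedded term contributes alpha*N, so detection is reliable for long enough sequences.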
Abstract: Most fuzzy clustering algorithms have some shortcomings,
e.g. they are not able to detect clusters with non-convex shapes, the
number of clusters must be known a priori, or they suffer from
numerical problems, such as sensitivity to the
initialization. This paper studies the synergistic combination of
the hierarchical and graph theoretic minimal spanning tree based
clustering algorithm with the partitional Gath-Geva fuzzy clustering
algorithm. The aim of this hybridization is to increase the robustness
and consistency of the clustering results and to decrease the number
of the heuristically defined parameters of these algorithms to
decrease the influence of the user on the clustering results. For the
analysis of the resulting fuzzy clusters, a new tool based on a fuzzy
similarity measure is presented. The calculated similarities of the
clusters can be used for hierarchical clustering of the resulting
fuzzy clusters; this information is useful for cluster merging and
for the visualization of the clustering results. As the examples used
to illustrate the operation of the new algorithm show, the proposed
algorithm can detect clusters of arbitrary shape and does not suffer
from the numerical problems of the classical Gath-Geva fuzzy
clustering algorithm.
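The graph-theoretic half of the hybrid can be illustrated on its own: build a minimal spanning tree over the data and cut its longest edges to obtain an initial partition (which the paper then refines with Gath-Geva clustering). This sketch assumes Euclidean distances and a known cluster count.

```python
import numpy as np

def mst_clusters(points, n_clusters):
    # Prim's algorithm for the minimal spanning tree, then cut the
    # (n_clusters - 1) longest edges and label the remaining components
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    in_tree, edges = [0], []
    best, parent = dist[0].copy(), np.zeros(n, dtype=int)
    best[0] = np.inf
    for _ in range(n - 1):
        j = int(np.argmin(best))
        edges.append((int(parent[j]), j, float(best[j])))
        in_tree.append(j)
        closer = dist[j] < best
        parent[closer] = j
        best = np.minimum(best, dist[j])
        best[in_tree] = np.inf
    edges.sort(key=lambda e: e[2])
    kept = edges[: n - n_clusters]          # drop the longest edges

    # Union-find over the kept edges yields the cluster labels
    label = list(range(n))
    def find(i):
        while label[i] != i:
            label[i] = label[label[i]]
            i = label[i]
        return i
    for a, b, _ in kept:
        label[find(a)] = find(b)
    return [find(i) for i in range(n)]

pts = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
labels = mst_clusters(pts, 2)
```

On two well-separated blobs, cutting the single longest MST edge recovers the two groups regardless of their shape.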
Abstract: In this paper, a robust statistics based filter to remove salt and pepper noise from digital images is presented. The algorithm first detects the corrupted pixels, since impulse noise affects only certain pixels in the image while the remaining pixels are uncorrupted. The corrupted pixels are then replaced by an estimated value using the proposed robust statistics based filter. The proposed method performs well in removing low to medium density impulse noise with detail preservation up to a noise density of 70%, compared with the standard median filter, weighted median filter, recursive weighted median filter, progressive switching median filter, signal dependent rank ordered mean filter, adaptive median filter and a recently proposed decision based algorithm. The visual and quantitative results show that the proposed algorithm outperforms the others in restoring the original image, with superior preservation of edges and better suppression of impulse noise.
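The detect-then-replace idea can be sketched with a simple extreme-value detector and a 3x3 median replacement. This is a baseline illustration; the paper's estimator is a robust-statistics filter, not a plain median.

```python
import numpy as np

def detect_and_filter(img, low=0, high=255):
    # Replace only pixels at the extreme grey levels (likely salt or pepper)
    # with the median of their 3x3 neighbourhood; leave other pixels untouched
    out = img.copy()
    padded = np.pad(img, 1, mode="edge")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] == low or img[i, j] == high:
                out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

clean = np.full((5, 5), 100, dtype=np.uint8)
noisy = clean.copy()
noisy[2, 2] = 255          # salt
noisy[0, 0] = 0            # pepper
restored = detect_and_filter(noisy)
```

Because uncorrupted pixels are never modified, detail is preserved better than with a filter applied to every pixel.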
Abstract: Imaging is defined as the process of obtaining
geometric images, either two dimensional or three dimensional, by scanning or digitizing existing objects or products. In this research, it is applied to retrieve 3D information of the human skin
surface in a medical application. This research focuses on analyzing
and determining the volume of leg ulcers using imaging devices. Volume
determination is one of the important criteria in the clinical assessment of leg ulcers. The volume and size of the leg ulcer wound
indicate whether the wound is responding to treatment, healing or worsening.
Different imaging techniques are expected to give different results (and accuracies) in generating data and images. A midpoint projection
algorithm was used to reconstruct the cavity into a solid model and compute the volume. Misinterpretation of the results can affect the
treatment efficacy. The objective of this paper is to compare the
accuracy of two 3D data acquisition methods, laser
triangulation and structured light. Using models of known volume, it is shown that the structured-light-based 3D technique
produces better accuracy than the laser triangulation data
acquisition method for leg ulcer volume determination.
Abstract: Evolutionary robotics is concerned with the design of
intelligent systems with life-like properties by means of simulated
evolution. Approaches in evolutionary robotics can be categorized
according to the control structures that represent the behavior and the
parameters of the controller that undergo adaptation. The basic idea
is to automatically synthesize behaviors that enable the robot to
perform useful tasks in complex environments. The evolutionary
algorithm searches through the space of parameterized controllers
that map sensory perceptions to control actions, thus realizing a
specific robotic behavior. Further, the evolutionary algorithm
maintains and improves a population of candidate behaviors by
means of selection, recombination and mutation. A fitness function
evaluates the performance of the resulting behavior according to the
robot's task or mission. In this paper, the focus is on the use of
genetic algorithms to solve a multi-objective optimization problem
representing robot behaviors; in particular, the A-Compander Law is
employed to select the weight of each objective during the
optimization process. Results using an adaptive fitness function show
that this approach can efficiently react to complex tasks under
variable environments.
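Assuming the A-Compander Law refers to the standard A-law companding curve, it and a hypothetical weighted-sum scalarization of normalized objectives can be sketched as follows; the scalarization form is an assumption for illustration, not the paper's exact weighting rule.

```python
import math

def a_law(x, A=87.6):
    # Standard A-law compander on [0, 1]: expands small values,
    # compresses large ones
    if x < 1.0 / A:
        return A * x / (1.0 + math.log(A))
    return (1.0 + math.log(A * x)) / (1.0 + math.log(A))

def scalarize(objectives, A=87.6):
    # Hypothetical weighted sum: pass each normalized objective through the
    # compander so that small objectives still contribute noticeably
    return sum(a_law(f, A) for f in objectives)

s = scalarize([0.2, 0.5])
```
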
Abstract: Finding synchronizing sequences for finite automata is a very important problem in many practical applications (part orienters in industry, the reset problem in biocomputing theory, network issues, etc.). The problem of finding the shortest synchronizing sequence is NP-hard, so polynomial algorithms can probably work only as heuristics. In this paper we propose two versions of polynomial algorithms which work better than the well-known Eppstein's Greedy and Cycle algorithms.
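A greedy heuristic in the spirit of Eppstein's algorithm merges one pair of states at a time via breadth-first search on the pair automaton. The sketch below, using the Cerny automaton C4 as a test case, is a simplified illustration and not the authors' proposed algorithms.

```python
from collections import deque

def merging_word(delta, p, q):
    # BFS over state pairs for a shortest word sending both p and q
    # to the same state
    start, prev = (p, q), {(p, q): None}
    queue = deque([start])
    while queue:
        pair = queue.popleft()
        if pair[0] == pair[1]:
            word = []
            while prev[pair] is not None:
                pair, a = prev[pair]
                word.append(a)
            return word[::-1]
        for a in delta:
            nxt = (delta[a][pair[0]], delta[a][pair[1]])
            if nxt not in prev:
                prev[nxt] = (pair, a)
                queue.append(nxt)
    return None                             # this pair cannot be merged

def greedy_synchronize(delta, states):
    # Greedy heuristic: repeatedly merge one pair of the current state set
    current, word = set(states), []
    while len(current) > 1:
        p, q = sorted(current)[:2]
        w = merging_word(delta, p, q)
        if w is None:
            return None                     # automaton is not synchronizing
        word += w
        for a in w:
            current = {delta[a][s] for s in current}
    return word

# Cerny automaton C4: 'a' maps state 3 to 0 (identity elsewhere),
# 'b' cyclically shifts the states
delta = {"a": [0, 1, 2, 0], "b": [1, 2, 3, 0]}
word = greedy_synchronize(delta, [0, 1, 2, 3])
final = set(range(4))
for a in word:
    final = {delta[a][s] for s in final}
```

Each outer iteration merges at least one pair, so the loop terminates; the resulting word synchronizes the automaton but is generally longer than the shortest one.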
Abstract: The paper presents the application of an artificial
intelligence technique called adaptive tabu search to the design of
the controller of a buck converter. The averaging model derived from
the DQ and generalized state-space averaging methods is applied to
simulate the system during the search process. Simulations using such
an averaging model require less computational time than the full
topology model from the software packages. The reported model is
suitable for the work in this paper, in which repeated calculation is
needed to search for the best solution. The results show that the
proposed design technique can provide better output waveforms than
those designed by the classical method.
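A minimal (non-adaptive) tabu search over two controller parameters illustrates the search loop; the toy objective, neighbourhood size and tabu rules are assumptions for illustration, not the adaptive tabu search actually used in the paper.

```python
import random

def tabu_search(cost, x0, step=0.1, iters=200, tabu_len=10, seed=3):
    # Minimal continuous tabu search: sample neighbours of the current point,
    # move to the best one not on the tabu list, remember the best ever seen
    rng = random.Random(seed)
    x, best = list(x0), list(x0)
    tabu = []
    for _ in range(iters):
        cands = []
        for _ in range(8):
            c = [xi + rng.uniform(-step, step) for xi in x]
            key = tuple(round(v, 2) for v in c)
            if key not in tabu:
                cands.append((cost(c), key, c))
        if not cands:
            continue
        _, key, x = min(cands)
        tabu = (tabu + [key])[-tabu_len:]   # short memory of visited regions
        if cost(x) < cost(best):
            best = list(x)
    return best

# Hypothetical controller-parameter objective with optimum at (1.0, 2.0)
best = tabu_search(lambda g: (g[0] - 1.0) ** 2 + (g[1] - 2.0) ** 2, [0.0, 0.0])
```

The tabu list lets the search accept uphill moves and escape the region it has just explored, while the best-so-far record preserves the best solution found.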
Abstract: Mixed Model Production is the practice of assembling
several distinct and different models of a product on the same
assembly line without changeovers and then sequencing those models
in a way that smoothes the demand for upstream components. In this
paper, we consider an objective function that minimizes total
stoppage time and total idle time, with sequence-dependent setup
times. Many studies have been done on mixed model assembly lines,
but in this paper we focus specifically on reducing the idle times,
which is possible through various help policies. To evaluate the
solutions, several cases were developed and about 40 test problems
were considered. We use scatter search for the optimization, and
experimental results show the behavior of the method and the
efficiency of our algorithm. Scatter search combined with help
policies can produce high quality answers, which is why it has been
used in this paper.
Abstract: A realistic 3D face model is desired in various
applications such as face recognition, games, avatars, and animations.
Construction of a 3D face model is composed of 1) building a face
shape model and 2) rendering the face shape model. Thus, building a
realistic 3D face shape model is an essential step for realistic 3D face
model. Recently, the 3D morphable model has been successfully
introduced to deal with the variety of human face shapes. The 3D
dense correspondence problem must first be resolved in order to
construct a realistic 3D dense morphable face shape model. Several
approaches to the 3D dense correspondence problem in 3D face
modeling have been proposed previously; among them, optical flow
based algorithms and TPS
(Thin Plate Spline) based algorithms are representative. Optical flow
based algorithms require texture information of faces, which is
sensitive to variation in illumination. In the TPS based algorithms
proposed so far, the TPS process is performed on a 2D projection of
the 3D face data in cylindrical coordinates, not directly on the 3D
face data, and thus errors due to distortion of the data during the
2D TPS process may be inevitable.
In this paper, we propose a new 3D dense correspondence algorithm
for 3D dense morphable face shape modeling. The proposed algorithm
does not need texture information and applies TPS directly on 3D face
data. Through the construction procedures, it is observed that the
proposed algorithm constructs a realistic 3D morphable face model
reliably and quickly.
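For reference, a 2D thin-plate spline solve (the r^2 log r kernel plus an affine part) looks as follows; the proposed method instead applies TPS directly to 3D data, so this sketch shows only the textbook 2D form with illustrative control points.

```python
import numpy as np

def tps_fit(src, dst):
    # Solve for TPS kernel weights and affine part mapping src points to dst
    n = len(src)
    r = np.linalg.norm(src[:, None] - src[None, :], axis=2)
    K = np.where(r > 0, r ** 2 * np.log(r + 1e-12), 0.0)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T                         # side conditions on the weights
    b = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(A, b)

def tps_apply(params, src, pts):
    # Evaluate the fitted spline at new points
    r = np.linalg.norm(pts[:, None] - src[None, :], axis=2)
    K = np.where(r > 0, r ** 2 * np.log(r + 1e-12), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ params[: len(src)] + P @ params[len(src):]

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
params = tps_fit(src, src)                  # identity warp as a sanity check
mapped = tps_apply(params, src, src)
```
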
Abstract: In this research, the laminar heat transfer of natural convection on vertical surfaces has been investigated. Most studies of natural convection consider the steady state, in which the velocity and temperature fields do not change with time, although transient formulations are also widely used. The governing equations are solved using a finite volume approach. The convective terms are discretized using the power-law scheme, whereas for the diffusive terms the central difference scheme is employed. Coupling between velocity and pressure is handled with the SIMPLE algorithm. The resulting system of discretized linear algebraic equations is solved with an alternating direction implicit scheme. Then a configuration of rectangular fins is placed in different arrangements on the surface, the natural convection heat transfer on these surfaces, with no-slip conditions, is studied, and finally an optimization is performed.
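Each sweep of an alternating direction implicit scheme reduces to tridiagonal systems, which are solved with the Thomas algorithm (TDMA). A generic sketch follows; the sample system is illustrative, not the paper's discretization.

```python
def thomas(a, b, c, d):
    # Thomas algorithm (TDMA) for a tridiagonal system:
    #   a: sub-diagonal (n-1), b: main diagonal (n), c: super-diagonal (n-1)
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                   # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):          # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D diffusion-like stencil: -x[i-1] + 2*x[i] - x[i+1] = 1
x = thomas([-1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 1.0, 1.0])
```
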
Abstract: A multi-block algorithm and its implementation in two-dimensional finite element numerical model CCHE2D are presented. In addition to a conventional Lagrangian Interpolation Method (LIM), a novel interpolation method, called Consistent Interpolation Method (CIM), is proposed for more accurate information transfer across the interfaces. The consistent interpolation solves the governing equations over the auxiliary elements constructed around the interpolation nodes using the same numerical scheme used for the internal computational nodes. With the CIM, the momentum conservation can be maintained as well as the mass conservation. An imbalance correction scheme is used to enforce the conservation laws (mass and momentum) across the interfaces. Comparisons of the LIM and the CIM are made using several flow simulation examples. It is shown that the proposed CIM is physically more accurate and produces satisfactory results efficiently.
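The conventional LIM transfer can be illustrated with a one-dimensional Lagrange interpolant; this sketch is generic and not tied to CCHE2D's interface geometry.

```python
def lagrange(xs, ys, x):
    # Evaluate the Lagrange interpolating polynomial through (xs, ys) at x
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# A quadratic through three nodes reproduces x**2 exactly
val = lagrange([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```

Such interpolation transfers nodal values across block interfaces but, unlike the proposed CIM, does not by itself enforce momentum conservation there.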
Abstract: A new decomposition form is introduced in this report
to establish a criterion for the bi-partite separability of Bell diagonal
states. The criterion takes the form of a quadratic inequality in the
coefficients of a given Bell diagonal state and can be derived via a
simple algorithmic calculation of its invariants. In addition, the
criterion can be extended to quantum systems of higher dimension.
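For the standard two-qubit case, a Bell diagonal state with Bell-basis mixing probabilities p_1, ..., p_4 is separable if and only if no p_i exceeds 1/2, which can serve as a sanity check against the quadratic-inequality criterion. The helper below assumes that convention.

```python
def bell_diagonal_separable(p):
    # Two-qubit Bell diagonal state with Bell-basis probabilities p = (p1..p4):
    # separable if and only if max(p) <= 1/2
    assert len(p) == 4 and abs(sum(p) - 1.0) < 1e-9 and min(p) >= 0.0
    return max(p) <= 0.5

mixed_sep = bell_diagonal_separable([0.4, 0.2, 0.2, 0.2])
bell_ent = bell_diagonal_separable([0.7, 0.1, 0.1, 0.1])
```
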