Abstract: This paper presents a technique for improving the computational efficiency of simulating double output induction generators (DOIG) with two rotor circuits when stator transients are to be included. Iterative decomposition is used to separate the flux-linkage equations into decoupled fast and slow subsystems, after which the model order of the fast subsystem is reduced by neglecting the heavily damped fast transients caused by the second rotor circuit, using integral manifold theory. The two decoupled subsystems, together with the equation for the very slowly changing slip, constitute a three-time-scale model of the machine that increases computational speed. Finally, the proposed reduced-order method is compared with conventional methods in linear and nonlinear modes, and it is shown to outperform them in both simulation accuracy and speed.
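The multirate idea behind such a time-scale split can be sketched on a generic stiff system (a hypothetical two-state example, not the DOIG equations themselves): the heavily damped fast state is integrated with a small step while the slow state advances with a large one.

```python
import numpy as np

# Hypothetical stiff test system (NOT the DOIG model): a slow state xs
# coupled to a heavily damped fast state xf.
#   dxs/dt = -0.5*xs + 0.1*xf
#   dxf/dt = -50.0*(xf - xs)
def simulate_multirate(t_end=2.0, h_slow=0.01, substeps=10):
    """Forward-Euler multirate integration: the fast subsystem takes
    `substeps` small steps per large step of the slow subsystem."""
    xs, xf = 1.0, 0.0
    h_fast = h_slow / substeps
    steps = int(round(t_end / h_slow))
    for _ in range(steps):
        # integrate the fast subsystem with the slow state frozen
        for _ in range(substeps):
            xf += h_fast * (-50.0 * (xf - xs))
        # advance the slow subsystem with the large step
        xs += h_slow * (-0.5 * xs + 0.1 * xf)
    return xs, xf

xs, xf = simulate_multirate()
```

After the fast transient settles, xf simply tracks xs, which is exactly the regime where a reduced-order (quasi-steady) fast model becomes accurate.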
Abstract: This article simulates a wind generator set with two faults: bearing collar rail damage and a gearbox oil leak. Empirical Mode Decomposition (EMD) and the Fast Fourier Transform (FFT) are applied to the current signal produced by the generator to obtain its frequency-range spectrum and characteristic values. Finally, an Artificial Neural Network (ANN) classifies the fault signal, determining its type and cause. The purpose of the ANN is the automatic identification of wind generator set faults.
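The FFT feature-extraction stage of such a pipeline can be sketched as follows (EMD itself is usually done with a dedicated library and is omitted here; the 50 Hz fundamental and the 120 Hz fault tone are illustrative choices, not the paper's measured frequencies):

```python
import numpy as np

def spectral_features(signal, fs):
    """Return the dominant frequency and its amplitude from an FFT spectrum."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmax(spectrum[1:]) + 1       # skip the DC bin
    return freqs[k], spectrum[k]

# Synthetic current signal: 50 Hz fundamental plus a weaker 120 Hz tone
# standing in for a fault-related component
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
current = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)
f_dom, amp = spectral_features(current, fs)
```

Characteristic values like (f_dom, amp) per frequency band are the kind of feature vector an ANN classifier would then be trained on.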
Abstract: As a result of the daily workflow in the design
development departments of companies, databases containing huge
numbers of 3D geometric models are generated. According to the given problem, engineers create CAD drawings based on their design
ideas and evaluate the performance of the resulting design, e.g. by
computational simulations. Usually, new geometries are built either
by utilizing and modifying sets of existing components or by adding
single newly designed parts to a more complex design.
The present paper addresses the two facets of acquiring
components from large design databases automatically and providing
a reasonable overview of the parts to the engineer. A unified
framework based on the topographic non-negative matrix
factorization (TNMF) is proposed which solves both aspects
simultaneously. First, on a given database meaningful components
are extracted into a parts-based representation in an unsupervised
manner. Second, the extracted components are organized and
visualized on square-lattice 2D maps. It is shown on the example of
turbine-like geometries that these maps efficiently provide a well-structured overview of the database content and, at the same time, define a measure of spatial similarity that allows easy access to and reuse of components in the process of design development.
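The parts-extraction step can be illustrated with plain non-negative matrix factorization via the standard multiplicative updates (the topographic 2D-map organization that distinguishes TNMF is omitted, and the toy data are illustrative):

```python
import numpy as np

def nmf(V, rank, n_iter=300, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates: V ~ W @ H, all non-negative."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update encodings
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update parts
    return W, H

# Toy "database": 30 samples, each a non-negative mixture of two parts
rng = np.random.default_rng(1)
parts = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])
V = rng.random((30, 2)) @ parts
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the data are exactly a non-negative rank-2 mixture, the factorization recovers a parts-based representation with small reconstruction error.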
Abstract: The pyrolysis of hazelnut shell, polyethylene oxide and their blends was carried out catalytically at 500 and 650 °C. Potassium dichromate was chosen for its oxidative characteristics and its decomposition temperature (500 °C), at which the decomposition products are CrO3 and K2CrO4. As the main effect, a remarkable increase in gasification was observed with this catalyst for the pure components and the blends, especially at 500 °C rather than 650 °C, contrary to the usual observation in pyrolysis. The increase in gas product quantity was offset mainly by a decrease in the solid product and, in some cases, also in the liquid products.
Abstract: One of the major problems in liberalized power
markets is loss allocation. In this paper, a different method for
allocating transmission losses to pool market participants is
proposed. The proposed method is fundamentally based on
decomposition of loss function and current projection concept. The
method has been implemented and tested on several networks, and one sample is summarized in the paper. The results show that the method is comprehensive and fair in allocating the energy losses of a power market to its participants.
Abstract: The ability to detect and classify the type of fault plays a great role in the protection of power systems. This procedure is required to be precise and fast. In this paper, detection of the fault type is implemented using wavelet analysis together with the wavelet entropy principle. The simulation of the power system is carried out using PSCAD/EMTDC. Different types of faults were studied, yielding various current waveforms. These current waveforms were decomposed using wavelet analysis into different approximations and details. The wavelet entropy of these decompositions is analyzed, leading to a successful methodology for fault classification. The suggested approach is tested on different fault types and proves successful in identifying the type of fault.
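A minimal sketch of the wavelet-entropy idea, using a hand-rolled Haar transform in place of the paper's wavelet family and synthetic waveforms in place of PSCAD/EMTDC output:

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar transform: approximation and detail."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def wavelet_entropy(x, levels=4):
    """Shannon entropy of the relative energy in each detail band."""
    energies = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_level(a)
        energies.append(np.sum(d ** 2))
    p = np.array(energies) / (np.sum(energies) + 1e-12)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

t = np.arange(1024)
healthy = np.sin(2 * np.pi * t / 32)   # smooth steady-state current
fault = healthy + np.random.default_rng(0).standard_normal(1024)  # broadband transient energy
we_healthy = wavelet_entropy(healthy)
we_fault = wavelet_entropy(fault)
```

A fault current spreads energy across many detail bands, so its wavelet entropy is higher than that of the smooth steady-state waveform, which is the discriminating feature the classifier exploits.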
Abstract: In this paper, an approach to reduce the computation steps required by fast neural networks for the searching process is presented. The principle of the divide-and-conquer strategy is applied through image decomposition. Each image is divided into small sub-images, and each one is tested separately using a fast neural network. The operation of fast neural networks is based on applying cross-correlation in the frequency domain between the input image and the weights of the hidden neurons. Compared to conventional and fast neural networks, experimental results show that a speed-up ratio is achieved when applying this technique to locate human faces automatically in cluttered scenes. Furthermore, faster face detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of fast neural networks. In contrast to using only fast neural networks, the speed-up ratio increases with the size of the input image when using fast neural networks and image decomposition.
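The speed-up comes from computing cross-correlation in the frequency domain. A 1D sketch (the paper works on 2D images; the planted template stands in for learned hidden-neuron weights) showing that the FFT route matches direct correlation:

```python
import numpy as np

def xcorr_fft(image, template):
    """Valid-mode cross-correlation via the frequency domain (1D for brevity)."""
    n = len(image) + len(template) - 1
    F = np.fft.rfft(image, n)
    G = np.fft.rfft(template[::-1], n)   # reversing turns convolution into correlation
    full = np.fft.irfft(F * G, n)
    start = len(template) - 1
    return full[start:start + len(image) - len(template) + 1]

def xcorr_direct(image, template):
    """Reference: sliding dot product."""
    m = len(template)
    return np.array([np.dot(image[i:i + m], template)
                     for i in range(len(image) - m + 1)])

rng = np.random.default_rng(0)
image = rng.standard_normal(256)
template = image[100:116].copy()   # a planted 16-sample pattern
scores_fft = xcorr_fft(image, template)
scores_direct = xcorr_direct(image, template)
```

For an N-sample image and m-sample template the direct route costs O(N*m) while the FFT route costs O(N log N), which is exactly why the gain grows with input size.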
Abstract: This paper proposes rough set models with three different levels of knowledge granules in incomplete information systems under a tolerance relation defined by similarity between objects according to their attribute values. By introducing a dominance relation on the universe of discourse, similarity classes are decomposed into three subclasses (slightly better, slightly worse and vague), which in turn splits the lower and upper approximations into three components. Using these components, information retrieval can effectively find naturally hierarchical expansions of queries and construct answers to elaborative queries. The approach is illustrated by applying the rough set models to the design of an information retrieval system that accesses documents expanded at different granularities. The proposed method enhances the application of rough set models through flexible expansions and elaborative queries in information retrieval.
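The tolerance-relation approximations underlying such models can be sketched on a toy incomplete table ('*' marking a missing value; the table, attributes and target set are illustrative):

```python
# Tolerance relation for an incomplete information table: two objects are
# similar when every *known* attribute value agrees ('*' = missing value).
def tolerant(x, y):
    return all(a == b or a == '*' or b == '*' for a, b in zip(x, y))

def approximations(objects, target):
    """Lower/upper approximations of a target set under the tolerance relation."""
    names = list(objects)
    cls = {n: {m for m in names if tolerant(objects[n], objects[m])}
           for n in names}                       # tolerance class of each object
    lower = {n for n in names if cls[n] <= target}   # class fully inside target
    upper = {n for n in names if cls[n] & target}    # class overlaps target
    return lower, upper

table = {
    'o1': ('a', '1'),
    'o2': ('a', '*'),
    'o3': ('b', '2'),
    'o4': ('b', '1'),
}
lower, upper = approximations(table, {'o1', 'o4'})
```

Here o4 is certainly in the target (lower approximation), while o2 is only possibly in it via its tolerance with o1 (upper approximation); the gap between the two sets is what query expansion exploits.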
Abstract: This paper presents robust stability criteria for uncertain genetic regulatory networks with time-varying delays. One key point of the criteria is the decomposition of the matrix D̃ into D̃ = D̃1 + D̃2. This corresponds to a decomposition of the delayed terms into two groups: the stabilizing ones and the destabilizing ones. This technique makes it possible to take the stabilizing effect of part of the delayed terms into account. Meanwhile, by choosing an appropriate new Lyapunov functional, a new delay-dependent stability criterion is obtained and formulated in terms of linear matrix inequalities (LMIs). Finally, numerical examples are presented to illustrate the effectiveness of the theoretical results.
Abstract: Summarizing skills have been introduced into the English syllabus in secondary schools in Malaysia to evaluate students' comprehension of a given text, requiring students to employ several strategies to produce a summary. This paper reports on our effort to develop a computer-based summarization assessment system
that detects the strategies used by the students in producing their
summaries. Sentence decomposition of expert-written summaries is
used to analyze how experts produce their summary sentences. From
the analysis, we identified seven summarizing strategies and their
rules which are then transformed into a set of heuristic rules on how
to determine the summarizing strategies. We developed an algorithm based on the heuristic rules and performed experiments to evaluate and validate the proposed technique.
Abstract: We present a subband adaptive infinite impulse response (IIR) filtering method based on a polyphase decomposition of the IIR filter. Motivated by the fact that the polyphase structure has benefits in terms of convergence rate and stability, we introduce the polyphase decomposition to subband IIR filtering, i.e., in each subband a high-order IIR filter is decomposed into polyphase IIR filters of lower order. Computer simulations demonstrate that the proposed method has an improved convergence rate over conventional IIR filters.
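The polyphase identity is easiest to verify on an FIR filter (the paper decomposes IIR filters; the FIR case below only illustrates the two-phase split of filter-then-decimate, where each phase runs at the low rate):

```python
import numpy as np

h = np.array([0.1, 0.4, 0.4, 0.1, -0.05, 0.05])   # toy FIR, a stand-in filter
rng = np.random.default_rng(0)
x = rng.standard_normal(128)

# Direct route: filter at the full rate, then downsample by 2
y_direct = np.convolve(x, h)[::2]

# Polyphase route: filter the two phases at the low rate, then add.
# y[2m] = sum_r h[2r] x[2(m-r)] + sum_r h[2r+1] x[2(m-r)-1]
h_e, h_o = h[0::2], h[1::2]              # even/odd taps of h
x_e = x[0::2]                            # x[2m]
x_o = np.concatenate(([0.0], x[1::2]))   # x[2m-1], with x[-1] = 0
y1 = np.convolve(x_e, h_e)
y2 = np.convolve(x_o, h_o)
y1 = np.concatenate((y1, np.zeros(len(y2) - len(y1))))  # align lengths
y_poly = y1 + y2
```

Each polyphase branch is half the length of h and runs at half the rate, which is the source of the efficiency (and, for adaptive IIR filters, of the convergence and stability benefits the abstract cites).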
Abstract: This paper aims to study the decomposition behavior in a pyrolytic environment of four lignocellulosic biomasses (oil palm shell, oil palm frond, rice husk and paddy straw) and two commercial components of biomass (pure cellulose and lignin), using a thermogravimetric analyzer (TGA). The unit consists of a microbalance and a furnace flushed with 100 cc (STP) min-1 of nitrogen (N2) as inert gas. The heating rate was set at 20°C min-1 and the temperature was ramped from 50 to 900°C. Hydrogen gas production during the pyrolysis was monitored using an Agilent 7890A gas chromatography analyzer. Oil palm shell, oil palm frond, paddy straw and rice husk were found to be sufficiently reactive in a pyrolytic environment of up to 900°C, since pyrolysis of these biomasses starts at temperatures as low as 200°C and the maximum value of weight loss is achieved at about 500°C. Since there was not much difference in the cellulose, hemicellulose and lignin fractions between oil palm shell, oil palm frond, paddy straw and rice husk, the T-50 and R-50 values obtained are similar. H2 production also started rapidly at this temperature, due to the decomposition of the biomass inside the TGA. Biomass with higher lignin content, such as oil palm shell, was found to sustain H2 production for longer than materials with high cellulose and hemicellulose contents.
Abstract: This research aimed to study the potential for recycling organic waste at Suan Sunandha Rajabhat University as compost. The composition of solid waste generated on the campus was investigated, and the physical and chemical properties of the organic waste were analyzed in order to evaluate the portion of waste suitable for recycling as compost. The study found that: (1) the amount of organic waste averaged 299.8 kg/day, of which mixed food waste had the highest share at 191.9 kg/day, followed by mixed leaf & yard waste and mixed fruit & vegetable waste at 66.3 and 41.6 kg/day respectively; (2) the moisture content of the organic waste was between 69.54 and 78.15%, the major plant nutrients N, P and K were 0.14 to 0.17%, 0.46 to 0.52% and 0.16 to 0.18% respectively, and the carbon/nitrogen (C/N) ratio was about 15:1 to 17.5:1; (3) recycling the organic waste as compost was designed around aerobic decomposition using mixed food waste : mixed leaf & yard waste : mixed fruit & vegetable waste at a ratio of 3:2:1 by weight, in accordance with their amounts and their physical and chemical properties.
Abstract: A non-stationary trend in an R-R interval series is considered a main factor that can strongly influence spectral analysis, and it is suggested that trends be removed in order to obtain reliable results. In this study, three detrending methods, the smoothness priors approach, the wavelet method and empirical mode decomposition, were compared on artificial R-R interval series with four types of simulated trends. The Lomb-Scargle periodogram was used for spectral analysis of the R-R interval series. Results indicated that the wavelet method showed better overall performance than the other two methods and was also more time-saving. It was therefore selected for spectral analysis of the real R-R interval series of thirty-seven healthy subjects. Significant decreases (19.94±5.87% in the low frequency band and 18.97±5.78% in the ratio (p
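One of the compared methods, smoothness-priors detrending, can be sketched as a regularized least-squares trend fit (the regularization weight and the artificial series below are illustrative choices):

```python
import numpy as np

def smoothness_priors_detrend(z, lam=10.0):
    """Smoothness-priors detrending: the trend solves
    t = (I + lam^2 * D2.T @ D2)^(-1) z, with D2 the second-difference operator."""
    n = len(z)
    D2 = np.zeros((n - 2, n))            # second-order difference matrix
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lam ** 2 * (D2.T @ D2), z)
    return z - trend, trend

# Artificial R-R series: short-term variability riding on a slow linear drift
rng = np.random.default_rng(0)
n = 300
baseline = 0.8 + 0.001 * np.arange(n)    # drifting baseline, in seconds
rr = baseline + 0.02 * rng.standard_normal(n)
detrended, trend = smoothness_priors_detrend(rr, lam=50.0)
```

Because D2 annihilates linear sequences, the estimated trend absorbs the baseline drift exactly, leaving only the short-term variability for the periodogram.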
Abstract: This paper describes a signal processing method applied to the end effects of the Hilbert-Huang transform (HHT) to improve the fidelity of the spectrum. The method is based on a back-propagation network (BPN). To mitigate the end effects, an end extension of the original signal is obtained by the back-propagation network. The full waveform, comprising the original signal and its extension, is decomposed using empirical mode decomposition (EMD) to obtain the intrinsic mode functions (IMFs) of the waveform. Then, the Hilbert transform (HT) is applied to the IMFs to obtain the Hilbert spectrum of the waveform. As a result, the method handles the end effect of the HHT well and yields the real frequency spectrum of signals.
Abstract: Our aim in this work is to demonstrate the power of the Laplace Adomian decomposition method (LADM) in
approximating the solutions of nonlinear differential equations
governing the two-dimensional viscous flow induced by a shrinking
sheet.
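The LADM recursion can be sketched for a generic first-order nonlinear problem (a sketch of the general scheme, not the shrinking-sheet equations themselves):

```latex
% For y' = L y + N(y) + g(t) with y(0) = a, taking the Laplace transform gives
\mathcal{L}\{y\} = \frac{a}{s} + \frac{1}{s}\,\mathcal{L}\{L y + N(y) + g(t)\}.
% Decompose y = \sum_{n=0}^{\infty} y_n and the nonlinearity
% N(y) = \sum_{n=0}^{\infty} A_n into Adomian polynomials
% (A_0 = N(y_0), A_1 = y_1 N'(y_0), \dots), then iterate:
y_0 = \mathcal{L}^{-1}\!\Big\{\frac{a}{s} + \frac{1}{s}\,\mathcal{L}\{g\}\Big\},
\qquad
y_{n+1} = \mathcal{L}^{-1}\!\Big\{\frac{1}{s}\,\mathcal{L}\{L y_n + A_n\}\Big\}.
```

Truncating the series after a few terms gives the approximate solution; for boundary-value flows such as the shrinking-sheet problem, unknown initial slopes are fixed afterwards from the far-field conditions.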
Abstract: Digital watermarking is one of the techniques for
copyright protection. In this paper, a normalization-based robust
image watermarking scheme which encompasses singular value
decomposition (SVD) and discrete cosine transform (DCT)
techniques is proposed. For the proposed scheme, the host image is
first normalized to a standard form and divided into non-overlapping
image blocks. SVD is applied to each block. By concatenating the first singular values (SVs) of adjacent blocks of the normalized image, an SV block is obtained. DCT is then carried out on the SV blocks to produce SVD-DCT blocks. A watermark bit is embedded in the high-frequency band of an SVD-DCT block by imposing a particular relationship between two pseudo-randomly selected DCT
coefficients. An adaptive frequency mask is used to adjust local
watermark embedding strength. Watermark extraction involves
mainly the inverse process. The watermark extracting method is blind
and efficient. Experimental results show that the quality degradation
of watermarked image caused by the embedded watermark is visually
transparent. Results also show that the proposed scheme is robust
against various image processing operations and geometric attacks.
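The embedding step, imposing a relationship between two selected DCT coefficients of an SV block, can be sketched as follows (block size, coefficient indices and margin are illustrative choices, not the paper's exact parameters, and the normalization and adaptive mask are omitted):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def embed_bit(sv_block, bit, i=5, j=6, margin=0.5):
    """Embed one bit by forcing an order relation between two DCT coefficients."""
    C = dct_matrix(len(sv_block))
    c = C @ sv_block
    lo, hi = sorted((c[i], c[j]))
    if bit == 1:
        c[i], c[j] = hi + margin, lo      # enforce c[i] > c[j]
    else:
        c[i], c[j] = lo, hi + margin      # enforce c[i] < c[j]
    return C.T @ c                        # inverse DCT (orthonormal)

def extract_bit(sv_block, i=5, j=6):
    """Blind extraction: only the coefficient order is needed."""
    C = dct_matrix(len(sv_block))
    c = C @ sv_block
    return 1 if c[i] > c[j] else 0

# First singular values of eight adjacent 8x8 blocks form one SV block
rng = np.random.default_rng(0)
blocks = rng.random((8, 8, 8)) * 255
sv = np.array([np.linalg.svd(b, compute_uv=False)[0] for b in blocks])
```

Extraction never needs the original image, only the sign of the coefficient difference, which is what makes the scheme blind.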
Abstract: A study of the obtainable watermark data rate for information hiding algorithms is presented in this paper. As the perceptual entropy of wideband monophonic audio signals is in the range of four to five bits per sample, a significant amount of additional information can be inserted into the signal without causing any perceptual distortion. Experimental results showed that transform-domain watermark embedding considerably outperforms watermark embedding in the time domain, and that signal decompositions with a high transform coding gain, like the wavelet transform, are the most suitable for high data rate information hiding.
Keywords: Digital watermarking, information hiding, audio watermarking, watermark data rate.
Abstract: In this paper, optimum adaptive loading algorithms are applied to a multicarrier system with a Space-Time Block Coding (STBC) scheme associated with space-time processing based on singular value decomposition (SVD) of the channel matrix over Rayleigh fading channels. The SVD method has been employed in the MIMO-OFDM system in order to overcome subchannel interference. Chow's and Campello's algorithms have been implemented to obtain a bit and power allocation for each subcarrier, assuming instantaneous channel knowledge. The adaptive loaded SVD-STBC scheme is capable of providing both full rate and full diversity for any number of transmit antennas. The effectiveness of these techniques has been demonstrated through the simulation of an adaptive loaded SVD-STBC system, and the comparison shows that the proposed algorithms ensure better performance in the MIMO case.
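The SVD step that removes subchannel interference can be sketched directly (a 4x4 Rayleigh channel; noise, STBC and the loading algorithms are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
# Rayleigh-fading MIMO channel: i.i.d. complex Gaussian entries
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)              # H = U @ diag(s) @ Vh

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # one symbol per subchannel
tx = Vh.conj().T @ x                     # precode with V at the transmitter
rx = H @ tx                              # propagate through the channel (noiseless)
y = U.conj().T @ rx                      # match-filter with U^H at the receiver
# y equals diag(s) @ x: parallel, interference-free subchannels
```

Each resulting subchannel has gain s[k], and it is these per-subchannel gains that bit/power-loading algorithms such as Chow's and Campello's allocate over.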
Abstract: Natural outdoor scene classification is an active and promising research area around the globe. In this study, the
classification is carried out in two phases. In the first phase, the
features are extracted from the images by wavelet decomposition
method and stored in a database as feature vectors. In the second
phase, the neural classifiers such as back-propagation neural network
(BPNN) and resilient back-propagation neural network (RPNN) are
employed for the classification of scenes. Four hundred color images
are considered from MIT database of two classes as forest and street.
A comparative study has been carried out on the performance of the two neural classifiers, BPNN and RPNN, with an increasing number of test samples. RPNN showed better classification results than BPNN on large test sets.