Abstract: In this paper we aim to find the optimal multiwavelet for compression of electrocardiogram (ECG) signals and to select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimal compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the performance of the different multiwavelets in compressing ECG signals, in addition to factors known from the compression literature such as Compression Ratio (CR), Percent Root Difference (PRD), Distortion (D), and Root Mean Square Error (RMSE), we also employ the Cross Correlation (CC) criterion, for studying the morphological relation between the reconstructed and the original ECG signal, and the Signal to reconstruction Noise Ratio (SNR). The simulation results show that the Cardinal Balanced Multiwavelet (cardbal2) with the identity (Id) prefiltering method is the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
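For reference, the scalar quality measures named above have standard definitions; a minimal sketch (assuming `x` is the original and `y` the reconstructed signal as NumPy arrays, not the authors' code) is:

```python
import numpy as np

def prd(x, y):
    # Percent Root Difference between original and reconstruction
    return 100 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def rmse(x, y):
    # Root Mean Square Error
    return np.sqrt(np.mean((x - y) ** 2))

def snr(x, y):
    # Signal to reconstruction Noise Ratio in dB
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))

def cc(x, y):
    # Cross Correlation coefficient (morphological similarity)
    xm, ym = x - x.mean(), y - y.mean()
    return np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2))
```

A CC close to 1 indicates that the reconstructed beat preserves the morphology of the original, even when PRD reports some residual energy error.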
Abstract: The objective of this study is to investigate the
combustion in a pilot-ignited supercharged dual-fuel engine, fueled
with different types of gaseous fuels under various equivalence ratios.
It is found that, if certain operating conditions are maintained, the conventional dual-fuel combustion mode can be transformed into a combustion mode with two-stage heat release. This mode of combustion is called PREMIER (PREmixed Mixture Ignition in the End-gas Region) combustion. During PREMIER combustion, the combustion initially progresses as premixed flame propagation; then, due to mixture autoignition in the end-gas region ahead of the propagating flame front, a transition occurs with a rapid increase in the heat release rate.
Abstract: The proper operation of conventional active filters depends on the accuracy with which the system distortion is identified. Moreover, using a suitable method for current injection and reactive power compensation leads to improved filter performance. Accordingly, this paper presents a method based on predictive current control theory for shunt active filter applications. The harmonics of the load current are identified by transforming the load current into the o–d–q reference frame and eliminating the DC part of the d–q components. The remaining components are then delivered to the predictive current controller as a three-phase reference current via the inverse Park transformation. The system is modeled in the discrete time domain. The proposed method has been tested using a MATLAB model with a nonlinear load (Total Harmonic Distortion = 20%). The simulation results indicate that the proposed filter causes a sinusoidal current (THD = 0.15%) to flow through the source. In addition, the results show that the filter tracks the reference current accurately.
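As a rough illustration of the d–q harmonic extraction described above (a generic synchronous-reference-frame sketch, not the authors' implementation; it assumes balanced currents with zero zero-sequence component and a known synchronization angle `theta`):

```python
import numpy as np

def park(ia, ib, ic, theta):
    # abc -> d-q (amplitude-invariant Park transformation)
    d = (2 / 3) * (ia * np.cos(theta) + ib * np.cos(theta - 2 * np.pi / 3)
                   + ic * np.cos(theta + 2 * np.pi / 3))
    q = -(2 / 3) * (ia * np.sin(theta) + ib * np.sin(theta - 2 * np.pi / 3)
                    + ic * np.sin(theta + 2 * np.pi / 3))
    return d, q

def inverse_park(d, q, theta):
    # d-q -> abc (inverse Park transformation)
    ia = d * np.cos(theta) - q * np.sin(theta)
    ib = d * np.cos(theta - 2 * np.pi / 3) - q * np.sin(theta - 2 * np.pi / 3)
    ic = d * np.cos(theta + 2 * np.pi / 3) - q * np.sin(theta + 2 * np.pi / 3)
    return ia, ib, ic

def harmonic_reference(ia, ib, ic, theta):
    # The DC part of d-q corresponds to the fundamental; removing it
    # leaves the harmonic content, returned as a three-phase reference.
    d, q = park(ia, ib, ic, theta)
    return inverse_park(d - d.mean(), q - q.mean(), theta)
```

In a real-time filter the mean removal would be replaced by a low-pass or moving-average filter, but the principle is the same: the fundamental maps to DC in the rotating frame and everything else is the compensation target.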
Abstract: In this paper, we investigate vector control of an induction machine, taking into account the discretization problems of the control. The purpose is to show how current control with rotor time constant update can be included in a discrete model. The simulation results obtained are very satisfactory, which was made possible by a good choice of the regulator parameter values and demonstrates the soundness of the method used for choosing the parameters of the discrete regulators. The simulation results are presented at the end of this paper.
Abstract: Shot boundary detection is a fundamental step in the organization of large video data. In this paper, we propose a new method for detecting and classifying gradual shot transitions in video, using the advantages of fractal analysis and an AIS-based classifier. The proposed features are the "vertical intercept" and "fractal dimension" of each video frame, which are computed using Fourier transform coefficients. We also use a classifier based on the Clonal Selection Algorithm. We have implemented our solution and assessed it on the TRECVID 2006 benchmark dataset.
Abstract: Many factors affect the success of Machine Learning
(ML) on a given task. The representation and quality of the instance
data is first and foremost. If there is much irrelevant and redundant
information present or noisy and unreliable data, then knowledge
discovery during the training phase is more difficult. It is well known
that data preparation and filtering steps take considerable amount of
processing time in ML problems. Data pre-processing includes data
cleaning, normalization, transformation, feature extraction and
selection, etc. The product of data pre-processing is the final training
set. It would be convenient if a single sequence of data pre-processing algorithms had the best performance on every data set, but this is not the case. Thus, we present the best-known algorithms for each step of data pre-processing so that one can achieve the best performance for a given data set.
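As a toy illustration of the cleaning, normalization, and selection steps mentioned above (a hypothetical helper with arbitrary choices, not tied to any particular data set):

```python
import numpy as np

def preprocess(X):
    # cleaning: impute missing values (NaN) with the column mean
    col_mean = np.nanmean(X, axis=0)
    idx = np.where(np.isnan(X))
    X = X.copy()
    X[idx] = np.take(col_mean, idx[1])
    # normalization: rescale each feature to [0, 1]
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    X = (X - lo) / span
    # selection: drop near-constant (uninformative) features
    keep = X.var(axis=0) > 1e-12
    return X[:, keep]
```

Each of these steps (imputation strategy, scaling scheme, feature selector) is exactly the kind of choice for which no single sequence is best on every data set.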
Abstract: Knowledge of the magnetic quantities in a magnetic circuit is always of great interest. On the one hand, this information is needed for the simulation of a transformer. On the other hand, parameter studies are more reliable if the magnetic quantities are derived from a well-established model. One possibility to model the 3-phase transformer is a magnetic equivalent circuit (MEC). Though this is a well-known approach, it is often not an easy task to set up such a model for a large number of lumped elements which additionally includes the nonlinear characteristic of the magnetic material. Here we show the setup of a solver for a MEC and compare the calculated results with measurements. The equations of the MEC are based on a rearranged system of nodal analysis. Thus it is possible to achieve a minimum number of equations and a clear, simple structure. Hence, the system is uncomplicated in its handling and supports the iteration process. Additional helpful routines are implemented within the solver to enhance its performance. The electric circuit is described by an electric equivalent circuit (EEC). Our results for the 3-phase transformer demonstrate the computational efficiency of the solver and show the benefit of applying a MEC.
Abstract: Granular computing deals with the representation of information in the form of aggregates and with related methods for its transformation and analysis in problem solving. A granulation scheme based on clustering and Rough Set Theory, with a focus on structured conceptualization of information, is presented in this paper. Experiments with the proposed method on four labeled data sets exhibit good results with respect to the classification problem. The proposed granulation technique is semi-supervised, incorporating global as well as local information during granulation. A tree structure is proposed to represent the results of the attribute-oriented granulation.
Abstract: The main focus of this paper is on human-induced
forces. Almost all existing force models for this type of load (defined
either in the time or frequency domain) are developed from the
assumption of perfect periodicity of the force and are based on force
measurements conducted on rigid (i.e. high-frequency) surfaces. To verify the conclusions of different authors, measurements of the vertical pressure induced during walking were performed using pressure gauges in various configurations. The obtained forces are analyzed using the Fourier transform. This load is often decisive in the design of footbridges. Design criteria and load models proposed by widely used standards and by other researchers are introduced and compared.
Abstract: The SAD (Sum of Absolute Differences) algorithm is heavily used in motion estimation, which is a computationally demanding process in motion picture encoding. To enhance the performance of motion picture encoding on a VLIW processor, an efficient implementation of the SAD algorithm on that processor is essential. The SAD algorithm is programmed as a nested loop with a conditional branch. In VLIW processors, loops are usually optimized by software pipelining, but research on optimal scheduling of software pipelining for nested loops, especially nested loops with conditional branches, is rare. In this paper, we propose an optimal scheduling and implementation of the SAD algorithm with a conditional branch on a VLIW DSP processor. The proposed scheduling first transforms the nested loop with a conditional branch into a single loop with a conditional branch, with consideration of full utilization of the ILP capability of the VLIW processor and realization of earlier escape from the loop. Next, it applies a modulo scheduling technique developed for single loops. Based on this scheduling strategy, an optimal implementation of the SAD algorithm on the TMS320C67x, a VLIW DSP, is presented. Through experiments on the TMS320C6713 DSK, it is shown that an H.263 encoder with the proposed SAD implementation performs better than H.263 encoders with other SAD implementations, and that the code size of the optimal SAD implementation is small enough to be appropriate for embedded environments.
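For context, the SAD kernel itself is a short nested loop; a plain sketch with the early-escape conditional branch (illustrative only, not the TMS320C67x assembly-level schedule the paper develops):

```python
import numpy as np

def sad(block, ref, best_so_far=float("inf")):
    # Nested loop with a conditional branch: abandon this candidate
    # block as soon as the partial sum reaches the best SAD found so
    # far, since it can no longer win the motion search.
    total = 0
    for i in range(block.shape[0]):
        for j in range(block.shape[1]):
            total += abs(int(block[i, j]) - int(ref[i, j]))
        if total >= best_so_far:  # early escape from the loop
            return total
    return total
```

It is exactly this loop-carried conditional that makes straightforward software pipelining awkward, motivating the transformation into a single loop before modulo scheduling.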
Abstract: Clustering is a process of identifying homogeneous groups of objects, called clusters; it is an interesting topic in data mining. Objects within a group or class exhibit similar characteristics. This paper discusses a robust clustering process for image data with two dimension-reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard approach to the high-dimensionality problem is dimension reduction, which transforms high-dimensional data into a lower-dimensional space with limited loss of information. One of the most common forms of dimensionality reduction is principal component analysis (PCA). 2DPCA is often called a variant of PCA in which the image matrices are treated directly as 2D matrices; they do not need to be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. The decomposed classical covariance matrix is, however, very sensitive to outlying observations. The objective of this paper is to compare the performance of robust minimizing vector variance (MVV) in the two-dimensional projection (2DPCA) and in PCA for clustering arbitrary image data when outliers are hidden in the data set. The simulation of robustness aspects and an illustration of image clustering are discussed at the end of the paper.
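A minimal sketch of the 2DPCA construction described above, with the image covariance built directly from the matrices rather than from vectorized images (`images` is assumed to be a list of equal-sized 2D arrays; this is the classical, non-robust estimator):

```python
import numpy as np

def image_covariance(images):
    # 2DPCA: accumulate D^T D over mean-centered image matrices,
    # never flattening an image into a long vector
    mean = np.mean(images, axis=0)
    G = np.zeros((mean.shape[1], mean.shape[1]))
    for A in images:
        D = A - mean
        G += D.T @ D
    return G / len(images)

def project(A, G, k):
    # project an image onto the k leading eigenvectors of G
    _, V = np.linalg.eigh(G)  # eigenvalues in ascending order
    return A @ V[:, -k:]
```

For h×w images, G is only w×w, which is why 2DPCA avoids the huge (hw)×(hw) covariance that vectorized PCA would require; a robust variant such as MVV replaces this classical estimate of G.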
Abstract: The ECG contains very important clinical information about the cardiac activity of the heart. Often the ECG signal needs to be captured over a long period of time in order to identify abnormalities in certain situations. Besides its large volume, such a signal is often characterised by low quality due to noise and other influences. In order to extract features from an ECG signal with time-varying characteristics, it first needs to be preprocessed with the best parameters. It is also useful to identify the specific parts of the long-lasting signal that exhibit certain abnormalities and to direct the practitioner to those parts of the signal. In this work we present a method based on the wavelet transform, standard deviation, and a variable threshold which achieves 100% accuracy in identifying the ECG signal peaks and heartbeats, as well as identifying the standard deviation, providing a quick reference to abnormalities.
Abstract: The method described in this paper deals with the problem of T-wave detection in an ECG. Determining the position of a T-wave is complicated due to its low amplitude and the ambiguous, changing form of the complex. A wavelet transform approach handles these complications; therefore, a method based on this concept was developed. In this way we developed a detection method that is able to detect T-waves with a sensitivity of 93% and a correct-detection ratio of 93%, even with a serious amount of baseline drift and noise.
Abstract: This work deals with aspects of support vector learning for large-scale data mining tasks. Based on a decomposition algorithm that can be run in serial and parallel mode, we introduce a data transformation that allows the use of an expensive generalized kernel without additional cost. In order to speed up the decomposition algorithm, we analyze the problem of working set selection for large data sets and study the influence of working set sizes on the scalability of the parallel decomposition scheme. Our modifications and settings improve support vector learning performance and thus allow extensive parameter search methods to be used to optimize classification accuracy.
Abstract: This paper proposes a method of diagnosing ball screw preload loss through the Hilbert-Huang Transform (HHT) and Multiscale Entropy (MSE) process. The proposed method can diagnose ball screw preload loss from vibration signals while the machine tool is in operation. Ball screws with maximum dynamic preloads of 2%, 4%, and 6% were predesigned, manufactured, and tested experimentally. Signal patterns are discussed and revealed using Empirical Mode Decomposition (EMD) with the Hilbert Spectrum. Different preload features are extracted and discriminated using HHT. The irregularity development of a ball screw with preload loss is determined and abstracted using MSE, based on complexity perception. Experimental results show that the proposed method can predict the status of ball screw preload loss. Smart sensing of ball screw health is also possible, based on a comparative evaluation of MSE through the signal processing and pattern matching of EMD/HHT. This diagnosis method is effective for prognosis of preload loss and is convenient to use.
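MSE itself is a generic construction: coarse-grain the signal at increasing scales, then compute sample entropy at each scale. A simplified sketch (standard textbook form, not the authors' implementation; the tolerance `r` and embedding `m` are conventional defaults):

```python
import numpy as np

def coarse_grain(x, scale):
    # non-overlapping averages of length `scale`
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A/B): A, B count template matches of length
    # m+1 and m within tolerance r * std(x) (self-matches excluded)
    x = np.asarray(x, float)
    tol = r * x.std()
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d <= tol)
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6)):
    return [sample_entropy(coarse_grain(x, s)) for s in scales]
```

A healthy, well-preloaded screw and a degraded one are then compared by how their entropy curves evolve across scales.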
Abstract: To produce sugar and ethanol, sugarcane processing generates several agricultural residues, of which straw and bagasse are considered the main ones. What to do with these residues has been the subject of many studies and experiments in an industry that, in recent years, has stood out for its ability to transform waste into valuable products such as electric power. Cellulose is the main component of these materials. It is the most common organic polymer, representing about 1.5 × 10^12 tons of the total biomass production per year, and is considered an almost inexhaustible source of raw material. Pretreatment with mineral acids is one of the most widely used stages of cellulose extraction from lignocellulosic materials, solubilizing most of the hemicellulose content. The goal of this study was to find the best reaction time for sugarcane bagasse pretreatment with sulfuric acid, minimizing the loss of cellulose while achieving the highest possible removal of hemicellulose and lignin. It was found that the best reaction time was 40 minutes, at which a hemicellulose loss of around 70% was reached, with lignin and cellulose losses of around 15%. Beyond this time, cellulose loss increased and there was no further loss of lignin or hemicellulose.
Abstract: A transient eddy current problem is solved in the
present paper by the method of the Laplace transform for the case of
a double conductor line located parallel to a conducting half-space.
The Fourier sine and cosine integral transforms are used in order to
find the Laplace transform of the solution. The inverse Laplace
transform of the solution is found in closed form. The integrated
electromotive force per unit length of the double conductor line is
calculated in the form of an improper integral.
Abstract: This paper presents an investigation of the effects of a sharp-edged gust on the aeroelastic behavior and time-domain response of a typical section model, using Jones' approximate aerodynamics for pure plunging motion. Flutter analysis has been performed using the p and p-k methods developed for the presented finite-state aerodynamic model of a typical section (airfoil). Gust analysis, formulated as a linear set of ordinary differential equations, has been carried out in a simplified procedure by transformation into an eigenvalue problem.
Abstract: This paper presents a new function expansion method for finding traveling wave solutions of a nonlinear equation, called the (G'/G)-expansion method. The shallow water wave equation is reduced to a nonlinear ordinary differential equation by a simple transformation. As a result, the traveling wave solutions of the shallow water wave equation are expressed in three forms: hyperbolic, trigonometric, and rational solutions.
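For context, in its standard form the (G'/G)-expansion method seeks the traveling wave solution u(ξ) as a finite polynomial in G'/G, where G satisfies a linear auxiliary ODE; the three solution forms mentioned above correspond to the sign of the discriminant:

```latex
u(\xi) = \sum_{i=0}^{m} a_i \left(\frac{G'}{G}\right)^{i},
\qquad G''(\xi) + \lambda G'(\xi) + \mu G(\xi) = 0,
\qquad
\begin{cases}
\lambda^2 - 4\mu > 0 & \text{hyperbolic solutions},\\
\lambda^2 - 4\mu < 0 & \text{trigonometric solutions},\\
\lambda^2 - 4\mu = 0 & \text{rational solutions}.
\end{cases}
```

The degree m is fixed by balancing the highest-order derivative against the nonlinear term, and the coefficients a_i follow from equating powers of G'/G to zero.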
Abstract: The passive electrical properties of a tissue depend on its intrinsic constituents and structure; therefore, by measuring the complex electrical impedance of the tissue it might be possible to obtain indicators of the tissue state or physiological activity [1]. Complete bio-impedance information on the physiology and pathology of a human body and the functional states of body tissues or organs can be extracted by using a technique based on a four-electrode measurement setup. This work presents an estimation measurement setup based on the four-electrode technique. First, the
complex impedance is estimated by three different estimation
techniques: Fourier, Sine Correlation and Digital De-convolution and
then estimation errors for the magnitude, phase, reactance and
resistance are calculated and analyzed for different levels of
disturbances in the observations. The absolute values of relative
errors are plotted and the graphical performance of each technique is
compared.
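Of the three estimators, sine correlation is the simplest to sketch: correlate the sampled waveforms with sine and cosine at the excitation frequency over an integer number of cycles, then divide the voltage phasor by the current phasor. Illustrative code only (it assumes noise-free, uniformly sampled records spanning whole cycles):

```python
import numpy as np

def sine_correlate(v, t, f):
    # In-phase / quadrature correlation over integer cycles;
    # returns the phasor of v(t) = A * cos(2*pi*f*t + phi).
    re = 2 * np.mean(v * np.cos(2 * np.pi * f * t))
    im = 2 * np.mean(v * np.sin(2 * np.pi * f * t))
    return re - 1j * im

def impedance(v, i, t, f):
    # Complex impedance as the ratio of voltage and current phasors;
    # magnitude, phase, resistance, and reactance follow directly.
    return sine_correlate(v, t, f) / sine_correlate(i, t, f)
```

The resistance and reactance whose estimation errors are analyzed above are simply the real and imaginary parts of the returned impedance.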