Abstract: The switching lag-time and the voltage drop across
the power devices cause serious waveform distortion and a
fundamental-voltage drop in the pulse-width-modulated inverter output.
These phenomena are conspicuous when both the output frequency
and the voltage are low. To estimate the output voltage from the PWM
reference signal, it is essential to take account of these imperfections
and to correct them. In this paper, an on-line compensation method is
presented. It requires only three simple blocks added to the ideal
reference voltages. The method needs no additional hardware
circuitry and no off-line experimental measurement. The paper includes
experimental results that demonstrate the validity of the proposed
method. Finally, it is applied to an indirect vector-controlled
induction machine and implemented using a dSpace card.
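The abstract does not reproduce the three compensation blocks; the sketch below is a common first-order dead-time model (the lag times, the device drop, and the `compensated_reference` helper are illustrative assumptions, not the paper's exact formulation):

```python
# Sketch of per-phase dead-time voltage compensation (illustrative model,
# not the paper's exact three compensation blocks).
import math

def compensated_reference(v_ref, i_phase, t_dead, t_on, t_off, f_sw, v_dc, v_sat):
    """Add back the average voltage lost to switching lag and device drops.

    v_ref   : ideal reference voltage for the phase [V]
    i_phase : instantaneous phase current [A]; its sign sets the error polarity
    t_dead, t_on, t_off : dead time and device turn-on/off delays [s]
    f_sw    : switching frequency [Hz]
    v_dc    : DC-link voltage [V]
    v_sat   : average on-state voltage drop of the devices [V]
    """
    # Average voltage error over one switching period caused by the lag times.
    dv_lag = (t_dead + t_on - t_off) * f_sw * v_dc
    # The total correction acts in the direction of the phase current.
    return v_ref + math.copysign(dv_lag + v_sat, i_phase)
```

The correction is largest, relative to the reference, exactly when the output voltage is low, which is why the distortion is most conspicuous there.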
Abstract: In syntactic pattern recognition a pattern can be
represented by a graph. Given an unknown pattern represented by
a graph g, the problem of recognition is to determine if the graph g
belongs to a language L(G) generated by a graph grammar G. The
so-called IE graphs have been defined in [1] for a description of
patterns. The IE graphs are generated by so-called ETPL(k) graph
grammars defined in [1]. An efficient parsing algorithm for ETPL(k)
graph grammars for syntactic recognition of patterns represented by
IE graphs has been presented in [1]. In practice, structural
descriptions may contain pattern distortions, so that the assignment
of a graph g, representing an unknown pattern, to
a graph language L(G) generated by an ETPL(k) graph grammar G is
rejected by the ETPL(k) type parsing. Therefore, there is a need for
constructing effective parsing algorithms for recognition of distorted
patterns. The purpose of this paper is to present a new approach to
syntactic recognition of distorted patterns. To take into account all
variations of a distorted pattern under study, a probabilistic
description of the pattern is needed. A random IE graph approach is
proposed here for such a description ([2]).
Abstract: A wireless sensor network is a multi-hop, self-configuring
wireless network consisting of sensor nodes. The deployment of
wireless sensor networks in many application areas, e.g., aggregation
services, requires self-organization of the network nodes into clusters.
An efficient way to enhance the lifetime of the system is to partition the
network into distinct clusters with a high-energy node as cluster head.
Various node clustering techniques have appeared in
the literature and roughly fall into two families: those based on the
construction of a dominating set and those based solely on
energy considerations. Energy-optimized cluster formation for a set
of randomly scattered wireless sensors is presented. Sensors within a
cluster are expected to communicate with the cluster head only. The
energy constraint and limited computing resources of the sensor nodes
present the major challenges in gathering the data. In this paper we
propose a framework to study how partially correlated data affect the
performance of clustering algorithms. The total energy consumption
and network lifetime can be analyzed by combining random geometry
techniques and rate distortion theory. We also present the relation
between compression distortion and data correlation.
Abstract: Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and availability of fast implementation algorithms. However, no proper study has been reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ² (chi-square) test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. Knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers, which may lead to minimum distortion and hence optimal coding efficiency.
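The goodness-of-fit procedure can be sketched as follows; synthetic generalized-Gaussian samples stand in for the LOT/LBT AC coefficients, which are not reproduced here:

```python
# Fit a Generalized Gaussian (scipy's "gennorm") to a band of transform
# coefficients and check the fit with the Kolmogorov-Smirnov test.
# Synthetic data stands in for the LOT/LBT AC coefficients.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
coeffs = stats.gennorm.rvs(beta=0.8, scale=1.0, size=5000, random_state=rng)

# Maximum-likelihood fit of the shape (beta), location and scale.
beta, loc, scale = stats.gennorm.fit(coeffs)

# KS statistic: maximum distance between the empirical and fitted CDFs.
ks_stat, p_value = stats.kstest(coeffs, "gennorm", args=(beta, loc, scale))
print(f"fitted beta={beta:.2f}, KS statistic={ks_stat:.4f}")
```

A shape parameter near 1 corresponds to a Laplacian, near 2 to a Gaussian; values below 1, as often found for AC coefficients, indicate heavier tails than either.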
Abstract: The significance of psychology in studying politics
is embedded in philosophical issues as well as behavioural
pursuits. The former is often associated with Sigmund Freud
and his followers. The latter is inspired by the writings of Harold
Lasswell. Political psychology, or psychopolitics, has left its own
imprint on political thought, since it deciphers the concepts
of human nature and political propaganda. More importantly,
psychoanalysis views political thought as textual content from which
the latent must be drawn out of the manifest content. In other
words, it reads the text symptomatically and interprets the hidden
truth. This paper explains the paradigm of dream interpretation
applied by Freud. The dream work is a process which has four
successive activities: condensation, displacement, representation
and secondary revision. Texts dealing with political thought can
also be interpreted on these principles. Freud's method of dream
interpretation draws on the hermeneutic model of
philological research. It provides theoretical perspective and
technical rules for the interpretation of symbolic structures. The
task of interpretation remains a discovery of equivalence of
symbols and actions through perpetual analogies. Psychoanalysis
can help in studying political thought in two ways: to study the text
distortion, Freud's dream interpretation is used as a paradigm
exploring the latent text from its manifest text; and to apply Freud's
psychoanalytic concepts and theories ranging from individual mind
to civilization, religion, war and politics.
Abstract: We describe an effective method for image encryption
which employs magnitude and phase manipulation using carrier
images. Although it involves traditional methods like magnitude and
phase encryption, the novelty of this work lies in deploying the
concept of carrier images for encryption purposes. To this end, a
carrier image is randomly chosen from a set of stored images. A
one-dimensional (1-D) discrete Fourier transform (DFT) is then carried
out on the original image to be encrypted and on the carrier
image. Row-wise spectral addition and scaling are performed between
the magnitude spectra of the original and carrier images on randomly
selected rows. Similarly, row-wise phase addition and scaling are
performed between the phase spectra of the original and carrier images
on randomly selected rows. The encrypted image obtained by these
two operations is further subjected to one more level of magnitude
and phase manipulation using another randomly chosen carrier image,
via a 1-D DFT along the columns. The resulting encrypted image is
fully distorted, which increases the robustness of the proposed
scheme. Further, by applying the reverse process at the receiver, a
distortion-free decrypted image is recovered.
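A minimal sketch of one encryption stage, assuming a simple additive formulation (the scaling factor `alpha`, the fixed `row_mask`, and the `encrypt_rows` helper are illustrative; the paper's exact scaling law and random row selection are not specified here):

```python
# One stage of carrier-image encryption: row-wise 1-D DFTs are taken,
# then magnitudes and phases are additively mixed on the selected rows.
import numpy as np

def encrypt_rows(original, carrier, row_mask, alpha=0.5):
    """Mix magnitude and phase spectra of `original` with those of
    `carrier` on the rows selected by the boolean `row_mask`."""
    F_o = np.fft.fft(original, axis=1)   # 1-D DFT along each row
    F_c = np.fft.fft(carrier, axis=1)
    mag = np.abs(F_o)
    phase = np.angle(F_o)
    # Row-wise spectral addition and scaling on the selected rows only.
    mag[row_mask] = alpha * (np.abs(F_o) + np.abs(F_c))[row_mask]
    phase[row_mask] = alpha * (np.angle(F_o) + np.angle(F_c))[row_mask]
    return np.real(np.fft.ifft(mag * np.exp(1j * phase), axis=1))

rng = np.random.default_rng(1)
img, car = rng.random((8, 8)), rng.random((8, 8))
mask = np.array([True, False] * 4)   # chosen randomly in the paper
enc = encrypt_rows(img, car, mask)
```

The second stage described in the abstract would repeat the same operation along columns (`axis=0`) with a fresh carrier image; decryption inverts each stage in reverse order using the same carriers and row selections.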
Abstract: The efficiency of an image watermarking technique depends on the preservation of visually significant information. This is attained by embedding the watermark transparently with the maximum possible strength. The current paper presents an approach for still-image digital watermarking in which the watermark embedding process employs the wavelet transform and incorporates Human Visual System (HVS) characteristics. The sensitivity of a human observer to contrast with respect to spatial frequency is described by the Contrast Sensitivity Function (CSF). The strength of the watermark within the decomposition subbands, each of which occupies an interval of spatial frequencies, is adjusted according to this sensitivity. Moreover, the watermark embedding process is carried out over the subband coefficients that lie on edges, where distortions are less noticeable. The experimental evaluation of the proposed method shows very good results in terms of robustness and transparency.
Abstract: This paper presents a non-invasive 3D eye tracker
for optometry clinical applications. Measurements of biomechanical
variables in clinical practice have many sources of error associated with
traditional procedures such as the cover test (CT), near point of
accommodation (NPC), eye ductions (ED), eye vergences (EG) and
eye versions (ES). Ocular motility should always be tested, but all
evaluations involve subjective interpretation by practitioners: the
results are based on clinical experience, and repeatability and accuracy
are lacking. Optometric-lab is a tool with three analog video
cameras triggered and synchronized by a single A/D acquisition board.
The globe rotation angle and velocity can be quantified.
Data were recorded at 27 Hz; camera calibration was performed
in a known volume, and image radial distortion
was corrected.
Abstract: Most fingerprint recognition techniques are based on minutiae matching and have been well studied. However, this technology still suffers from problems associated with the handling of poor-quality impressions. One problem besetting fingerprint matching is distortion. Distortion changes both geometric position and orientation, and leads to difficulties in establishing a match among multiple impressions acquired from the same fingertip. Marking all the minutiae accurately, as well as rejecting false minutiae, is another issue still under research. Our work combines many methods to build a minutia extractor and a minutia matcher. The combination of multiple methods comes from a wide investigation of research papers. Some novel changes are also used in this work: segmentation using morphological operations, improved thinning, false-minutiae removal methods, minutia marking with special consideration of triple branch counting, minutia unification by decomposing a branch into three terminations, and matching in the unified x-y coordinate system after a two-step transformation.
Abstract: In H.264/AVC video encoding, rate-distortion
optimization for mode selection plays a significant role in achieving
outstanding compression efficiency and video quality.
However, this mode selection process also makes encoding
extremely complex, especially in the computation of the
rate-distortion cost function, which includes the computation of the sum
of squared differences (SSD) between the original and reconstructed
image blocks and context-based entropy coding of the block. In this
paper, a transform-domain rate-distortion optimization accelerator
based on fast SSD (FSSD) and a VLC-based rate estimation algorithm
is proposed. This algorithm significantly simplifies the hardware
architecture for the rate-distortion cost computation with only
negligible performance degradation. An efficient hardware structure
for implementing the proposed transform-domain rate-distortion
optimization accelerator is also proposed. Simulation results
demonstrate that the proposed algorithm reduces total encoding
time by about 47% with negligible degradation of coding performance.
The proposed method can be easily applied to many mobile video
application areas such as digital cameras and DMB (Digital
Multimedia Broadcasting) phones.
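The identity that makes transform-domain distortion measurement possible is Parseval's relation for an orthonormal transform: the SSD between two blocks equals the SSD between their transform coefficients, so distortion can be measured without reconstructing the block. A quick check of the principle (illustrative only; this is not the paper's FSSD hardware):

```python
# Parseval check: SSD is preserved under an orthonormal transform
# (scipy's 2-D DCT-II with norm="ortho").
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(2)
orig = rng.random((4, 4))
recon = orig + 0.01 * rng.standard_normal((4, 4))

ssd_pixel = np.sum((orig - recon) ** 2)
ssd_coeff = np.sum((dctn(orig, norm="ortho") - dctn(recon, norm="ortho")) ** 2)
assert np.isclose(ssd_pixel, ssd_coeff)
```

H.264's integer transform is not exactly orthonormal, so practical schemes fold per-coefficient scaling factors into the coefficient-domain SSD; the equality above is the underlying idea.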
Abstract: In this paper, a joint source-channel coding (JSCC) scheme for time-varying channels is presented. The proposed scheme uses a hierarchical framework for both the source encoder and transmission via QAM modulation. Hierarchical joint source-channel codes with hierarchical QAM constellations are designed to track the channel variations, which yields higher throughput by adapting certain parameters of the receiver to the channel variation. We consider the problem of still-image transmission over time-varying channels with channel state information (CSI) available at (1) the receiver only and (2) both the transmitter and the receiver. We describe an algorithm that optimizes hierarchical source codebooks by minimizing the distortion due to the source quantizer and channel impairments. Simulation results, based on image representation, show that the proposed hierarchical system outperforms conventional schemes based on a single modulator and channel-optimized source coding.
Abstract: This paper proposes a Fuzzy Expert System design to
determine the wear properties of nitrided and non-nitrided steel.
The proposed Fuzzy Expert System approach helps the user and the
manufacturer to forecast the wear properties of nitrided and
non-nitrided steel under specified laboratory conditions. Surfaces of
engineering components are often nitrided to improve wear,
corrosion, and fatigue characteristics. A major property of the nitriding
process is that it reduces distortion and wear of metallic alloys. A
Fuzzy Expert System was developed for determining the wear and
durability properties of nitrided and non-nitrided steels that were
tested under different loads and different sliding speeds under
laboratory conditions.
Abstract: In this paper, a method for matching image segments
using triangle-based (geometrical) regions is proposed. Triangular
regions are formed from triples of vertex points obtained from a
keypoint detector (SIFT). However, triangle regions are subject to
noise and distortion around the edges and vertices (especially acute
angles). Therefore, these triangles are expanded into
parallelogram-shaped regions. The extracted image segments inherit an
important triangle property: invariance to affine distortion. Given two
images, matching corresponding regions is conducted by computing
the relative affine matrix, rectifying one of the regions w.r.t. the other
one, then calculating the similarity between the reference and
rectified region. The experimental tests show the efficiency and
robustness of the proposed algorithm against geometrical distortion.
Abstract: The visualization of geographic information on mobile devices has become popular with the widespread use of the mobile Internet. The mobility of these devices brings much convenience to people's lives. Through the devices' add-on location-based services, people can access timely information relevant to their tasks. However, visual analysis of geographic data on mobile devices presents several challenges due to the small display and restricted computing resources. These limitations on screen size and resources may impair the usability of visualization applications. In this paper, a variable-scale visualization method is proposed to handle the challenge of the small mobile display. By merging multiple scales of information into a single image, the viewer is able to focus on the region of interest while keeping a good grasp of the surrounding context. This is essentially visualizing the map through a fisheye lens. However, the fisheye lens induces undesirable geometric distortion in the periphery, which renders the information there meaningless. The proposed solution is to apply map generalization, which removes excessive information around the periphery, and an automatic smoothing process that corrects the distortion while keeping the local topology consistent. The proposed method is applied to both artificial and real geographical data for evaluation.
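A fisheye view is typically produced with a radial magnification function; the form below is one common choice (an illustrative assumption, not necessarily the lens used in the paper):

```python
# A common fisheye magnification: points at normalized radius r from the
# focus move to g(r) = (d+1)r / (d*r + 1), which magnifies the center and
# compresses the periphery. The distortion factor d controls the effect.
def fisheye(r, d=3.0):
    """Map normalized radius r in [0, 1] to its displayed radius."""
    return (d + 1) * r / (d * r + 1)
```

Since g(0) = 0 and g(1) = 1, the focus and the outer boundary stay fixed while everything in between is pushed outward; it is precisely the compressed outer band that the paper's generalization and smoothing steps clean up.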
Abstract: In this paper we find the optimum multiwavelet for compression of electrocardiogram (ECG) signals and select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the ability of the different multiwavelets to compress ECG signals, in addition to factors known from the compression literature such as Compression Ratio (CR), Percent Root Difference (PRD), Distortion (D) and Root Mean Square Error (RMSE), we also employ the Cross Correlation (CC) criterion, to study the morphological relation between the reconstructed and original ECG signals, and the Signal-to-reconstruction-Noise Ratio (SNR). The simulation results show the Cardinal Balanced Multiwavelet (cardbal2), with the identity (Id) prefiltering method, to be the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
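The fidelity measures named above have the following usual definitions (assumed standard forms; the paper may use slight variants, e.g. a mean-subtracted PRD):

```python
# Standard reconstruction-fidelity measures for compressed ECG signals.
import numpy as np

def prd(x, y):
    """Percent Root-mean-square Difference between original x and reconstruction y."""
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def rmse(x, y):
    """Root Mean Square Error."""
    return np.sqrt(np.mean((x - y) ** 2))

def cross_corr(x, y):
    """Normalized cross-correlation; near 1 means the morphology is preserved."""
    xc, yc = x - x.mean(), y - y.mean()
    return np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2))
```

PRD and RMSE penalize amplitude errors, while CC is insensitive to gain and offset, which is why it complements them when judging whether diagnostic waveform shapes survive compression.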
Abstract: The proper operation of common active filters depends on accurate identification of system distortion. Moreover, using a suitable method for current injection and reactive power compensation increases filter performance. Accordingly, this paper presents a method based on predictive current control theory for shunt active filter applications. The harmonics of the load current are identified by applying the o–d–q reference frame to the load current and eliminating the DC part of the d–q components. The remaining components are then delivered to the predictive current controller as a three-phase reference current via the inverse Park transformation. The system is modeled in the discrete-time domain. The proposed method has been tested using a MATLAB model with a nonlinear load (Total Harmonic Distortion = 20%). The simulation results indicate that the proposed filter yields a sinusoidal source current (THD = 0.15%). In addition, the results show that the filter tracks the reference current accurately.
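The harmonic-identification step can be sketched as follows, with two simplifications relative to the paper: the zero-sequence axis is omitted for a balanced load, and simple mean removal stands in for the elimination of the DC part of the d–q components:

```python
# Harmonic extraction in the d-q frame: the fundamental maps to DC,
# so removing the DC part leaves only the harmonics.
import numpy as np

def harmonic_reference(i_abc, theta):
    """i_abc: (3, N) load phase currents; theta: (N,) synchronous angle.
    Returns the (3, N) three-phase harmonic reference currents."""
    two_pi_3 = 2 * np.pi / 3
    ca, cb, cc = np.cos(theta), np.cos(theta - two_pi_3), np.cos(theta + two_pi_3)
    sa, sb, sc = np.sin(theta), np.sin(theta - two_pi_3), np.sin(theta + two_pi_3)
    # Park transformation (amplitude-invariant form).
    i_d = (2.0 / 3.0) * (i_abc[0] * ca + i_abc[1] * cb + i_abc[2] * cc)
    i_q = -(2.0 / 3.0) * (i_abc[0] * sa + i_abc[1] * sb + i_abc[2] * sc)
    # Eliminate the DC part: what remains corresponds to the harmonics.
    i_d_h, i_q_h = i_d - i_d.mean(), i_q - i_q.mean()
    # Inverse Park transformation back to a three-phase reference.
    return np.vstack([i_d_h * ca - i_q_h * sa,
                      i_d_h * cb - i_q_h * sb,
                      i_d_h * cc - i_q_h * sc])
```

For a purely fundamental load current the reference is (numerically) zero, confirming that only the distortion components are handed to the predictive current controller.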
Abstract: Let Gα,β(γ,δ) denote the class of functions
f(z), with f(0) = f′(0) − 1 = 0, which satisfy
Re{e^(iδ) (αf′(z) + βzf″(z))} > γ
in the open unit disk D = {z ∈ ℂ : |z| < 1} for some α ∈ ℂ (α ≠ 0),
β ∈ ℂ and γ ∈ ℝ (0 ≤ γ < 1). In
this paper, we determine some extremal properties, including
a distortion theorem and the argument of f′(z).
Abstract: It has been shown that a load discontinuity at the end of
an impulse will result in an extra impulse and hence an extra amplitude
distortion if a step-by-step integration method is employed to yield the
shock response. In order to overcome this difficulty, three remedies
are proposed to reduce the extra amplitude distortion. The first remedy
is to solve the momentum equation of motion instead of the force
equation of motion in the step-by-step solution of the shock response,
where an external momentum is used in the solution of the momentum
equation of motion. Since the external momentum is a resultant of the
time integration of external force, the problem of load discontinuity
will automatically disappear. The second remedy is to perform a single
small time step immediately upon termination of the applied impulse
while the other time steps can still be conducted by using the time step
determined from general considerations. This is because the extra
impulse caused by a load discontinuity at the end of an impulse is
almost linearly proportional to the step size. Finally, the third remedy
is to use the average value of the two different values at the integration
point of the load discontinuity to replace the use of one of them for
loading input. The basic motivation of this remedy originates from the
concept of no loading input error associated with the integration point
of load discontinuity. The feasibility of the three remedies is
analytically explained and numerically illustrated.
Abstract: The approach based on the wavelet transform has
been widely used for image denoising due to its multi-resolution
nature, its ability to produce high levels of noise reduction and the
low level of distortion introduced. However, by removing noise, high
frequency components belonging to edges are also removed, which
leads to blurring the signal features. This paper proposes a new
method of image noise reduction based on local variance and edge
analysis. The analysis is performed by dividing an image into 32 x 32
pixel blocks, and transforming the data into wavelet domain. Fast
lifting wavelet spatial-frequency decomposition and reconstruction are
developed, with the advantages of computational efficiency and
minimized boundary effects. The adaptive thresholding by local
variance estimation and edge strength measurement can effectively
reduce image noise while preserving the features of the original image
corresponding to the boundaries of objects. Experimental results
demonstrate that the method performs well for images contaminated
by natural and artificial noise, and is suitable for adaptation to
different classes of images and types of noise. The proposed algorithm
provides a potential solution with parallel computation for real time
or embedded system application.
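A toy version of variance-adaptive wavelet shrinkage, reduced to one Haar lifting level on a 1-D signal (the paper's 32 x 32 blocks, full lifting decomposition, and edge-strength measurement are not reproduced here):

```python
# Variance-adaptive shrinkage on one Haar level: detail coefficients are
# attenuated according to the estimated local signal-to-noise ratio.
import numpy as np

def haar_denoise(x, noise_var):
    """Denoise a 1-D signal x (even length) with one Haar level."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    # Local signal-variance estimate in the detail band (Wiener-style shrink).
    local_var = np.maximum(d ** 2 - noise_var, 0.0)
    d_shrunk = d * local_var / (local_var + noise_var)
    # Inverse Haar step.
    y = np.empty_like(x)
    y[0::2] = (a + d_shrunk) / np.sqrt(2)
    y[1::2] = (a - d_shrunk) / np.sqrt(2)
    return y
```

The shrink factor approaches 1 where the detail energy dominates the noise variance (edges) and 0 where it does not (flat regions), which is the mechanism that preserves boundaries while suppressing noise.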
Abstract: Oil debris signal generated from the inductive oil
debris monitor (ODM) is useful information for machine condition
monitoring but is often spoiled by background noise. To improve the
reliability in machine condition monitoring, the high-fidelity signal
has to be recovered from the noisy raw data. Considering that the noise
components with large amplitude often have higher frequency than
that of the oil debris signal, the integral transform is proposed to
enhance the detectability of the oil debris signal. To cancel out the
baseline wander resulting from the integral transform, the empirical
mode decomposition (EMD) method is employed to identify the trend
components. An optimal reconstruction strategy including both
de-trending and de-noising is presented to detect the oil debris signal
with less distortion. The proposed approach is applied to detect the oil
debris signal in the raw data collected from an experimental setup. The
result demonstrates that this approach is able to detect the weak oil
debris signal with acceptable distortion from noisy raw data.
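The enhancement idea can be sketched as follows; a moving-average trend estimate stands in here for the EMD-based de-trending used in the paper, and all parameter names are illustrative:

```python
# Integrating the raw signal attenuates high-frequency noise relative to
# the slower oil-debris pulse; the baseline wander the integration
# introduces is then removed by subtracting an estimated trend.
import numpy as np

def enhance(raw, dt, trend_win):
    """raw: 1-D noisy ODM samples; dt: sample interval [s];
    trend_win: odd window length for the trend estimate."""
    integ = np.cumsum(raw) * dt              # integral transform
    kernel = np.ones(trend_win) / trend_win  # crude trend estimator
    trend = np.convolve(integ, kernel, mode="same")
    return integ - trend                     # de-trended, de-noised signal
```

Because integration scales a sinusoid of angular frequency ω by 1/ω, high-frequency noise is suppressed more than the slow debris pulse, which is exactly the detectability gain the abstract describes.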