Abstract: Electrical distribution systems incur large losses because the loads are widely spread and reactive power compensation facilities are inadequate and improperly controlled. A comprehensive static VAR compensator, consisting of a capacitor bank switched in five binary sequential steps in conjunction with a thyristor controlled reactor (TCR) of the smallest step size, is employed in this investigation. The work covers performance evaluation through analytical studies and practical implementation on an existing system. A fast-acting error-adaptive controller is developed, suitable for both contactor- and thyristor-switched capacitors. The switching operations achieved are transient-free, so there is practically no need to provide inrush current limiting reactors; the TCR size is minimal, producing only small percentages of non-triplen harmonics; and the scheme facilitates stepless variation of reactive power according to the load requirement so as to maintain the power factor near unity at all times. It is an elegant, closed-loop microcontroller system with self-regulation in adaptive mode for automatic adjustment. It was successfully tested on a three-phase, 50 Hz, Dy11, 11 kV/440 V, 125 kVA distribution transformer, and its functional feasibility and technical soundness were established. The controller developed is new, adaptable to both LT and HT systems, and has been practically established to give reliable performance.
Abstract: In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique used Hadamard filters in the decomposition step to split the input sequence into lowpass and highpass sequences. In the next step, either two DFTs are computed on both bands to obtain the full-band DFT, or one DFT is computed on one of the two bands to obtain an approximate DFT. A combination network with correction factors is then applied after the DFTs. Another approach was proposed in 1997 [2], using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of the algorithm, the input sequence is decomposed, in a similar manner to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters. The only difference is a constant factor in the combination network. This result is very important for completing the analysis of the W-DFT, since all the results concerning accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of highest energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case, and an application in image transformation is given using two different types of wavelet filters.
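The even/odd sum-difference split and the combination network described above can be sketched directly. This is an illustrative reconstruction (not the authors' code), with the DFTs of the two half-length bands recombined by the correction factors (1 ± W_N^k)/2:

```python
import numpy as np

def subband_dft(x):
    # x has even length N. Hadamard/Haar decomposition into the
    # sum (lowpass) and difference (highpass) bands.
    x = np.asarray(x, dtype=complex)
    N = len(x)
    a = x[0::2] + x[1::2]        # lowpass (sum) band
    b = x[0::2] - x[1::2]        # highpass (difference) band
    A = np.fft.fft(a)            # N/2-point DFTs of the two bands
    B = np.fft.fft(b)
    k = np.arange(N)
    W = np.exp(-2j * np.pi * k / N)
    # combination network:
    # X[k] = (1 + W^k)/2 * A[k mod N/2] + (1 - W^k)/2 * B[k mod N/2]
    return 0.5 * (1 + W) * A[k % (N // 2)] + 0.5 * (1 - W) * B[k % (N // 2)]
```

Dropping the highpass term B gives the approximate, lowpass-only DFT mentioned in the abstract; keeping both terms reproduces the full-band DFT exactly.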
Abstract: In this paper, an image-adaptive, invisible digital
watermarking algorithm with Orthogonal Polynomials based
Transformation (OPT) is proposed for copyright protection of digital
images. The proposed algorithm utilizes a visual model to determine
the watermarking strength necessary to invisibly embed the
watermark in the mid frequency AC coefficients of the cover image,
chosen with a secret key. The visual model is designed to generate a
Just Noticeable Distortion (JND) mask by analyzing low-level
image characteristics such as texture, edges and luminance of the
cover image in the orthogonal polynomials based transformation
domain. Since the secret key is required for both embedding and
extraction of watermark, it is not possible for an unauthorized user to
extract the embedded watermark. The proposed scheme is robust to
common image processing distortions like filtering, JPEG
compression and additive noise. Experimental results show that the
quality of OPT domain watermarked images is better than that of
their DCT counterparts.
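As a toy illustration of key-dependent embedding, and of why extraction fails without the secret key, the sketch below embeds a bipolar watermark into an arbitrary vector of mid-frequency AC coefficients, scaled by a crude stand-in for the JND mask. The OPT itself and the paper's visual model are not reproduced; `alpha`, the mask, and the correlation detector are illustrative assumptions:

```python
import numpy as np

def embed(coeffs, watermark, key, alpha=0.1):
    # coeffs: 1-D array of mid-frequency AC transform coefficients
    # (any orthogonal transform; the paper uses OPT, here it is just an array).
    rng = np.random.default_rng(key)            # secret key seeds the selection
    idx = rng.choice(len(coeffs), size=len(watermark), replace=False)
    jnd = np.maximum(np.abs(coeffs[idx]), 1.0)  # crude stand-in for a JND mask
    marked = coeffs.copy()
    marked[idx] += alpha * jnd * watermark      # perceptually scaled embedding
    return marked

def detect(marked, watermark, key, alpha=0.1):
    rng = np.random.default_rng(key)
    idx = rng.choice(len(marked), size=len(watermark), replace=False)
    # correlation detector: a strong response only when the key matches
    return float(np.dot(marked[idx], watermark)) / len(watermark)
```

With the correct key the detector recovers the embedding sites; with a wrong key it lands on unrelated coefficients and the correlation collapses.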
Abstract: The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding the sign. It is generally assumed that there is no compression gain to be obtained from coding the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are separately entropy encoded: the sign map and the magnitude map. The refinement information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality.
It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and proves very successful in terms of PSNR.
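The gain available from online probability estimation of sign bits, versus the fixed 0.5 assumption, can be illustrated with an adaptive estimate of a bit stream's coding cost. The estimator below is a generic Laplace-smoothed sequential estimator, not the paper's SSC entropy coder:

```python
import numpy as np

def adaptive_code_length(bits):
    # Online estimate: code each bit with the probability learned from the
    # bits seen so far (Laplace smoothing), as an adaptive arithmetic coder
    # of the sign map would. Returns the ideal code length in bits.
    ones, total, length = 1.0, 2.0, 0.0
    for b in bits:
        p = ones / total if b else 1.0 - ones / total
        length += -np.log2(p)
        ones += b
        total += 1
    return length
```

A stream whose online probability stays near 0.5 costs about 1 bit per symbol, while a biased sign stream costs substantially less, which is exactly the gain the fixed-0.5 assumption forgoes.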
Abstract: In the traditional concept of product life cycle management, the activities of design, manufacturing, and assembly are performed sequentially. The drawback is that considerations in design may contradict considerations in manufacturing and assembly. Different designs of the components can lead to different assembly sequences; therefore, in some cases, a good design may result in high costs in the downstream assembly activities. In this research, an integrated design evaluation and assembly sequence planning model is presented. Given a product requirement, there may be several alternative design cases for the components of the same product, and if a different design case is selected, the assembly sequence for constructing the product can differ. In this paper, the designed components are first represented using graph-based models. The graph-based models are transformed into assembly precedence constraints and assembly costs. A particle swarm optimization (PSO) approach is presented that encodes a particle as a position matrix defined by the design cases and the assembly sequences. The PSO algorithm simultaneously performs design evaluation and assembly sequence planning with the objective of minimizing the total assembly cost. As a result, both the design cases and the assembly sequences can be optimized. The main contribution lies in the new concept of an integrated design evaluation and assembly sequence planning model and in the new PSO solution method. Test results on an example product show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly planning problem.
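The paper's particle encoding (a position matrix over design cases and assembly sequences) is not reproduced here, but the general shape of a PSO over assembly sequences can be sketched with the common random-key trick: each particle is a real vector, and argsort turns it into a sequence. The population size, coefficients and toy cost function below are illustrative assumptions:

```python
import numpy as np

def pso_sequence(cost, n_tasks, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    # Random-key PSO: particles move in continuous space; argsort decodes
    # each position vector into a candidate assembly sequence.
    rng = np.random.default_rng(0)
    X = rng.random((n_particles, n_tasks))
    V = np.zeros_like(X)
    pbest = X.copy()
    pbest_cost = np.array([cost(np.argsort(x)) for x in X])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = X + V
        costs = np.array([cost(np.argsort(x)) for x in X])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = X[better], costs[better]
        g = pbest[pbest_cost.argmin()].copy()
    return np.argsort(g), pbest_cost.min()
```

A real assembly cost would add a large penalty for every violated precedence constraint, so infeasible sequences are selected against automatically.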
Abstract: Large volumes of fingerprints are collected and stored
every day in a wide range of applications, including forensics and
access control. This is evident from the database of the Federal
Bureau of Investigation (FBI), which contains more than 70 million
fingerprints. Compression of this database is very important because
of its high volume. The performance of existing image coding standards
generally degrades at low bit-rates because of the underlying block
based Discrete Cosine Transform (DCT) scheme. Over the past
decade, the success of wavelets in solving many different problems
has contributed to its unprecedented popularity. Due to
implementation constraints, scalar wavelets do not possess all the
properties needed for better compression performance.
A new class of wavelets, called multiwavelets, which possess more
than one scaling filter, overcomes this problem. The objective of this
paper is to develop an efficient compression scheme and to obtain
better quality and a higher compression ratio through the multiwavelet
transform and embedded coding of multiwavelet coefficients with the
Set Partitioning In Hierarchical Trees (SPIHT) algorithm.
A comparison of the best known multiwavelets is made to the best
known scalar wavelets. Both quantitative and qualitative measures of
performance are examined for fingerprints.
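The standard quantitative quality measure in such comparisons (also used by several of the abstracts here) is the peak signal-to-noise ratio; a minimal definition for 8-bit images:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    # peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```

Higher PSNR at the same bit rate means better rate-distortion performance; identical images have infinite PSNR (MSE of zero), so the function assumes the two images differ.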
Abstract: X-ray mammography is the most effective method for
the early detection of breast diseases. However, the typical diagnostic
signs such as microcalcifications and masses are difficult to detect
because mammograms are of low contrast and noisy. In this paper, a
new algorithm for image denoising and enhancement in Orthogonal
Polynomials Transformation (OPT) is proposed for radiologists to
screen mammograms. In this method, a set of OPT edge coefficients
is scaled to a new set by a scale factor called the OPT scale factor.
The new set of coefficients is then inverse transformed, resulting in
a contrast-improved image. Applications of the proposed method to
mammograms with subtle lesions are shown. To validate the
effectiveness of the proposed method, we compare the results to
those obtained by the Histogram Equalization (HE) and the Unsharp
Masking (UM) methods. Our preliminary results strongly suggest
that the proposed method offers considerably improved enhancement
capability over the HE and UM methods.
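The scale-and-invert idea is transform-agnostic: scale the edge (detail) coefficients by the chosen factor, keep the approximation, and inverse transform. The sketch below substitutes a one-level 2-D Haar transform for the OPT (which is not reproduced here), so the transform choice and scale factor are illustrative assumptions only:

```python
import numpy as np

def haar2(img):
    # one-level 2-D Haar analysis: approximation LL plus detail subbands
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # exact inverse of haar2
    a = np.zeros((ll.shape[0], 2 * ll.shape[1]))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.zeros((2 * a.shape[0], a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def enhance(img, scale=2.0):
    # scale the edge (detail) coefficients, keep the approximation, invert
    ll, lh, hl, hh = haar2(img.astype(float))
    return ihaar2(ll, scale * lh, scale * hl, scale * hh)
```

A scale factor of 1 is the identity; factors above 1 amplify edge detail relative to the smooth background, which is the contrast-improvement mechanism the abstract describes.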
Abstract: Three sulphonic acid-doped polyanilines were
synthesized through chemical oxidation at low temperature (0–5 °C),
and the potential of these polymers as sensing agents for O2 gas
detection in terms of fluorescence quenching was studied. Sulphuric
acid, dodecylbenzene sulphonic acid (DBSA) and camphor sulphonic
acid (CSA) were used as doping agents. All polymers obtained were dark
green powder. Polymers obtained were characterized by Fourier
transform infrared spectroscopy, ultraviolet-visible absorption
spectroscopy, thermogravimetry analysis, elemental analysis,
differential scanning calorimetry and gel permeation
chromatography. The characterizations showed that the polymers
were successfully synthesized, with mass recoveries for sulphuric
acid-doped polyaniline (SPAN), DBSA-doped polyaniline (DBSA-doped
PANI) and CSA-doped polyaniline (CSA-doped PANI) of 71.40%,
75.00% and 39.96%, respectively. The doping levels of SPAN,
DBSA-doped PANI and CSA-doped PANI were 32.86%, 33.13% and
53.96%, respectively, as determined by elemental analysis.
Sensing tests were carried out on polymer samples in the form of
solutions and films using a fluorescence spectrophotometer. Both the
polymer solutions and the polymer films showed a positive response
towards O2 exposure. All polymer solutions and films were fully
regenerated with N2 gas within a 1-hour period. A photostability
study showed that all samples of polymer solutions and films were
stable towards light when continuously exposed to xenon lamp for 9
hours. The relative standard deviation (RSD) values for SPAN
solution, DBSA-doped PANI solution and CSA-doped PANI
solution for repeatability were 0.23%, 0.64% and 0.76%,
respectively. Meanwhile, the RSD values for reproducibility were
2.36%, 6.98% and 1.27%, respectively. Results for the SPAN film,
DBSA-doped PANI film and CSA-doped PANI film showed the same
pattern with RSD values for repeatability of 0.52%, 4.05% and
0.90%, respectively, while the RSD values for reproducibility were
2.91%, 10.05% and 7.42%, respectively. The effect of flow rate on
response time was studied using three different rates: 0.25 mL/s,
1.00 mL/s and 2.00 mL/s. The results showed that the higher the
flow rate, the shorter the response time.
Abstract: Water, soil and sediment contaminated with
metolachlor pose a threat to the environment and human health.
We determined the effectiveness of nano-zerovalent iron (NZVI) in
dechlorinating metolachlor [2-chloro-N-(2-ethyl-6-methyl-phenyl)-N-
(1-methoxypropan-2-yl)acetamide] in aqueous solution at varied pH
and in the presence of an aluminium salt. The optimum dosage for
degradation of 100 mg L⁻¹ metolachlor was 1% (w/v) NZVI. The
degradation rate constant (kobs) was 0.218×10⁻³ min⁻¹ and the
specific first-order rate constant (kSA) was 8.72×10⁻⁷ L m⁻² min⁻¹.
When aqueous solutions of metolachlor were treated with NZVI, the
metolachlor destruction rate increased as the pH decreased from 10
to 4. Lowering the solution pH removes Fe(III) passivating layers
from the NZVI surface and frees it for reductive transformations.
The destruction rate constants were 20.8×10⁻³ min⁻¹ at pH 4,
18.9×10⁻³ min⁻¹ at pH 7 and 13.8×10⁻³ min⁻¹ at pH 10. In addition,
the destruction kinetics of metolachlor by NZVI were enhanced when
aluminium sulfate was added: the rate constants were 20.4×10⁻³ min⁻¹
for 0.05% Al₂(SO₄)₃ and 60×10⁻³ min⁻¹ for 0.1% Al₂(SO₄)₃.
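For reference, kobs in such studies is the negative slope of ln C versus time under the first-order model C(t) = C0·exp(-kobs·t), and kSA normalizes it by the iron surface-area concentration, kSA = kobs / (a_s·ρ_m). The sketch below runs this arithmetic on hypothetical data; the specific surface area a_s and the concentration profile are assumed values, not taken from the paper:

```python
import numpy as np

# hypothetical first-order decay profile, using the pH 4 rate constant
kobs_true = 20.8e-3                   # min^-1
t = np.arange(0.0, 120.0, 10.0)       # minutes
C = 100.0 * np.exp(-kobs_true * t)    # concentration, e.g. mg/L

# kobs is the negative slope of ln C versus t (linear regression)
kobs = -np.polyfit(t, np.log(C), 1)[0]

# surface-area-normalized rate: kSA = kobs / (a_s * rho_m), where a_s is
# the specific surface area (m^2/g, assumed here) and rho_m the NZVI
# loading in g/L (1% w/v = 10 g/L)
a_s, rho_m = 25.0, 10.0
kSA = kobs / (a_s * rho_m)
```

On noisy laboratory data the same regression gives kobs with a confidence interval from the fit residuals rather than an exact value.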
Abstract: Access to information is the key to the empowerment of everybody, regardless of where they live. This research is carried out in respect of people living in developing countries, considering their plight and the complex geographical, demographic and socio-economic conditions surrounding the areas they live in, which hinder access to information, both for residents and for professionals providing services such as medical workers; this has led to high death rates and development stagnation. Research on a Unified Communications and Integrated Collaborations (UCIC) system in the health sector of developing countries aims to create a possible solution for bridging the digital canyon among these communities. The aim is to deliver services in a seamless manner so that health workers situated anywhere can be reached easily and can access the information needed for service delivery. The proposed UCIC provides an immersive telepresence experience for one-to-one or many-to-many meetings. Extending to locations anywhere in the world, the transformative platform delivers ultra-low operating costs through the use of general-purpose networks, special lenses and tracking systems.
Abstract: This paper adopts the hybrid differential transform approach for studying heat transfer problems in a gold/chromium thin film with an ultra-short-pulsed laser beam projecting on the gold side. The physical system, formulated with the hyperbolic two-step heat transfer model, covers three characteristics: (i) coupling effects between the electron/lattice systems, (ii) thermal wave propagation in metals, and (iii) radiation effects along the interface. The differential transform method is used to transfer the governing equations in the time domain into the spectrum equations, which are further discretized in the space domain by the finite difference method. The results, obtained through a recursive process, show that the electron temperature in the gold film can rise to several thousand degrees before the electron/lattice systems reach equilibrium at only several hundred degrees. The electron and lattice temperatures in the chromium film are much lower than those in the gold film.
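The differential transform step can be illustrated on a toy problem: the method converts a differential equation in t into an algebraic recursion on the Taylor spectrum Y[k] (the "spectrum equations"), which is then inverted as a power series. A minimal one-dimensional sketch for y' = -y, y(0) = 1, rather than the paper's hyperbolic two-step model:

```python
import math

def dtm_solve(y0, n_terms, t):
    # differential transform of y' = -y: (k + 1) * Y[k+1] = -Y[k],
    # so Y[k+1] = -Y[k] / (k + 1), with Y[0] = y(0)
    Y = [y0]
    for k in range(n_terms - 1):
        Y.append(-Y[k] / (k + 1))
    # inverse differential transform: y(t) = sum_k Y[k] * t**k
    return sum(c * t**k for k, c in enumerate(Y))
```

Here the recursion reproduces the Taylor coefficients of exp(-t); in the paper's hybrid scheme the analogous recursion runs in time while the spatial derivatives are handled by finite differences.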
Abstract: In this work a new offline signature recognition system
based on the Radon transform, fractal dimension (FD) and Support
Vector Machine (SVM) is presented. In the first step, projections of
the original signatures along four specified directions are computed
using the Radon transform. Then, the FDs of the four obtained
vectors are calculated to construct a feature vector for each
signature. These vectors are then fed into an SVM classifier for
signature recognition. In order to evaluate the effectiveness of
the system, several experiments were carried out. The offline
signature database from the Signature Verification Competition
(SVC) 2004 was used in all of the tests. Experimental results
indicate that the proposed method achieves a high accuracy rate in
signature recognition.
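A rough sketch of the feature pipeline: project the signature image along several directions, then take a fractal dimension of each projection to form the feature vector. The Katz estimator and the plain axis/diagonal sums below are stand-ins (the paper uses the Radon transform and does not specify its FD estimator), and the SVM stage is omitted:

```python
import numpy as np

def katz_fd(x):
    # Katz fractal dimension of a 1-D signal (one common FD estimator)
    pts = np.column_stack([np.arange(len(x)), x]).astype(float)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    L = steps.sum()                                   # total curve length
    d = np.linalg.norm(pts - pts[0], axis=1).max()    # planar extent
    n = len(steps)
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def signature_features(img):
    # projections along four directions; 0/90 degrees are axis sums and
    # 45/135 degrees diagonal sums -- a crude stand-in for the Radon transform
    p0 = img.sum(axis=0)
    p90 = img.sum(axis=1)
    ks = range(-img.shape[0] + 1, img.shape[1])
    p45 = np.array([np.trace(img, k) for k in ks])
    p135 = np.array([np.trace(img[::-1], k) for k in ks])
    return np.array([katz_fd(p) for p in (p0, p90, p45, p135)])
```

A straight-line profile has FD 1, and increasingly irregular projections push the estimate toward 2, which is what makes the four FDs discriminative features.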
Abstract: In this study, the KLT (Karhunen-Loeve Transform) method is
used to compress ECG signals. The purpose of this method is to perform
effective ECG coding by exploiting the correlation between the length
of the frames and the number of basis vectors of the ECG signals.
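A minimal sketch of KLT-based frame coding: cut the signal into frames, estimate the frame covariance, keep only the strongest eigenvectors as the basis, and store the transform coefficients. The frame length and the number of retained vectors are illustrative choices, not the paper's settings:

```python
import numpy as np

def klt_compress(frames, n_keep):
    # frames: (num_frames, frame_len) matrix of ECG segments.
    # KLT basis = eigenvectors of the empirical covariance of the frames.
    mean = frames.mean(axis=0)
    X = frames - mean
    cov = X.T @ X / len(X)
    w, V = np.linalg.eigh(cov)            # eigenvalues in ascending order
    basis = V[:, ::-1][:, :n_keep]        # keep the n_keep strongest vectors
    coeffs = X @ basis                    # these coefficients are what is stored
    recon = coeffs @ basis.T + mean       # decoder side
    return recon
```

Because successive ECG frames are highly correlated, a few KLT vectors capture most of the energy, and the coefficient count (not the frame length) sets the bit budget.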
Abstract: A rice seed expression (cDNA) library in the Lambda
ZAP II® phage, constructed from developing grain 10-20 days
after flowering, was transformed into yeast for functional
complementation assays in three salt-sensitive yeast mutants:
S. cerevisiae strains CY162, G19 and Axt3K. Transformed cells of G19
and Axt3K carrying the pYES vector with cDNA inserts showed enhanced
tolerance compared with those carrying the empty pYES vector.
Sequencing of the cDNA inserts revealed that they encode a putative
protein with sequence homologous to the rice putative protein PROLM24
(Os06g31070), a prolamin precursor. Expression of this cDNA did
not affect yeast growth in the absence of salt. The Axt3K and G19
strains expressing PROLM24 were able to grow in up to 400 mM and 600
mM NaCl, respectively. Similarly, the Axt3K mutant expressing
PROLM24 showed a comparatively higher growth rate in medium
with excess LiCl (50 mM). The observation that expression of
PROLM24 rescued the salt-sensitive phenotypes of G19 and Axt3K
indicates the existence of a regulatory system that ameliorates the
effect of salt stress in the transformed yeast mutants. However, the
exact function of the cDNA sequence, which shows partial sequence
homology to yeast UTR1, is not clear. Although UTR1 is involved in
ferrous uptake and iron homeostasis in yeast cells, there is no
evidence of a role in Na+ homeostasis in yeast cells. The absence
of transmembrane regions in the Os06g31070 protein indicates that salt
tolerance is achieved not through direct functional
complementation of the mutant genes but through an alternative
mechanism.
Abstract: Mapping between local and global coordinates is an
important issue in the finite element method, as all calculations are
performed in local coordinates. The concern arises when sub-parametric
elements are used, in which the shape functions of the field variable
and of the geometry of the element are not the same. This is
particularly the case for C* elements, in which the extra degrees of
freedom added to the nodes make the elements sub-parametric. In the
present work, the transformation matrix for the C1* element (an
8-noded hexahedron element with 12 degrees of freedom at each node)
is obtained using equivalent C0 elements (with the same number of
degrees of freedom). The convergence rate of the 8-noded C1* element
is nearly equal to that of its equivalent C0 element, while it
consumes less CPU time than the C0 element. The existence of
derivative degrees of freedom at the nodes of the C1* element, along
with its excellent convergence, makes it superior to its equivalent
C0 element.
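For reference, the geometric local-to-global map both element types share is x(ξ, η, ζ) = Σᵢ Nᵢ(ξ, η, ζ) xᵢ over the eight corner nodes; the paper's transformation matrix for the derivative degrees of freedom is built on top of this map. A minimal sketch of the trilinear C0 mapping, with an assumed node ordering:

```python
import numpy as np

# corner signs of the reference cube [-1, 1]^3; the node ordering here is
# an assumption, not necessarily the paper's convention
SIGNS = np.array([[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
                  [-1, -1,  1], [1, -1,  1], [1, 1,  1], [-1, 1,  1]], dtype=float)

def shape_functions(xi, eta, zeta):
    # trilinear shape functions N_i of the 8-noded hexahedron
    return 0.125 * ((1 + SIGNS[:, 0] * xi)
                    * (1 + SIGNS[:, 1] * eta)
                    * (1 + SIGNS[:, 2] * zeta))

def local_to_global(nodes, xi, eta, zeta):
    # x(xi, eta, zeta) = sum_i N_i(xi, eta, zeta) * x_i
    return shape_functions(xi, eta, zeta) @ nodes
```

The shape functions form a partition of unity, and for an element whose nodes coincide with the reference-cube corners the map reduces to the identity, which is a convenient sanity check for any transformation-matrix implementation.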
Abstract: This study investigated the antidiabetic and
antioxidant potential of Pseuduvaria macrophylla bark extract in
streptozotocin–nicotinamide induced type 2 diabetic rats. LC-MS-QTOF
and NMR experiments were performed to determine the chemical
composition of the methanolic bark extract. For the in vivo
experiments, the STZ-induced (60 mg/kg b.w., 15 min after 120 mg/kg
nicotinamide, i.p.) diabetic rats were treated with the methanolic
extract of Pseuduvaria macrophylla (200 and 400 mg/kg b.w.) or with
glibenclamide (2.5 mg/kg) as the positive control.
Biochemical parameters were assayed in the blood samples of all
groups of rats. The pro-inflammatory cytokines, antioxidant status
and plasma transforming growth factor βeta-1 (TGF-β1) were
evaluated. The histological study of the pancreas was examined and
its expression level of insulin was observed by
immunohistochemistry. In addition, the expression of glucose
transporters (GLUT 1, 2 and 4) were assessed in pancreas tissue by
western blot analysis. The outcomes of the study showed that the
methanolic bark extract of Pseuduvaria macrophylla normalized the
elevated blood glucose levels and improved serum insulin and
C-peptide levels, with a significant increase in the antioxidant
enzyme reduced glutathione (GSH) and a decrease in the level of
lipid peroxidation (LPO). Additionally, the extract has
markedly decreased the levels of serum pro-inflammatory cytokines
and transforming growth factor beta-1 (TGF-β1). Histopathology
analysis demonstrated that Pseuduvaria macrophylla has the
potential to protect the pancreas of diabetic rats against
peroxidation damage by downregulating oxidative stress and reducing
hyperglycaemia. Furthermore, the expression of insulin protein and of
GLUT-1, GLUT-2 and GLUT-4 in pancreatic cells was enhanced.
The findings of this study support the anti-diabetic claims for
Pseuduvaria macrophylla bark.
Abstract: In image processing, image compression can improve
the performance of digital systems by reducing the cost and
time of image storage and transmission without significant reduction
of image quality. This paper describes a low-complexity hardware
architecture for Discrete Cosine Transform (DCT) based image
compression [6]. In this DCT architecture, common computations
are identified and shared to remove redundant computations
in the DCT matrix operation. Vector processing is the method used
for the implementation of the DCT. This reduction in the computational complexity
of 2D DCT reduces power consumption. The 2D DCT is performed
on 8x8 matrix using two 1-Dimensional Discrete cosine transform
blocks and a transposition memory [7]. Inverse discrete cosine
transform (IDCT) is performed to obtain the image matrix and
reconstruct the original image. The proposed image compression
algorithm is modeled in MATLAB. The VLSI design of the architecture
is implemented using Verilog HDL. The proposed hardware architecture
for image compression employing the DCT was synthesized using RTL
Compiler and mapped using 180 nm standard cells. Simulation is done
using ModelSim, and the simulation results from MATLAB and Verilog
HDL are compared. Detailed analysis of power and area was done using
RTL Compiler from CADENCE. The power consumption of the DCT core is
reduced to 1.027 mW with minimum area [1].
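The row-column decomposition the architecture exploits (a 2-D DCT computed as two 1-D DCT passes with a transposition in between) can be checked numerically. This is a floating-point model only; the hardware uses shared fixed-point arithmetic:

```python
import numpy as np

def dct_matrix(n=8):
    # orthonormal DCT-II matrix: rows are frequencies, columns samples
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def dct2_rowcol(block):
    # C @ block @ C.T applies the 1-D DCT along each dimension, which is
    # the two 1-D DCT blocks plus transposition memory of the architecture
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def idct2(coeffs):
    # inverse 2-D DCT (the IDCT stage used to reconstruct the image)
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C
```

Because the matrix is orthonormal, the IDCT is simply the transposed passes, so a forward/inverse round trip reconstructs the 8x8 block exactly up to floating-point error.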
Abstract: In this paper, an efficient local appearance feature
extraction method based on the multi-resolution Curvelet transform
is proposed in order to further enhance the performance of the
well-known Linear Discriminant Analysis (LDA) method when applied
to face recognition. Each face is described by a subset of
band-filtered images containing block-based Curvelet coefficients. These
coefficients characterize the face texture and a set of simple statistical
measures allows us to form compact and meaningful feature vectors.
The proposed method is compared with related feature extraction
methods such as Principal Component Analysis (PCA), Linear
Discriminant Analysis (LDA), and Independent Component Analysis
(ICA). Two other multi-resolution transforms, Wavelet (DWT) and
Contourlet, were also compared against the block-based
Curvelet-LDA algorithm. Experimental results on the ORL, YALE and
FERET face databases convince us that the proposed method provides
a better representation of the class information and obtains much
higher recognition accuracies.
Abstract: The design of a pattern classifier includes an attempt
to select, among a set of possible features, a minimum subset of
weakly correlated features that better discriminate the pattern classes.
This is usually a difficult task in practice, normally requiring the
application of heuristic knowledge about the specific problem
domain. The selection and quality of the features representing each
pattern have a considerable bearing on the success of subsequent
pattern classification. Feature extraction is the process of deriving
new features from the original features in order to reduce the cost of
feature measurement, increase classifier efficiency, and allow higher
classification accuracy. Many current feature extraction techniques
involve linear transformations of the original pattern vectors to new
vectors of lower dimensionality. While this is useful for data
visualization and increasing classification efficiency, it does not
necessarily reduce the number of features that must be measured
since each new feature may be a linear combination of all of the
features in the original pattern vector. In this paper a new approach is
presented to feature extraction in which feature selection, feature
extraction, and classifier training are performed simultaneously using
a genetic algorithm. In this approach each feature value is first
normalized by a linear equation, then scaled by the associated weight
prior to training, testing, and classification. A knn classifier is used to
evaluate each set of feature weights. The genetic algorithm optimizes
a vector of feature weights, which are used to scale the individual
features in the original pattern vectors in either a linear or a nonlinear
fashion. With this approach, the number of features used in
classification can be greatly reduced.
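A compact sketch of the simultaneous optimization loop: a genetic algorithm evolves a feature-weight vector, each candidate scales the features, and a knn classifier's accuracy serves as the fitness. The population size, operators, and toy data below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_accuracy(Xtr, ytr, Xte, yte, w, k=3):
    # knn with weighted Euclidean distance: the weights scale the features
    Xtr_w, Xte_w = Xtr * w, Xte * w
    d = ((Xte_w[:, None, :] - Xtr_w[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]
    pred = np.array([np.bincount(v).argmax() for v in ytr[nn]])
    return (pred == yte).mean()

def ga_feature_weights(Xtr, ytr, Xte, yte, pop=20, gens=15):
    n = Xtr.shape[1]
    P = rng.random((pop, n))                 # weight vectors in [0, 1]
    for _ in range(gens):
        fit = np.array([knn_accuracy(Xtr, ytr, Xte, yte, w) for w in P])
        P = P[np.argsort(fit)[::-1]]         # sort by fitness, best first
        elite = P[: pop // 2]
        # uniform crossover + Gaussian mutation to refill the population
        mates = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
        mask = rng.random((pop - len(elite), n)) < 0.5
        kids = np.where(mask, mates[:, 0], mates[:, 1])
        kids = np.clip(kids + 0.1 * rng.standard_normal(kids.shape), 0, 1)
        P = np.vstack([elite, kids])
    fit = np.array([knn_accuracy(Xtr, ytr, Xte, yte, w) for w in P])
    return P[fit.argmax()]
```

Weights driven toward zero effectively deselect a feature, which is how selection, extraction and classifier tuning happen in one search.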
Abstract: This paper introduces a new signal denoising method based on the Empirical Mode Decomposition (EMD) framework. The method is a fully data-driven approach. The noisy signal is decomposed adaptively into oscillatory components called Intrinsic Mode Functions (IMFs) by means of a process called sifting. EMD denoising involves filtering or thresholding each IMF and reconstructing the estimated signal from the processed IMFs. The EMD can be combined with a filtering approach or with a nonlinear transformation. In this work the Savitzky-Golay filter and soft thresholding are investigated. For thresholding, IMF samples are shrunk or scaled below a threshold value. The standard deviation of the noise is estimated for every IMF, and the threshold is derived for Gaussian white noise. The method is tested on simulated and real data and compared with averaging, median and wavelet approaches.
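The per-IMF thresholding step can be sketched as follows, with the noise standard deviation estimated by the median absolute deviation and the universal threshold for Gaussian white noise; the EMD sifting itself and the Savitzky-Golay branch are not reproduced here:

```python
import numpy as np

def soft_threshold(imf):
    # robust noise estimate for this IMF via the median absolute deviation
    sigma = np.median(np.abs(imf - np.median(imf))) / 0.6745
    # universal threshold for Gaussian white noise of length N
    t = sigma * np.sqrt(2.0 * np.log(len(imf)))
    # soft thresholding: shrink toward zero, kill everything below t
    return np.sign(imf) * np.maximum(np.abs(imf) - t, 0.0)
```

Applying this to each IMF and summing the results reconstructs the denoised signal; samples dominated by noise are zeroed while genuine oscillations survive, merely shrunk by the threshold.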