Abstract: Small-signal model parameters for a pseudomorphic high electron mobility transistor (PHEMT) are presented. Both the extrinsic and intrinsic circuit elements of the small-signal model are determined using a genetic algorithm (GA) as a stochastic global search and optimization tool. Parameter extraction of the small-signal model is performed on a 200-μm gate-width AlGaAs/InGaAs PHEMT. The equivalent-circuit elements of the proposed 18-element model are determined directly from the measured S-parameters. The GA is used to extract the parameters of the proposed small-signal model from 0.5 up to 18 GHz.
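The GA-based extraction can be illustrated with a minimal sketch. A toy one-port RC network stands in for the 18-element PHEMT circuit, and a simple elitist GA with arithmetic crossover and multiplicative mutation fits its two parameters to synthetic "measured" impedances. The model, operators, bounds, and rates here are illustrative assumptions, not the authors' implementation:

```python
import random, math

random.seed(0)

# Toy stand-in model: a one-port RC network (the paper's 18-element PHEMT
# model would expose 18 parameters instead of 2).
FREQS = [0.5e9 * k for k in range(1, 11)]          # 0.5 .. 5 GHz sweep

def model_z(params):
    R, C = params
    return [R / (1 + 1j * 2 * math.pi * f * R * C) for f in FREQS]

MEASURED = model_z((50.0, 1e-12))                  # synthetic "measurement"

def fitness(params):                               # sum of squared errors
    return sum(abs(a - b) ** 2 for a, b in zip(model_z(params), MEASURED))

def ga(bounds=((1.0, 200.0), (1e-13, 1e-11)), pop_size=40, gens=60):
    pop = [tuple(random.uniform(lo, hi) for lo, hi in bounds)
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]               # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            # arithmetic crossover with +/-5% multiplicative mutation,
            # clipped back into the parameter bounds
            child = tuple(
                min(max((x + y) / 2 * random.uniform(0.95, 1.05), lo), hi)
                for x, y, (lo, hi) in zip(a, b, bounds))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
```

The same loop scales to the 18-element case by widening the parameter tuple and evaluating the full equivalent-circuit S-parameters in `fitness`.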
Abstract: Bangla vowel characterization determines the spectral properties of Bangla vowels for efficient synthesis as well as recognition. In this paper, Bangla vowels in isolated words have been analyzed based on a speech production model within an Analysis-by-Synthesis framework. This has led to the extraction of spectral parameters for the production model in order to produce different Bangla vowel sounds. The real and synthetic spectra are compared, and a weighted square error has been computed along with the error in the formant bandwidths for efficient representation of Bangla vowels. The extracted features produced good representations of the targeted Bangla vowels. Such a representation also plays an essential role in low-bit-rate speech coding and vocoders.
Abstract: Independent component analysis can estimate unknown
source signals from their mixtures under the assumption that the
source signals are statistically independent. However, in a real environment,
the separation performance often deteriorates because
the number of source signals differs from the number of sensors.
In this paper, we propose a method for estimating the number of
sources based on the joint distribution of the observed signals
under a two-sensor configuration. From several simulation results, it
is found that the number of sources coincides with the number of
peaks in the histogram of the distribution. The proposed method can
estimate the number of sources even if it is larger than the number of
observed signals. The method has been verified by
several experiments.
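The idea can be sketched as follows. For sparse sources, samples of the two-sensor observation concentrate along the directions of the mixing-matrix columns, so a histogram of sample directions shows one peak per source, even in the underdetermined case. The source model, mixing matrix, bin count, and thresholds below are illustrative assumptions, not the paper's procedure:

```python
import math, random

random.seed(7)

# Hypothetical sparse sources: occasionally active, otherwise near-silent.
def sparse_sample():
    return random.gauss(0, 3) if random.random() < 0.08 else random.gauss(0, 0.05)

n = 20000
A = [[1.0, 0.5, -0.6],   # 2x3 mixing matrix: more sources than sensors
     [0.2, 1.0,  0.8]]
obs = []
for _ in range(n):
    s = [sparse_sample() for _ in range(3)]
    obs.append((sum(A[0][k] * s[k] for k in range(3)),
                sum(A[1][k] * s[k] for k in range(3))))

# Direction histogram of the joint (x1, x2) distribution over [0, pi).
bins = 90
hist = [0] * bins
for a, b in obs:
    if math.hypot(a, b) > 1.0:            # keep samples where some source is active
        theta = math.atan2(b, a) % math.pi
        hist[min(int(theta / math.pi * bins), bins - 1)] += 1

def count_clusters(h, frac=0.25):
    above = [v > frac * max(h) for v in h]
    # count rising edges circularly (bin 0 neighbors bin -1 across the 0/pi wrap)
    return sum(1 for i in range(len(h)) if above[i] and not above[i - 1])

n_sources = count_clusters(hist)          # three peaks from only two sensors
```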
Abstract: Tea is a widely consumed beverage that contains many components. Caffeine belongs to the group of nitrogen-containing components called alkaloids. In this study, the caffeine contents of three types of Turkish tea are determined using an extraction method. After the condensation process, a residue of caffeine and oil is obtained by evaporation. The oil in the residue is removed with hot water. The extraction process is performed using chloroform, and crude caffeine is obtained. From the experimental results, the caffeine contents of black tea, green tea, and Earl Grey tea are found to be 3.57±0.43%, 3.11±0.02%, and 4.29±0.27%, respectively. The caffeine contents in 1, 5, and 10 cups of tea are calculated. Furthermore, the daily intake of caffeine from black teas, which affects human health, is investigated.
Abstract: In this paper we present the first Arabic sentence
dataset for on-line handwriting recognition written on a tablet PC. The
dataset is natural, simple, and clear. Texts are sampled from daily
newspapers. To collect naturally written handwriting, forms are
dictated to writers. The current version of our dataset includes 154
paragraphs written by 48 writers. It contains more than 3,800 words
and more than 19,400 characters. Handwritten texts are mainly
written by researchers from different research centers. In order to use
this dataset in a recognition system, word extraction is needed. In this
paper, a new word extraction technique based on the cursive nature of
Arabic handwriting is also presented. The technique is applied
to this dataset and good results are obtained. The results can be
considered a benchmark against which future research can be compared.
Abstract: A multilayer self-organizing neural network
(MLSONN) architecture for binary object extraction, guided by a beta
activation function and characterized by backpropagation of errors
estimated from the linear indices of fuzziness of the network output
states, is discussed. Since the MLSONN architecture is designed to
operate in a single-point fixed/uniform thresholding scenario, it does
not take into account the heterogeneity of image information in
the extraction process. The performance of the MLSONN architecture
with representative values of the threshold parameters of the beta
activation function employed is also studied. A three-layer bidirectional
self-organizing neural network (BDSONN) architecture
comprising fully connected neurons, for the extraction of objects from
a noisy background and capable of incorporating the underlying image
context heterogeneity through variable and adaptive thresholding,
is proposed in this article. The input layer of the network architecture
represents the fuzzy membership information of the image scene to
be extracted. The second layer (the intermediate layer) and the final
layer (the output layer) of the network architecture deal with the self
supervised object extraction task by bi-directional propagation of the
network states. Each layer except the output layer is connected to the
next layer following a neighborhood-based topology. The output-layer
neurons are, in turn, connected to the intermediate layer following a
similar topology, thus forming a counter-propagating architecture
with the intermediate layer. The novelty of the proposed architecture
is that the assignment and updating of the inter-layer connection weights
are done using the relative fuzzy membership values at the constituent
neurons in the different network layers. Another interesting feature
of the network lies in the fact that the processing capabilities of
the intermediate and the output layer neurons are guided by a beta
activation function, which uses image context sensitive adaptive
thresholding arising out of the fuzzy cardinality estimates of the
different network neighborhood fuzzy subsets, rather than resorting to
fixed and single point thresholding. An application of the proposed
architecture for object extraction is demonstrated using a synthetic
and a real-life image. The extraction efficiency of the proposed
network architecture is evaluated by a proposed system transfer index
characteristic of the network.
Abstract: Nowadays, multimedia data are transmitted and
processed in compressed form. Due to the decoding procedure and the
filtering required for edge detection, the feature extraction process of the
MPEG-7 Edge Histogram Descriptor is time-consuming as well as
computationally expensive. To improve the efficiency of compressed-image
retrieval, we propose in this paper a new edge-histogram generation
algorithm in the DCT domain. Using the edge information
provided by only two AC coefficients of each DCT block, we can obtain
edge directions and strengths directly in the DCT domain. The
experimental results demonstrate that our system performs well
in terms of retrieval efficiency and effectiveness.
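The underlying observation can be sketched as follows: in an 8×8 block DCT, the coefficient F(0,1) responds to variation across columns (a vertical edge) and F(1,0) to variation across rows (a horizontal edge). The exact strength and direction formulas of the proposed algorithm are not reproduced here; this is only the two-coefficient idea on a synthetic block:

```python
import math

# Naive 2-D DCT-II of an NxN block (orthonormal scaling).
def dct2(block):
    N = len(block)
    def c(u):
        return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

# An 8x8 block with a vertical edge (left half dark, right half bright).
block = [[0] * 4 + [255] * 4 for _ in range(8)]
F = dct2(block)

# Only two AC coefficients are inspected: F[0][1] reacts to vertical edges,
# F[1][0] to horizontal ones.
strength = math.hypot(F[0][1], F[1][0])
direction = "vertical" if abs(F[0][1]) > abs(F[1][0]) else "horizontal"
```

In a real decoder the two coefficients are read straight from the entropy-decoded DCT blocks, so no inverse transform or spatial-domain filtering is needed.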
Abstract: State-of-the-art methods for secondary structure (Porter, Psi-PRED, SAM-T99sec, Sable) and solvent accessibility (Sable, ACCpro) prediction use evolutionary profiles represented by the position-specific scoring matrix (PSSM). It has been demonstrated that evolutionary profiles are the most important features in the feature space for these predictions. Unfortunately, applying the PSSM leads to high-dimensional feature spaces that may create problems with parameter optimization and generalization. Several recently published studies suggested that applying feature extraction to the PSSM may result in improvements in secondary structure prediction. However, none of the top-performing methods considered here utilizes dimensionality reduction to improve generalization. In the present study, we used simple and fast feature selection methods (t-statistics, information gain) that allow us to decrease the dimensionality of the PSSM by 75% and improve generalization for secondary structure prediction compared to the Sable server.
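The t-statistic filter can be sketched on toy data. Each feature gets a two-sample t-score between the classes, and only the top 25% are kept, mirroring the 75% reduction mentioned above; the feature counts, effect size, and class setup are illustrative assumptions rather than the actual PSSM pipeline:

```python
import math, random

random.seed(0)

# Toy stand-in for PSSM-derived features: 100 features, of which only the
# first 25 actually differ between the two structural classes.
n_feat, n_per_class = 100, 200

def sample(cls):
    return [random.gauss(0.8 if cls and j < 25 else 0.0, 1.0)
            for j in range(n_feat)]

X0 = [sample(0) for _ in range(n_per_class)]
X1 = [sample(1) for _ in range(n_per_class)]

# Welch-style two-sample t statistic (absolute value) per feature.
def t_stat(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b))

scores = [t_stat([r[j] for r in X0], [r[j] for r in X1])
          for j in range(n_feat)]
# Keep the top 25% of features -> a 75% dimensionality reduction.
keep = sorted(range(n_feat), key=lambda j: -scores[j])[: n_feat // 4]
```

Information gain fits the same filter pattern: score each feature independently, rank, and truncate.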
Abstract: A noteworthy point in the advancement of Brain-Machine Interface (BMI) research is the ability to accurately extract features of brain signals and to classify them into targeted control actions with the simplest possible procedures, since the expected beneficiaries are disabled people. In this paper, a new feature extraction method combining adaptive band-pass filters and adaptive autoregressive (AAR) modelling is proposed and applied to the classification of right and left motor imagery signals extracted from the brain. The introduction of the adaptive band-pass filter improves the characterization of the autocorrelation functions of the AAR models, as it enhances and strengthens the EEG signal, which is noisy and stochastic in nature. Experimental results on the Graz BCI data set have shown that, by implementing the proposed feature extraction method, LDA and SVM classifiers outperform other AAR approaches of the BCI 2003 competition in terms of the mutual information, the competition criterion, and the misclassification rate.
Abstract: Mining frequent tree patterns has many useful
applications in XML mining, bioinformatics, network routing, etc.
Most of the frequent subtree mining algorithms (i.e. FREQT,
TreeMiner and CMTreeMiner) use anti-monotone property in the
phase of candidate subtree generation. However, none of these
algorithms have verified the correctness of this property in tree
structured data. In this research, it is shown that anti-monotonicity
does not generally hold when using weighted support in tree pattern
discovery. As a result, tree mining algorithms that are based on this
property would probably miss some of the valid frequent subtree
patterns in a collection of trees. In this paper, we investigate the
correctness of anti-monotone property for the problem of weighted
frequent subtree mining. In addition, we propose W3-Miner, a new
algorithm for full extraction of frequent subtrees. The experimental
results confirm that W3-Miner finds some frequent subtrees that the
previously proposed algorithms are not able to discover.
Abstract: The myoelectric signal (MES) is one of the biosignals
utilized to help humans control equipment. Recent approaches
to MES classification for controlling prosthetic devices using pattern
recognition techniques have revealed two problems: first, the classification
performance of the system starts to degrade when the number of
motion classes to be classified increases; second, the additional
complicated methods utilized to solve the first problem
increase the computational cost of a multifunction myoelectric
control system. In an effort to solve these problems and to achieve a
feasible design for real time implementation with high overall
accuracy, this paper presents a new method for feature extraction in
MES recognition systems. The method works by extracting features
using Wavelet Packet Transform (WPT) applied on the MES from
multiple channels, and then employs Fuzzy c-means (FCM)
algorithm to generate a measure that judges the suitability of features for
classification. Finally, Principal Component Analysis (PCA) is
utilized to reduce the size of the data before computing the
classification accuracy with a multilayer perceptron neural network.
The proposed system produces powerful classification results (99%
accuracy) by using only a small portion of the original feature set.
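The WPT feature-extraction step can be illustrated with a minimal Haar-based sketch. The multi-channel processing, the FCM suitability measure, and the PCA stage are not reproduced; the wavelet (Haar), the two-level depth, and the energy features are illustrative assumptions:

```python
# One Haar analysis step: split a node into (approximation, detail) halves.
def haar_split(x):
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return a, d

# Full wavelet packet tree to a given depth; one mean energy per leaf node
# (natural node ordering, not frequency ordering) forms the feature vector.
def wpt_energies(x, depth):
    nodes = [x]
    for _ in range(depth):
        nodes = [half for node in nodes for half in haar_split(node)]
    return [sum(v * v for v in node) / len(node) for node in nodes]

lowband = wpt_energies([1.0] * 16, 2)         # constant (low-frequency) signal
highband = wpt_energies([1.0, -1.0] * 8, 2)   # fastest alternation
```

Signals dominated by different frequency bands yield energy in different leaves, which is what makes the vectors separable downstream.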
Abstract: The determination of sugars in foods is very
significant. Their proportions can in fact affect the chemical and
sensorial quality of the matrix (e.g., sweetness, pH, total acidity,
microbial stability, global acceptability) and can provide information
on the food to optimize several selected technological processes. Three
stages of ripeness (green, yellow and red) of tomatoes (Lycopersicon
esculentum cv. Elegance) at different harvest dates were evaluated.
Fruit from all harvests were exposed to different ozone doses
(0.25, 0.50 and 1 mg O3/g tomatoes) and clean air for 5 days at 15
± 2 °C and 90-95 % relative humidity. Fruits were then submitted for
extraction and analysis one day after the end of exposure at each
stage. The concentrations of glucose and fructose increased in the
tomatoes that were subjected to ozone treatments.
Abstract: In non-destructive testing by radiography, perfect knowledge of the weld defect shape is an essential step in assessing the quality of the weld and deciding on its acceptance or rejection. Because of the complex nature of the images considered, and so that the detected defect region represents the real defect as accurately as possible, thresholding methods must be chosen judiciously. In this paper, performance criteria are used to conduct a comparative study of thresholding methods based on the gray-level histogram, the 2-D histogram, and a locally adaptive approach for weld defect extraction in radiographic images.
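A standard representative of the gray-level-histogram family in such comparisons is Otsu's between-class-variance criterion; this sketch is illustrative and not taken from the paper, and the synthetic histogram stands in for a radiographic image:

```python
# Otsu's method: choose the threshold maximizing between-class variance
# of the gray-level histogram (class 0 = levels <= t).
def otsu_threshold(hist):
    total = sum(hist)
    sum_all = float(sum(i * h for i, h in enumerate(hist)))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(len(hist)):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal histogram: defect gray levels near 50, background near 200.
hist = [0] * 256
for i in range(46, 55):
    hist[i] = 80
for i in range(196, 205):
    hist[i] = 80
t = otsu_threshold(hist)    # lands between the two modes
```

2-D histogram and locally adaptive methods generalize this by adding neighborhood context, which is precisely what the paper's comparison evaluates.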
Abstract: Embedding and extraction of secret information, as
well as restoration of the original un-watermarked image, are
highly desirable in sensitive applications like military, medical, and
law enforcement imaging. This paper presents a novel reversible
data-hiding method for digital images using an integer-to-integer
wavelet transform and a companding technique, which can embed and
recover the secret information as well as restore the image to its
pristine state. The novel method takes advantage of block-based
watermarking and iterative optimization of the companding threshold,
which avoids histogram pre- and post-processing. Consequently, it
reduces the associated overhead usually required in most of the
reversible watermarking techniques. As a result, it keeps the
distortion small between the marked and the original images.
Experimental results show that the proposed method outperforms the
existing reversible data hiding schemes reported in the literature.
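The reversibility property can be illustrated with the 1-D integer Haar (S-transform) via lifting, combined with a bare difference-expansion embed. The companding step and block-based threshold optimization of the proposed method are not reproduced, and a practical scheme would also need overflow handling and a location map:

```python
# Integer-to-integer Haar (S-transform) via lifting: forward and inverse
# are exact on integers, the property reversible watermarking relies on.
def int_haar_fwd(x):
    s = [(x[i] + x[i + 1]) >> 1 for i in range(0, len(x), 2)]  # floored average
    d = [x[i] - x[i + 1] for i in range(0, len(x), 2)]         # difference
    return s, d

def int_haar_inv(s, d):
    out = []
    for si, di in zip(s, d):
        b = si - (di >> 1)         # recover second sample
        out += [di + b, b]         # first sample = difference + second
    return out

# Difference-expansion embedding into the detail coefficients: shift each
# difference left by one bit and store one payload bit in its LSB.
def embed(x, bits):
    s, d = int_haar_fwd(x)
    return int_haar_inv(s, [2 * di + bit for di, bit in zip(d, bits)])

def extract(y):
    s, d2 = int_haar_fwd(y)
    return [di & 1 for di in d2], int_haar_inv(s, [di >> 1 for di in d2])
```

Because both lifting steps are exactly invertible on integers, extraction returns the payload bits and the pixel values bit-for-bit.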
Abstract: The optimal extraction conditions for dried Phaseolus
vulgaris powder were studied. The three independent variables were raw
material concentration, shaking time, and centrifugation time. The dependent
variables were the yield percentage of crude extract and the alpha-amylase
enzyme inhibition activity. The experimental design was
based on a Box-Behnken design. The highest yield percentage of crude
extract was obtained under extraction conditions with a
concentration of 0.15 M, an extraction time of 2 hours, and
a separation time of 60 min. Moreover, the crude extract with the highest
alpha-amylase enzyme inhibition activity was obtained under extraction
conditions with a concentration of 0.10 M, an extraction time of 2 min, and
a separation time of 45 min.
Abstract: Face recognition in the infrared spectrum has attracted a lot of interest in recent years. Many of the techniques used in the infrared are based on their visible-spectrum counterparts, especially linear techniques like PCA and LDA. In this work, we introduce a probabilistic Bayesian framework for face recognition in the infrared spectrum. In the infrared spectrum, variations can occur between face images of the same individual due to pose, metabolic, and time changes, among others. Bayesian approaches make it possible to reduce intrapersonal variation, making them very interesting for infrared face recognition. This framework is compared with classical linear techniques. Nonlinear techniques we developed recently for infrared face recognition are also presented and compared to the Bayesian face recognition framework. A new approach for infrared face extraction based on SVM is introduced. Experimental results show that the Bayesian technique is promising and leads to interesting results in the infrared spectrum when a sufficient number of face images is used in the intrapersonal learning process.
Abstract: This paper shows the possibility of extracting Social,
Group, and Individual Mind from multiple agents' rule bases. The
rule bases are selected as two fuzzy systems, namely the
Mamdani and Takagi-Sugeno fuzzy systems. Their rule bases
describe (model) agent behavior. Modification of agent behavior
in a time-varying environment is provided by learning fuzzy-neural
networks and optimization of their parameters using
genetic algorithms in the FUZNET development system. Finally,
the extraction of Social, Group, and Individual Mind from multiple agents'
rule bases is provided by cognitive analysis and a matching
criterion.
Abstract: Climate change could lead to changes in cultural
environments and landscapes as we know them. Climate change
presents an immediate and significant threat to our natural and built
environments and to the ways of life that co-exist with these
environments. In most traditional buildings, harmony of texture
with nature and the environment has always been considered: houses and
cities have blended with their natural environment
astonishingly well, and materials have been selected and used in
such a way that they provide the utmost conformity with the
environment; as a result, the created areas have a unique beauty and
attraction. The extent to which climate change contributes to the
destruction of Iran's historic buildings is a subject of
current discussion. Cities, towns, and built-up areas also have their
own characteristics that might make them particularly vulnerable to
climate change.
Abstract: In this study, the problem of discriminating between interictal epileptic and non-epileptic pathological EEG cases, which present episodic loss of consciousness, is investigated. We verify the accuracy of the feature extraction method of auto-cross-correlation coefficients, which were extracted and studied in a previous study. For this purpose we used, on the one hand, a suitably constructed artificial supervised LVQ1 neural network and, on the other, a cross-correlation technique. To reinforce the above verification, we used a statistical procedure based on a chi-square test. The classification and statistical results showed that the proposed feature extraction is a significantly accurate method for diagnostic discrimination between interictal and non-interictal EEG events; specifically, the classification procedure showed that the LVQ neural method is superior to the cross-correlation one.
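The LVQ1 training rule itself is compact enough to sketch: the winning prototype moves toward a sample when the class matches and away when it does not. The 2-D toy data replaces the EEG feature vectors, and the prototype count, learning rate, and cluster positions are illustrative assumptions:

```python
import random

random.seed(2)

# Minimal LVQ1: one prototype per class on toy 2-D data.
protos = {0: [0.0, 0.0], 1: [1.0, 1.0]}   # initial prototype positions

def nearest(x):
    return min(protos, key=lambda c: sum((a - b) ** 2
                                         for a, b in zip(protos[c], x)))

def train(data, lr=0.1, epochs=30):
    for _ in range(epochs):
        for x, y in data:
            c = nearest(x)
            sign = 1.0 if c == y else -1.0   # attract if correct, repel if not
            protos[c] = [p + sign * lr * (xi - p)
                         for p, xi in zip(protos[c], x)]

data = ([([random.gauss(-1, 0.3), random.gauss(-1, 0.3)], 0) for _ in range(100)]
        + [([random.gauss(2, 0.3), random.gauss(2, 0.3)], 1) for _ in range(100)])
random.shuffle(data)
train(data)
acc = sum(nearest(x) == y for x, y in data) / len(data)
```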
Abstract: As a result of the daily workflow in the design
development departments of companies, databases containing huge
numbers of 3D geometric models are generated. According to the
given problem, engineers create CAD drawings based on their design
ideas and evaluate the performance of the resulting design, e.g. by
computational simulations. Usually, new geometries are built either
by utilizing and modifying sets of existing components or by adding
single newly designed parts to a more complex design.
The present paper addresses the two facets of acquiring
components from large design databases automatically and providing
a reasonable overview of the parts to the engineer. A unified
framework based on the topographic non-negative matrix
factorization (TNMF) is proposed which solves both aspects
simultaneously. First, on a given database meaningful components
are extracted into a parts-based representation in an unsupervised
manner. Second, the extracted components are organized and
visualized on square-lattice 2D maps. It is shown on the example of
turbine-like geometries that these maps efficiently provide a well-structured
overview of the database content and, at the same time,
define a measure of spatial similarity allowing easy access to and
reuse of components in the process of design development.
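The parts-based decomposition step can be sketched with plain NMF using Lee-Seung multiplicative updates; the topographic (map) constraint of TNMF and the geometry representation are omitted, and the toy data matrix below is an illustrative assumption:

```python
import random

random.seed(0)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(c) for c in zip(*A)]

# Plain NMF, V ~ W @ H with nonnegative factors, via multiplicative updates.
def nmf(V, r, iters=200, eps=1e-9):
    n, m = len(V), len(V[0])
    W = [[random.random() for _ in range(r)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(r)]
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)]
             for i in range(r)]
        VHt = matmul(V, transpose(H))
        WHHt = matmul(matmul(W, H), transpose(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(r)]
             for i in range(n)]
    return W, H

# Toy "design database": 4 objects (columns), each a nonnegative combination
# of 2 hidden parts over 6 geometric features.
parts = [[1, 1, 0, 0, 0, 0],
         [0, 0, 0, 0, 1, 1]]
coeff = [[1, 0, 2, 1],
         [0, 1, 1, 2]]
V = matmul(transpose(parts), coeff)          # 6 x 4 data matrix

W, H = nmf(V, 2)
WH = matmul(W, H)
err = sum((V[i][j] - WH[i][j]) ** 2
          for i in range(len(V)) for j in range(len(V[0])))
```

In TNMF the columns of W would additionally be tied to positions on a 2-D lattice, which is what produces the browsable map of parts.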