Abstract: The volume of XML data exchange is increasing explosively, and efficient mechanisms for XML data management are vital. Many XML storage models have been proposed for storing DTD-independent XML documents in relational database systems. Benchmarking is the best way to highlight the pros and cons of different approaches. In this study, we use a common benchmarking scheme, known as XMark, to compare the most cited and newly proposed DTD-independent methods in terms of logical reads, physical I/O, CPU time and duration. We show the effect of the Label Path, of extracting values into a separate table, and of the type of join needed for each method's query answering.
Abstract: Bangla vowel characterization determines the spectral properties of Bangla vowels for efficient synthesis as well as recognition. In this paper, Bangla vowels in isolated words have been analyzed based on a speech production model within the framework of Analysis-by-Synthesis. This has led to the extraction of spectral parameters for the production model in order to produce different Bangla vowel sounds. The real and synthetic spectra are compared, and a weighted square error has been computed along with the error in the formant bandwidths for efficient representation of Bangla vowels. The extracted features produced a good representation of the targeted Bangla vowels. Such a representation also plays an essential role in low-bit-rate speech coding and vocoders.
Abstract: Independent component analysis can estimate unknown source signals from their mixtures under the assumption that the source signals are statistically independent. In a real environment, however, separation performance often deteriorates because the number of source signals differs from the number of sensors. In this paper, we propose a method for estimating the number of sources based on the joint distribution of the observed signals under a two-sensor configuration. Several simulation results show that the number of sources coincides with the number of peaks in the histogram of this distribution. The proposed method can estimate the number of sources even when it is larger than the number of observed signals, and it has been verified by several experiments.
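The peak-counting idea can be illustrated with a small simulation (an idealized sketch, not the paper's exact procedure: the maximally sparse source model, the direction histogram and the peak threshold are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_sources = 30000, 3

# Idealized sparse sources: each sample activates exactly one source,
# so every observation points along one mixing-matrix column.
s = np.zeros((n_sources, n_samples))
active = rng.integers(0, n_sources, n_samples)
s[active, np.arange(n_samples)] = rng.laplace(size=n_samples)

# 2 x 3 mixing matrix with well-separated column directions
# (more sources than the two sensors).
col_angles = np.array([0.3, 1.0, 2.3])
A = np.vstack([np.cos(col_angles), np.sin(col_angles)])
x = A @ s  # two-sensor observations

# Fold each observation's direction into [0, pi) and histogram it;
# peaks of this distribution mark the source directions.
theta = np.mod(np.arctan2(x[1], x[0]), np.pi)
hist, _ = np.histogram(theta, bins=90, range=(0.0, np.pi))

# Estimate the number of sources as the number of dominant peaks.
dominant = hist > 0.05 * hist.sum()
n_estimated = int(np.sum(dominant & ~np.roll(dominant, 1)))
print(n_estimated)  # 3 peaks -> 3 sources, despite only 2 sensors
```

Under this sparsity assumption the estimate exceeds the sensor count, which is exactly the underdetermined case the abstract targets.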
Abstract: Tea is a widely consumed beverage that contains many components. Caffeine belongs to a group of these components called alkaloids, which contain nitrogen. In this study, the caffeine contents of three types of Turkish tea are determined by an extraction method. After the condensation process, a residue of caffeine and oil is obtained by evaporation. The oil in the residue is removed with hot water. The extraction is performed with chloroform, and crude caffeine is obtained. From the experimental results, the caffeine contents of black tea, green tea and Earl Grey tea are found to be 3.57±0.43%, 3.11±0.02% and 4.29±0.27%, respectively. The caffeine contents in 1, 5 and 10 cups of tea are calculated. Furthermore, the daily intake of caffeine from black tea, which affects human health, is investigated.
Abstract: In this paper we present the first Arabic sentence dataset for on-line handwriting recognition written on a tablet PC. The dataset is natural, simple and clear; texts are sampled from daily newspapers. To collect naturally written handwriting, forms were dictated to writers. The current version of our dataset includes 154 paragraphs written by 48 writers, containing more than 3,800 words and more than 19,400 characters. Handwritten texts were written mainly by researchers from different research centers. In order to use this dataset in a recognition system, word extraction is needed, so this paper also presents a new word extraction technique based on the cursive nature of Arabic handwriting. The technique is applied to this dataset and good results are obtained; these results can serve as a benchmark for future research.
Abstract: A multilayer self-organizing neural network (MLSONN) architecture for binary object extraction, guided by a beta activation function and characterized by backpropagation of errors estimated from the linear indices of fuzziness of the network output states, is discussed. Since the MLSONN architecture is designed to operate in a single-point, fixed/uniform thresholding scenario, it does not take into account the heterogeneity of image information in the extraction process. The performance of the MLSONN architecture with representative values of the threshold parameters of the beta activation function is also studied. This article proposes a three-layer bidirectional self-organizing neural network (BDSONN) architecture of fully connected neurons for extracting objects from a noisy background, capable of incorporating the underlying image context heterogeneity through variable and adaptive thresholding. The input layer of the network represents the fuzzy membership information of the image scene to be extracted. The second (intermediate) layer and the final (output) layer carry out the self-supervised object extraction task by bidirectional propagation of the network states. Each layer except the output layer is connected to the next layer following a neighborhood-based topology; the output-layer neurons are in turn connected to the intermediate layer following a similar topology, thus forming a counter-propagating architecture with the intermediate layer. The novelty of the proposed architecture is that the assignment and updating of the inter-layer connection weights are done using the relative fuzzy membership values at the constituent neurons in the different network layers. Another interesting feature is that the processing capabilities of the intermediate- and output-layer neurons are guided by a beta activation function that uses image-context-sensitive adaptive thresholding, derived from the fuzzy cardinality estimates of the different network neighborhood fuzzy subsets, rather than fixed, single-point thresholding. An application of the proposed architecture to object extraction is demonstrated using a synthetic and a real-life image. The extraction efficiency of the proposed network is evaluated by a proposed system transfer index characteristic of the network.
Abstract: It has been proven that early establishment of microbial flora in the digestive tract of ruminants has a beneficial effect on their health and productivity. A probiotic compound made from five bacteria isolated from adult bovine cattle was dosed to 15 Holstein newborn calves in order to measure its capacity to improve body weight gain and reduce diarrhea incidence. The test was performed in the municipality of Cajicá (Colombia), at 2580 m.a.s.l., throughout the rainy season, with environmental temperatures oscillating between 4 and 25 °C. Five calves were allotted to the control group (no probiotic added). Treatments 1 and 2 (5 calves per group) received 10 ml of probiotic mixes 1 and 2, respectively. Probiotic mixes 1 and 2 were similar in microbial composition but differed in production process. Probiotics were added to the morning milk and dosed daily for a month and then weekly for three additional months. Diarrhea incidence was measured by observing the number of animals affected in each group; each animal was weighed daily to obtain weight gain, and rumen fluid samples were extracted with an oro-esophageal catheter to determine the level of fiber and grain consumption.
Abstract: Nowadays, multimedia data are transmitted and processed in compressed format. Due to the decoding procedure and the filtering required for edge detection, the feature extraction process of the MPEG-7 Edge Histogram Descriptor is time-consuming as well as computationally expensive. To improve the efficiency of compressed-image retrieval, this paper proposes a new edge histogram generation algorithm in the DCT domain. Using the edge information provided by only two AC coefficients of the DCT coefficients, we can obtain edge directions and strengths directly in the DCT domain. The experimental results demonstrate that our system performs well in terms of retrieval efficiency and effectiveness.
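The general idea of reading edge direction and strength from two AC coefficients can be sketched as follows (an illustrative sketch only: the choice of coefficients F[0,1]/F[1,0], the strength threshold and the angle bins are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block."""
    N = block.shape[0]
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C @ block @ C.T

def block_edge(block):
    """Classify an 8x8 block's edge from two AC coefficients only.

    F[0,1] responds to horizontal intensity variation (a vertical edge),
    F[1,0] to vertical variation (a horizontal edge); their magnitudes
    give the edge strength, their ratio the direction.
    """
    F = dct2(block.astype(float))
    h, v = F[0, 1], F[1, 0]
    strength = np.hypot(h, v)
    angle = np.degrees(np.arctan2(abs(v), abs(h)))  # 0 deg = vertical edge
    if strength < 1.0:          # illustrative "no edge" threshold
        return "none", strength
    if angle < 22.5:
        return "vertical", strength
    if angle > 67.5:
        return "horizontal", strength
    return "diagonal", strength

# A step edge running top-to-bottom (intensity changes left to right).
block = np.zeros((8, 8)); block[:, 4:] = 255.0
print(block_edge(block)[0])  # vertical
```

No inverse DCT or spatial filtering is needed, which is the source of the claimed speed-up over pixel-domain edge extraction.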
Abstract: State-of-the-art methods for secondary structure (Porter, Psi-PRED, SAM-T99sec, Sable) and solvent accessibility (Sable, ACCpro) prediction use evolutionary profiles represented by the position-specific scoring matrix (PSSM). It has been demonstrated that evolutionary profiles are the most important features in the feature space for these predictions. Unfortunately, applying the PSSM matrix leads to high-dimensional feature spaces that may create problems with parameter optimization and generalization. Several recently published studies suggested that applying feature extraction to the PSSM matrix may improve secondary structure predictions. However, none of the top-performing methods considered here utilizes dimensionality reduction to improve generalization. In the present study, we used simple and fast feature selection methods (t-statistics, information gain) that allow us to decrease the dimensionality of the PSSM matrix by 75% and improve generalization in the case of secondary structure prediction compared to the Sable server.
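A t-statistic feature filter of the kind mentioned can be sketched as follows (a generic two-sample t-filter assuming a binary labelling for illustration; the paper's actual multi-class secondary-structure setup may differ):

```python
import numpy as np

def t_statistic_selection(X, y, keep_frac=0.25):
    """Rank features by a two-sample t-statistic and keep the top fraction.

    X: (n_samples, n_features) feature matrix, y: binary labels (0/1).
    Returns the indices of the retained features.
    """
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    v0, v1 = X0.var(axis=0, ddof=1), X1.var(axis=0, ddof=1)
    t = (m1 - m0) / np.sqrt(v0 / len(X0) + v1 / len(X1) + 1e-12)
    k = max(1, int(round(keep_frac * X.shape[1])))
    return np.argsort(-np.abs(t))[:k]

# Toy check: feature 0 separates the classes, the rest are noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = np.repeat([0, 1], 100)
X[y == 1, 0] += 5.0  # strong class difference on feature 0
print(t_statistic_selection(X, y, keep_frac=0.25))  # [0]
```

With `keep_frac=0.25` the filter discards 75% of the features, matching the dimensionality reduction reported in the abstract.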
Abstract: The noteworthy point in the advancement of Brain-Machine Interface (BMI) research is the ability to accurately extract features of brain signals and to classify them into targeted control actions with the simplest possible procedures, since the expected beneficiaries are disabled people. In this paper, a new feature extraction method combining adaptive band-pass filters and adaptive autoregressive (AAR) modelling is proposed and applied to the classification of right and left motor imagery signals extracted from the brain. The introduction of the adaptive band-pass filter improves the characterization of the autocorrelation functions of the AAR models, as it enhances and strengthens the EEG signal, which is noisy and stochastic in nature. Experimental results on the Graz BCI data set show that, with the proposed feature extraction method, LDA and SVM classifiers outperform other AAR approaches of the BCI 2003 competition in terms of the mutual information (the competition criterion) and the misclassification rate.
Abstract: Mining frequent tree patterns has many useful applications in XML mining, bioinformatics, network routing, etc. Most frequent subtree mining algorithms (e.g. FREQT, TreeMiner and CMTreeMiner) use the anti-monotone property in the candidate subtree generation phase. However, none of these algorithms has verified the correctness of this property for tree-structured data. In this research it is shown that anti-monotonicity does not generally hold when using weighted support in tree pattern discovery. As a result, tree mining algorithms based on this property would probably miss some valid frequent subtree patterns in a collection of trees. In this paper, we investigate the correctness of the anti-monotone property for the problem of weighted frequent subtree mining. In addition, we propose W3-Miner, a new algorithm for the full extraction of frequent subtrees. The experimental results confirm that W3-Miner finds some frequent subtrees that the previously proposed algorithms are not able to discover.
Abstract: The myoelectric signal (MES) is one of the biosignals used to help humans control equipment. Recent approaches to MES classification for controlling prosthetic devices with pattern recognition techniques have revealed two problems: first, the classification performance of the system starts to degrade as the number of motion classes to be classified increases; second, the additional, complicated methods used to solve the first problem increase the computational cost of a multifunction myoelectric control system. In an effort to solve these problems and to achieve a feasible design for real-time implementation with high overall accuracy, this paper presents a new method for feature extraction in MES recognition systems. The method extracts features using the Wavelet Packet Transform (WPT) applied to the MES from multiple channels, and then employs the Fuzzy c-means (FCM) algorithm to generate a measure that judges the suitability of features for classification. Finally, Principal Component Analysis (PCA) is used to reduce the size of the data before computing the classification accuracy with a multilayer perceptron neural network. The proposed system produces powerful classification results (99% accuracy) using only a small portion of the original feature set.
Abstract: The determination of sugars in foods is very significant. Their proportions, in fact, can affect the chemical and sensory quality of the matrix (e.g., sweetness, pH, total acidity, microbial stability, global acceptability) and can provide information on the food that helps optimize selected technological processes. Three stages of ripeness (green, yellow and red) of tomatoes (Lycopersicon esculentum cv. Elegance) at different harvest dates were evaluated. Fruits from all harvests were exposed to different ozone doses (0.25, 0.50 and 1 mg O3/g tomatoes) or to clean air for 5 days at 15±2 °C and 90-95% relative humidity. Fruits were then submitted for extraction and analysis one day after the end of exposure at each stage. The concentrations of glucose and fructose increased in the tomatoes subjected to ozone treatments.
Abstract: In this paper, a second-order autoregressive (AR) model is proposed to discriminate alcoholics using single-trial gamma-band Visual Evoked Potential (VEP) signals with three different classifiers: the Simplified Fuzzy ARTMAP (SFA) neural network (NN), the multilayer perceptron-backpropagation (MLP-BP) NN and Linear Discriminant (LD). Electroencephalogram (EEG) signals were recorded from alcoholic and control subjects during the presentation of visuals from the Snodgrass and Vanderwart picture set. Single-trial VEP signals were extracted from the EEG signals using elliptic filtering in the gamma-band spectral range. A second-order AR model was used because the gamma-band VEP exhibits pseudo-periodic behaviour and a second-order AR model is optimal for representing it; this circumvents the need for model-order selection criteria. Averaged discrimination errors of 2.6%, 2.8% and 11.9% were obtained with the LD, MLP-BP and SFA classifiers, respectively. The high LD discrimination accuracy shows the validity of the proposed method for discriminating between alcoholic and control subjects.
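A second-order AR fit of the kind described can be sketched via the Yule-Walker equations (a generic illustration; the coefficients and signal below are synthetic, not VEP data):

```python
import numpy as np

def ar2_yule_walker(x):
    """Fit a second-order AR model via the Yule-Walker equations.

    Returns (a1, a2) with x[n] ~ a1*x[n-1] + a2*x[n-2] + noise.
    Complex roots of the characteristic polynomial give the
    pseudo-periodic (damped oscillatory) behaviour; the pair
    (a1, a2) is the per-trial feature vector.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    # Biased autocorrelation estimates r[0], r[1], r[2].
    r = np.array([x[: len(x) - k] @ x[k:] / len(x) for k in range(3)])
    R = np.array([[r[0], r[1]], [r[1], r[0]]])
    a1, a2 = np.linalg.solve(R, r[1:])
    return a1, a2

# Toy check on a synthetic pseudo-periodic AR(2) process
# (poles at radius ~0.71, so damped oscillation).
rng = np.random.default_rng(3)
x = np.zeros(20000)
for n in range(2, len(x)):
    x[n] = 1.2 * x[n - 1] - 0.5 * x[n - 2] + rng.normal()
a1, a2 = ar2_yule_walker(x)
print(round(a1, 1), round(a2, 1))  # close to 1.2 and -0.5
```

Fixing the order at two, as the abstract argues, removes the need for AIC/BIC-style order selection on each trial.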
Abstract: A direct connection between the ElectroEncephaloGram (EEG) and the genetic information of individuals has been investigated by neurophysiologists and psychiatrists since the 1960s, and it opens a new research area in science. This paper focuses on person identification based on features extracted from the EEG, which can show a direct connection between the EEG and the genetic information of subjects. In this work the full EO EEG signal of healthy individuals is estimated by an autoregressive (AR) model and the AR parameters are extracted as features. Two methods have been proposed for constituting the feature vector: in the first, the extracted parameters of each channel are used as a feature vector in the classification step, which employs a competitive neural network; in the second, a combination of the parameters of different channels is used as a feature vector. Correct classification scores in the range of 80% to 100% reveal the potential of our approach for person classification/identification and are in agreement with previous research showing evidence that the EEG signal carries genetic information. The novelty of this work lies in the combination of AR parameters and the network type (competitive network) that we have used. A comparison between the first and the second approach implies a preference for the second one.
Abstract: In this work we present a new approach for automatic shot transition detection. Our approach is based on the analysis of Spatio-Temporal Video Slice (STVS) edges extracted from videos. The proposed approach can efficiently detect both abrupt shot transitions ('cuts') and gradual ones such as fade-in, fade-out and dissolve. Compared to other techniques, our method is distinguished by its high precision and speed. This performance is obtained by reducing the shot boundary detection problem to a simple 2D image partitioning problem.
Abstract: With the explosive increase of information published on the Web, researchers have to filter information when searching for conference-related information. To make it easier for users to search for related information, this paper uses Topic Maps and social information to implement an ontology, since an ontology can provide the formalisms and knowledge structuring required for comprehensive and transportable machine understanding of digital information. Besides enhancing the information in Topic Maps, this paper proposes a method of constructing research Topic Maps that considers social information. First, conference data are extracted from the web. Then, conference topics and the relationships between them are extracted through the proposed method. Finally, the result is visualized for users to search and browse. This paper uses an ontology containing an abundant knowledge hierarchy structure to help researchers obtain useful search results. However, most previous ontology construction methods did not take "people" into account, so this paper also analyzes the social information that helps researchers find possibilities of cooperation/combination as well as associations between research topics, and tries to offer better results.
Abstract: In non-destructive testing by radiography, a perfect knowledge of the weld defect shape is an essential step in appreciating the quality of the weld and deciding on its acceptability or rejection. Because of the complex nature of the considered images, and so that the detected defect region represents the real defect as accurately as possible, the choice of thresholding methods must be made judiciously. In this paper, performance criteria are used to conduct a comparative study of thresholding methods based on the gray-level histogram, the 2-D histogram and a locally adaptive approach for weld defect extraction in radiographic images.
Abstract: Embedding and extraction of secret information, as well as restoration of the original un-watermarked image, is highly desirable in sensitive applications like military, medical and law-enforcement imaging. This paper presents a novel reversible data-hiding method for digital images using an integer-to-integer wavelet transform and a companding technique, which can embed and recover the secret information and restore the image to its pristine state. The method takes advantage of block-based watermarking and iterative optimization of the companding threshold, which avoids histogram pre- and post-processing. Consequently, it reduces the overhead usually required in most reversible watermarking techniques and keeps the distortion between the marked and the original images small. Experimental results show that the proposed method outperforms the existing reversible data-hiding schemes reported in the literature.
Abstract: The optimal extraction condition of dried Phaseolus vulgaris powder was studied. The three independent variables are raw material concentration, shaking and centrifugal time. The dependent variables are both the yield percentage of crude extract and the alpha-amylase enzyme inhibition activity. The experimental design was based on the Box-Behnken design. The highest yield percentage of crude extract was obtained from the extraction condition at concentration of 1, 0,1, concentration of 0.15 M, extraction time of 2 hours, and separation time of 60 min. Moreover, the crude extract with the highest alpha-amylase enzyme inhibition activity occurred with the extraction condition at a concentration of 0.10 M, extraction time of 2 min, and separation time of 45 min.