Abstract: This paper presents the cepstral and trispectral
analysis of speech signals produced by normal men, men with
defective audition (deaf, profoundly deaf) and men affected by
tracheotomy, the trispectral analysis being based on parametric
(autoregressive, AR) methods using the fourth-order cumulant.
These analyses are used to detect and compare the pitches and
formants of the corresponding voiced sounds (the vowels \a\, \i\
and \u\). The first results appear promising: after several
experiments, there seems to be no deformation of the spectrum, as
one might have supposed at the outset. However, these pathologies
do influence the two characteristics: defective audition affects
the formants, whereas tracheotomy affects the fundamental
frequency (pitch).
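Pitch detection from the cepstrum, as used above, can be sketched on a synthetic voiced frame. This is the standard cepstral method, consistent with but not taken from the paper; the sampling rate, pitch value and search range below are illustrative assumptions.

```python
import numpy as np

# Cepstral pitch detection on a synthetic voiced frame.
fs = 8000                                    # sampling rate (assumed)
t = np.arange(2048) / fs
f0 = 150.0                                   # hypothetical pitch (Hz)
frame = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))

# Real cepstrum: inverse transform of the log magnitude spectrum.
spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))

# Search for the cepstral peak in a plausible pitch-period range.
qmin, qmax = int(fs / 400), int(fs / 60)     # 60-400 Hz
peak = qmin + np.argmax(cepstrum[qmin:qmax])
print(f"estimated pitch: {fs / peak:.1f} Hz")
```

The peak quefrency corresponds to the pitch period; formants would instead be read from the low-quefrency (spectral envelope) part of the cepstrum.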
Abstract: This paper describes an experimental investigation of
the drying behavior and conditions of rosehip in a convective
cyclone-type dryer. Drying experiments were conducted at air inlet
temperatures of 50, 60 and 70 °C and air velocities of 0.5, 1 and
1.5 m/s. The parameter values obtained from the experiments were
fitted to the Newton mathematical model, and the resulting Newton
drying model showed good agreement with the experimental data. In
conclusion, it was found that (i) temperature has the major effect
on the drying process, (ii) air velocity has little effect on the
drying of rosehip, and (iii) the vitamin C content is observed to
change with temperature, moisture, drying time and flow type; its
change ratio is found to be in the range 0.70-0.74.
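The Newton (Lewis) thin-layer model referred to above is MR(t) = exp(-k·t), with MR the dimensionless moisture ratio and k the drying constant. A minimal fitting sketch, on made-up data (the paper's measurements are not reproduced in the abstract):

```python
import numpy as np

# Newton (Lewis) thin-layer drying model: MR(t) = exp(-k * t).
# Drying-time/moisture-ratio data below are illustrative only.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])      # drying time, h
rng = np.random.default_rng(0)
mr = np.exp(-0.45 * t) * (1 + rng.normal(0, 0.01, t.size))

# One simple way to fit k: linear regression on log(MR) vs t.
k_fit = -np.polyfit(t, np.log(mr), 1)[0]
print(f"fitted drying constant k = {k_fit:.3f} 1/h")
```

In practice k is fitted separately for each air temperature and velocity, which is how the temperature dependence noted in the abstract would show up.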
Abstract: In this paper, we propose a novel approach for image
segmentation via fuzzification of the Rényi Entropy of Generalized
Distributions (REGD). The fuzzy REGD is used to precisely measure
the structural information of an image and to locate the optimal
threshold desired by segmentation. The proposed approach draws
upon the postulate that the optimal threshold concurs with the
maximum information content of the distribution. The contributions
of the paper are as follows: first, the fuzzy REGD is introduced as
a measure of the spatial structure of an image; then, we propose an
efficient entropic segmentation approach using the fuzzy REGD.
Although the proposed approach belongs to the family of entropic
segmentation approaches, which are commonly applied to grayscale
images, it is adapted to be viable for segmenting color images as
well. Lastly, diverse experiments on real images are carried out,
showing the superior performance of the proposed method.
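The entropic criterion behind such methods can be illustrated with a plain (non-fuzzy, single-threshold) Rényi-entropy thresholding sketch: pick the gray level maximizing the sum of Rényi entropies of the background and foreground distributions. The fuzzy REGD measure of the paper is more elaborate; this only shows the underlying principle.

```python
import numpy as np

def renyi_threshold(hist, alpha=0.5):
    # Maximize H_alpha(background) + H_alpha(foreground), where
    # H_alpha(p) = log(sum(p_i**alpha)) / (1 - alpha).
    p = hist / hist.sum()
    best_t, best_score = 1, -np.inf
    for t in range(1, len(p) - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = np.log(((p[:t] / w0) ** alpha).sum()) / (1 - alpha)
        h1 = np.log(((p[t:] / w1) ** alpha).sum()) / (1 - alpha)
        if h0 + h1 > best_score:
            best_score, best_t = h0 + h1, t
    return best_t

# Synthetic bimodal histogram with modes near gray levels 60 and 190.
x = np.arange(256)
hist = np.exp(-((x - 60) / 15.0) ** 2) + np.exp(-((x - 190) / 15.0) ** 2)
thr = renyi_threshold(hist)
print(thr)
```

On a well-separated bimodal histogram the criterion places the threshold between the two modes.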
Abstract: The aim of this study is to investigate the kinematics of undulatory swimming of an elongated fish swimming against a flow. We performed experiments on the European eel Anguilla anguilla swimming in a recirculating hydrodynamic tank with the flow velocity fixed at 0.2 m/s. We find that the overall undulating shape of the eel body changes when it swims obliquely to the flow direction, compared with the axial undulation shape. We examine these kinematics and propose a general equation describing the lateral position of the undulating body that takes the direction of the eel's swimming into account.
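A common kinematic parameterization of anguilliform swimming, which the equation mentioned above generalizes, is a traveling wave y(x, t) = A(x)·sin(2π(x/λ − f·t)) with an amplitude envelope growing toward the tail. The sketch below is this textbook model, not the paper's equation (which is not reproduced in the abstract); body length, wavelength, frequency and envelope are assumed values.

```python
import numpy as np

def lateral_position(x, t, body_len=0.4, lam=0.25, f=2.0, a_tail=0.05):
    # Lateral displacement of the body midline at position x, time t.
    A = a_tail * (x / body_len)      # linear amplitude envelope (assumed)
    return A * np.sin(2 * np.pi * (x / lam - f * t))

x = np.linspace(0.0, 0.4, 100)       # positions along the body (m)
y = lateral_position(x, t=0.1)
print(y.min(), y.max())
```

Swimming obliquely to the flow would be modeled by modifying A(x) and adding a direction-dependent term, which is what the paper's general equation addresses.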
Abstract: An ontology is widely used in many kinds of applications as a knowledge representation tool for domain knowledge. However, even when an ontology schema is well prepared by domain experts, it is tedious and cost-intensive to add instances to the ontology. The most reliable and trustworthy way to add instances to the ontology is to gather them from tables in related Web pages. In automatically populating instances, the primary task is to find the most appropriate concept among all possible concepts within the ontology for a given table. This paper proposes a novel method for this problem by defining the similarity between the table and a concept using the overlap of their properties. In a series of experiments, the proposed method achieves an accuracy of 76.98%. This implies that the proposed method is a plausible way to populate an ontology automatically from Web tables.
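The property-overlap idea above can be sketched as a simple Jaccard-style score between a table's column labels and a concept's property names. The exact similarity measure of the paper may differ; the concept/property data below are hypothetical.

```python
def concept_similarity(table_columns, concept_properties):
    # Overlap of column labels with property names (Jaccard variant).
    cols = {c.lower() for c in table_columns}
    props = {p.lower() for p in concept_properties}
    if not cols or not props:
        return 0.0
    return len(cols & props) / len(cols | props)

# Hypothetical Web table and candidate ontology concepts.
table = ["Name", "Director", "Year"]
concepts = {
    "Film":   ["name", "director", "year", "genre"],
    "Person": ["name", "birthdate", "nationality"],
}
best = max(concepts, key=lambda c: concept_similarity(table, concepts[c]))
print(best)
```

The table is then attached to the highest-scoring concept, and its rows become instances of that concept.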
Abstract: In this paper a combined feature selection method is
proposed which takes advantage of sample domain filtering,
resampling and feature subset evaluation methods to reduce the
dimensionality of huge datasets and select reliable features. This
method utilizes both the feature space and the sample domain to
improve the process of feature selection, and uses a combination of
Chi-squared and Consistency attribute evaluation methods to find
reliable features. The method consists of two phases. The first
phase filters and resamples the sample domain, and the second phase
adopts a hybrid procedure to find the optimal feature space by
applying the Chi-squared and Consistency subset evaluation methods
with genetic search. Experiments on datasets of various sizes from
the UCI Machine Learning Repository show that the performance of
five classifiers (Naïve Bayes, Logistic, Multilayer Perceptron,
Best-First Decision Tree and JRip) improves simultaneously and the
classification error of these classifiers decreases considerably.
The experiments also show that this method outperforms other
feature selection methods.
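One ingredient of the hybrid evaluation above, the Chi-squared attribute score, can be sketched directly from a contingency table; a feature that tracks the class scores far higher than an irrelevant one. The data below are synthetic, for illustration only.

```python
import numpy as np

def chi_squared(feature, labels):
    # Chi-squared statistic between a discrete feature and class labels.
    f_vals, l_vals = np.unique(feature), np.unique(labels)
    obs = np.array([[np.sum((feature == f) & (labels == l)) for l in l_vals]
                    for f in f_vals], dtype=float)
    exp = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return ((obs - exp) ** 2 / exp).sum()

# Synthetic data: feature `a` mostly agrees with the class, `b` is noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
a = y ^ (rng.random(200) < 0.1)          # 10% label-flip noise
b = rng.integers(0, 2, 200)              # independent of y
print(chi_squared(a, y), chi_squared(b, y))
```

Ranking features by this score, then refining the subset with a consistency criterion and genetic search, is the overall structure the abstract describes.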
Abstract: We have developed an energy-based approach for identifying the binding sites and residues important for binding in protein-protein complexes. We found that residues and residue pairs with charged and aromatic side chains are important for binding; these residues tend to form cation-π, electrostatic and aromatic interactions. Our observations have been verified against the experimental binding specificity of protein-protein complexes and show good agreement with experiment. The analysis of surrounding hydrophobicity reveals that binding residues are less hydrophobic than non-binding ones, which suggests that the hydrophobic core is important for folding and stability, whereas surface-seeking residues play a critical role in binding. Further, the propensity of residues in the binding sites of receptors and ligands, the number of medium- and long-range contacts, and the influence of neighboring residues are discussed.
Abstract: The batch nature of standard kernel principal component analysis (KPCA) limits its use in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for online extraction of the kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and captures the KPC variation caused by new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.
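The batch baseline whose limitations motivate the paper can be sketched in a few lines: center the kernel matrix in feature space and eigendecompose it. The recursive update of the paper replaces this full decomposition; the RBF kernel and parameters below are assumptions for illustration.

```python
import numpy as np

def batch_kpca(X, gamma=1.0, n_components=2):
    # Plain batch KPCA with an RBF kernel (the O(n^3) baseline).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    w, v = np.linalg.eigh(Kc)                    # ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_components]
    return w[idx], v[:, idx]

rng = np.random.default_rng(0)
vals, vecs = batch_kpca(rng.normal(size=(30, 3)))
print(vals)
```

Every new sample forces this whole computation to be redone, which is exactly the cost the recursive eigen-decomposition in the paper avoids.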
Abstract: The POS (also called DGPS/IMU) technique can obtain the exterior orientation elements of aerial photographs, so triangulation and DLG production using POS can save a large number of ground control points (GCPs), improving the production efficiency of DLG and reducing the cost of collecting GCPs. This paper mainly investigates the GCP distribution required when applying the POS technique to the production of 1:10 000 scale DLG. We designed 23 ground control point distribution schemes and used the integrated sensor orientation method to perform the triangulation experiments; based on the triangulation results, we produced a map at the 1:10 000 scale and tested its accuracy. From the experiments and research above, this paper puts forward appropriate GCP distribution schemes and lays the groundwork for the application of the POS technique to photogrammetric 4D data production.
Abstract: This paper proposes a new approach to the
problem of real-time face detection. The proposed method combines
the primitive Haar-like feature with a variance value to construct
a new feature, the so-called Variance-based Haar-like feature. A
face in an image can be represented with a small number of these
new features. We used SVM instead of AdaBoost for training and
classification. We built a database containing 5,000 face samples
and 10,000 non-face samples extracted from real images for learning
purposes. The 5,000 face samples cover a wide variety of lighting
conditions. Experiments showed that a face detection system using
the Variance-based Haar-like feature and SVM can be much more
efficient than one using the primitive Haar-like feature and
AdaBoost. We tested our method on two face databases and one
non-face database. We obtained a correct detection rate of 96.17%
on the YaleB face database, which is 4.21% higher than that
obtained using the primitive Haar-like feature and AdaBoost.
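The variance of any rectangular window, a building block of the variance-based feature above, can be computed in constant time from two integral images (of the pixels and of their squares). This is the standard Viola-Jones bookkeeping, sketched here; the paper's exact feature definition may combine it differently.

```python
import numpy as np

def integral_images(img):
    # Integral images of pixel values and of their squares.
    img = img.astype(np.float64)
    return img.cumsum(0).cumsum(1), (img ** 2).cumsum(0).cumsum(1)

def rect_sum(ii, r0, c0, r1, c1):
    # Sum over the inclusive rectangle [r0..r1] x [c0..c1] in O(1).
    s = ii[r1, c1]
    if r0 > 0: s -= ii[r0 - 1, c1]
    if c0 > 0: s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0: s += ii[r0 - 1, c0 - 1]
    return s

def rect_variance(ii, ii2, r0, c0, r1, c1):
    # var = E[x^2] - E[x]^2, both read off the integral images.
    n = (r1 - r0 + 1) * (c1 - c0 + 1)
    mean = rect_sum(ii, r0, c0, r1, c1) / n
    return rect_sum(ii2, r0, c0, r1, c1) / n - mean ** 2

img = np.arange(16, dtype=np.float64).reshape(4, 4)
ii, ii2 = integral_images(img)
print(rect_variance(ii, ii2, 0, 0, 3, 3))
```

Because both the Haar-like response and the window variance come from integral images, the combined feature keeps the O(1)-per-window cost needed for real-time scanning.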
Abstract: The rapid development of new technologies and the
appearance of ever more sophisticated open communication systems
create a new challenge: protecting digital content from piracy.
Digital watermarking is a recent research axis and a technique
suggested as a solution to these problems. It consists in inserting
identification information (a watermark) into digital data (audio,
video, images, databases, ...) in an invisible and indelible
manner, in such a way as not to degrade the original medium's
quality. Moreover, one must be able to correctly extract the
watermark despite deterioration of the watermarked medium (i.e.
attacks). In this paper we propose a system for watermarking
satellite images. We chose to embed the watermark in the frequency
domain, specifically the discrete wavelet transform (DWT). We
applied our algorithm to satellite images of central Tunisia. The
experiments show satisfying results. In addition, our algorithm
exhibited strong resistance to various attacks, notably compression
(JPEG, JPEG2000), filtering, histogram manipulation and geometric
distortions such as rotation, cropping and scaling.
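DWT-domain embedding of the kind described above can be sketched with a one-level Haar transform: add ±α to a detail band, invert the transform, and recover the bits by comparing bands. The paper's actual wavelet, decomposition level, band choice and embedding rule are not specified in the abstract; everything below is an illustrative stand-in.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar DWT: returns (LL, HL, LH, HH) subbands.
    a = (img[0::2] + img[1::2]) / 2
    d = (img[0::2] - img[1::2]) / 2
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def haar_idwt2(LL, HL, LH, HH):
    # Exact inverse of haar_dwt2.
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + HL, LL - HL
    d[:, 0::2], d[:, 1::2] = LH + HH, LH - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def embed(img, bits, alpha=2.0):
    # Map bits {0,1} to {-1,+1}, add alpha times that to the HL band.
    LL, HL, LH, HH = haar_dwt2(img)
    return haar_idwt2(LL, HL + alpha * (2 * bits - 1), LH, HH)

def extract(marked, original):
    # Non-blind extraction: compare HL bands of marked and original.
    return (haar_dwt2(marked)[1] - haar_dwt2(original)[1] > 0).astype(int)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (8, 8))
bits = rng.integers(0, 2, (4, 4))
recovered = extract(embed(img, bits), img)
print(np.array_equal(recovered, bits))
```

Robustness to the attacks listed in the abstract (compression, filtering, geometric distortions) is what distinguishes a practical scheme from this noiseless sketch.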
Abstract: Effective cooling of electronic equipment has emerged
as a challenging and constraining problem of the new century. In the
present work the feasibility and effectiveness of jet impingement
cooling on electronics were investigated numerically and
experimentally. Studies have been conducted to see the effect of the
geometrical parameters such as jet diameter (D), jet to target
spacing (Z) and ratio of jet spacing to jet diameter (Z/D) on the heat
transfer characteristics. The values of Reynolds numbers considered
are in the range 7000 to 42000. The results obtained from the
numerical studies are validated by conducting experiments. From the
studies it is found that the optimum value of the Z/D ratio is 5.
For a given Reynolds number, the Nusselt number increases by about
28% when the nozzle diameter is increased from 1 mm to 2 mm.
Correlations for the Nusselt number in terms of the Reynolds number
are proposed; these are valid for air as the cooling medium.
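Impingement correlations of the kind proposed above are commonly cast in a power-law form, Nu = C·Re^m·Pr^(1/3). The coefficients below are placeholders, not the paper's fitted values (which the abstract does not give); the sketch only shows how such a correlation is evaluated.

```python
# Illustrative power-law Nusselt correlation for jet impingement.
# C and m are made-up placeholders; Pr = 0.71 is typical for air.
def nusselt(re, pr=0.71, c=0.2, m=0.6):
    return c * re ** m * pr ** (1 / 3)

# Evaluate over the Reynolds number range studied (7000 to 42000).
print(nusselt(7000), nusselt(42000))
```

With m > 0 the correlation reproduces the qualitative trend in the abstract: the Nusselt number rises with the Reynolds number.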
Abstract: A double module hollow fiber supported liquid
membrane (HFSLM) was applied to selectively separate lead and
mercury ions from dilute synthetic produced water. The experiments
investigated several variables: the type of extractant (D2EHPA,
Cyanex 471, Aliquat 336, and TOA), the concentration of the
selected extractant, and the operating time. The results clearly
showed that the double module HFSLM could selectively separate
Pb(II) and Hg(II) from feed solution at very low concentrations, to
below the regulatory discharge limits of 0.2 and 0.005 mg/L issued
by the Ministry of Industry and the Ministry of Natural Resources
and Environment, Thailand. The highest extractions of lead and
mercury ions from synthetic produced water were 96% and 100%,
respectively, using 0.03 M D2EHPA and 0.06 M Aliquat 336 as the
extractants for the first and second modules.
Abstract: SoftBoost is a recently presented boosting algorithm,
which trades off the size of the achieved classification margin
against generalization performance. This paper presents a
performance evaluation of the SoftBoost algorithm on the generic
object recognition problem. An appearance-based generic object
recognition model is used. The evaluation experiments are performed
on a difficult object recognition benchmark. An assessment with
respect to different degrees of label noise, as well as a
comparison to the well-known AdaBoost algorithm, is performed. The
obtained results reveal that SoftBoost should be preferred in cases
where the training data are known to have a high degree of noise;
otherwise, AdaBoost can achieve better performance.
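The AdaBoost baseline of the comparison above can be sketched with exhaustive decision stumps. SoftBoost itself, which caps example weights to tolerate label noise, is not reproduced here; the data are synthetic.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=20):
    # Minimal AdaBoost with threshold stumps; y must be in {-1, +1}.
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):                      # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # reweight hard examples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
acc = (predict(adaboost_stumps(X, y), X) == y).mean()
print(acc)
```

The unbounded exponential reweighting visible in the update is exactly why AdaBoost over-focuses on mislabeled points, the failure mode SoftBoost's capped weights address.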
Abstract: Society has grown to rely on Internet services, and the
number of Internet users increases every day. As more and more
users become connected to the network, the window of opportunity
for malicious users to do damage becomes ever greater and more
lucrative. The objective of this paper is to incorporate different
techniques into a classifier system to detect and classify
intrusions from normal network packets. Among several techniques,
the Steady State Genetic-based Machine Learning Algorithm
(SSGBML) is used to detect intrusions. The Steady State Genetic
Algorithm (SSGA), Simple Genetic Algorithm (SGA), Modified Genetic
Algorithm and Zeroth Level Classifier System are investigated in
this research. SSGA is used as the discovery mechanism instead of
SGA, since SGA replaces all old rules with newly produced rules,
preventing good old rules from participating in the next rule
generation. The Zeroth Level Classifier System plays the role of
detector, matching incoming environment messages against
classifiers to determine whether the current message is normal or
an intrusion, and receiving feedback from the environment. Finally,
in order to attain the best results, the modified SSGA enhances our
discovery engine by using fuzzy logic to optimize the crossover and
mutation probabilities. The experiments and evaluations of the
proposed method were performed on the KDD 99 intrusion detection
dataset.
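The fuzzy adaptation of the genetic operators mentioned above can be sketched with a single triangular membership rule: raise the mutation probability when population diversity is low. The membership shape and probability bounds are illustrative assumptions, not the paper's rule base.

```python
# Minimal fuzzy-style adaptation of the mutation probability:
# low diversity -> membership of "low" near 1 -> mutation near p_max.
def fuzzy_mutation_rate(diversity, p_min=0.01, p_max=0.2):
    low = max(0.0, 1.0 - diversity / 0.5)   # triangular membership (assumed)
    return p_min + (p_max - p_min) * low

print(fuzzy_mutation_rate(0.1), fuzzy_mutation_rate(0.9))
```

A full implementation would fuzzify several inputs (diversity, fitness stagnation) and defuzzify over a rule base to set both crossover and mutation probabilities each generation.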
Abstract: The aim of this study is to compare the effect of ultrasonic pre-treatment on the removal of heavy metals (iron, zinc and copper) from Acid Mine Drainage (AMD) by Denver cell flotation. Synthetic AMD and individual metal solutions were used in the initial experiments to optimise the process conditions for real AMD. Three different process methods were tested for every sample: ultrasound treatment followed by the Denver flotation cell, the Denver flotation cell alone, and ultrasonic treatment run simultaneously with the Denver flotation cell. Precipitation of the metal solutions using sodium hydroxide (NaOH) and application of the optimum frother dosage followed by flotation significantly reduced the metal content of the AMD.
Abstract: The turbulent mixing of coolant streams of different
temperature and density can cause severe temperature fluctuations in
piping systems in nuclear reactors. In certain periodic contraction
cycles these conditions lead to thermal fatigue. The resulting aging
effect prompts investigation into how the mixing of flows over a sharp
temperature/density interface evolves. To study the fundamental
turbulent mixing phenomena in the presence of density gradients,
isokinetic (shear-free) mixing experiments are performed in a
square channel with Reynolds numbers ranging from 2,500 to 60,000.
Sucrose is used to create the density difference. A Wire Mesh
Sensor (WMS) is used to determine the concentration map of the flow
in the cross section. The mean interface width as a function of
velocity, density difference and distance from the mixing point is
analyzed using traditional methods adopted from
atmospheric/oceanic stratification analyses. A definition of the
mixing layer thickness more appropriate to thermal fatigue and based
on mixedness is devised. This definition shows that the thermal
fatigue risk assessed using simple mixing layer growth can be
misleading and why an approach that separates the effects of large
scale (turbulent) and small scale (molecular) mixing is necessary.
Abstract: In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies a perceptual weighting to the wavelet coefficients before SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality around a given fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates properties of the human visual system (HVS) and plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception; thus, our coder weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed from 1) foveation masking, which removes or reduces high frequencies in peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder achieves very good performance in terms of quality measurement.
Abstract: The increased number of automobiles in recent years
has resulted in great demand for fossil fuel. This has led to the
development of automobiles using alternative fuels, which include
gaseous fuels, biofuels and vegetable oils. Energy from biomass,
and more specifically bio-diesel, is one of the opportunities that
could cover the future shortage of fossil fuel. Biomass in the form
of cashew nut shell represents a new and abundant energy source in
India. The bio-fuel derived from cashew nut shell oil and its
blends with diesel are promising alternative fuels for diesel
engines. In this work, pyrolysed Cashew Nut Shell Liquid
(CNSL)-Diesel Blends (CDB) were used to run a Direct Injection (DI)
diesel engine. The experiments were conducted with various blends
of CNSL and diesel, namely B20, B40, B60, B80 and B100, and the
results were compared with neat diesel operation. The brake thermal
efficiency decreased for the blends of CNSL and diesel, except for
the lower blend B20, whose brake thermal efficiency is close to
that of diesel fuel. The emission levels of all the CNSL-diesel
blends increased compared to neat diesel. The higher viscosity and
lower volatility of CNSL lead to poor mixture formation and hence
lower brake thermal efficiency and higher emission levels. The
higher emission levels can be reduced by adding suitable additives
and oxygenates to the CNSL-diesel blends.
Abstract: In this paper, a tooth shape optimization method for
cogging torque reduction in Permanent Magnet (PM) motors is
developed by using the Reduced Basis Technique (RBT) coupled with
Finite Element Analysis (FEA) and Design of Experiments (DOE)
methods. The primary objective of the method is to reduce the
enormous number of design variables required to define the tooth
shape. In RBT, the shape is a weighted combination of several basis
shapes. The
aim of the method is to find the best combination using the weights
for each tooth shape as the design variables. A multi-level design
process is developed to find suitable basis shapes or trial shapes at
each level that can be used in the reduced basis technique. Each level
is treated as a separate optimization problem until the required
objective, minimum cogging torque, is achieved. The process is
started with geometrically simple basis shapes that are defined by
their shape coordinates. The Taguchi method of experimental design
is used to build the approximation model and to perform the
optimization. This method is demonstrated on the tooth shape
optimization of an 8-pole/12-slot PM motor.
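The core reduced-basis idea above, searching over a few weights instead of every shape coordinate, can be sketched directly. The basis profiles and weights below are made-up placeholders; in the paper the weighted shape would be evaluated by FEA for cogging torque.

```python
import numpy as np

# Reduced basis: a candidate tooth profile is a weighted combination
# of a few basis shapes, so the design variables are the 3 weights
# rather than the 5 (or many more) profile ordinates.
basis = np.array([
    [0.0, 0.2, 0.4, 0.2, 0.0],   # basis shape 1 (profile ordinates)
    [0.0, 0.1, 0.5, 0.1, 0.0],   # basis shape 2
    [0.0, 0.3, 0.3, 0.3, 0.0],   # basis shape 3
])
weights = np.array([0.5, 0.3, 0.2])      # design variables
shape = weights @ basis                  # candidate tooth profile
print(shape)
```

The multi-level process in the abstract then replaces the basis set at each level with better trial shapes, keeping the number of design variables small throughout.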