Abstract: Consumption of whole flours and flours with a high
extraction rate is now recommended because of their high content of
fiber, vitamins, and minerals. Despite the nutritional benefits of
whole flour, the concentration of some undesirable components, such
as phytic acid, is higher than in white flour. In this study, the
effect of several lactic acid bacteria sourdoughs on toast bread is
investigated. Sourdoughs from lactic acid bacteria (Lb. plantarum,
Lb. reuteri) with different dough yields (DY = 250 and 300) were
prepared, incubated at 30°C for 20 hours, and then added to the
dough at replacement ratios of 10, 20, and 30%. Breads supplemented
with Lb. plantarum sourdough had lower phytic acid. A higher
sourdough replacement ratio and a higher DY caused a greater
decrease in phytic acid content. Sourdough from Lb. plantarum with
DY = 300 at 30% replacement caused the largest decrease in phytic
acid content (49.63 mg/100 g). As indicated by the panelists, Lb.
reuteri sourdough had the greatest effect on the overall quality
score of the breads. Reducing DY decreased the bread quality score.
The sensory score of toast bread was 81.71 in the samples treated
with Lb. reuteri sourdough at DY = 250 and 20% replacement.
Abstract: Purpose: To explore the use of the Curvelet transform to
extract texture features of pulmonary nodules in CT images, and of a
support vector machine (SVM) to establish a prediction model for
small solitary pulmonary nodules, in order to improve the detection
and diagnosis rate of early-stage lung cancer. Methods: 2461 benign
or malignant small solitary pulmonary nodules in CT images from 129
patients were collected. Fourteen Curvelet-transform texture
features were used as parameters to establish the SVM prediction
model. Results: Compared with other methods, using 252 texture
features as parameters to establish the prediction model is more
appropriate. The classification consistency, sensitivity, and
specificity of the model are 81.5%, 93.8%, and 38.0%, respectively.
Conclusion: Based on texture features extracted with the Curvelet
transform, the SVM prediction model is sensitive to lung cancer and
can improve the diagnosis rate of early-stage lung cancer to some
extent.
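The abstract pairs texture features with an SVM classifier but gives no implementation details; the following self-contained sketch trains a linear SVM with a Pegasos-style stochastic subgradient method on synthetic feature vectors standing in for the Curvelet features. The linear kernel, the feature dimension, and the training scheme are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM via Pegasos-style stochastic subgradient descent.
    X: (n, d) feature matrix; y: labels in {-1, +1} (assumed encoding)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)              # decaying step size
            if y[i] * (X[i] @ w + b) < 1:      # hinge-loss margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                              # only the regularizer acts
                w = (1 - eta * lam) * w
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b >= 0, 1, -1)
```

A nonlinear kernel (as an SVM on texture features might well use) would replace the inner product with a kernel evaluation; the linear case keeps the sketch short.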
Abstract: In this paper, a method to detect multiple ellipses is presented. The technique is efficient and robust against ellipses left incomplete by partial occlusion, noise, missing edges, and outliers. It is an iterative technique that finds and removes the best ellipse until no reasonable ellipse is found. At each run, the best ellipse is extracted from randomly selected edge patches, and its fitness is calculated and compared to a fitness threshold. The RANSAC algorithm is applied as the sampling process, together with Direct Least Square (DLS) fitting as the ellipse-fitting algorithm. In our experiments, the method performs very well and is robust against noise and spurious edges on both synthetic and real-world image data.
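The DLS fitting step named in the abstract is commonly implemented in the style of Fitzgibbon's constrained conic fit (an assumption here; the paper's exact variant is not given): minimize the algebraic error subject to the ellipse constraint 4AC - B^2 > 0, which reduces to a generalized eigenproblem.

```python
import numpy as np

def fit_ellipse_dls(x, y):
    """Direct least-squares ellipse fit (Fitzgibbon-style sketch).
    Returns unit-norm conic coefficients a = (A, B, C, D, E, F) of
    A x^2 + B xy + C y^2 + D x + E y + F = 0."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                       # scatter matrix
    C = np.zeros((6, 6))              # ellipse constraint matrix
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Generalized eigenproblem S a = lam C a via eigenvectors of S^-1 C;
    # S is positive definite, so the eigenvalues are real.
    _, vecs = np.linalg.eig(np.linalg.solve(S, C))
    vecs = np.real(vecs)
    # Exactly one eigenvector satisfies the ellipse constraint a^T C a > 0
    i = int(np.argmax([v @ C @ v for v in vecs.T]))
    a = vecs[:, i]
    return a / np.linalg.norm(a)
```

In a RANSAC-style loop, this fit would be run on each random sample of edge points and the fitness (e.g. inlier count) of the resulting ellipse compared to the threshold.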
Abstract: In this paper, an Arabic letter recognition system based on Artificial Neural Networks (ANNs) and statistical analysis for feature extraction is presented. The ANN is trained using the Least Mean Squares (LMS) algorithm. In the proposed system, each typed Arabic letter is represented by a matrix of binary numbers that is used as input to a simple feature extraction system, whose output, together with the input matrix, is fed to the ANN. Simulation results are provided and show that the proposed system always produces a lower Mean Squared Error (MSE) and higher success rates than the current ANN solutions.
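The LMS training rule mentioned above is the Widrow-Hoff per-sample update. As a minimal sketch for a single linear unit on binary input vectors (the paper's actual network topology and letter features are not specified here):

```python
import numpy as np

def lms_train(X, y, lr=0.05, epochs=200):
    """Widrow-Hoff LMS: for each sample, w += lr * error * input.
    X: (n, d) inputs (e.g. flattened binary letter matrices), y: targets."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            e = ti - xi @ w       # instantaneous error
            w += lr * e * xi      # LMS weight update
    return w
```

For stability the learning rate must satisfy lr * ||x||^2 < 2 for the inputs used; with 0/1 inputs of small dimension, lr = 0.05 is comfortably inside that bound.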
Abstract: Most fingerprint recognition techniques are based on minutiae matching and have been well studied. However, this technology still suffers from problems associated with the handling of poor-quality impressions. One problem besetting fingerprint matching is distortion, which changes both geometric position and orientation and leads to difficulties in establishing a match among multiple impressions acquired from the same fingertip. Marking all the minutiae accurately while rejecting false minutiae is another issue still under research. Our work combines many methods to build a minutia extractor and a minutia matcher; the combination of multiple methods comes from a wide investigation of the research literature. Some novel changes are also used in the work: segmentation using morphological operations, improved thinning, false-minutiae removal methods, minutia marking with special consideration of triple branch counting, minutia unification by decomposing a branch into three terminations, and matching in a unified x-y coordinate system after a two-step transformation.
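Minutia marking on a thinned ridge map is commonly done with the crossing-number test (a standard technique, used here as an illustrative stand-in for the paper's marker): a skeleton pixel whose 8-neighbour ring changes value once is a ridge ending, and three times is a bifurcation.

```python
import numpy as np

def crossing_number_minutiae(skel):
    """Classify pixels of a 1-pixel-wide 0/1 skeleton: crossing number
    1 = ridge ending (termination), 3 = bifurcation."""
    ring_offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                    (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise 8-ring
    endings, bifurcations = [], []
    h, w = skel.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if not skel[r, c]:
                continue
            ring = [int(skel[r + dr, c + dc]) for dr, dc in ring_offsets]
            # half the number of 0/1 transitions around the ring
            cn = sum(abs(ring[i] - ring[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations
```

The triple-branch counting and branch-decomposition steps described above would post-process the bifurcation list produced here.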
Abstract: Acoustic properties of speech have been shown to be
related to the mental state of the speaker, in particular the
symptoms of depression and remission. This paper describes a way to
distinguish depressed patients from remitted subjects based on
measurable acoustic changes in their speech. The vocal-tract-related
frequency characteristics of speech samples from female remitted and
depressed patients were analyzed via speech processing techniques
and then evaluated statistically by cross-validation with a Support
Vector Machine. Our results show the classifier's performance: a
correct-separation rate of 93% for the subject-based feature model
and 88% for the frame-based model, determined from testing on the
same speech samples collected from hospital interview sessions
between patients and psychiatrists.
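The cross-validation used above can be sketched independently of the classifier. A minimal k-fold splitter (numpy only; the fold count and grouping are assumptions, not the paper's protocol):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    over n items (subjects or frames)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

For the subject-based model one would split by subject rather than by frame, so that no speaker's frames appear in both the training and test folds.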
Abstract: An automated wood recognition system is designed to
classify tropical wood species. The wood features are extracted
using two feature extractors: the Basic Grey Level Aura Matrix
(BGLAM) technique and the Statistical Properties of Pores
Distribution (SPPD) technique. Due to the nonlinearity of the
tropical wood species separation boundaries, a pre-classification
stage is proposed, consisting of K-means clustering and kernel
discriminant analysis (KDA). Finally, a Linear Discriminant Analysis
(LDA) classifier and K-Nearest Neighbour (KNN) are implemented for
comparison purposes. The study compares the system with and without
the pre-classification stage using the KNN and LDA classifiers. The
results show that including the pre-classification stage improves
the accuracy of both the LDA and KNN classifiers by more than 12%.
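The KNN comparison classifier mentioned above fits in a few lines; a sketch assuming Euclidean distance on the extracted feature vectors and majority voting:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, X_test, k=3):
    """k-Nearest-Neighbour classifier: majority label among the k
    training points closest (Euclidean) to each test point."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all
        nearest = y_train[np.argsort(d)[:k]]      # labels of k closest
        preds.append(Counter(nearest.tolist()).most_common(1)[0][0])
    return np.array(preds)
```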
Abstract: This paper presents an evaluation of a wavelet-based
digital watermarking technique for estimating the quality of video
sequences transmitted over an Additive White Gaussian Noise (AWGN)
channel in terms of a classical objective metric, such as the Peak
Signal-to-Noise Ratio (PSNR), without the need for the original
video. In this method, a watermark is embedded into the Discrete
Wavelet Transform (DWT) domain of the original video frames using a
quantization method. The degradation of the extracted watermark can
be used to estimate the video quality in terms of PSNR with good
accuracy. We calculated the PSNR of video frames contaminated with
AWGN and compared the values with those estimated using the
DWT-based watermarking approach. The calculated and estimated
quality measures of the video frames are highly correlated,
suggesting that this method can provide a good quality measure for
video frames transmitted over an AWGN channel without the need for
the original video.
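Quantization-based embedding of the kind described above can be illustrated with quantization index modulation (QIM) on a vector of transform coefficients. This is a sketch: the paper's exact quantizer is not given, and the DWT step is omitted, with random values standing in for the wavelet coefficients.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=4.0):
    """Embed one bit per coefficient by quantizing onto the lattice
    delta*Z (bit 0) or delta*Z + delta/2 (bit 1)."""
    shift = bits * (delta / 2.0)
    return delta * np.round((coeffs - shift) / delta) + shift

def qim_extract(coeffs, delta=4.0):
    """Recover each bit as the lattice the coefficient is nearer to."""
    d0 = np.abs(coeffs - delta * np.round(coeffs / delta))
    shifted = coeffs - delta / 2.0
    d1 = np.abs(shifted - delta * np.round(shifted / delta))
    return (d1 < d0).astype(int)
```

Channel noise perturbs the embedded coefficients, and the resulting bit-error rate of the extracted watermark is what gets mapped to a PSNR estimate; each bit survives any perturbation smaller than delta/4.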
Abstract: The decision to recruit manpower in an organization
requires clear identification of the criteria (attributes) that
distinguish successful from unsuccessful performance. The choice of
appropriate attributes or criteria at different levels of an
organizational hierarchy is a multi-criteria decision problem, and
therefore multi-criteria decision making (MCDM) techniques can be
used to prioritize such attributes. The Analytic Hierarchy Process
(AHP) is one such technique, widely used for deciding among complex
criteria structures at different levels. In real applications,
however, conventional AHP still cannot reflect the human thinking
style, since precise data concerning human attributes are quite hard
to extract. Fuzzy logic offers a systematic basis for dealing with
situations that are ambiguous or not well defined. This study aims
at defining a methodology to improve the quality of prioritization
of an employee's performance measurement attributes under fuzziness.
To do so, a methodology based on the Extent Fuzzy Analytic Hierarchy
Process is proposed. Within the model, four main attributes are
defined, namely Subject knowledge and achievements, Research
aptitude, Personal qualities and strengths, and Management skills,
along with their sub-attributes. The two approaches, the
conventional AHP and the Extent Fuzzy Analytic Hierarchy Process,
have been compared on the same hierarchy structure and criteria set.
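Extent fuzzy AHP is usually computed with Chang's extent analysis; the sketch below follows that formulation as an assumption (the paper's exact variant may differ): fuzzy synthetic extents are formed from triangular-fuzzy-number (TFN) comparisons, and crisp weights come from pairwise degrees of possibility.

```python
import numpy as np

def extent_fuzzy_ahp(M):
    """Chang-style extent analysis. M: (n, n, 3) array of TFN (l, m, u)
    pairwise comparisons. Returns normalized crisp weights."""
    n = M.shape[0]
    row = M.sum(axis=1)              # TFN row sums, component-wise
    total = row.sum(axis=0)          # overall (l, m, u) sum
    # Fuzzy synthetic extent S_i = row_i * (1/u_tot, 1/m_tot, 1/l_tot)
    S = row * np.array([1.0 / total[2], 1.0 / total[1], 1.0 / total[0]])

    def V(a, b):                     # degree of possibility V(a >= b)
        if a[1] >= b[1]:
            return 1.0
        if b[0] >= a[2]:
            return 0.0
        return (b[0] - a[2]) / ((a[1] - a[2]) - (b[1] - b[0]))

    d = np.array([min(V(S[i], S[k]) for k in range(n) if k != i)
                  for i in range(n)])
    return d / d.sum()
```

A known quirk of extent analysis is that a clearly dominated criterion can receive a zero weight, which is one reason comparisons against conventional crisp AHP (as in this study) are informative.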
Abstract: This paper presents a new strategy for the identification
and classification of pathological voices using a hybrid method
based on the wavelet transform and neural networks. After speech
acquisition from a patient, the speech signal is analyzed in order
to extract acoustic parameters such as the pitch, the formants,
jitter, and shimmer. The obtained results are compared with normal
and standard values by means of a programmable database. Sounds are
collected from normal subjects and patients and then classified into
two categories. The speech database consists of several pathological
and normal voices collected from the national hospital
“Rabta-Tunis”. The speech processing algorithm is conducted in a
supervised mode, first to discriminate normal from pathological
voices and then to classify between neural and vocal pathologies
(Parkinson's, Alzheimer's, laryngeal disorders, dyslexia, ...).
Several simulation results are presented as a function of the
disease and compared with the clinical diagnosis in order to obtain
an objective evaluation of the developed tool.
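Two of the acoustic parameters named above, jitter and shimmer, are computed from the cycle-to-cycle sequences of pitch periods and peak amplitudes. A sketch of the common "local" definitions (the paper may use other variants such as RAP or APQ):

```python
import numpy as np

def jitter_local(periods):
    """Local jitter: mean absolute difference between consecutive
    pitch periods, relative to the mean period."""
    p = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(p))) / np.mean(p)

def shimmer_local(amplitudes):
    """Local shimmer: the same relative measure applied to the
    cycle peak amplitudes."""
    a = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(a))) / np.mean(a)
```

A perfectly periodic, constant-amplitude voice gives zero for both; pathological voices typically show elevated values.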
Abstract: In this paper, a method for matching image segments using
triangle-based (geometrical) regions is proposed. Triangular regions
are formed from triples of vertex points obtained from a keypoint
detector (SIFT). However, triangular regions are subject to noise
and distortion around the edges and vertices (especially at acute
angles); therefore, these triangles are expanded into
parallelogram-shaped regions. The extracted image segments inherit
an important triangle property: invariance to affine distortion.
Given two images, corresponding regions are matched by computing the
relative affine matrix, rectifying one of the regions with respect
to the other, and then calculating the similarity between the
reference and rectified regions. The experimental tests show the
efficiency and robustness of the proposed algorithm against
geometrical distortion.
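The relative affine matrix used in the matching step is fully determined by the three vertex correspondences of a triangle pair; a numpy sketch of recovering and applying it:

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve the 2x3 affine A with A @ [x, y, 1]^T mapping each src
    triangle vertex to its dst counterpart. src, dst: (3, 2) arrays."""
    src_h = np.hstack([src, np.ones((3, 1))])     # homogeneous coords
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return A.T                                    # 2x3 affine matrix

def apply_affine(A, pts):
    """Apply a 2x3 affine to an (n, 2) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return pts_h @ A.T
```

Rectifying one region with this transform aligns it with the reference region, after which a pixel-wise similarity can be computed.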
Abstract: The AAM (Active Appearance Model) has been successfully
applied to face and facial-feature localization. However, its performance is sensitive to the initial parameter values. In this paper, we propose a two-stage AAM for robust face alignment, which first fits an
inner-face AAM to the inner facial feature points and then localizes the whole face and its features by optimizing the
whole-face AAM parameters. Experiments show that the proposed two-stage AAM face alignment method is more robust to the background and the head pose than the standard
AAM-based face alignment method.
Abstract: Music Information Retrieval (MIR) and modern data mining techniques are applied to identify style markers in MIDI music for stylometric analysis and author attribution. Over 100 attributes are extracted from a library of 2830 songs and then mined using supervised-learning data mining techniques. Two attributes are identified that provide high information gain, and these are used as style markers to predict authorship. Using these style markers, the authors are able to correctly distinguish songs written by the Beatles from those that were not, with a precision and accuracy of over 98 per cent. The identification of these style markers, as well as the architecture of this research, provides a foundation for future work in musical stylometry.
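The attribute selection criterion above, information gain, is the reduction in label entropy obtained by conditioning on an attribute. A small self-contained sketch (toy labels for illustration; not the paper's data):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(attr_values, labels):
    """IG = H(labels) - sum_v p(v) * H(labels | attr = v)."""
    n = len(labels)
    cond = 0.0
    for v in set(attr_values):
        sub = [l for a, l in zip(attr_values, labels) if a == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(labels) - cond
```

Ranking the 100+ extracted attributes by this score and keeping the top two is exactly the kind of selection the abstract describes.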
Abstract: High-level, high-velocity flood flows are potentially
harmful to bridge piers, as evidenced by many toppled piers, among
which single-column piers are considered the most vulnerable.
Flood-flow characteristic parameters including the drag coefficient,
scouring, and vortex shedding are built into a pier-flood
interaction model to investigate structural safety against flood
hazards, considering the effects of local scouring, hydrodynamic
forces, and vortex-induced resonance vibrations. By extracting the
pier-flood simulation results embedded in a neural network code, two
cases of pier toppling that occurred during typhoons were
re-examined: (1) a bridge overcome by a flash flood near a
mountainside; (2) a bridge washed away in a flood across a wide
channel near the estuary. The modeling procedures and simulations
are capable of identifying the probable causes of the toppled bridge
piers during heavy floods, which include excessive pier bending
moments and resonance in structural vibrations.
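The vortex-induced resonance check in such an interaction model can be illustrated with the standard Strouhal relation, a simplified stand-in for the paper's full model (the Strouhal number and the lock-in band are illustrative assumptions):

```python
def vortex_shedding_frequency(U, D, St=0.2):
    """Shedding frequency f = St * U / D for flow speed U (m/s) and
    pier diameter D (m); St ~ 0.2 for a circular cylinder."""
    return St * U / D

def resonance_risk(U, D, f_natural, St=0.2, band=0.2):
    """Flag lock-in when the shedding frequency falls within +-band
    (here 20%) of the pier's natural frequency f_natural (Hz)."""
    f = vortex_shedding_frequency(U, D, St)
    return abs(f - f_natural) <= band * f_natural
```

When the flag is raised, the vortex-induced oscillating lift can resonate with the pier, adding to the bending moments the abstract identifies as a probable toppling cause.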
Abstract: Fossil fuels are the major source for meeting world energy
requirements, but their rapidly diminishing supply and adverse
effects on our ecological system are of major concern. Renewable
energy utilization is needed to meet future challenges, and ocean
energy is one of the most promising renewable resources.
Three-fourths of the earth's surface is covered by the oceans; this
enormous energy resource is contained in the oceans' waters, the air
above the oceans, and the land beneath them. The renewable energy of
the ocean is mainly contained in waves, ocean currents, and offshore
solar energy. Very few efforts have been made to harness this
reliable and predictable resource. Harnessing ocean energy requires
detailed knowledge of the underlying mathematical governing
equations and their analysis. With the advent of extraordinary
computational resources, it is now possible to predict wave
climatology in lab simulations. Several techniques have been
developed, mostly stemming from the numerical analysis of the
Navier-Stokes equations. This paper presents a brief overview of
such mathematical models and tools for understanding and analyzing
wave climatology. Models of the 1st, 2nd, and 3rd generations have
been developed to estimate wave characteristics and assess the power
potential. A brief overview of available wave energy technologies is
also given, and a novel concept for an on-shore wave energy
extraction method is presented at the end. The concept is based on
total energy conservation, in which the energy of the wave is
transferred to a flexible converter to increase its kinetic energy.
The squeezing action of the external pressure on the converter body
increases the velocity at the discharge section; the resulting high
velocity head can then be used for energy storage or directly for
power generation. This converter utilizes both the potential and
kinetic energy of the waves and is designed for on-shore or
near-shore application. The increased wave height at the shore due
to shoaling increases the potential energy of the waves, which is
converted to renewable energy. This approach results in an
economical wave energy converter, owing to near-shore installation
and denser waves due to shoaling, and the method is more efficient
because it taps both the potential and kinetic energy of the waves.
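For the power-potential assessment mentioned above, a standard first estimate (not the paper's model) is the deep-water wave energy flux per metre of wave crest, P = rho * g^2 * H^2 * T / (64 * pi):

```python
import math

def wave_power_per_metre(H, T, rho=1025.0, g=9.81):
    """Deep-water wave energy flux (W per metre of crest length).
    H: significant wave height (m), T: wave period (s),
    rho: seawater density (kg/m^3)."""
    return rho * g * g * H * H * T / (64.0 * math.pi)
```

For example, 2 m waves with an 8 s period carry roughly 16 kW per metre of crest, which is the kind of figure the generation models in the abstract are built to estimate from wave climatology.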
Abstract: A new technique of topological multi-scale analysis is
introduced. By performing clustering recursively to build a
hierarchy and analyzing the co-scale and intra-scale similarities,
an Iterated Function System (IFS) can be extracted from any data
set. The study of fractals shows that this method is efficient at
extracting self-similarities and can find elegant solutions to the
inverse problem of building fractals. The theoretical aspects and
practical implementations are discussed, together with example
analyses of simple fractals.
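The forward direction of the inverse problem above, rendering a fractal from an extracted IFS, can be sketched with the chaos game; the Sierpinski triangle maps serve as the example IFS (an illustration, not one of the paper's analyzed data sets):

```python
import numpy as np

def chaos_game(maps, n=2000, seed=0):
    """Iterate a point through randomly chosen affine maps (A, t);
    the iterates converge onto the IFS attractor."""
    rng = np.random.default_rng(seed)
    p = np.array([0.5, 0.5])
    pts = np.empty((n, 2))
    for i in range(n):
        A, t = maps[rng.integers(len(maps))]
        p = A @ p + t
        pts[i] = p
    return pts

# Sierpinski triangle: three contractions by 1/2 toward the corners
half = np.array([[0.5, 0.0], [0.0, 0.5]])
sierpinski = [(half, np.array([0.0, 0.0])),
              (half, np.array([0.5, 0.0])),
              (half, np.array([0.25, 0.5]))]
```

Extraction runs the other way: the recursive clustering in the abstract recovers the maps (A, t) from the self-similar structure of the point set.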
Abstract: In this paper, a model for forecasting project cost
performance is developed based on past project cost and time
performance. This study presents a probabilistic project control
concept to assure an acceptable forecast of project cost
performance. In this concept, project activities are classified into
sub-groups called control accounts. The Stochastic S-Curve
(SS-Curve) is then obtained for each sub-group, and the project
SS-Curve is obtained by summing the sub-groups' SS-Curves. In this
model, project cost uncertainties are represented through Beta
distribution functions of the activity costs required to complete
the project at selected time sections during project
accomplishment; these are extracted from a variety of sources. Based
on this model, after a given percentage of project progress, project
performance is measured via Earned Value Management to adjust the
primary cost probability distribution functions, and the future
project cost performance is then predicted using Monte Carlo
simulation.
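The Monte Carlo step can be sketched as follows: each control account's cost is drawn from a Beta distribution scaled to its (min, max) range, the draws are summed, and percentiles of the total give one point of the stochastic S-curve. The account parameters below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def simulate_project_cost(accounts, n_sims=20000, seed=0):
    """accounts: list of (alpha, beta, lo, hi) per control account.
    Returns n_sims samples of the simulated total project cost."""
    rng = np.random.default_rng(seed)
    total = np.zeros(n_sims)
    for a, b, lo, hi in accounts:
        # Beta(a, b) on [0, 1], rescaled to the account's cost range
        total += lo + (hi - lo) * rng.beta(a, b, n_sims)
    return total
```

Percentiles of the returned samples (e.g. P10/P50/P90) form the envelope of the SS-Curve at that time section; re-running after an Earned Value update simply means re-fitting the Beta parameters and simulating again.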
Abstract: Many factors affect the success of Machine Learning (ML)
on a given task, and the representation and quality of the instance
data is first and foremost. If there is much irrelevant and
redundant information, or noisy and unreliable data, then knowledge
discovery during the training phase becomes more difficult. It is
well known that data preparation and filtering steps take a
considerable amount of processing time in ML problems. Data
pre-processing includes data cleaning, normalization,
transformation, feature extraction and selection, etc., and its
product is the final training set. It would be convenient if a
single sequence of data pre-processing algorithms gave the best
performance on every data set, but this is not the case. We
therefore present the best-known algorithms for each step of data
pre-processing so that one can achieve the best performance for a
given data set.
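Two of the normalization steps listed above, sketched in numpy (min-max scaling and z-score standardization, with a guard for constant features):

```python
import numpy as np

def min_max_scale(X):
    """Rescale each feature (column) of X to the [0, 1] range."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def standardize(X):
    """Zero-mean, unit-variance scaling per feature (z-score)."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / np.where(sd > 0, sd, 1.0)
```

Which of the two is "best" depends on the data and the downstream learner, which is exactly the point the abstract makes about there being no universally optimal pre-processing sequence.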
Abstract: This paper presents a comparative analysis of a new
unsupervised PCA-based technique for steel-plate texture
segmentation aimed at defect detection. The proposed scheme, called
Variance-Based Component Analysis (VBCA), employs PCA for feature
extraction, applies a feature reduction algorithm based on the
variance of the eigenpictures, and classifies pixels as defective or
normal. While classic PCA uses a clusterer such as K-means for pixel
clustering, VBCA employs thresholding and some post-processing
operations to label pixels as defective or normal. The experimental
results show that VBCA is 12.46% more accurate and 78.85% faster
than classic PCA.
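The PCA feature-extraction and variance-based reduction steps can be sketched generically (this is plain PCA with variance-ratio truncation, not the paper's full VBCA pipeline):

```python
import numpy as np

def pca_by_variance(X, var_fraction=0.95):
    """Keep the fewest principal components that explain var_fraction
    of total variance. Returns (components, explained variance ratios)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(vals)[::-1]            # sort descending
    vals, vecs = vals[order], vecs[:, order]
    ratios = vals / vals.sum()
    k = int(np.searchsorted(np.cumsum(ratios), var_fraction) + 1)
    return vecs[:, :k], ratios
```

Projecting pixels onto the retained components and thresholding the projections is the kind of labelling step VBCA substitutes for K-means clustering.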
Abstract: Segmentation of ultrasound images is challenging due to interference from speckle noise and the fuzziness of boundaries. In this paper, a segmentation scheme using fuzzy c-means (FCM) clustering that incorporates both the intensity and texture information of the image is proposed to extract breast lesions from ultrasound images. First, the nonlinear structure tensor, which helps refine the edges detected from intensity, is used to extract the speckle texture. A spatial FCM clustering is then applied to the image feature space for segmentation. In experiments with simulated and clinical ultrasound images, spatial FCM clustering with both intensity and texture information produces more accurate results than conventional FCM or spatial FCM without texture information.
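The core FCM iteration underlying the scheme above alternates membership and centre updates; a minimal sketch on plain feature vectors (the spatial regularization and structure-tensor texture terms of the paper are omitted for brevity):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means. X: (n, f) feature vectors, c clusters, fuzzifier
    m > 1. Returns (centres (c, f), membership matrix U (c, n))."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                        # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centres = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)              # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)
    return centres, U
```

For segmentation, each pixel's feature vector would stack intensity and texture channels, and the final label is the cluster of maximum membership.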