Abstract: A landslide susceptibility map delineates potential
zones of landslide occurrence. Previous works have applied
multivariate methods and neural networks to landslide
susceptibility mapping. This study proposes a new approach that
integrates a decision tree model with a spatial cluster statistic
for assessing landslide susceptibility spatially. A total of 2057
landslide cells were digitized to develop the landslide decision
tree model, in which the relationships between landslides and
instability factors are represented explicitly as tree graphs.
The local Getis-Ord statistic was then used to cluster cells with
high landslide probability, and its output was classified to
create a map of landslide susceptibility zones. The map was
validated using new landslide data comprising 482 cells;
validation shows an accuracy of 86.1% in predicting new landslide
occurrences, indicating that the proposed approach is useful for
improving landslide susceptibility mapping.
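The local Getis-Ord Gi* statistic referred to above can be sketched on a gridded probability surface; this is a minimal illustration assuming binary weights over a square moving window (the paper's actual weighting scheme, grid and threshold are not specified here):

```python
import numpy as np

def getis_ord_gi_star(grid, radius=1):
    """Local Getis-Ord Gi* z-scores over a 2D grid of values.

    Each cell's neighbourhood is the (2*radius+1)^2 window around it,
    with binary weights and the cell itself included (as Gi* requires).
    """
    h, w = grid.shape
    n = grid.size
    x_bar = grid.mean()
    s = np.sqrt((grid ** 2).sum() / n - x_bar ** 2)

    # Windowed sums of values and of weights via zero padding;
    # border cells simply have smaller neighbourhoods.
    pad_x = np.pad(grid.astype(float), radius)
    pad_1 = np.pad(np.ones((h, w)), radius)
    win_sum = np.zeros((h, w))
    win_wts = np.zeros((h, w))
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            win_sum += pad_x[radius + di:radius + di + h,
                             radius + dj:radius + dj + w]
            win_wts += pad_1[radius + di:radius + di + h,
                             radius + dj:radius + dj + w]

    # Gi* = (sum_j w_ij x_j - x_bar W_i) / (s * sqrt((n W_i - W_i^2)/(n-1)))
    denom = s * np.sqrt((n * win_wts - win_wts ** 2) / (n - 1))
    return (win_sum - x_bar * win_wts) / denom

# Cells with z above a chosen threshold (e.g. 1.96) would then be
# grouped into high-susceptibility clusters.
prob = np.zeros((10, 10))
prob[4:7, 4:7] = 1.0          # a synthetic hot spot of high probability
z = getis_ord_gi_star(prob)
```

Classifying the resulting z-surface into bands is what yields the susceptibility-zone map described in the abstract.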
Abstract: In this study, an intelligent fuzzy input estimator is
used to estimate the input force of a rigid-bar structural
system. The method consists of two main parts: a fuzzy Kalman
filter without the input term and a fuzzy weighting recursive
least-squares estimator. The practicability and accuracy of the
proposed method were verified with numerical simulations in which
the input forces of a rigid-bar structural system were estimated
from the output responses. To examine the accuracy of the
proposed method, the rigid-bar structural system is subjected to
periodic sinusoidal dynamic loading. The excellent performance of
the estimator is demonstrated by comparison with the use of
different weighting functions and an improper initial process
noise covariance. The estimated results agree well with the true
values in all cases tested.
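The two-stage structure described above (a Kalman filter that omits the input term, followed by a weighted recursive least-squares input estimator acting on the innovations) can be illustrated for a scalar system. The fuzzy weighting logic is replaced here by a constant factor, and all system values are invented for the illustration:

```python
import numpy as np

# Scalar system x[k+1] = a*x[k] + b*u[k], measurement z[k] = x[k] + v[k].
# All values below are illustrative only.
a, b = 0.9, 1.0
Q, R = 1e-4, 1e-2
gamma = 0.95   # constant RLS weighting factor (stands in for the
               # fuzzy weighting of the paper)

def estimate_input(z_seq):
    x_hat, P = 0.0, 1.0              # Kalman filter state (no input term)
    M, Pb, u_hat = 0.0, 10.0, 0.0    # sensitivity and RLS estimator state
    u_hist = []
    for z in z_seq:
        # --- Kalman filter without the input term ---
        x_bar = a * x_hat
        P_bar = a * P * a + Q
        s = P_bar + R                # innovation variance
        K = P_bar / s
        nu = z - x_bar               # innovation carries the input signature
        x_hat = x_bar + K * nu
        P = (1.0 - K) * P_bar
        # --- recursive least-squares input estimator ---
        Bs = (a * M + 1.0) * b       # sensitivity of innovation to input
        M = (1.0 - K) * (a * M + 1.0)
        Pb_g = Pb / gamma
        Kb = Pb_g * Bs / (Bs * Pb_g * Bs + s)
        Pb = (1.0 - Kb * Bs) * Pb_g
        u_hat = u_hat + Kb * (nu - Bs * u_hat)
        u_hist.append(u_hat)
    return u_hist

# Noise-free simulation with a constant input force of 2.0.
u_true, x = 2.0, 0.0
z_seq = []
for _ in range(100):
    x = a * x + b * u_true
    z_seq.append(x)
u_hist = estimate_input(z_seq)
```

In this sketch the input estimate converges to the true force because the filter's innovation sequence retains the effect of the unmodeled input, which is exactly the property the estimator exploits.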
Abstract: This paper presents dynamic voltage collapse prediction on an actual power system using support vector machines (SVMs).
Dynamic voltage collapse prediction is first determined based on the PTSI calculated from information in dynamic simulation output. Simulations were carried out on a practical 87-bus test system, considering load increase as the contingency. The data collected from the time-domain simulation are then used as input to the SVM, in which support vector regression serves as a predictor of the
dynamic voltage collapse indices of the power system. To reduce training time and improve the accuracy of the SVM, the kernel function type and kernel parameters are considered. To verify the
effectiveness of the proposed SVM method, its performance is compared with a multilayer perceptron neural network (MLPNN). Studies show that the SVM gives faster and more accurate results for dynamic voltage collapse prediction than the MLPNN.
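The support-vector-regression step can be illustrated with scikit-learn's SVR on synthetic data; the paper's PTSI inputs and 87-bus simulation data are not reproduced here, and the kernel settings below are illustrative choices:

```python
# Minimal sketch of support vector regression as a nonlinear predictor,
# using scikit-learn's SVR on an invented smooth target function.
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in: a smooth nonlinear index as a function of loading.
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

# Kernel type and parameters govern training time and accuracy,
# which is the tuning point the abstract raises.
model = SVR(kernel="rbf", C=100.0, gamma="scale", epsilon=0.01)
model.fit(X, y)

pred = model.predict(X)
mae = np.mean(np.abs(pred - y))
```

In practice the kernel choice (RBF vs. polynomial vs. linear) and its parameters would be selected by cross-validation on the simulation data.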
Abstract: Proteomics is one of the largest areas of research in
bioinformatics and medical science. An ambitious goal of
proteomics is to elucidate the structure, interactions and
functions of all proteins within cells and organisms. Predicting
Protein-Protein Interactions (PPIs) is one of the crucial and
decisive problems in current research. Genomic data offer a great
opportunity, and at the same time many challenges, for the
identification of these interactions, and many methods have
already been proposed in this regard. For in silico
identification, most methods require both positive and negative
examples of protein interaction, and the quality of these
examples is crucial for the final prediction accuracy. Positive
examples are relatively easy to obtain from well-known databases,
but the generation of negative examples is not a trivial task.
Current PPI identification methods generate negative examples
based on assumptions that are likely to affect their prediction
accuracy. Hence, if more reliable negative examples were used,
PPI prediction methods could achieve even higher accuracy.
Focusing on this issue, a graph-based negative example generation
method is proposed that is simpler and more accurate than
existing approaches. An interaction graph of the protein
sequences is created, with the basic assumption that the longer
the shortest path between two protein sequences in the
interaction graph, the lower the likelihood of their interaction.
A well-established PPI detection algorithm is employed with our
negative examples, and in most cases it improves accuracy by more
than 10% compared with the negative-pair selection method of the
original work.
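The negative-example rule described above (pairs far apart in the interaction graph are unlikely to interact) can be sketched with plain breadth-first search; the distance threshold is an illustrative parameter, not necessarily the paper's value:

```python
from collections import deque
from itertools import combinations

def bfs_distances(graph, src):
    """Shortest-path lengths from src in an unweighted interaction graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def negative_pairs(graph, min_dist=3):
    """Protein pairs whose shortest path is at least min_dist edges long
    are taken as negative (non-interacting) examples."""
    pairs = []
    for a, b in combinations(sorted(graph), 2):
        d = bfs_distances(graph, a).get(b)
        if d is not None and d >= min_dist:
            pairs.append((a, b))
    return pairs

# Toy interaction graph: a chain A - B - C - D - E.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
```

For the chain above, `negative_pairs(graph, 3)` selects the pairs separated by three or more edges, e.g. (A, D), (A, E) and (B, E).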
Abstract: A reduced-order modeling approach for natural
gas transient flow in pipelines is presented. The Euler
equations are considered as the governing equations and
solved numerically using the implicit Steger-Warming flux
vector splitting method. Next, the linearized form of the
equations is derived and the corresponding eigensystem is
obtained. Then, a few dominant flow eigenmodes are used to
construct an efficient reduced-order model. A well-known test
case is presented to demonstrate the accuracy and the
computational efficiency of the proposed method. The results
obtained are in good agreement with those of the direct
numerical method and field data. Moreover, it is shown that
the present reduced-order model is more efficient than the
conventional numerical techniques for transient flow analysis
of natural gas in pipelines.
Abstract: In this paper, the 1-D conduction-radiation problem is solved by the lattice Boltzmann method. The effects of various parameters, such as the scattering albedo, the conduction-radiation parameter and the wall emissivity, are studied. To check the accuracy of the numerical technique employed for the solution of the considered problem, the present numerical code was validated against a published study. The results obtained are in good agreement with those published.
Abstract: The objective of this paper is to apply the support vector machine (SVM) approach to the classification of cancerous and normal regions of prostate images. Three kinds of textural features are extracted and used for the analysis: parameters of the Gauss-Markov random field (GMRF), the correlation function and relative entropy. Prostate images are acquired by a system consisting of a microscope, a video camera and a digitizing board. Cross-validated classification over a database of 46 images is implemented to evaluate the performance. In SVM classification, sensitivity and specificity of 96.2% and 97.0%, respectively, are achieved for 32x32-pixel blocks, with an overall accuracy of 96.6%. Classification performance is compared with artificial neural network and k-nearest neighbor classifiers. Experimental results demonstrate that the SVM approach gives the best performance.
Abstract: In this paper we report a study aimed at determining
the most effective animation technique for representing ASL
(American Sign Language) finger-spelling. Specifically, in the
study we compare two commonly used 3D computer animation methods
(keyframe animation and motion capture) to ascertain which
technique produces the most 'accurate', 'readable', and 'close
to actual signing' (i.e. realistic) rendering of ASL
finger-spelling. To accomplish this goal we developed 20 animated
clips of finger-spelled words and designed an experiment
consisting of a web survey with rating questions. Seventy-one
subjects aged 19-45 participated in the study. Results showed
that recognition of the words was correlated with the method used
to animate the signs. In particular, the keyframe technique
produced the most accurate representation of the signs (i.e.,
participants were more likely to identify the words correctly in
keyframed sequences than in motion-captured ones). Further,
findings showed that the animation method had an effect on the
reported scores for readability and closeness to actual signing;
the estimated marginal mean readability and closeness were
greater for keyframed signs than for motion-captured signs. To
our knowledge, this is the first study aimed at measuring and
comparing the accuracy, readability and realism of ASL animations
produced with different techniques.
Abstract: In this paper, an erosion-based model of the abrasive
waterjet (AWJ) turning process is presented. Using a modified
Hashish erosion model, the volume of material removed by abrasive
particles impacting the surface of the rotating cylindrical
specimen is estimated, and the radius reduction at each rotation
is calculated. Unlike previous works, the proposed model
considers the continuous change in local impact angle due to the
change in workpiece diameter, the axial traverse rate of the jet,
and the abrasive particle roundness and density. The accuracy of
the proposed model is examined by experimental tests under
various traverse rates. The final diameters estimated by the
proposed model are in good agreement with the experiments.
Abstract: The equivalence class subset algorithm is a powerful
tool for solving a wide variety of constraint satisfaction
problems. It is based on a decision function that has very high,
but not perfect, accuracy. Perfect accuracy is not required in
the decision function, as even a suboptimal solution contains
valuable information that can be used to help find an optimal
one. In the hardest problems, the decision function can break
down, leading to a suboptimal solution with more equivalence
classes than necessary, which can be viewed as a mixture of good
and bad decisions. By choosing a subset of the decisions made in
reaching a suboptimal solution, an iterative technique can lead
to an optimal solution through a series of steadily improved
suboptimal solutions. The goal is to reach an optimal solution as
quickly as possible. Various techniques for choosing the decision
subset are evaluated.
Abstract: A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. Classification accuracy can be improved only if both the feature extraction and the classifier selection are appropriate. As the classes in hyperspectral images are assumed to have different textures, textural classification is adopted. Run-length features are extracted along with principal components and independent components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for the first forty selected bands, and from the GLRLMs the run-length features of individual pixels are computed. Principal components are calculated for another forty bands, and independent components for the remaining forty. As principal and independent components can represent the textural content of pixels, they are treated as features. The run-length features, principal components and independent components are combined to form the feature set used for classification. An SVM with a binary hierarchical tree is used to classify the hyperspectral image. Results are validated against ground truth and accuracies are calculated.
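The run-length features mentioned above derive from the Gray Level Run Length Matrix; a minimal sketch for the horizontal (0-degree) direction with one example feature, short-run emphasis, could look like this (the paper's exact feature set and run directions are not specified here):

```python
import numpy as np

def glrlm_horizontal(img, levels):
    """GLRLM for 0-degree runs: M[g, l-1] counts runs of gray level g
    with length l along each row of the quantized image."""
    M = np.zeros((levels, img.shape[1]), dtype=int)
    for row in img:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                M[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        M[run_val, run_len - 1] += 1   # close the final run of the row
    return M

def short_run_emphasis(M):
    """SRE: weights short runs heavily, so fine textures score higher."""
    lengths = np.arange(1, M.shape[1] + 1)
    return float((M / lengths ** 2).sum() / M.sum())

# Tiny 2-level image: runs are (0,0), (1) in row 1 and (1,1,1) in row 2.
img = np.array([[0, 0, 1],
                [1, 1, 1]])
M = glrlm_horizontal(img, levels=2)
```

Other run-length features (long-run emphasis, gray-level non-uniformity, etc.) follow the same pattern of weighted sums over `M`.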
Abstract: Many studies have shown that Artificial Neural
Networks (ANNs) are widely used for forecasting financial
markets, because many financial and economic variables are
non-linear and an ANN can flexibly model linear or non-linear
relationships among variables.
The purpose of this study was to employ ANN models to predict
the direction of the Istanbul Stock Exchange National 100 Index
(ISE National-100).
As a result of this study, the model forecasts the direction of
the ISE National-100 with an accuracy of 74.51%.
Abstract: Grid computing provides a virtual framework for
controlled sharing of resources across institutional boundaries.
Recently, trust has been recognised as an important factor in
selecting optimal resources in a grid. We introduce a new method
that provides a quantitative trust value based on past
interactions and present environment characteristics. This
quantitative trust value is used to select a suitable resource
for a job and eliminates run-time failures arising from
incompatible user-resource pairs. The proposed work acts as a
tool to calculate the trust values of the various components of
the grid, thereby improving the success rate of jobs submitted to
resources on the grid. Access to a resource depends not only on
the identity and behaviour of the resource but also on its
transaction context, transaction time, connectivity bandwidth,
availability and load. The quality of the recommender is also
evaluated, based on the accuracy of the feedback provided about a
resource. Jobs are submitted for execution to the selected
resource after the overall trust value of the resource is found;
the overall trust value is computed with respect to both
subjective and objective parameters.
Abstract: We present our research on geometric moments for
detecting mineral deficiencies in the frail groundnut plant. This
plant is prone to many deficiencies as a result of variance in
the soil nutrients. By analyzing the leaves of the plant, we
detect visual symptoms that are not recognizable to the naked
eye. We collected about 160 samples of leaves from the nearby
fields. The images were taken by placing each leaf in a black box
to avoid external interference. For the first time, it has been
possible to provide the farmer with the stages of the
deficiencies. We have applied the algorithms successfully to many
other plants, such as lady's finger, green bean, lablab bean,
chilli and tomato, but we report results predominantly for
groundnut. The accuracy of our algorithm and method is almost
93%. This could again pioneer a kind of green revolution in
agriculture and be a boon to the field.
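The geometric-moment analysis the abstract relies on reduces to raw and central image moments; a minimal sketch on a synthetic binary "leaf" image is shown below (the actual moment-based features and deficiency thresholds used for the leaves are not reproduced here):

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw geometric moment m_pq = sum over pixels of x^p * y^q * I(y, x)."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    return float((x ** p * y ** q * img).sum())

def central_moment(img, p, q):
    """Central moment mu_pq, computed about the image centroid, which
    makes the descriptor translation-invariant."""
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00
    cy = raw_moment(img, 0, 1) / m00
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    return float(((x - cx) ** p * (y - cy) ** q * img).sum())

# A 3x3 bright blob centred at (x, y) = (5, 4) in an 8x8 synthetic image.
leaf = np.zeros((8, 8))
leaf[3:6, 4:7] = 1.0
cx = raw_moment(leaf, 1, 0) / raw_moment(leaf, 0, 0)
cy = raw_moment(leaf, 0, 1) / raw_moment(leaf, 0, 0)
```

Higher-order central moments (and normalized combinations such as Hu moments) are what typically serve as shape features in leaf analysis.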
Abstract: This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared-memory parallel mode, we introduce a transformation of the training data that allows the use of an expensive generalized kernel without additional cost. We present experiments for the Gaussian kernel, but other kernel functions can be used as well. To further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set size on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and to improved overall support vector machine learning performance. Our method allows the use of extensive parameter search methods to optimize classification accuracy.
Abstract: Since dealing with high-dimensional data is
computationally complex and sometimes even intractable, several
feature reduction methods have recently been developed to reduce
the dimensionality of the data and simplify analysis in
applications such as text categorization, signal processing,
image retrieval and gene expression analysis. Among feature
reduction techniques, feature selection is one of the most
popular methods because it preserves the original features.
In this paper, we propose a new unsupervised feature selection
method that removes redundant features from the original feature
space by using the probability density functions of the features.
To show the effectiveness of the proposed method, popular feature
selection methods have been implemented and compared.
Experimental results on several datasets from the UCI repository
illustrate the effectiveness of our proposed method in comparison
with the other methods, in terms of both classification accuracy
and the number of selected features.
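One way to realise the density-based redundancy test described above is to compare histogram estimates of each feature's PDF with a divergence measure. The Jensen-Shannon divergence, the greedy keep/drop rule and the threshold below are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def feature_pdf(col, bins=20):
    """Histogram estimate of a feature's PDF over its own range."""
    hist, _ = np.histogram(col, bins=bins)
    return hist / hist.sum()

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: float((a * np.log(a / b)).sum())
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def select_features(X, threshold=0.02):
    """Greedily keep a feature only if its estimated PDF differs from
    every already-kept feature's PDF by more than the threshold."""
    kept = []
    for j in range(X.shape[1]):
        pj = feature_pdf(X[:, j])
        if all(js_divergence(pj, feature_pdf(X[:, k])) > threshold
               for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(0)
u = rng.uniform(size=1000)
X = np.column_stack([u, u, rng.normal(size=1000)])  # col 1 duplicates col 0
```

On this synthetic matrix the duplicated column is dropped while the distributionally distinct one is retained, which is the behaviour an unsupervised redundancy filter is after.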
Abstract: In this paper we propose a new method for
simultaneously generating multiple quantiles corresponding to
given probability levels from data streams and massive data sets.
This method provides a basis for the development of single-pass,
low-storage quantile estimation algorithms, which differ in
complexity, storage requirements and accuracy. We demonstrate
that such algorithms can perform well even for heavy-tailed data.
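A concrete member of the single-pass, low-storage family referred to above is stochastic-approximation quantile tracking, sketched here for several probability levels at once; this is an illustrative algorithm and step-size schedule, not necessarily the paper's own:

```python
import numpy as np

class StreamingQuantiles:
    """Single-pass estimator with O(#quantiles) storage: each estimate
    drifts up when it undershoots its probability level and down when
    it overshoots, with a decaying Robbins-Monro step size."""

    def __init__(self, probs, q0=0.0):
        self.probs = list(probs)
        self.q = [q0] * len(self.probs)
        self.n = 0

    def update(self, x):
        self.n += 1
        step = self.n ** -0.75       # decaying step size (a design choice)
        for i, p in enumerate(self.probs):
            # indicator (x <= q) has expectation F(q); equilibrium at F(q)=p
            self.q[i] += step * (p - (x <= self.q[i]))

    def estimates(self):
        return sorted(self.q)        # enforce monotonic quantile order

rng = np.random.default_rng(1)
sq = StreamingQuantiles([0.25, 0.5, 0.75], q0=0.5)
for x in rng.uniform(size=50000):
    sq.update(x)
```

For a uniform(0, 1) stream the three estimates settle near 0.25, 0.5 and 0.75; heavier-tailed inputs mainly affect how the step size should decay.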
Abstract: The purpose of this study is to design a portable
virtual piano. By utilizing optical fiber gloves and the virtual
piano software designed in this study, the user can play the
piano anywhere at any
time. This virtual piano consists of three major parts: finger tapping
identification, hand movement and positioning identification, and
MIDI software sound effect simulation. To play the virtual piano, the
user wears optical fiber gloves and simulates piano key tapping
motions. The finger bending information detected by the optical fiber
gloves can tell when piano key tapping motions are made. Images
captured by a video camera are analyzed, hand locations and moving
directions are positioned, and the corresponding scales are found. The
system integrates finger tapping identification with information about
hand placement in relation to corresponding piano key positions, and
generates MIDI piano sound effects based on these data.
Experiments show that the proposed method achieves an accuracy
rate of 95% in determining when a piano key is tapped.
Abstract: This paper presents an alternative approach that uses
an artificial neural network to simulate flood level dynamics in
a river basin. The algorithm was developed in a decision support
system environment to enable users to process the data. The
decision support system is found to be useful due to its
interactive nature, flexible approach and evolving graphical
features, and can be adopted for any similar situation in which
the flood level must be predicted. The main data processing
includes gauging station selection, input generation, lead-time
selection/generation, and length of prediction. The program
enables users to process the flood level data, to train/test the
model using various inputs and to visualize results. The program
code consists of a set of files, which can also be modified to
suit other purposes. The results indicate that the decision
support system applied to the flood level is encouraging for the
river basin under examination. The comparison of the model
predictions with the observed data was satisfactory, with the
model able to forecast the flood level up to 5 hours in advance
with reasonable prediction accuracy. Finally, the program may
also serve as a tool for real-time flood monitoring and process
control.
Abstract: In this study, we consider a special situation in which only a pair of hydrophones on a moving underwater vehicle is available to localize a fixed acoustic source at a far distance. Trigonometry can be used in this situation with two DOA measurements taken at different locations, provided the distance between the two locations is measured. We therefore assume that the vehicle is sailing in a straight line and that the distance moved in each unit of time is measured continuously. However, the accuracy of localization by trigonometry depends strongly on the accuracy of the DOAs and the measured moving distances. We therefore propose another method, based on the extended Kalman filter, that gives a more robust and accurate localization result.
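The trigonometric step described above (two bearings taken at two known positions along a straight track) reduces to intersecting two rays; a minimal sketch with invented geometry, and with the extended-Kalman-filter refinement omitted:

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Locate a source from two DOA bearings (radians, world frame)
    measured at known positions p1 and p2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1

# Vehicle sails straight along the x-axis; the travelled distance
# between the two bearing measurements is 10 units.
source = np.array([100.0, 50.0])
p1, p2 = np.array([0.0, 0.0]), np.array([10.0, 0.0])
theta1 = np.arctan2(source[1] - p1[1], source[0] - p1[0])
theta2 = np.arctan2(source[1] - p2[1], source[0] - p2[0])
est = triangulate(p1, theta1, p2, theta2)
```

With noise-free bearings the intersection recovers the source exactly; at long range, small DOA errors move this intersection considerably, which is the sensitivity that motivates the extended-Kalman-filter formulation in the abstract.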