Abstract: This paper presents four unsupervised clustering algorithms, namely sIB, RandomFlatClustering, FarthestFirst, and FilteredClusterer, which previous works have not applied to network traffic classification. The methodology, the results, the clusters produced, and an evaluation of each algorithm's accuracy are presented. The efficiency of the algorithms is also assessed in terms of the time each takes to generate its clusters quickly and correctly. Our work studies and tests for the best algorithm for classifying traffic anomalies in network traffic, using attributes that have not been used before. We analyze the algorithm with the best efficiency and learning performance and compare it with the previously used K-Means. Our research can be used to develop anomaly detection systems that are more efficient and better meet future requirements.
Abstract: Since 2004, we have been developing an in-situ storage image sensor (ISIS) that captures more than 100 consecutive images at a frame rate of 10 Mfps with ultra-high sensitivity, as well as the video camera for use with this ISIS. Currently, basic research is continuing in an attempt to increase the frame rate up to 100 Mfps and above. In order to suppress electromagnetic noise at such high frequencies, a digital-noiseless imaging transfer scheme has been developed that utilizes solely sinusoidal driving voltages. This paper presents highly efficient yet accurate expressions to estimate the attenuation as well as the phase delay of driving voltages through the RC networks of an ultra-high-speed image sensor. The Elmore metric for a fundamental RC chain is employed as the first-order approximation. By applying dimensional analysis to SPICE data, we found a simple expression that significantly improves the accuracy of the approximation. Similarly, another simple closed-form model to estimate the phase delay through fundamental RC networks is also obtained. The estimation error of both expressions is much lower than in previous works: less than 2% for most cases. The framework of this analysis can be extended to address similar issues in other VLSI structures.
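The first-order Elmore approximation used as the starting point above has a simple closed form for an RC chain: the delay at the far node is the sum, over all capacitors, of each capacitance times the resistance shared with the source path. As a minimal sketch (the stage count and component values below are illustrative assumptions, not taken from the paper):

```python
# Hedged sketch of the Elmore delay for a simple RC ladder; illustrative only.

def elmore_delay(resistances, capacitances):
    """Elmore delay at the far end of an RC chain:
    sum over nodes k of C_k times the resistance accumulated
    from the source up to node k."""
    delay = 0.0
    r_cum = 0.0
    for r, c in zip(resistances, capacitances):
        r_cum += r          # resistance accumulated up to node k
        delay += r_cum * c  # each capacitor sees the upstream resistance
    return delay

# 5-stage chain with 100 ohm / 10 fF per stage (made-up values)
tau = elmore_delay([100.0] * 5, [10e-15] * 5)
```

The paper's contribution is a correction on top of this first-order estimate, fitted to SPICE data; this sketch only shows the baseline metric.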
Abstract: Cryptosystem identification is one of the challenging tasks in cryptanalysis. This paper discusses the possibility of employing neural networks for the identification of cipher systems from ciphertexts. A Cascade Correlation Neural Network and a Back Propagation Network have been employed for the identification of cipher systems. A very large collection of ciphertexts was generated using a block cipher (Enhanced RC6) and a stream cipher (SEAL). Promising results were obtained in terms of accuracy using both neural network models, but it was observed that the Cascade Correlation Neural Network model performed better than the Back Propagation Network.
Abstract: The prediction of transmembrane helical segments (TMHs) in membrane proteins is an important field in bioinformatics research. In this paper, a new method based on the discrete wavelet transform (DWT) has been developed to predict the number and location of TMHs in membrane proteins. The protein with PDB code 1KQG was chosen as an example to describe the prediction of the number and location of TMHs using this method. To assess the effectiveness of the method, 80 proteins with known 3D structure (including 325 TMHs) were chosen at random from the MPtopo database as test objects; 308 of the TMHs could be predicted accurately, for an average prediction accuracy of 96.3%. In addition, the above 80 membrane proteins were divided into 13 groups according to their function and type. In particular, the prediction results for the TMHs of the 13 groups are satisfactory.
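The core signal-processing step of such DWT-based methods is to decompose a per-residue property profile (typically hydrophobicity) into coarse and fine components. A minimal sketch using a one-level Haar transform, implemented by hand; the wavelet choice, scale, and the toy hydrophobicity values are illustrative assumptions, not the paper's exact settings:

```python
# One-level Haar DWT applied to a toy hydrophobicity profile (illustrative).
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))  # low-pass: local average
        detail.append((a - b) / math.sqrt(2))  # high-pass: local difference
    return approx, detail

# Toy profile: a hydrophobic stretch (positive run) would show up as large
# approximation coefficients, hinting at a transmembrane helix.
profile = [1.8, 2.5, 4.2, 4.5, 3.8, 1.9, -0.8, -3.5, -3.2, -0.9]
approx, detail = haar_dwt(profile)
```

In practice a library such as PyWavelets would be used with a deeper decomposition; the hand-rolled version above only shows the idea.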
Abstract: This paper introduces slope photogrammetric mapping using unmanned aerial vehicles (UAVs). Two UAVs were used in this study, namely a fixed-wing and a multi-rotor unit. Both UAVs were used to capture images of the study area. A consumer digital camera was mounted vertically at the bottom of each UAV and captured images at altitude. The objectives of this study are to obtain three-dimensional coordinates of the slope area and to determine the accuracy of the photogrammetric products produced from both UAVs. Several control points and checkpoints were established using Real-Time Kinematic Global Positioning System (RTK-GPS) in the study area. All images acquired from both UAVs went through the full photogrammetric workflow, including interior orientation, exterior orientation, aerial triangulation, and bundle adjustment, using photogrammetric software. Two primary results were produced in this study, namely a digital elevation model and a digital orthophoto. Based on the results, a UAV system can be used to map slope areas, especially for projects with limited budgets and time constraints.
Abstract: The ability to estimate location accurately and reliably in indoor environments is the key issue in developing a great number of context-aware applications and Location Based Services (LBS). Today, the most viable solution for localization is the Received Signal Strength (RSS) fingerprinting approach using wireless local area networks (WLAN). This paper presents two RSS fingerprinting based approaches: first we employ the widely used WLAN-based positioning as a reference system, and then we investigate the possibility of using GSM signals for positioning. To compare them, we developed a positioning system in a real-world environment, where realistic RSS measurements were collected. A Multi-Layer Perceptron (MLP) neural network was used as the approximation function that maps RSS fingerprints to locations. Experimental results indicate the advantage of the WLAN-based approach in terms of lower localization error compared to the GSM-based approach; however, GSM signal coverage by far outreaches WLAN coverage, and for some LBS requiring less precise accuracy, our results indicate that GSM positioning can also be a viable solution.
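The abstract's system learns the fingerprint-to-location mapping with an MLP; the underlying fingerprinting idea itself can be illustrated with a much simpler nearest-fingerprint (1-NN) matcher. In this sketch, all RSS values (in dBm) and coordinates are made-up examples, and the 1-NN matcher is a stand-in for the paper's trained network:

```python
# Hedged sketch of RSS-fingerprinting localization via nearest fingerprint.

def locate(radio_map, measured_rss):
    """Return the coordinates of the stored fingerprint closest
    (in squared Euclidean RSS distance) to the measured one."""
    best_xy, best_d = None, float("inf")
    for xy, fingerprint in radio_map:
        d = sum((m - f) ** 2 for m, f in zip(measured_rss, fingerprint))
        if d < best_d:
            best_xy, best_d = xy, d
    return best_xy

# Radio map entries: ((x, y), [RSS from AP1, AP2, AP3]) -- illustrative data
radio_map = [
    ((0.0, 0.0), [-40, -70, -80]),
    ((5.0, 0.0), [-60, -50, -75]),
    ((5.0, 5.0), [-75, -55, -45]),
]
print(locate(radio_map, [-62, -52, -74]))  # closest to the (5.0, 0.0) fingerprint
```

An MLP replaces the lookup with a smooth regression from RSS vectors to coordinates, which interpolates between surveyed points rather than snapping to them.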
Abstract: Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions, causing perturbations that lead to a mismatch between the training and testing environments and degrade performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet-based Bayesian marginal model is one of them. However, Bayesian marginal models cannot capture the inter-scale statistical dependencies between different wavelet scales: simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent, whereas in fact wavelet coefficients have significant inter-scale dependency. This paper exploits this inter-scale dependency property through a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived via Maximum a Posteriori (MAP) estimation. A framework based on these is proposed to denoise speech signals for automatic speaker identification. The robustness of the proposed framework is tested for text-independent speaker identification on 100 speakers of the POLYCOST and 100 speakers of the YOHO speech databases in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy than other estimators on the popular Gaussian Mixture Model (GMM) based speaker model with Mel-Frequency Cepstral Coefficient (MFCC) features.
Abstract: The Spalart-Allmaras turbulence model has been implemented in a numerical code to study compressible turbulent flows, in which the system of governing equations is solved with a finite volume approach on a structured grid. The AUSM+ scheme is used to calculate the inviscid fluxes. Different benchmark problems have been computed to validate the implementation, and numerical results are shown. Special attention is paid to wall jet applications. In this study, the jet is subjected to various wall boundary conditions (adiabatic or uniform heat flux) in the forced convection regime, and both two-dimensional and axisymmetric wall jets are considered. The comparison between the numerical results and experimental data confirms the validity of this turbulence model for studying turbulent wall jets, especially in engineering applications.
Abstract: Wireless Sensor Networks (WSNs) can be used to monitor physical phenomena in areas where human access is nearly impossible. The limited power supply is the major constraint of WSNs due to the use of non-rechargeable batteries in sensor nodes, and much research is ongoing to reduce the energy consumption of sensor nodes. An energy map can be used with clustering, data dissemination, and routing techniques to reduce the power consumption of WSNs. The energy map can also be used to identify which part of the network is likely to fail in the near future. In this paper, the energy map is constructed using a prediction-based approach, with the adaptive alpha GM(1,1) model as the prediction model. GM(1,1) is used worldwide in many applications for predicting future values of a time series from past values, due to its high computational efficiency and accuracy.
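The GM(1,1) model named above has a compact construction: accumulate the series (1-AGO), form background values, fit the two grey parameters by least squares, then forecast by de-accumulating an exponential response. A sketch of the classic model follows, with the background weight alpha exposed as a parameter (the abstract's adaptive variant tunes alpha, which is not reproduced here; the sample series is illustrative):

```python
# Hedged sketch of the basic GM(1,1) grey prediction model.
import math

def gm11(x0, alpha=0.5, steps=1):
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # Background values z1(k) = alpha*x1(k) + (1-alpha)*x1(k-1)
    z = [alpha * x1[k] + (1 - alpha) * x1[k - 1] for k in range(1, n)]
    # Least squares for a, b in x0(k) + a*z1(k) = b (2x2 normal equations)
    m = n - 1
    s_zz = sum(zi * zi for zi in z)
    s_z = sum(z)
    s_y = sum(x0[1:])
    s_zy = sum(zi * yi for zi, yi in zip(z, x0[1:]))
    det = s_zz * m - s_z * s_z
    a = (s_z * s_y - m * s_zy) / det
    b = (s_zz * s_y - s_z * s_zy) / det

    # Time response: x1_hat(k) = (x0(1) - b/a) * exp(-a*k) + b/a
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # De-accumulate to recover forecasts of the original series
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]
```

On a roughly exponential series such as `[1, 2, 4, 8]`, the model forecasts a next value near the true geometric continuation, which is why GM(1,1) suits smoothly decaying quantities like residual node energy.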
Abstract: In this paper we propose a novel approach for ascertaining human identity based on the fusion of profile face and gait biometric cues. The identification approach, based on feature learning in a PCA-LDA subspace and classification using multivariate Bayesian classifiers, allows a significant improvement in recognition accuracy for low-resolution surveillance video scenarios. The experimental evaluation of the proposed identification scheme on a publicly available database [2] showed that the fusion of face and gait cues in the joint PCA-LDA space is a powerful method for capturing the inherent multimodality in walking gait patterns while at the same time discriminating the person's identity.
Abstract: Mixed-traffic (e.g., pedestrians, bicycles, and vehicles) data at an intersection is one of the essential inputs for intersection design and traffic control. However, some data, such as pedestrian volume, cannot be directly collected by common detectors (e.g., inductive loop, sonar, and microwave sensors). In this paper, a video-based detection algorithm is proposed for mixed-traffic data collection at intersections using surveillance cameras. The algorithm is derived from the Gaussian Mixture Model (GMM) and uses a mergence time adjustment scheme to improve the traditional algorithm. Real-world video data were selected to test the algorithm. The results show that the proposed algorithm has faster processing speed and higher accuracy than the traditional algorithm. This indicates that the improved algorithm can be applied to detect mixed traffic at signalized intersections, even when conflicts occur.
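GMM-based detection of this kind maintains per-pixel Gaussian background models and flags pixels that deviate from them as moving objects. As a hedged single-Gaussian sketch of that idea (the full method keeps a mixture of Gaussians per pixel and, per the abstract, adjusts the mergence time of the modes; the class name, rates, and pixel values here are illustrative assumptions):

```python
# Single running-Gaussian background model per pixel (illustrative sketch).

class RunningGaussianBackground:
    def __init__(self, first_frame, learning_rate=0.05, k=2.5):
        self.mean = [float(v) for v in first_frame]
        self.var = [225.0] * len(first_frame)  # initial variance (std = 15)
        self.lr = learning_rate
        self.k = k  # a pixel is foreground if |v - mean| > k * std

    def apply(self, frame):
        fg = []
        for i, v in enumerate(frame):
            d = v - self.mean[i]
            is_fg = d * d > (self.k ** 2) * self.var[i]
            fg.append(is_fg)
            if not is_fg:  # update the model only with background pixels
                self.mean[i] += self.lr * d
                self.var[i] += self.lr * (d * d - self.var[i])
        return fg

bg = RunningGaussianBackground([100, 100, 100, 100])
mask = bg.apply([101, 99, 200, 100])  # third pixel jumps: a moving object
```

The mixture version keeps several (mean, variance, weight) modes per pixel so that repetitive background motion is absorbed; the mergence-time adjustment in the abstract governs how quickly stopped objects merge into the background.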
Abstract: The binder drainage test is widely used to set an upper limit on the design binder content of porous asphalt. However, the presence of a high amount of fine particles in the drained binder may affect the accuracy of the test result. This paper presents a study to characterize the composition and particle size distribution of the fine particles accumulated in the drained binder. Fine aggregates and filler in the drained binder were extracted using a suitable solvent. Then, wet and dry sieve analyses were carried out to identify the actual composition of the extracted fine aggregates and filler. The results show that almost half of the drained binder consisted of fine aggregates, which significantly affects the accuracy of the design binder content of porous asphalt mix. This simple finding highlights the importance of taking the presence of fine aggregates into account in the calculation of drained binder.
Abstract: Various models have been derived by studying large numbers of completed software projects from various organizations and applications to explore how project size maps to project effort. However, there is still a need to improve the prediction accuracy of these models. Since a neuro-fuzzy system is able to approximate non-linear functions with high precision, it is used here as a soft computing approach to generate a model by formulating the relationship based on its training. In this paper, the neuro-fuzzy technique is used for software estimation modeling on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili, and Doty models from the literature.
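The benchmark models named in this abstract have closed forms that are commonly quoted in the effort-estimation literature (effort in person-months, size in KLOC). The constants below are those commonly cited forms, stated here as assumptions rather than taken from the paper itself:

```python
# Commonly cited effort-estimation benchmark models (assumed forms).

def halstead(kloc):       return 5.2 * kloc ** 1.50
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):           return 5.288 * kloc ** 1.047  # form quoted for KLOC > 9

for model in (halstead, walston_felix, bailey_basili, doty):
    print(f"{model.__name__:14s} {model(10.0):8.2f}")
```

A neuro-fuzzy model is compared against these fixed curves precisely because it can fit the non-linear size-to-effort relationship of a specific dataset instead of a single global power law.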
Abstract: To achieve accurate and precise results in finite element analysis (FEA) of bones, it is important to represent the load and boundary conditions as faithfully as possible to the human body, including the bone properties, the type and force of the muscles, the contact force of the joints, and the location of the muscle attachments. In this study, the differences in Von Mises stress and total deformation were compared between Case 1, which models the actual anatomical form of the muscle attachment to the femur under the same muscle force, and Case 2, which uses a simplified representation of the attachment location. An inverse dynamic musculoskeletal model was simulated using data from an actual walking experiment to improve the accuracy of the muscular forces used as input values for the FEA. The FEA using the muscular forces calculated through the simulation showed that the maximum Von Mises stress and the maximum total deformation in Case 2 were underestimated by 8.42% and 6.29%, respectively, compared to Case 1. The torsion energy and bending moment at each location of the femur arise from the stress components; because the femur is geometrically a long bone, when the stress distribution is wide, as in Case 1, a greater Von Mises stress and total deformation are expected from the sum of the stress components. More accurate results can be achieved only when the muscular strength, the attachment location, and the attachment form in the FEA of the bones are the same as those in the actual anatomical condition under the various moving conditions of the human body.
Abstract: The aim of this study was to compare the sensitometric properties of commonly used radiographic films processed with chemical solutions in hospitals with different workloads. The effect of different processing conditions on the densities induced on radiologic films was investigated. Two readily available double-emulsion films, Fuji and Kodak, were exposed with an 11-step wedge and processed with Champion and CPAC processing solutions. The films were obtained from both high- and low-workload centers. Our findings show that the speed and contrast of the Kodak film-screen system in both workloads (high and low) are higher than those of the Fuji film-screen system for both processing solutions. However, there were significant differences in film contrast for both workloads when the CPAC solution was used (p=0.000 and 0.028). The results also showed that the base-plus-fog density for the Kodak film was lower than for the Fuji film. Generally, the Champion processing solution produced higher speed and contrast for the investigated films under the different conditions, and there were significant differences at the 95% confidence level between the two processing solutions (p=0.01). The low base-plus-fog density of Kodak films provides more visibility and accuracy, and the higher contrast allows lower exposure factors to be used to obtain better quality in the resulting radiographs. In this study we also found an economic advantage in using the Champion solution and Kodak film, which additionally lowers the patient dose. Thus, in a radiologic facility, any change in the film processor, processing cycle, or chemistry should be carefully investigated before radiological procedures are performed on patients.
Abstract: Tumor classification is a key area of research in the field of bioinformatics. Microarray technology is commonly used in the study of disease diagnosis using gene expression levels. The main drawback of gene expression data is that it contains thousands of genes but very few samples. Feature selection methods are used to select the informative genes from the microarray, and these methods considerably improve the classification accuracy. In the proposed method, a Genetic Algorithm (GA) is used for effective feature selection. Informative genes are identified based on their T-statistics, Signal-to-Noise Ratio (SNR), and F-test values, and the initial candidate solutions of the GA are obtained from the top-m informative genes. The classification accuracy of the k-Nearest Neighbor (kNN) method is used as the fitness function for the GA. In this work, kNN and Support Vector Machines (SVM) are used as the classifiers. The experimental results show that the proposed approach is suitable for effective feature selection. With the help of the selected genes, the GA-kNN method achieves 100% accuracy on 4 of 10 datasets, and the GA-SVM method on 5 of 10. The GA with kNN and SVM is thus demonstrated to be an accurate method for microarray-based tumor classification.
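The pipeline described above, a GA whose fitness is classifier accuracy over the selected genes, can be sketched compactly. This toy version uses a tiny synthetic dataset and leave-one-out 1-NN as the fitness; the data, chromosome length, and GA settings are illustrative, and the paper's seeding of the population from top-m genes ranked by T-statistics, SNR, and F-test is abstracted into a random initial population:

```python
# Hedged sketch of GA-based feature (gene) selection with kNN fitness.
import random

random.seed(0)

# Toy dataset: 8 samples x 6 "genes"; only gene 0 separates the classes.
X = [[c * 10 + random.random()] + [random.random() for _ in range(5)]
     for c in (0, 1) for _ in range(4)]
y = [0] * 4 + [1] * 4

def knn_accuracy(mask):
    """Leave-one-out 1-NN accuracy using only the genes selected by mask."""
    if not any(mask):
        return 0.0
    correct = 0
    for i in range(len(X)):
        dists = [(sum((X[i][g] - X[j][g]) ** 2 for g in range(6) if mask[g]), j)
                 for j in range(len(X)) if j != i]
        correct += y[min(dists)[1]] == y[i]
    return correct / len(X)

def evolve(pop_size=10, generations=15):
    pop = [[random.randint(0, 1) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=knn_accuracy, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randint(1, 5)           # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(6)] ^= 1      # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=knn_accuracy)

best = evolve()
```

Because fitness is the wrapped classifier's accuracy, this is a wrapper method: swapping `knn_accuracy` for an SVM-based score yields the GA-SVM variant from the abstract.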
Abstract: On July 1, 2007, the Taiwan Stock Exchange (TWSE) added a new "Financial reference database" to its Market Observation Post System (MOPS) for investors to use as an investment reference. This database serves as a warning based on the published financial information of listed public-offering companies and originally comprised eight indicators. In this paper, the indicators provided by this database are applied to a corporate financial-crisis early-warning model to verify whether the database's indicators forecast financial crises for companies with a high accuracy rate, in line with the positive results reported by domestic and foreign scholars. A logistic regression model is used as the financial early-warning model: the first model is built without the added conditions, and the second model includes them. Companies that experienced a financial crisis were taken as research samples, and sample data from the T-1 and T-2 periods before each company's crisis were used for the empirical analysis. The results show that the debt ratio and net value per share provided by this database are the best forecasting variables.
Abstract: It has become crucial over the years for nations to improve their credit scoring methods and techniques in light of the increasing volatility of the global economy. Statistical methods and tools have been the favoured means for this; however, artificial intelligence and soft computing based techniques are becoming increasingly preferred due to their proficiency, precision, and relative simplicity. This work presents a comparison between Support Vector Machines and Artificial Neural Networks, two popular soft computing models, when applied to credit scoring. Among the different criteria that can be used for comparison, accuracy, computational complexity, and processing time are the criteria selected to evaluate both models. Furthermore, the German credit scoring dataset, a real-world dataset, is used to train and test both developed models. Experimental results obtained from our study suggest that although both soft computing models can be used with a high degree of accuracy, Artificial Neural Networks deliver better results than Support Vector Machines.
Abstract: In the last 15 years, a number of methods have been proposed for forecasting based on fuzzy time series. Most of the fuzzy time series methods have been presented for forecasting enrollments at the University of Alabama. However, the forecasting accuracy rates of the existing methods are not good enough. In this paper, we compare our proposed new method of fuzzy time series forecasting with the existing methods. Our method is based on frequency-density-based partitioning of the historical enrollment data. The proposed method belongs to the class of kth-order and time-variant methods, and it achieves a better forecasting accuracy rate for enrollments than the existing methods.
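The frequency-density-based partitioning step named in this abstract refines the universe of discourse so that intervals holding many historical observations are split into finer subintervals, giving dense regions more fuzzy sets. A sketch of that idea follows; the enrollment-like numbers, the initial interval count, and the halving rule are illustrative assumptions, not the paper's exact procedure:

```python
# Hedged sketch of frequency-density-based interval partitioning.

def frequency_density_partition(data, n_initial=4, split_threshold=0.25):
    """Split the universe of discourse into n_initial equal intervals,
    then halve any interval holding more than split_threshold of the data."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / n_initial
    intervals = [(lo + i * width, lo + (i + 1) * width) for i in range(n_initial)]
    refined = []
    for a, b in intervals:
        count = sum(a <= x < b or (x == hi and b == hi) for x in data)
        if count / len(data) > split_threshold:   # dense interval: halve it
            mid = (a + b) / 2
            refined += [(a, mid), (mid, b)]
        else:
            refined.append((a, b))
    return refined

# Enrollment-like toy data (illustrative values)
data = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807, 16919]
parts = frequency_density_partition(data)
```

Each resulting interval would then be assigned a fuzzy set, and the fuzzified series drives the kth-order forecasting rules.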
Abstract: Recommender Systems (RS) act as personalized decision guides, aiding users in decisions on matters related to personal taste. Most previous research on Recommender Systems has focused on the statistical accuracy of the algorithms driving the systems, with no emphasis on the trustworthiness of the users. An RS depends on information provided by different users to gather its knowledge. We believe that if a large group of users provides wrong information, it will not be possible for the RS to arrive at an accurate conclusion. The system described in this paper introduces the concept of testing the knowledge of users to filter out these "bad users". This paper emphasizes the mechanism used to provide robust and effective recommendations.