Abstract: The network traffic data available for the design of
intrusion detection are typically large, carry much ineffective
information, and offer only limited and ambiguous information about
users' activities. We study these problems and propose a two-phase
approach to intrusion detection design. In the first phase, we
develop a correlation-based feature selection algorithm to remove the
worthless information from the original high-dimensional database.
Next, we design an intrusion detection method to address the
uncertainty caused by limited and ambiguous information. In the
experiments, we use six UCI databases and the DARPA KDD99
intrusion detection data set for evaluation. Empirical studies
indicate that our feature selection algorithm is capable of reducing
the size of the data set, and that our intrusion detection method
achieves better performance than the participating intrusion detectors.
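A minimal sketch of the kind of correlation-based filtering this abstract describes can clarify the idea. The thresholds, the greedy redundancy test, and all function names below are illustrative assumptions, not the authors' algorithm: a feature is kept if it is well correlated with the class label and not strongly correlated with an already-kept feature.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(columns, label, relevance=0.3, redundancy=0.9):
    """Keep features well correlated with the label, dropping any
    feature highly correlated with an already-selected one.
    Threshold values are arbitrary illustrative choices."""
    # Rank candidate features by |correlation with the label|.
    ranked = sorted(columns, key=lambda c: -abs(pearson(columns[c], label)))
    kept = []
    for c in ranked:
        if abs(pearson(columns[c], label)) < relevance:
            continue  # weakly related to the class: worthless information
        if any(abs(pearson(columns[c], columns[k])) > redundancy for k in kept):
            continue  # redundant with a feature already kept
        kept.append(c)
    return kept
```

On a toy table where one feature is a scaled copy of another, the copy is discarded as redundant and pure noise is discarded as irrelevant, shrinking the data set as the abstract claims.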
Abstract: The wavelet transform has been extensively used in
machine fault diagnosis and prognosis owing to its strength in dealing
with non-stationary signals. Existing wavelet-transform-based
schemes for fault diagnosis employ wavelet decomposition of the
entire vibration signal, which not only involves huge
computational overhead in extracting the features but also increases
the dimensionality of the feature vector. This increase in
dimensionality tends to 'over-fit' the training data and
can mislead the fault diagnostic model. In this paper a novel
technique, the envelope wavelet packet transform (EWPT), is proposed,
in which features are extracted from the wavelet packet transform of
the filtered envelope signal rather than the overall vibration signal.
It not only reduces the computational overhead, in terms of a reduced
number of wavelet decomposition levels and features, but also improves
fault detection accuracy. Analytical expressions are provided for the
optimal frequency resolution and decomposition level selection in
EWPT. Experimental results with both actual and simulated machine
fault data demonstrate a significant gain in fault detection ability by
EWPT at reduced complexity compared to existing techniques.
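The EWPT pipeline can be sketched in two steps: extract the envelope of the vibration signal, then take a shallow wavelet packet decomposition of the envelope and use per-node energies as features. The sketch below is a crude stand-in, not the paper's method: it uses rectification plus a moving average for the envelope (Hilbert demodulation with band-pass filtering would be the usual choice) and a hand-rolled Haar wavelet packet instead of a proper wavelet library.

```python
import math

def envelope(signal, window=8):
    """Crude envelope estimate: full-wave rectification followed by a
    moving-average low-pass filter (an assumed stand-in for Hilbert
    demodulation of the band-pass-filtered signal)."""
    rect = [abs(s) for s in signal]
    half = window // 2
    env = []
    for i in range(len(rect)):
        seg = rect[max(0, i - half): i + half]
        env.append(sum(seg) / len(seg))
    return env

def haar_packet_energies(x, level):
    """One Haar wavelet packet level splits each node into approximation
    (scaled pairwise sums) and detail (scaled pairwise differences);
    recurse `level` times and return each leaf node's energy as a feature.
    Fewer levels on the short envelope mean fewer features than
    decomposing the whole vibration signal."""
    nodes = [list(x)]
    for _ in range(level):
        nxt = []
        for node in nodes:
            approx = [(node[i] + node[i + 1]) / math.sqrt(2)
                      for i in range(0, len(node) - 1, 2)]
            detail = [(node[i] - node[i + 1]) / math.sqrt(2)
                      for i in range(0, len(node) - 1, 2)]
            nxt += [approx, detail]
        nodes = nxt
    return [sum(v * v for v in node) for node in nodes]
```

Because the Haar transform is orthogonal, total signal energy is preserved across the leaf nodes, so the energies form a compact, lossless-in-energy feature vector.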
Abstract: The interactive push VOD system is a new kind of system
that incorporates push technology and interactive techniques. It can
push movies to users at high speed during off-peak hours for optimal
network usage, so as to save bandwidth. This paper presents an
effective software-based solution for processing mass downstream data
at terminals of an interactive push VOD system, where the service can
download movies according to a viewer's selection. The downstream
data is divided into two categories: (1) carousel data delivered
according to the DSM-CC protocol; (2) IP data delivered according to
the Euro-DOCSIS protocol. In order to accelerate download speed and
reduce the data loss rate at terminals, the software strategy introduces
caching, multi-threading and resuming mechanisms. The experiments
demonstrate the advantages of the software-based solution.
Abstract: The paper presents a detailed characteristic calculation for five permanent magnet machine topologies for high-performance traction, including hybrid-electric vehicles, using the finite element analysis (FEA) method. These machines include V-shape single-layer interior PM, W-shape single-layer interior PM, segmented interior PM and surface PM on the rotor, all with distributed windings on the stator. The performance characteristics, which include the back-EMF voltage and its harmonics, magnet mass, iron loss and torque ripple, are compared and analyzed. A 7.5 kW IPM prototype was tested to verify the finite-element analysis results. The aim of the paper is to give some guidance and reference to machine designers interested in IPM machine selection for high-performance traction applications.
Abstract: A genetic algorithm (GA) based feature subset
selection algorithm is proposed in which the correlation structure of
the features is exploited. The subset of features is validated according
to the classification performance. Features derived from the
continuous wavelet transform are potentially strongly correlated.
GAs that do not take the correlation structure of features into
account are inefficient. The proposed algorithm first forms clusters
of correlated features and searches for a good candidate set of
clusters. Second, a search within the clusters is performed. Different
simulations of the algorithm on a real-case data set with strong
correlations between features show the increased classification
performance. Comparison is performed with a standard GA without
use of the correlation structure.
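The two-stage idea above, clustering correlated features and then letting the GA switch whole clusters on or off, can be sketched as follows. Everything here (the greedy clustering rule, thresholds, GA parameters, elitist replacement) is an illustrative assumption rather than the authors' algorithm; the point is only that a gene per cluster shrinks the search space compared with a gene per feature.

```python
import random

def cluster_features(corr, threshold=0.8):
    """Greedy clustering: each feature joins the first cluster whose
    seed feature it is strongly correlated with, else starts a new one.
    `corr` is a full correlation matrix; the threshold is an assumption."""
    clusters = []
    for f in range(len(corr)):
        for c in clusters:
            if abs(corr[f][c[0]]) >= threshold:
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters

def ga_select_clusters(clusters, fitness, pop=24, gens=40, pmut=0.1, seed=0):
    """GA over cluster-inclusion bitstrings: one gene toggles a whole
    cluster of correlated features. `fitness` scores a feature subset
    (e.g. cross-validated classification accuracy in the paper)."""
    rng = random.Random(seed)
    n = len(clusters)

    def features(bits):
        return [f for b, c in zip(bits, clusters) if b for f in c]

    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda b: -fitness(features(b)))
        parents = scored[:pop // 2]          # elitist truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]        # one-point crossover
            children.append([1 - g if rng.random() < pmut else g
                             for g in child])
        population = parents + children
    best = max(population, key=lambda b: fitness(features(b)))
    return features(best)
```

With four features forming two correlated pairs, the GA searches only 2^2 cluster combinations instead of 2^4 feature combinations.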
Abstract: This paper gives a novel method for improving
classification performance in cancer classification with very limited
microarray gene expression data. The method employs classification
with individual gene ranking and gene subset ranking. For both
selection and classification, the proposed method uses the same
classifier. The method is applied to three publicly available cancer
gene expression datasets: lymphoma, liver and leukaemia. Three
different classifiers, namely support vector machines one-against-all
(SVM-OAA), K nearest neighbour (KNN) and linear discriminant
analysis (LDA), were tested, and the results indicate an improvement
in the performance of the SVM-OAA classifier, with satisfactory
results on all three datasets when compared with the other two
classifiers.
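Individual gene ranking of the kind described above can be sketched with a deliberately simple classifier. The paper uses SVM-OAA, KNN and LDA; the one-dimensional nearest-class-centroid classifier below is an assumed lightweight stand-in, and the resubstitution accuracy it computes is only illustrative of the "same classifier for selection and classification" idea.

```python
def centroid_accuracy(values, labels):
    """Resubstitution accuracy of a one-gene nearest-class-centroid
    classifier (an assumed stand-in for SVM/KNN/LDA)."""
    classes = sorted(set(labels))
    centroid = {c: sum(v for v, l in zip(values, labels) if l == c) /
                   labels.count(c) for c in classes}
    correct = sum(1 for v, l in zip(values, labels)
                  if min(classes, key=lambda c: abs(v - centroid[c])) == l)
    return correct / len(labels)

def rank_genes(expression, labels, top_k=2):
    """Rank genes by individual classification accuracy and return the
    indices of the top_k genes as the candidate subset; ties are broken
    by gene index. `expression` is one row of sample values per gene."""
    scores = [(centroid_accuracy(gene, labels), i)
              for i, gene in enumerate(expression)]
    scores.sort(key=lambda s: (-s[0], s[1]))
    return [i for _, i in scores[:top_k]]
```

The returned top-ranked subset would then itself be scored with the same classifier, giving the gene subset ranking stage.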
Abstract: In order to reduce cost, increase quality, and ensure
timely supply, production systems have increasingly drawn on the
advantages of supply chain management, advantages that are also a
source of competitiveness. The selection of an appropriate supplier
plays an important role in the improvement and efficiency of such
systems.
The supplier selection models already used by researchers consider
selecting one or more suppliers from a pool of potential suppliers;
in this paper, by contrast, we consider selecting a single supplier
as a partner from among suppliers that have supplied the buyer for at
least one period.
This paper presents a conceptual model for partner selection and
applies the Degree of Adoptive (DOA) model for the final selection.
The attribute weights in this model are obtained through the AHP
method. After building the descriptive model, determining the
attributes and measuring the adaptability parameters are examined
in the Iranian auto industry (Zagross Khodro Co.), and the results
are presented.
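The AHP weighting step mentioned above can be sketched with the row geometric-mean approximation of the priority vector (the classical formulation uses the principal eigenvector; the geometric mean is a standard approximation and coincides with it for consistent matrices). The attribute names, ratings, and the simple weighted score standing in for the DOA model are illustrative assumptions.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison
    matrix via row geometric means, normalised to sum to 1."""
    n = len(pairwise)
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

def supplier_score(ratings, weights):
    """Weighted score of one supplier's attribute ratings (a crude
    stand-in for the paper's DOA final-selection step)."""
    return sum(r * w for r, w in zip(ratings, weights))

# Hypothetical comparison: quality is 2x as important as cost and
# 4x as important as delivery; cost is 2x delivery. A consistent
# matrix, so the weights come out exactly 4/7, 2/7, 1/7.
comparisons = [[1, 2, 4],
               [0.5, 1, 2],
               [0.25, 0.5, 1]]
```

Comparing two hypothetical suppliers rated on (quality, cost, delivery), the weights then decide which one is preferred as partner.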
Abstract: Adaptive genetic algorithms extend standard GAs by
using dynamic procedures to apply evolutionary operators such as
crossover, mutation and selection. In this paper, we propose a
new adaptive genetic algorithm that uses statistical information
about the population as a guideline to tune its crossover,
selection and mutation operators. This algorithm, called the
Statistical Genetic Algorithm, is compared with a traditional GA
on several benchmark problems.
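One way population statistics can drive operator rates, sketched below under our own assumed adaptation rule (the abstract does not specify the paper's rule): when the spread of fitness values collapses, the population is converging, so mutation is raised to re-inject diversity and crossover pressure is eased, and vice versa.

```python
import statistics

def adaptive_rates(fitnesses, base_mut=0.01, base_cross=0.9):
    """Tune mutation/crossover probabilities from population fitness
    statistics. The coefficient-of-variation rule and the constants
    are illustrative assumptions, not the paper's formulas."""
    mean = statistics.mean(fitnesses)
    spread = statistics.pstdev(fitnesses)
    diversity = spread / abs(mean) if mean else spread
    # Low diversity -> larger mutation (capped), smaller crossover (floored).
    p_mut = min(0.5, base_mut * (1.0 + 1.0 / (diversity + 0.1)))
    p_cross = max(0.5, base_cross * min(1.0, diversity + 0.5))
    return p_mut, p_cross
```

A fully converged population thus mutates more aggressively than a diverse one, which is the qualitative behaviour an adaptive GA is after.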
Abstract: In this paper the development of a heat exchanger as a
pilot plant for educational purposes is discussed, and the use of a
neural network for controlling the process is presented. The aim of
the study is to highlight the need for a specific Pseudo Random Binary
Sequence (PRBS) to excite a process under control. As the neural
network is a data-driven technique, the method of data generation
plays an important role; in light of this, a careful experimentation
procedure for data generation was a crucial task. Heat exchange is a
complex process, which has a capacity and a time lag as process
elements. The proposed system is a typical pipe-in-pipe type heat
exchanger. The complexity of the system demands careful selection,
proper installation and commissioning. The temperature, flow, and
pressure sensors play a vital role in the control performance. The
final control element used is a pneumatically operated control valve.
While carrying out the experimentation on the heat exchanger, a
well-drafted procedure was followed, giving utmost attention to the
safety of the system. The results obtained are encouraging and reveal
that if the process details are completely known as far as process
parameters are concerned, and the utilities are well stabilized, then
feedback systems are suitable, whereas the neural network control
paradigm is useful for processes with nonlinearity and limited
process knowledge. The implementation of NN control reinforces the
concepts of both process control and the NN control paradigm. The
results also underline the importance of an excitation signal
tailored to the process. Data acquisition, processing, and
presentation in a suitable format are the most important steps in
validating the results.
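A PRBS excitation signal of the kind the study calls for is conventionally generated with a linear feedback shift register. The sketch below produces the standard PRBS7 sequence (polynomial x^7 + x^6 + 1, period 127) and maps it to two actuator levels; the valve-position numbers in the usage are illustrative assumptions.

```python
def prbs7(length, seed=0x7F):
    """PRBS7 from a 7-bit LFSR with taps at stages 7 and 6
    (x^7 + x^6 + 1): a maximal-length binary sequence with
    period 2**7 - 1 = 127 and 64 ones per period."""
    state = seed & 0x7F
    bits = []
    for _ in range(length):
        newbit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | newbit) & 0x7F
        bits.append(newbit)
    return bits

def excitation(bits, low=0.0, high=1.0):
    """Map the bit stream to two actuator levels, e.g. two positions
    of the pneumatically operated control valve (values assumed)."""
    return [high if b else low for b in bits]
```

The switching interval (how many sample periods each bit is held) is chosen from the process time constants, which is exactly why the abstract stresses a PRBS tailored to the specific process.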
Abstract: This paper describes various stages of the design and prototyping of a modular robot for use in various industrial applications. The major goal of the current research has been to design and build different robotic joints, at low cost, capable of being assembled together in any given order to achieve various robot configurations. Five different types of joints were designed and manufactured, with extensive research carried out on the design of each joint in order to achieve optimal strength, size, modularity, and price. This paper presents the various stages of research and development undertaken to engineer these joints, including material selection, manufacturing, and strength analysis. The outcome of this research is a new generation of modular industrial robots with a wider range of applications and greater efficiency.
Abstract: This paper presents a computational methodology
based on matrix operations for a computer based solution to the
problem of performance analysis of software reliability models
(SRMs). A set of seven comparison criteria has been formulated to
rank various non-homogeneous Poisson process software reliability
models proposed during the past 30 years to estimate software
reliability measures such as the number of remaining faults, software
failure rate, and software reliability. Selection of optimal SRM for
use in a particular case has been an area of interest for researchers in
the field of software reliability. Tools and techniques for software
reliability model selection found in the literature cannot be used with
high level of confidence as they use a limited number of model
selection criteria. A real data set from a medium-sized software
project, taken from published papers, has been used to demonstrate
the matrix method. The result of this study is a ranking of SRMs
based on the Permanent value of the criteria matrix formed for each
model from the comparison criteria. The software reliability model
with the highest Permanent value is ranked first, and so on.
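The permanent of a matrix, unlike the determinant, sums all diagonal products without sign changes, and the naive expansion costs O(n·n!). A standard way to compute it is Ryser's inclusion-exclusion formula, sketched below together with a ranking step; the model names and toy criteria matrices are illustrative assumptions, not the paper's data.

```python
from itertools import combinations

def permanent(m):
    """Permanent of a square matrix via Ryser's inclusion-exclusion
    formula: sum over non-empty column subsets S of
    (-1)^(n-|S|) * prod_i sum_{j in S} m[i][j]."""
    n = len(m)
    total = 0.0
    for r in range(1, n + 1):
        sign = (-1) ** (n - r)
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in m:
                prod *= sum(row[c] for c in cols)
            total += sign * prod
    return total

def rank_models(criteria_matrices):
    """Rank models by the permanent of their criteria matrix,
    highest Permanent value first."""
    return sorted(criteria_matrices,
                  key=lambda name: -permanent(criteria_matrices[name]))
```

For a 2x2 matrix [[a, b], [c, d]] the permanent is ad + bc, which the formula reproduces; larger criteria matrices are handled the same way.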
Abstract: In recent years multimedia traffic, and in particular
VoIP services, has been growing dramatically. We present a new
algorithm to control resource utilization and to optimize voice codec
selection during SIP call setup, based on the traffic conditions
estimated on the network path.
The most suitable methodologies and tools for real-time evaluation
of the available bandwidth on a network path have been integrated
with our proposed algorithm, which selects the best codec for a VoIP
call as a function of the instantaneous available bandwidth on the
path. The algorithm does not require any explicit
feedback from the network, and this makes it easily deployable over
the Internet. We have also performed intensive tests on real network
scenarios with a software prototype, verifying the algorithm
efficiency with different network topologies and traffic patterns
between two SIP PBXs.
The promising results obtained during the experimental validation
of the algorithm are now the basis for the extension towards a larger
set of multimedia services and the integration of our methodology
with existing PBX appliances.
Abstract: The evolution of technology and construction techniques has enabled the upgrading of transport networks. In particular, the high-speed rail networks allow convoys to peak at above 300 km/h. These structures, however, often significantly impact the surrounding environment. Among the effects of greater importance are the ones provoked by the soundwave connected to train transit. The wave propagation affects the quality of life in areas surrounding the tracks, often for several hundred metres. There are substantial damages to properties (buildings and land), in terms of market depreciation. The present study, integrating expertise in acoustics, computering and evaluation fields, outlines a useful model to select project paths so as to minimize the noise impact and reduce the causes of possible litigation. It also facilitates the rational selection of initiatives to contain the environmental damage to the already existing railway tracks. The research is developed with reference to the Italian regulatory framework (usually more stringent than European and international standards) and refers to a case study concerning the high speed network in Italy.
Abstract: Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. Four feature selection methods are evaluated: Random Selection, Information Gain (IG), Support Vector Machine (called SVM_FS) and Genetic Algorithm with SVM (GA_FS). We show that the best results were obtained with the SVM_FS and GA_FS methods for a relatively small dimension of the feature vector, compared with the IG method, which requires longer vectors for quite similar classification accuracies. We also present a novel method to better correlate the SVM kernel parameters (polynomial or Gaussian kernel).
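The Information Gain criterion evaluated above scores a term by how much knowing its presence reduces class-label entropy. A minimal sketch for binary term-presence features (the helper names are ours; real systems compute this over term counts per class):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(present, labels):
    """IG of a binary term-presence feature: H(class) minus the
    presence-weighted entropy of the class within each branch."""
    gain = entropy(labels)
    for value in (True, False):
        branch = [l for p, l in zip(present, labels) if p == value]
        if branch:
            gain -= len(branch) / len(labels) * entropy(branch)
    return gain
```

Terms are then ranked by IG and the top-k kept; the abstract's point is that SVM_FS and GA_FS reach similar accuracy with a smaller k than this ranking does.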
Abstract: The purpose of this research is to develop a security model for voice eavesdropping protection over digital networks. The proposed model provides an encryption scheme and a personal secret key exchange between communicating parties, a so-called voice data transformation system, resulting in a truly private conversation. The operation of this system comprises two main steps. The first is the personal secret key exchange, for use in the data encryption process during conversation. The key owner can freely make his or her choice in key selection, so it is recommended that a different key be exchanged for each conversational party, and that the key for each case be recorded in the memory provided in the client device. The next step is to set and record another personal encryption option: taking either all frames or just partial frames, the so-called figure of 1:M. Using different personal secret keys and different settings of 1:M for different parties, without the intervention of the service operator, poses quite a big problem for any eavesdropper who attempts to discover the key used during the conversation, especially within a short period of time. Thus, the scheme is quite safe and effective in protecting against voice eavesdropping. The results of the implementation indicate that the system performs its function accurately as designed. In this regard, the proposed system is suitable for effective use in voice eavesdropping protection over digital networks, without any requirement to change presently existing network systems such as mobile phone networks and VoIP.
Abstract: This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), depends on keyword features to discover association rules among the keywords labeling the documents. In this work, the EART system ignores the order in which the words occur, focusing instead on the words and their statistical distributions in documents. The main contributions of the technique are that it integrates XML technology with an information retrieval scheme (TF-IDF) for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation, and that it uses a data mining technique for association rule discovery. It consists of three phases: a text preprocessing phase (transformation, filtration, stemming and indexing of the documents), an association rule mining (ARM) phase (applying our designed algorithm for Generating Association Rules based on a Weighting scheme, GARW) and a visualization phase (visualization of results). Experiments were carried out on web-page news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system was compared with a system that uses the Apriori algorithm, in terms of execution time and the quality of the extracted association rules.
Abstract: Multiprocessor task scheduling is an NP-hard problem, and the genetic algorithm (GA) has proven to be an excellent technique for finding an optimal solution. In the past, several GA-based methods have been considered for the solution of this problem, but all of them consider a single criterion. In the present work, minimization of a bi-criteria multiprocessor task scheduling objective, the weighted sum of makespan and total completion time, is considered. The efficiency and effectiveness of a genetic algorithm can be achieved by optimizing its different parameters, such as the crossover and mutation operators, crossover probability, selection function, etc. The effects of the GA parameters on minimization of the bi-criteria fitness function, and the subsequent setting of the parameters, have been accomplished by the central composite design (CCD) approach of response surface methodology (RSM) from Design of Experiments. The experiments were performed with different levels of the GA parameters, and analysis of variance was performed to identify the significant parameters for minimizing makespan and total completion time simultaneously.
Abstract: In this paper, a cloud resource broker using goal-based
requests in medical applications is proposed. To handle the recent
huge production of digital images and data in medical informatics
applications, the cloud resource broker can be used by medical
practitioners to properly discover and select the correct
information and applications. This paper summarizes several
reviewed articles relating medical informatics applications to
current broker technology, and presents research on applying
goal-based requests in a cloud resource broker to optimize the use
of resources in a cloud environment. The objective of proposing a
new kind of resource broker is to enhance current resource
scheduling, discovery, and selection procedures. We believe that
it could help to maximize resource allocation in medical
informatics applications.
Abstract: Detection of incipient abnormal events is important to
improve safety and reliability of machine operations and reduce losses
caused by failures. Improper set-up or misalignment of parts often
leads to severe problems in many machines. The construction of prediction
models for predicting faulty conditions is quite essential in making
decisions on when to perform machine maintenance. This paper
presents a multivariate calibration monitoring approach based on the
statistical analysis of machine measurement data. The calibration
model is used to predict two faulty conditions from historical reference
data. This approach utilizes genetic algorithms (GA) based variable
selection, and we evaluate the predictive performance of several
prediction methods using real data. The results show that the
calibration model based on supervised probabilistic principal
component analysis (SPPCA) yielded the best performance in this work.
By adopting a proper variable selection scheme in calibration models,
the prediction performance can be improved by excluding
non-informative variables from their model building steps.
Abstract: The zero inflated models are usually used in modeling
count data with excess zeros where the existence of the excess zeros
could be structural zeros or zeros which occur by chance. These types
of data are commonly found in various disciplines such as finance,
insurance, biomedicine, econometrics, ecology, and the health
sciences, including sexual health and dental epidemiology. The most popular
zero inflated models used by many researchers are zero inflated
Poisson and zero inflated negative binomial models. In addition, zero
inflated generalized Poisson and zero inflated double Poisson models
are also discussed in some of the literature. Recently, the zero
inflated inverse trinomial and zero inflated strict arcsine
models have been advocated and shown to serve as alternative models
for overdispersed count data caused by excessive zeros and
unobserved heterogeneity. The purpose of this paper is to review
some related literature and provide a variety of examples from
different disciplines in the application of zero inflated models.
Different model selection methods used in model comparison are
discussed.
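The zero-inflated Poisson (ZIP) model reviewed above mixes a point mass at zero with a Poisson component: with probability pi the count is a structural zero, otherwise it follows Poisson(lam). A minimal sketch of the probability mass function and the log-likelihood used when comparing candidate models (e.g. via AIC); function names are ours:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf:
    P(K = 0) = pi + (1 - pi) * exp(-lam)
    P(K = k) = (1 - pi) * exp(-lam) * lam**k / k!   for k >= 1."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

def zip_loglik(counts, lam, pi):
    """Log-likelihood of observed counts under ZIP(lam, pi); the
    quantity maximized when fitting and compared across candidate
    zero-inflated models."""
    return sum(math.log(zip_pmf(k, lam, pi)) for k in counts)
```

The excess-zero effect is visible directly: with lam = 2 and pi = 0.3, P(K = 0) is about 0.39 rather than the plain Poisson value exp(-2) ≈ 0.14.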