Abstract: Network security attacks are violations of the
information security policy that have received much attention from
the computational intelligence community in recent decades. Data
mining has become a very useful technique for detecting network
intrusions by extracting useful knowledge from large volumes of
network data or logs. The naïve Bayesian classifier is one of the most
popular data mining algorithms for classification, providing an
optimal way to predict the class of an unknown example. It has been
shown that a single set of probabilities derived from the data is not
sufficient to achieve a good classification rate. In this paper, we
propose a new learning algorithm for mining network logs to detect
network intrusions with a naïve Bayesian classifier: it first clusters
the network logs into several groups based on their similarity, and
then calculates the prior and conditional probabilities for each group
of logs. To classify a new log, the algorithm determines which cluster
the log belongs to and then uses that cluster's probability set to
classify it. We tested the performance of the proposed algorithm on
the KDD99 benchmark network intrusion detection dataset, and the
experimental results show that it improves detection rates and
reduces false positives for different types of network intrusions.
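The per-cluster classification scheme described in this abstract can be sketched briefly; the toy log records, the protocol-based grouping that stands in for similarity clustering, and all function names below are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Prior and Laplace-smoothed conditional probabilities for one cluster."""
    prior = Counter(labels)
    cond = defaultdict(Counter)          # (feature index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, y)][v] += 1
    return prior, cond, len(labels)

def predict(row, model):
    """Pick the class maximizing prior * product of conditionals."""
    prior, cond, n = model
    best, best_p = None, -1.0
    for y, c in prior.items():
        p = c / n
        for i, v in enumerate(row):
            counts = cond[(i, y)]
            p *= (counts[v] + 1) / (c + len(counts) + 1)   # Laplace smoothing
        if p > best_p:
            best, best_p = y, p
    return best

def train_per_cluster(rows, labels, cluster_of):
    """One probability set per cluster, as the abstract describes."""
    groups = defaultdict(lambda: ([], []))
    for row, y in zip(rows, labels):
        r, l = groups[cluster_of(row)]
        r.append(row); l.append(y)
    return {k: train_nb(r, l) for k, (r, l) in groups.items()}

rows = [("tcp", "http"), ("tcp", "ftp"), ("udp", "dns"), ("udp", "dns")]
labels = ["normal", "attack", "normal", "normal"]
cluster_of = lambda row: row[0]          # assumed: cluster logs by protocol
models = train_per_cluster(rows, labels, cluster_of)
print(predict(("udp", "dns"), models[cluster_of(("udp", "dns"))]))
```

A new log is first routed to its cluster and then scored only against that cluster's probability set, which is the core of the proposed algorithm.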
Abstract: Classifying biomedical literature is a difficult and
challenging task, especially when a large number of biomedical
articles should be organized into a hierarchical structure. In this paper,
we present an approach for classifying a collection of biomedical text
abstracts downloaded from Medline database with the help of
ontology alignment. To accomplish our goal, we construct two types
of hierarchies, the OHSUMED disease hierarchy and the Medline
abstract disease hierarchies from the OHSUMED dataset and the
Medline abstracts, respectively. Then, we enrich the OHSUMED
disease hierarchy before applying an ontology alignment process to
find probable concepts or categories. Subsequently, we compute
the cosine similarity between the vectors of the probable concepts (in
the "enriched" OHSUMED disease hierarchy) and the vectors in the
Medline abstract disease hierarchies. Finally, we assign a category to
each new Medline abstract based on the similarity score. The results obtained
from the experiments show the performance of our proposed approach
for hierarchical classification is slightly better than the performance of
the multi-class flat classification.
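The category assignment step hinges on a plain cosine similarity between concept and abstract term vectors; here is a minimal sketch, in which the two four-dimensional weight vectors are invented solely for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical term weights for a candidate concept node in the enriched
# OHSUMED disease hierarchy and for a new Medline abstract.
concept_vec  = [0.1, 0.0, 0.9, 0.0]
abstract_vec = [0.2, 0.0, 0.7, 0.1]
print(round(cosine(concept_vec, abstract_vec), 3))
```

The abstract is then assigned to whichever probable concept yields the highest score.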
Abstract: This paper describes the process of recognizing and classifying brain images as normal or abnormal based on PSO-SVM. Image classification is becoming increasingly important for the medical diagnosis process. In the medical domain, classifying a patient's abnormality plays a great role in helping doctors diagnose the patient according to the severity of the disease. For DICOM images, optimal recognition and early detection of diseases are very difficult. Our work focuses on the recognition and classification of DICOM images based on a collective approach of digital image processing. For optimal recognition and classification, Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and a Support Vector Machine (SVM) are used. The collective PSO-SVM approach gives high approximation capability and much faster convergence.
Abstract: This paper deals with efficient computation of
probability coefficients which offers computational simplicity as
compared to spectral coefficients. It eliminates the need for
inner-product evaluations when determining the signature of a
combinational circuit realizing a given Boolean function. Methods for
computing the probability coefficients using a transform matrix, a fast
transform method, and BDDs are given. Theoretical relations for the
achievable computational advantage, in terms of the additions required
to compute all 2^n probability coefficients of an n-variable function,
have been developed. It is shown that for n ≥ 5, only 50% of the
additions are needed to compute all probability coefficients as compared to the spectral
coefficients. The fault detection techniques based on the spectral
signature can also be used with the probability signature to offer a
computational advantage.
Abstract: In an era of knowledge explosion, the volume of data
increases rapidly day by day. Since data storage is a limited resource,
reducing the data space required during processing becomes a
challenging issue. Data compression provides a good solution that can
lower the required space. Data mining has found many useful
applications in recent years because it helps users discover interesting
knowledge in large databases. However, existing compression
algorithms are not appropriate for data mining. In [1, 2], two different
approaches were proposed to compress databases and then perform
the data mining process. However, they both lack the ability to
decompress the data to their original state and to improve data mining
performance. In this research, a new approach called Mining Merged
Transactions with a Quantification Table (M2TQT) is proposed to solve these problems.
M2TQT uses the relationship of transactions to merge related
transactions and builds a quantification table to prune the candidate
itemsets which are impossible to become frequent in order to improve
the performance of mining association rules. The experiments show
that M2TQT performs better than existing approaches.
Abstract: In this paper we present a system for classifying videos
by frequency spectra. Many videos contain activities with repeating
movements. Sports videos, home improvement videos, or videos
showing mechanical motion are some example areas. Motion of these
areas usually repeats with a certain main frequency and several side
frequencies. Transforming repeating motion to its frequency domain
via FFT reveals these frequencies. Average amplitudes of frequency
intervals can be seen as features of cyclic motion. Hence determining
these features can help to classify videos with repeating movements.
In this paper we explain how to compute frequency spectra for video
clips and how to use them for classification. Our approach utilizes a
series of image moments as a function of time, which is then
transformed into the frequency domain.
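The pipeline in this abstract (moment series, frequency transform, band-averaged features) can be illustrated with a plain DFT; the 30 fps sampling rate, the synthetic 2 Hz motion with a weaker 5 Hz side frequency, and the band edges are all assumptions made for the sketch:

```python
import math

# Hypothetical stand-in for a per-frame image moment (e.g. the centroid's
# y-coordinate) of a repeating motion: a 2 Hz main frequency plus a weaker
# 5 Hz side frequency, sampled at 30 frames per second.
fps, n = 30, 120
moments = [math.sin(2 * math.pi * 2 * k / fps)
           + 0.3 * math.sin(2 * math.pi * 5 * k / fps)
           for k in range(n)]

def dft_magnitude(x, k):
    """Magnitude of the k-th DFT bin of a real sequence x."""
    re = sum(v * math.cos(2 * math.pi * k * i / len(x)) for i, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * i / len(x)) for i, v in enumerate(x))
    return math.hypot(re, im)

spectrum = [dft_magnitude(moments, k) for k in range(n // 2 + 1)]
bin_hz = fps / n                                    # frequency resolution: 0.25 Hz
main_hz = max(range(1, len(spectrum)), key=spectrum.__getitem__) * bin_hz
print(main_hz)                                      # dominant repetition frequency

def band_mean(lo, hi):
    """Average amplitude over the frequency band [lo, hi) in Hz."""
    vals = [a for k, a in enumerate(spectrum) if lo <= k * bin_hz < hi]
    return sum(vals) / len(vals)

features = [band_mean(lo, hi) for lo, hi in [(0, 3), (3, 6), (6, 15)]]
```

The band averages in `features` would then feed an ordinary classifier.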
Abstract: Today, building automation is advancing from simple
monitoring and control tasks for lighting and heating towards more
and more complex applications that require a dynamic perception
and interpretation of different scenes occurring in a building. Current
approaches cannot handle these newly upcoming demands. In this
article, a bionically inspired approach for multimodal, dynamic scene
perception and interpretation is presented, which is based on neuroscientific
and neuro-psychological research findings about the perceptual
system of the human brain. This approach is based on data from diverse
sensory modalities being processed in a so-called neuro-symbolic
network. With its parallel structure and with its basic elements being
information processing and storing units at the same time, a very
efficient method for scene perception is provided overcoming the
problems and bottlenecks of classical dynamic scene interpretation
systems.
Abstract: One major difficulty that faces developers of
concurrent and distributed software is the analysis of concurrency-based
faults such as deadlocks. Petri nets are used extensively in the
verification of correctness of concurrent programs. ECATNets [2] are
a category of algebraic Petri nets based on a sound combination of
algebraic abstract types and high-level Petri nets. ECATNets have
'sound' and 'complete' semantics because of their integration in
rewriting logic [12] and its programming language Maude [13].
Rewriting logic is considered one of the most powerful logics for the
description, verification and programming of concurrent systems.
We proposed in [4] a method for translating Ada-95 tasking
programs to ECATNets formalism (Ada-ECATNet). In this paper,
we show that ECATNets formalism provides a more compact
translation for Ada programs compared to the other approaches based
on simple Petri nets or Colored Petri nets (CPNs). Such a translation
not only reduces the size of the program, but also the number
of program states. We also show how this compact Ada-ECATNet
may be reduced further by applying reduction rules to it. This double
reduction of Ada-ECATNet permits a considerable minimization of
the memory space and run time of corresponding Maude program.
Abstract: In this paper, we propose an effective system for digital music retrieval. The proposed system is divided into a client and a server. The client part consists of pre-processing and content-based feature extraction stages. In the pre-processing stage, we minimize the time-code gap that occurs among identical music contents. As content-based features, first-order differentiated MFCCs are used; these approximately represent the envelope of the music feature sequences. The server part includes the Music Server and the music matching stage. Features extracted from 1,000 digital music files are stored in the Music Server. In the music matching stage, we find the retrieval result through a similarity measure based on DTW. In the experiments, we used 450 queries, made by mixing different compression standards and sound qualities from 50 digital music files. Retrieval accuracy was 97% and the average retrieval time was 15 ms per query. Our experiments show that the proposed system is effective in retrieving digital music and robust in various web user environments.
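The similarity measure named in this abstract is dynamic time warping; here is a compact sketch, where the two short sequences stand in for first-order differentiated MFCC envelopes and are invented purely for illustration:

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D feature sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A stored track's envelope and a time-stretched query of the same track:
# DTW absorbs the stretching, so the query matches the stored track far
# more closely than a different envelope does.
stored = [0.0, 0.5, 1.0, 0.5, 0.0]
query  = [0.0, 0.0, 0.5, 1.0, 1.0, 0.5, 0.0]
other  = [1.0, 1.0, 1.0, 1.0]
print(dtw(stored, query) < dtw(stored, other))
```

This tolerance to local tempo and quality differences is what makes DTW suitable for matching queries made from mixed compression standards.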
Abstract: Forecasting the values of the indicators which
characterize the effectiveness of an organization's performance is of
great importance for its successful development. Such forecasting
is necessary in order to assess the current state and to foresee future
developments, so that measures to improve the organization's
activity can be undertaken in time. The article presents an
overview of the applied mathematical and statistical methods for
developing forecasts. Special attention is paid to artificial neural
networks as a forecasting tool. Their strengths and weaknesses are
analyzed and a synopsis is made of the application of artificial neural
networks in the field of forecasting of the values of different
education efficiency indicators. A method of evaluation of the
activity of universities using the Balanced Scorecard is proposed and
Key Performance Indicators for assessment of e-learning are
selected. Resulting indicators for the evaluation of efficiency of the
activity are proposed. An artificial neural network is constructed and
applied in the forecasting of the values of indicators for e-learning
efficiency on the basis of the KPI values.
Abstract: In this paper, Speed Sensorless Indirect Field Oriented Control (IFOC) of a Permanent Magnet Synchronous Machine (PMSM) is studied. The closed-loop scheme of the drive system utilizes fuzzy speed and current controllers. Due to the well-known drawbacks of the speed sensor, an algorithm is proposed in this paper to eliminate it. In fact, based on the model of the PMSM, the stator currents and rotor speed are estimated simultaneously, using an adaptive Luenberger observer for the currents and an MRAS (Model Reference Adaptive System) observer for the rotor speed. To overcome the sensitivity of this algorithm to parameter variation, an adaptive mechanism for online stator resistance tuning is proposed. The validity of the proposed method is verified by extensive simulation work.
Abstract: Prediction of fault-prone modules provides one way to
support software quality engineering. Clustering is used to determine
the intrinsic grouping in a set of unlabeled data. Among the various
clustering techniques available in the literature, the K-Means
approach is the most widely used. This paper introduces a K-Means
based clustering approach for finding the fault proneness of
Object-Oriented systems. The contribution of this paper is that it
uses metric values of the JEdit open-source software to generate
rules for categorizing software modules as faulty or non-faulty,
and these rules are then empirically validated. The results are measured in
terms of accuracy of prediction, probability of Detection and
Probability of False Alarms.
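A minimal version of the K-Means step described above; the two-dimensional (lines-of-code, complexity) metric pairs and the helper names are assumptions for the example, not the JEdit metrics used in the paper:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-Means on numeric metric vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster (keep it if empty).
        centers = [tuple(sum(col) / len(col) for col in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical (lines-of-code, complexity) pairs: modules with small metric
# values separate from the large-metric, presumably fault-prone ones.
modules = [(10, 1), (12, 2), (11, 1), (200, 25), (180, 30), (220, 28)]
centers, clusters = kmeans(modules, k=2)
print(sorted(len(c) for c in clusters))
```

Rules for labelling the resulting groups as faulty or non-faulty would then be derived from the cluster centers.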
Abstract: The paper presents the method developed to assess
rating points of objects with qualitative indexes. The novelty of the
method lies in the fact that the authors use linguistic scales that allow
the values of the indexes to be formalized with the help of fuzzy sets. As
a result it is possible to operate correctly with dissimilar indexes on
the unified basis and to get stable final results. The obtained rating
points are used in decision making based on fuzzy expert opinions.
Abstract: In this paper, a new method of controlling the position of an AC servomotor using a Field Programmable Gate Array (FPGA) is presented. The FPGA controller is used to generate the direction signal and the number of pulses required to rotate through a given angle. The pulses are sent as a square wave: the number of pulses determines the angle of rotation, and the frequency of the square wave determines the speed of rotation. The proposed control scheme has been realized using a XILINX FPGA SPARTAN XC3S400 and tested with a MUMA012PIS model Alternating Current (AC) servomotor. Experimental results show that the position of the AC servomotor can be controlled effectively. Keywords: Alternating Current (AC), Field Programmable Gate Array (FPGA), Liquid Crystal Display (LCD).
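The pulse-count/pulse-frequency relationship stated in the abstract reduces to simple arithmetic; the 0.36°-per-pulse drive resolution below is an assumed value for illustration only, not a figure from the paper:

```python
DEG_PER_PULSE = 0.36            # assumed drive resolution (degrees per pulse)

def pulses_for_angle(angle_deg):
    """Number of pulses needed to rotate through a given angle."""
    return round(angle_deg / DEG_PER_PULSE)

def rotation_speed_rpm(pulse_freq_hz):
    """Rotation speed implied by the square-wave pulse frequency."""
    return pulse_freq_hz * DEG_PER_PULSE / 360.0 * 60.0

print(pulses_for_angle(90))       # pulses for a 90-degree turn
print(rotation_speed_rpm(1000))   # rpm at a 1 kHz pulse train
```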
Abstract: An adaptive Fuzzy Inference Perceptual model has
been proposed for watermarking of digital images. The model
depends on the human visual characteristics of image sub-regions in
the frequency multi-resolution wavelet domain. In the proposed
model, a multi-variable fuzzy based architecture has been designed to
produce a perceptual membership degree for both candidate
embedding sub-regions and strength watermark embedding factor.
Benchmark images of different sizes, with watermarks of different
sizes, have been applied to the model. Several experimental
attacks, such as JPEG compression, noise and rotation, have been
applied to verify the robustness of the scheme. In addition, the
model has been compared with different watermarking schemes. The
proposed model showed its robustness to attacks and at the same time
achieved a high level of imperceptibility.
Abstract: Nowadays companies strive to survive in a
competitive global environment. To speed up product
development/modifications, it is suggested to adopt a collaborative
product development approach. However, despite the advantages of
new IT developments, many CAx systems still work separately and
locally. Collaborative design and manufacture requires a product
information model that supports related CAx product data models.
Many solutions to this problem have been proposed, the most
successful of which is adopting the STEP standard as a product data
model to develop a collaborative CAx platform. However, several
factors should be considered: the evolution of STEP's Application
Protocols (APs) over time, the huge number of STEP APs and CCs,
the high costs of implementation, the costly process of converting
older CAx software files to the STEP neutral file format, and the lack
of STEP knowledge, all of which usually slow down the adoption of
the STEP standard in collaborative data exchange, management and
integration. In this paper the requirements for a successful collaborative CAx system are
discussed. The STEP standard capability for product data integration
and its shortcomings as well as the dominant platforms for supporting
CAx collaboration management and product data integration are
reviewed. Finally, a platform named LAYMOD is proposed to fulfil
the requirements of a collaborative CAx environment and to integrate
the product data. It is a layered platform that enables global
collaboration among different CAx software
packages/developers. It also adopts the STEP modular architecture
and the XML data structures to enable collaboration between CAx
software packages as well as overcoming the STEP standard
limitations. The architecture and procedures of the LAYMOD platform
for managing collaboration and avoiding contradictions in product data
integration are introduced.
Abstract: Due to the coexistence of different Radio Access
Technologies (RATs), Next Generation Wireless Networks (NGWN)
are predicted to be heterogeneous in nature. The coexistence of
different RATs requires a need for Common Radio Resource
Management (CRRM) to support the provision of Quality of Service
(QoS) and the efficient utilization of radio resources. RAT selection
algorithms are part of the CRRM algorithms. Simply put, their role is
to verify whether an incoming call can be accommodated in a
heterogeneous wireless network, to decide which of the available
RATs is most suitable for the needs of the incoming call, and to admit
it. The goal of a RAT selection algorithm is to guarantee the QoS
requirements of all accepted calls while providing the most efficient
utilization of the available radio resources. Conventional call
admission control algorithms are designed for homogeneous wireless
networks and do not provide a solution suited to the heterogeneous
wireless networks that constitute the NGWN. Therefore, there is a
need to develop a RAT selection algorithm for heterogeneous
wireless networks. In this paper, we propose an
approach for RAT selection which includes receiving different
criteria, assessing and making decisions, then selecting the most
suitable RAT for incoming calls. A comprehensive survey of
different RAT selection algorithms for heterogeneous wireless
networks is also presented.
Abstract: Visual inputs are one of the key sources from which
humans perceive the environment and 'understand' what is
happening. Artificial systems perceive the visual inputs as digital
images. The images need to be processed and analysed. Within the
human brain, processing of visual inputs and subsequent
development of perception is one of its major functionalities. In this
paper we present part of our research project, which aims at the
development of an artificial model for visual perception (or
'understanding') based on the human perceptive and cognitive
systems. We propose a new model for perception from visual inputs
and a way of understanding or interpreting images using the model.
We demonstrate the implementation and use of the model with a real
image data set.
Abstract: Applications involving numbers are so widespread that there is much scope for study. The style of writing numbers is diverse and comes in a variety of forms, sizes and fonts. Identification of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), machine-printed or handwritten characters/numerals are recognized. There are plentiful approaches that deal with the problem of detecting numerals/characters, depending on the sort of features extracted and the different ways of extracting them. This paper proposes a recognition scheme for handwritten Hindi (Devanagari) numerals, among the most popular in the Indian subcontinent. Our work focuses on a local feature extraction technique: a method using the 16-segment display concept, with features extracted from halftoned and binary images of isolated numerals. These feature vectors are fed to a neural classifier model that has been trained to recognize Hindi numerals. The prototype system has been tested on a variety of numeral images. Experimental results show that the recognition rate for halftoned images is 98%, compared to 95% for binary images.
Abstract: We intuitively expect distributed software development
to increase the risks associated with achieving cost, schedule, and
quality goals. To compound this problem, agile software development
(ASD) holds that one of the main ingredients of its success is the
cohesive communication afforded by collocation of the development
team. The following study identified
the degree of communication richness needed to achieve comparable
software quality (reduce pre-release defects) between distributed and
collocated teams. This paper explores the relevancy of
communication richness in various development phases and its
impact on quality. Through examination of a large distributed agile
development project, this investigation seeks to understand the levels
of communication required within each ASD phase to produce
comparable quality results achieved by collocated teams. Obviously,
a multitude of factors affects the outcome of software projects.
However, within distributed agile software development teams, the
mode of communication is one of the critical components required to
achieve team cohesiveness and effectiveness. As such, this study
constructs a distributed agile communication model (DAC-M) for
potential application to similar distributed agile development efforts
using the measurement of the suitable level of communication. The
results of the study show that less rich communication methods, in
the appropriate phase, might be satisfactory to achieve equivalent
quality in distributed ASD efforts.