Abstract: As the information age matures, major social
infrastructures such as communication, finance, military and energy,
have become ever more dependent on information communication
systems. Since these infrastructures are connected to the Internet,
electronic intrusions such as hacking and viruses have become a new
security threat. In particular, disturbance or neutralization of a major
social infrastructure can result in extensive material damage and social
disorder. To address this issue, many nations around the world are
researching and developing various techniques and information
security policies as a government-wide effort to protect their
infrastructures from newly emerging threats. This paper proposes an
evaluation method for information security levels of CIIP (Critical
Information Infrastructure Protection), which can enhance the security
level of critical information infrastructure by checking the current
security status and establishing security measures accordingly to protect
infrastructures effectively.
Abstract: A new approach to promote the generalization ability
of neural networks is presented. It is based on ideas from
fuzzy theory. The approach is implemented by shrinking or
magnifying the input vectors, thereby reducing the difference between
the training set and the testing set. It is called the "shrinking-magnifying
approach" (SMA). At the same time, a new algorithm, the α-algorithm, is
presented to find an appropriate shrinking-magnifying factor
(SMF) α and obtain better generalization ability of neural networks.
A number of simulation experiments were conducted to study the effect of
SMA and the α-algorithm. The experimental results are discussed in detail,
and the working principle of SMA is analyzed theoretically. The results of
the experiments and analyses show that the new approach is not only
simple and easy to apply, but also effective for many neural networks
and many classification problems. In our experiments, the improvement
in the generalization ability of neural networks has reached as much
as 90%.
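The core of the approach above can be illustrated with a minimal sketch. All function names and the toy classifier below are our own assumptions; the grid search stands in for the paper's α-algorithm, whose exact form is not given in the abstract.

```python
import numpy as np

# Hedged sketch of the shrinking-magnifying approach (SMA): scale each
# input vector by a factor alpha before feeding it to a trained model,
# and search for the alpha that maximizes validation accuracy. The grid
# search is an illustrative stand-in for the paper's alpha-algorithm.

def apply_smf(X, alpha):
    """Shrink (alpha < 1) or magnify (alpha > 1) input vectors."""
    return alpha * np.asarray(X, dtype=float)

def search_alpha(predict, X_val, y_val, candidates):
    """Pick the SMF alpha with the best validation accuracy."""
    best_alpha, best_acc = 1.0, -1.0
    for a in candidates:
        acc = np.mean(predict(apply_smf(X_val, a)) == y_val)
        if acc > best_acc:
            best_alpha, best_acc = a, acc
    return best_alpha, best_acc

# Toy demo: a fixed-threshold "classifier" that matches the validation
# data only after the inputs are shrunk by 0.5.
predict = lambda X: (X[:, 0] > 1.0).astype(int)
X_val = np.array([[1.5], [3.0], [4.0], [0.5]])
y_val = np.array([0, 1, 1, 0])
alpha, acc = search_alpha(predict, X_val, y_val, [0.5, 1.0, 2.0])
```

Shrinking by α = 0.5 moves the borderline sample back across the fixed decision boundary, which is the effect the abstract attributes to SMA.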
Abstract: This paper proposes a bi-objective model for the
facility location problem under a congestion system. The idea of the
model is motivated by applications such as locating servers for bank
automated teller machines (ATMs), communication networks, and so
on. This model specifically addresses situations in which
fixed service facilities are congested by stochastic demand within a
queueing framework. We formulate the model from two perspectives
simultaneously: (i) customers and (ii) service provider. The
objectives of the model are to minimize (i) the total expected
travelling and waiting time and (ii) the average facility idle-time.
This model represents a mixed-integer nonlinear programming
problem which belongs to the class of NP-hard problems. In addition,
to solve the model, two metaheuristic algorithms, the non-dominated
sorting genetic algorithm (NSGA-II) and the non-dominated
ranking genetic algorithm (NRGA), are proposed. Furthermore, to
evaluate the performance of the two algorithms, several numerical
examples are generated and analyzed with several metrics to determine
which algorithm works better.
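A minimal sketch of the two objectives can clarify the formulation. This is not the paper's exact model; it assumes each open facility behaves as an M/M/1 queue, and all names below are our own.

```python
# Illustrative sketch (not the paper's exact formulation) of the two
# objectives described above, modeling each open facility as an M/M/1
# queue: (i) total expected travel plus waiting time over all customers
# and (ii) the average facility idle time.

def objectives(demand, travel, assign, mu):
    """demand[i]: arrival rate at node i; travel[i][j]: travel time from
    node i to facility j; assign[i]: facility serving node i; mu[j]:
    service rate of facility j."""
    lam = {}
    for i, j in enumerate(assign):                 # aggregate arrivals
        lam[j] = lam.get(j, 0.0) + demand[i]
    total_time = 0.0
    for i, j in enumerate(assign):
        wq = lam[j] / (mu[j] * (mu[j] - lam[j]))   # M/M/1 mean wait in queue
        total_time += demand[i] * (travel[i][j] + wq)
    idle = [1.0 - lam[j] / mu[j] for j in lam]     # P(server idle) = 1 - rho
    return total_time, sum(idle) / len(idle)

demand = [1.0, 1.0]
travel = [[0.5, 2.0], [2.0, 0.5]]
assign = [0, 1]                                    # each node uses its nearest facility
mu = [3.0, 3.0]
total_time, avg_idle = objectives(demand, travel, assign, mu)
```

The two returned values pull in opposite directions (opening more facilities cuts waiting time but raises idle time), which is what makes the bi-objective treatment necessary.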
Abstract: To obtain a unified measure of varieties of resources and unified processing of their disposal, this paper puts forth three closely related new basic models: the resources assembled node, the disposition integrated node, and the intelligent organizing node. Three closely related quantities of integrative analytical mechanics are defined, including the disposal intensity, the disposal-weighted intensity, and the charge of resources; on this basis, the resources assembled space, the disposition integrated space, and the intelligent organizing space are put forth. The system of fundamental equations and the model of complete factor synergetics are preliminarily developed for the general situation, forming the analytical base of complete factor synergetics. As the essential variables constituting this system of equations, twenty variables are set with relation to the essential dynamical effect, the external synergetic action, and the internal synergetic action of the system.
Abstract: Software maintenance is an extremely important activity in the software development life cycle. It involves a great deal of human effort, cost, and time. Software maintenance may be subdivided into different activities such as fault prediction, fault detection, fault prevention, and fault correction. This topic has gained substantial attention due to sophisticated and complex applications, commercial hardware, clustered architectures, and artificial intelligence. In this paper, we survey the work done in the field of software maintenance. Software fault prediction has been studied in the context of fault-prone modules, self-healing systems, developer information, maintenance models, etc. Many issues, such as modeling and weighting the impact of different kinds of faults in various types of software systems, still need to be explored in the field of fault severity.
Abstract: Time series models have been used to make predictions of academic enrollments, weather, road accident casualties, stock prices, etc. Based on the concepts of quantile regression models, we have developed a simple time-variant, quantile-based fuzzy time series forecasting method. The proposed method bases the forecast on a prediction of the future trend of the data. In place of the actual quantiles of the data at each point, we convert the statistical concept into a fuzzy concept by using fuzzy quantiles obtained from an ensemble of fuzzy membership functions. We give a fuzzy metric to use the trend forecast and calculate the future value. The proposed model is applied to TAIFEX forecasting. It is shown that the proposed method works best compared to other models with respect to model complexity and forecasting accuracy.
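The trend-based forecasting idea above can be sketched in crisp (non-fuzzy) form. This is our own simplification, not the paper's method: it uses an ordinary quantile of recent first differences where the paper uses fuzzy quantiles from a membership-function ensemble.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's exact method): forecast
# the next value of a series from a quantile of its recent first
# differences, a crisp stand-in for the quantile-based trend forecast
# described above (the fuzzy-quantile ensemble is omitted).

def quantile_trend_forecast(series, window=5, q=0.5):
    diffs = np.diff(series[-(window + 1):])   # recent one-step changes
    trend = np.quantile(diffs, q)             # quantile of the trend
    return series[-1] + trend                 # project one step ahead

series = [100, 102, 101, 104, 106, 107]
forecast = quantile_trend_forecast(series)
```

Choosing q above or below 0.5 makes the forecast optimistic or conservative about the trend, which is the knob the fuzzy-quantile construction generalizes.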
Abstract: To reduce accidents in industry, WSN (Wireless Sensor
Network) sensor data is used. WSN sensor data is persistent and
continuous; therefore, we design and implement a buffer management system
that preserves this persistence and continuity while avoiding data conflicts
during delivery. To develop the modules, we use multiple buffers and design
buffer management modules that transfer sensor data through context-aware methods.
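The multi-buffer handoff described above can be sketched as follows. The class and method names are our own illustrations, not the paper's design.

```python
from collections import deque

# Hypothetical sketch of a multi-buffer handoff: the sensor side fills
# one buffer while the delivery side drains sealed buffers, so
# continuously arriving sensor data never conflicts with an in-progress
# delivery. Names are illustrative, not from the paper.

class MultiBuffer:
    def __init__(self, seal_at=3):
        self.seal_at = seal_at
        self.active = []          # buffer currently filled by sensors
        self.ready = deque()      # sealed buffers awaiting delivery

    def write(self, sample):
        self.active.append(sample)
        if len(self.active) >= self.seal_at:   # seal the full buffer
            self.ready.append(self.active)
            self.active = []                   # start a fresh buffer

    def deliver(self):
        """Hand over the oldest sealed buffer, or None if none is ready."""
        return self.ready.popleft() if self.ready else None

buf = MultiBuffer()
for s in range(5):
    buf.write(s)
batch = buf.deliver()
```

Because writers only touch `active` and readers only touch sealed buffers, the two sides never operate on the same buffer at once.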
Abstract: Music Information Retrieval (MIR) and modern data mining techniques are applied to identify style markers in MIDI music for stylometric analysis and author attribution. Over 100 attributes are extracted from a library of 2830 songs and then mined using supervised learning data mining techniques. Two attributes are identified that provide high information gain. These attributes are then used as style markers to predict authorship. Using these style markers, the authors are able to correctly distinguish songs written by the Beatles from those that were not, with a precision and accuracy of over 98 percent. The identification of these style markers, as well as the architecture of this research, provides a foundation for future research in musical stylometry.
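The information-gain criterion named above can be computed directly. The toy attribute and labels below are invented for the demo; the real study ranks over 100 extracted MIDI attributes this way.

```python
import math

# Illustrative sketch (not the paper's code): score a candidate style
# marker by information gain, the criterion the abstract names for
# selecting attributes that separate one author's songs from the rest.

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def info_gain(values, labels):
    """Information gain of a discrete attribute over class labels."""
    gain = entropy(labels)
    for v in set(values):
        subset = [l for x, l in zip(values, labels) if x == v]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

attr = [1, 1, 0, 0]                               # perfectly separating marker
author = ["beatles", "beatles", "other", "other"]
g = info_gain(attr, author)
```

A marker that splits the corpus perfectly by author yields the maximum gain of one bit here, which is the behavior the study exploits when keeping only the two highest-gain attributes.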
Abstract: The conventional GA combined with a local search
algorithm, such as 2-OPT, forms a hybrid genetic algorithm (HGA)
for the traveling salesman problem (TSP). However, geometric
properties, which are problem-specific knowledge, can be used to
improve the search process of the HGA. Some tour segments (edges)
of a TSP are fine, while some may be too long to appear in a short tour.
This knowledge can constrain GAs to work with fine tour
segments while considering long tour segments less often.
Consequently, a new algorithm is proposed, called intelligent-OPT
hybrid genetic algorithm (IOHGA), to improve the GA and the 2-OPT
algorithm in order to reduce the search time for the optimal solution.
Based on the geometric properties, all the tour segments are assigned
2-level priorities to distinguish between good and bad genes. A
simulation study was conducted to evaluate the performance of the
IOHGA. The experimental results indicate that in general the IOHGA
obtains near-optimal solutions in less time and with better accuracy
than the hybrid genetic algorithm with simulated annealing
(HGA(SA)).
Abstract: Text mining is the application of knowledge discovery
techniques to unstructured text; it is also termed knowledge discovery
in text (KDT) or text data mining. The decision tree
approach is most useful for classification problems. With this
technique, a tree is constructed to model the classification process.
There are two basic steps in the technique: building the tree and
applying the tree to the database. This paper describes a proposed
C5.0 classifier that applies rulesets, cross-validation, and boosting
to the original C5.0 in order to reduce the error ratio.
The feasibility and the benefits of the proposed approach are
demonstrated by means of a medical data set, hypothyroid. It is
shown that the performance of a classifier on the training cases from
which it was constructed gives a poor estimate of its accuracy; by
sampling or by using a separate test file, the classifier is instead
evaluated on cases that were not used to build it. If
the cases in hypothyroid.data and hypothyroid.test were
shuffled and divided into a new 2772-case training set and a 1000-case
test set, C5.0 might construct a different classifier with a lower
or higher error rate on the test cases. An important feature of See5 is
its ability to generate classifiers called rulesets; the ruleset obtained
here has an error rate of 0.5% on the test cases. The standard errors of
the means provide an estimate of the variability of results. One way to
get a more reliable estimate of predictive accuracy is f-fold
cross-validation: the error rate of a classifier produced from all the
cases is estimated as the ratio of the total number of errors on the
hold-out cases to the total number of cases. The Boost option with x
trials instructs See5 to construct up to x classifiers in this manner.
Trials over numerous datasets, large and small, show that on average
10-classifier boosting reduces the error rate for test cases by about
25%.
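The f-fold cross-validation estimate described above can be sketched directly. The tiny 1-nearest-neighbor "classifier" is only a stand-in for C5.0, and the toy data are invented for the demo.

```python
# Illustrative sketch of f-fold cross-validation: the data are split
# into f folds, each fold serves once as the hold-out set, and the
# error rate is the total number of errors over all hold-out cases
# divided by the total number of cases. 1-NN stands in for C5.0.

def one_nn_predict(train, x):
    """Label of the training case nearest to x."""
    return min(train, key=lambda t: abs(t[0] - x))[1]

def cross_val_error(data, f):
    errors = 0
    for k in range(f):
        test = data[k::f]                     # every f-th case held out
        train = [d for i, d in enumerate(data) if i % f != k]
        errors += sum(one_nn_predict(train, x) != y for x, y in test)
    return errors / len(data)

data = [(0.0, "n"), (0.1, "n"), (0.2, "n"),
        (5.0, "p"), (5.1, "p"), (5.2, "p")]
err = cross_val_error(data, 3)
```

Because every case is held out exactly once, the estimate uses all the data while never testing a classifier on a case it was trained on.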
Abstract: In the recent past, there has been an increasing interest
in applying evolutionary methods to Knowledge Discovery in
Databases (KDD) and a number of successful applications of Genetic
Algorithms (GA) and Genetic Programming (GP) to KDD have been
demonstrated. The most predominant representation of the
discovered knowledge is the standard Production Rules (PRs) in the
form If P Then D. The PRs, however, are unable to handle
exceptions and do not exhibit variable precision. Censored
Production Rules (CPRs), an extension of PRs proposed by
Michalski & Winston, exhibit variable precision and support an
efficient mechanism for handling exceptions. A CPR is an
augmented production rule of the form:
If P Then D Unless C, where C (Censor) is an exception to the rule.
Such rules are employed in situations in which the conditional
statement 'If P Then D' holds frequently and the assertion C holds
rarely. By using a rule of this type, we are free to ignore the exception
conditions when the resources needed to establish their presence are
tight or there is simply no information available as to whether they
hold or not. Thus, the 'If P Then D' part of the CPR expresses
important information, while the Unless C part acts only as a switch
and changes the polarity of D to ~D.
This paper presents a classification algorithm based on an evolutionary
approach that discovers comprehensible rules with exceptions in the
form of CPRs.
The proposed approach has flexible chromosome encoding, where
each chromosome corresponds to a CPR. Appropriate genetic
operators are suggested and a fitness function is proposed that
incorporates the basic constraints on CPRs. Experimental results are
presented to demonstrate the performance of the proposed algorithm.
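The CPR semantics described above ("If P Then D Unless C") can be sketched as follows. The function and its return convention are our own illustration, not the paper's encoding.

```python
# Hedged sketch of a Censored Production Rule "If P Then D Unless C":
# when the censor C is known to hold, the conclusion flips to ~D; when
# resources are tight, C may be left unchecked and D asserted with
# reduced certainty. The structure is illustrative only.

def apply_cpr(p_holds, c_holds=None):
    """Returns (conclusion, certain); c_holds=None means C was not checked."""
    if not p_holds:
        return (None, True)              # the rule does not fire
    if c_holds is None:
        return ("D", False)              # censor ignored: tentative D
    return ("~D" if c_holds else "D", True)

assumed = apply_cpr(True)                # resources tight, C unchecked
flipped = apply_cpr(True, c_holds=True)  # censor holds: polarity flips
```

This captures the variable precision the abstract describes: skipping the censor trades certainty for speed, while checking it acts as a switch on the polarity of D.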
Abstract: The paper presents an optimization study based on
genetic algorithms (GAs) for a radio-frequency applicator used in
heating dielectric band products. The weakly coupled electro-thermal
problem is analyzed using 2D-FEM. The design variables in the
optimization process are the voltage of a supplementary "guard"
electrode and six geometric parameters of the applicator. Two
objective functions are used: temperature uniformity and total active
power absorbed by the dielectric. Both mono-objective and multi-objective
formulations are implemented in the GA optimization.
Abstract: Shot boundary detection is a fundamental step in the organization of large video data. In this paper, we propose a new method for detecting and classifying gradual shot transitions in video, using the advantages of fractal analysis and an AIS-based classifier. The proposed features are the "vertical intercept" and "fractal dimension" of each video frame, which are computed using Fourier transform coefficients. We also use a classifier based on the Clonal Selection Algorithm. We have implemented our solution and assessed it on the TRECVID 2006 benchmark dataset.
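One common way to derive a fractal dimension and an intercept from Fourier coefficients is the spectral-slope method sketched below. This is a generic stand-in under our own assumptions; the paper's exact feature definitions may differ.

```python
import numpy as np

# Hedged sketch: for a fractal signal the power spectrum follows
# P(f) ~ f**(-beta), and for a 1-D profile the fractal dimension is
# D = (5 - beta) / 2. Fitting a line in log-log coordinates yields the
# slope (hence D) and the intercept, the two kinds of features the
# abstract names. The synthetic spectrum below is for the demo only.

def spectral_features(freqs, power):
    """Return (fractal dimension, log-log vertical intercept)."""
    slope, intercept = np.polyfit(np.log(freqs), np.log(power), 1)
    beta = -slope
    return (5.0 - beta) / 2.0, intercept

# Synthetic spectrum with beta = 2 gives D = 1.5 exactly.
freqs = np.arange(1.0, 65.0)
power = freqs ** -2.0
d, intercept = spectral_features(freqs, power)
```

Smoother frames give steeper spectra (lower D), while abrupt transitions raise high-frequency power, which is why such features can flag shot boundaries.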
Abstract: Many factors affect the success of Machine Learning
(ML) on a given task. The representation and quality of the instance
data is first and foremost. If there is much irrelevant and redundant
information present or noisy and unreliable data, then knowledge
discovery during the training phase is more difficult. It is well known
that data preparation and filtering steps take a considerable amount of
processing time in ML problems. Data pre-processing includes data
cleaning, normalization, transformation, feature extraction and
selection, etc. The product of data pre-processing is the final training
set. It would be nice if a single sequence of data pre-processing
algorithms had the best performance for every data set, but this is not
the case. Thus, we present the most well-known algorithms for each
step of data pre-processing so that one can achieve the best
performance for their data set.
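Two of the pre-processing steps named above, cleaning and normalization, can be sketched minimally. The helper names are our own, not from a specific library.

```python
# Simple sketch of two common pre-processing steps: data cleaning
# (here, dropping records with missing values) and min-max
# normalization (rescaling a numeric column to [0, 1]).

def drop_missing(rows):
    """Keep only records with no missing (None) fields."""
    return [r for r in rows if None not in r]

def min_max_normalize(column):
    """Rescale values linearly so min -> 0.0 and max -> 1.0."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

rows = [(2.0, 10.0), (None, 3.0), (4.0, 20.0)]
clean = drop_missing(rows)
norm = min_max_normalize([r[0] for r in clean])
```

Whether to drop, impute, or flag missing values (and which scaling to use) depends on the data set, which is exactly the point the survey makes: no single pre-processing sequence wins everywhere.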
Abstract: Recently, research on human wayfinding has focused
mainly on mental representations rather than on the processes of
wayfinding. The objective of this paper is to demonstrate the
rationality behind applying multi-agent simulation paradigm to the
modeling of rescuer team wayfinding in order to develop
computational theory of perceptual wayfinding in crisis situations
using image schemata and affordances, which explains how people
find a specific destination in an unfamiliar building such as a
hospital. The hypothesis of this paper is that successful navigation is
possible if the agents are able to make the correct decision through
well-defined cues in critical cases, so the design of the building
signage is evaluated through multi-agent-based simulation. In
addition, a special case of wayfinding in a building, finding one's
way through three hospitals, is used to demonstrate the model.
Thereby, the total rescue time for a rescue operation during a
building fire is computed. This paper discusses the computed rescue
time for various signage localizations and provides experimental
results for the optimization of building signage design. The most
appropriate signage design is the one that results in the shortest
total rescue time in various situations.
Abstract: This paper addresses a stock-cutting problem with rotation of items and without the guillotine cutting constraint. In order to solve the large-scale problem effectively and efficiently, we propose a simple but fast heuristic algorithm. It is shown that this heuristic outperforms the latest published algorithms for large-scale problem instances.
Abstract: In this paper, a design methodology to implement a low-power and high-speed 2nd-order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. A proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement addition/subtraction operations in the design of the recursive section of the IIR filter to reduce the propagation delay. Furthermore, high-level algorithms designed to optimize the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-adds architecture, which is analyzed using the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area, and throughput. It is observed from the experimental results that the proposed 6T based design method can find better IIR filter designs in terms of power and delay than those obtained by using efficient general multipliers.
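The shift-and-add substitution described above can be illustrated in software. This is a generic sketch of the technique, not the paper's hardware realization: the constant is decomposed into its set bits, so each partial product is a shift followed by an addition.

```python
# Illustrative sketch of replacing a constant multiplication with
# shifts and adds: for each set bit of the constant, add the input
# shifted by that bit position, avoiding a general multiplier.

def shift_add_multiply(x, const):
    """Multiply integer x by a non-negative integer constant using
    only shifts and additions."""
    acc, bit = 0, 0
    while const:
        if const & 1:
            acc += x << bit      # add x shifted to this bit position
        const >>= 1
        bit += 1
    return acc

y = shift_add_multiply(13, 10)   # 13*10 = (13 << 1) + (13 << 3)
```

In hardware, each `<<` is free wiring and each `+` maps to an adder (a CSA in the paper's design), which is why minimizing the number of addition terms reduces both area and delay.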
Abstract: Rule Discovery is an important technique for mining
knowledge from large databases. Use of objective measures for
discovering interesting rules leads to another data mining problem,
although of reduced complexity. Data mining researchers have
studied subjective measures of interestingness to reduce the volume
of discovered rules to ultimately improve the overall efficiency of
the KDD process.
In this paper we study novelty of the discovered rules as a
subjective measure of interestingness. We propose a hybrid approach
based on both objective and subjective measures to quantify novelty
of the discovered rules in terms of their deviations from the known
rules (knowledge). We analyze the types of deviation that can arise
between two rules and categorize the discovered rules according to
a user-specified threshold. We implement the proposed framework
and experiment with some public datasets. The experimental results
are promising.
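The deviation-based novelty idea above can be sketched minimally. The deviation measure below is our own simplification, not the paper's: it counts differing antecedent conditions plus a changed consequent.

```python
# Illustrative sketch (our simplification, not the paper's measure):
# quantify the novelty of a discovered rule as its minimum deviation
# from the known rules, where deviation counts differing antecedent
# conditions and a changed consequent; rules whose minimum deviation
# exceeds a user-specified threshold are flagged as novel.

def deviation(rule, known):
    ante_diff = len(rule["if"] ^ known["if"])     # symmetric difference
    cons_diff = 0 if rule["then"] == known["then"] else 1
    return ante_diff + cons_diff

def is_novel(rule, knowledge, threshold):
    return min(deviation(rule, k) for k in knowledge) > threshold

knowledge = [{"if": {"age>30", "income=high"}, "then": "buys"}]
rule = {"if": {"age>30", "student=yes"}, "then": "buys"}
novel = is_novel(rule, knowledge, threshold=1)
```

Here the discovered rule shares one condition with the known rule but swaps another, so its deviation of 2 exceeds the threshold and it is reported as novel.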
Abstract: The fundamental motivation of this paper is how gaze estimation can be utilized effectively in games. In games, precise point-of-gaze (POG) estimation is not always important in aiming at targets; the ability to move a cursor to an aimed target accurately is also significant. Moreover, from a game production point of view, expressing head movement and gaze movement separately is sometimes advantageous for conveying a sense of presence; a representative example is panning a background image according to head movement while moving a cursor according to gaze movement. The widely used technique of POG estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, calculating the center of the pupil requires relatively complicated image processing, so computational delay is a concern, since minimizing input delay is one of the most significant requirements in games. In this paper, a method to estimate head movement using only the corneal reflections of two infrared light sources in different locations is proposed. Furthermore, a method to control a cursor using gaze movement as well as head movement is proposed. The proposed methods are evaluated with game-like applications; as a result, performance similar to conventional methods is confirmed, and aiming control with lower computational cost and stress-free, intuitive operation is obtained.
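A minimal sketch of tracking head movement from two corneal reflections alone is given below. The coordinate handling and the use of the glint midpoint are our own assumptions about how such a method could work, not the paper's algorithm.

```python
# Hedged sketch of the idea above: with two infrared glints (corneal
# reflections of two light sources), the midpoint of the glint pair can
# track head movement without locating the pupil, avoiding the heavier
# pupil-center image processing. All coordinates are camera pixels and
# the scheme is illustrative only.

def glint_midpoint(glints):
    (x0, y0), (x1, y1) = glints
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def head_movement(glints_prev, glints_now):
    """Pixel shift of the glint midpoint between two frames."""
    (px, py), (nx, ny) = glint_midpoint(glints_prev), glint_midpoint(glints_now)
    return (nx - px, ny - py)

prev = [(100.0, 120.0), (140.0, 120.0)]   # glint centers, frame t
now = [(106.0, 118.0), (146.0, 118.0)]    # glint centers, frame t+1
dx, dy = head_movement(prev, now)
```

Since only two bright points must be detected per frame, the per-frame cost is far below pupil-center extraction, which matches the low-latency requirement the abstract emphasizes.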
Abstract: This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output layer unit. Compared with more recent neural networks (pulse neural networks, quantum neuro-computation, etc.), the multilayer network is widely used due to its simple structure. When the learning objects are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics of an output layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies learning data in a phased manner according to their learnability prior to learning in the multilayer neural network, and we demonstrate the validity of this method. Specifically, this paper verifies by computer experiments that both the learning accuracy and the learning time of the BP method, used as the learning rule of the multi-stage learning method, are improved. In learning, oscillatory phenomena of the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which these oscillatory phenomena occur. Furthermore, by observing behaviors during learning, the authors discuss the reasons why the errors of some data remain large even after learning.
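The phased classification of training data described above can be sketched as follows. The scoring rule and the even stage split are our own illustrative assumptions; the paper derives learnability from the network's behavior rather than from precomputed error scores.

```python
# Hedged sketch of the multi-stage idea: rank training examples by how
# hard they are to learn (here, a given per-example error score), then
# split them into stages so training can start with the easiest subset
# and add harder data in later stages. Illustrative only.

def stage_split(data, errors, n_stages):
    """Sort examples by per-example error and split into n_stages groups,
    easiest stage first."""
    ranked = [d for _, d in sorted(zip(errors, data))]
    size = -(-len(ranked) // n_stages)          # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

data = ["a", "b", "c", "d"]
errors = [0.9, 0.1, 0.4, 0.2]
stages = stage_split(data, errors, 2)
```

Training on the easy stage first lets the network settle before the hard, error-prone examples (which drive the oscillations the abstract discusses) are introduced.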