Abstract: The security of computer networks plays a strategic
role in modern computer systems. Intrusion Detection Systems (IDS)
act as the 'second line of defense' placed inside a protected
network, looking for known or potential threats in network traffic
and/or audit data recorded by hosts. We developed an intrusion
detection system that uses a LAMSTAR neural network to learn
patterns of normal and intrusive activities and to classify
observed system activities, and compared the performance of the
LAMSTAR IDS with that of other classification techniques on the 5
classes of the KDDCup99 data. The LAMSTAR IDS gives better
performance than the other classification techniques (Binary Tree
classifier, RBF classifier, Gaussian Mixture classifier), at the
cost of higher computational complexity and longer training and
testing times. We further reduced the computational complexity of
the LAMSTAR IDS by reducing the dimension of the data with
principal component analysis, which in turn reduces the training
and testing times while keeping almost the same performance.
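To illustrate the dimensionality-reduction step described above, the following sketch projects data onto its top principal components via SVD. The 41-feature width (the number of features in KDDCup99 records), the choice of 10 components, the random stand-in data, and the name `pca_reduce` are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu                                  # center the data
    # SVD of the centered data; rows of Vt are principal directions,
    # ordered by decreasing singular value (i.e. explained variance)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # reduced representation

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 41))   # stand-in for 41-feature KDD records
Z = pca_reduce(X, 10)            # 41 dimensions reduced to 10
```

Training the classifier on `Z` instead of `X` is what shrinks the training and testing times.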
Abstract: As many scientific applications require large-scale data processing, the importance of parallel I/O has been increasingly recognized. Collective I/O is one of the key features of parallel I/O and enables application programmers to easily handle large data volumes. In this paper we measure and analyze the performance of original collective I/O and of the subgroup method, a way of using MPI collective I/O effectively. The experimental results show that the subgroup method performs well for small data sizes.
Abstract: The recognition of handwritten numerals is an
important area of research because of its applications in post
offices, banks, and other organizations. This paper presents
automatic recognition of handwritten Kannada numerals based on
structural features. Five different types of features, namely,
profile-based 10-segment string, water reservoir, vertical and
horizontal strokes, end points, and average boundary length from
the minimal bounding box, are used in the recognition of numerals.
The effect of each feature and of their combinations on numeral
classification is analyzed using nearest neighbor classifiers. It
is common to combine multiple categories of features into a single
feature vector for classification. Instead, separate classifiers
can classify based on each visual feature individually, and the
final classification can be obtained by combining the separate
base classification results. One popular approach is to combine
the classifier results into a feature vector and leave the
decision to a next-level classifier. This method is extended to
extract richer information, a possibility distribution, from the
base classifiers for resolving conflicts among the classification
results. Here, we use the fuzzy k-Nearest Neighbor (fuzzy k-NN)
classifier as the base classifier for the individual feature sets,
and its results together form the feature vector for the final
k-Nearest Neighbor (k-NN) classifier. Testing is done, using
different features individually and in combination, on a database
containing 1600 samples of different numerals, and the results are
compared with those of existing methods.
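A minimal sketch of the two-level idea, with toy two-class data standing in for the numeral classes: each base fuzzy k-NN emits a class-membership vector for its feature group, and the memberships are concatenated for a final decision. The feature groups `g1`/`g2` and the simple additive fusion at the end are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def fuzzy_knn_memberships(X_train, y_train, x, k=3, n_classes=2):
    """Fuzzy k-NN: return a class-membership vector for query x,
    weighting the k nearest neighbours by inverse distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    m = np.zeros(n_classes)
    for i, wi in zip(idx, w):
        m[y_train[i]] += wi
    return m / m.sum()

# toy data: two feature groups per sample (hypothetical stand-ins
# for e.g. profile-string features and stroke features)
rng = np.random.default_rng(1)
y = np.array([0] * 20 + [1] * 20)
g1 = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
g2 = np.vstack([rng.normal(0, 1, (20, 6)), rng.normal(3, 1, (20, 6))])

def combined_vector(x1, x2):
    """Concatenate base-classifier memberships into the vector
    handed to the final-level classifier."""
    return np.concatenate([fuzzy_knn_memberships(g1, y, x1),
                           fuzzy_knn_memberships(g2, y, x2)])

# decide on a query drawn near the class-1 cluster; here a simple
# sum of the two membership blocks stands in for the final k-NN
q = combined_vector(np.full(4, 3.0), np.full(6, 3.0))
pred = int(np.argmax(q[:2] + q[2:]))
```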
Abstract: VRML (the Virtual Reality Modeling Language) is a standard language used to build 3D virtual models. The rapid development of internet technology and computing has promoted the commercialization of virtual reality, and VRML is therefore expected to be the most effective framework for building virtual reality. This article studies how to build virtual scenes based on virtual reality technology and Java programs, and introduces how to carry out real-time data exchange between a VRML file and a Java program by applying the Script node, thereby strengthening VRML's interactivity.
Abstract: Digital watermarking provides secure multimedia data communication in addition to its copyright protection role. The spread spectrum (SS) modulation principle is widely used in digital watermarking to make multimedia signals robust against various signal-processing operations. Several SS watermarking algorithms have been proposed for multimedia signals, but few works have discussed the issues responsible for secure data communication and for improving robustness. The present paper critically analyzes several such factors, namely the properties of the spreading codes, a signal decomposition suitable for data embedding, the security provided by the key, the successive bit cancellation method applied at the decoder, which has a strong impact on detection reliability, and the secure communication of a significant signal under the camouflage of insignificant signals. Based on this analysis, a robust SS watermarking scheme for secure data communication is proposed in the wavelet domain, and improvements in secure communication and robustness are reported through experimental results. The reported results also show improved visual and statistical invisibility of the hidden data.
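The core spread-spectrum embed/detect cycle that such schemes build on can be sketched as follows; the Gaussian host signal (a stand-in for wavelet coefficients), the gain `alpha`, and the sign-of-correlation detector are illustrative assumptions, not the paper's full scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

def embed(host, bit, pn, alpha=0.1):
    """Spread-spectrum embedding: add or subtract a scaled
    pseudo-noise (PN) sequence depending on the watermark bit."""
    return host + alpha * (1 if bit else -1) * pn

def detect(signal, pn):
    """Correlation detector: the sign of the correlation with the
    (key-dependent) PN sequence recovers the hidden bit."""
    return int(np.dot(signal, pn) > 0)

host = rng.normal(0, 1, 4096)        # stand-in for wavelet coefficients
pn = rng.choice([-1.0, 1.0], 4096)   # PN spreading code: acts as the key
wm = embed(host, 1, pn)
recovered = detect(wm, pn)
```

Because the correlation of the host with the PN code is small relative to `alpha * len(pn)`, the bit survives moderate distortion; without the key (the PN code) the embedded bit is statistically hidden.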
Abstract: Knowledge is indispensable, but voluminous knowledge becomes a bottleneck for efficient processing. A great challenge for data mining is the generation of a large number of potential rules as a result of the mining process; in fact, the result is sometimes comparable in size to the original data. Traditional data mining pruning measures such as support do not sufficiently reduce the huge rule space. Moreover, many practical applications are characterized by continual change of data and knowledge, making the knowledge more voluminous with each change. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. Michalski & Winston proposed Censored Production Rules (CPRs) as an extension of production rules that exhibits variable precision and supports an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. Using a rule of this type, we are free to ignore the exception conditions when the resources needed to establish their presence are tight, or when there is simply no information available as to whether they hold. Thus the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. In this paper, a scheme based on the Dempster-Shafer Theory (DST) interpretation of a CPR is suggested for discovering CPRs from the discovered flat PRs. The discovery of CPRs from flat rules results in a considerable reduction of the already discovered rules. The proposed scheme incrementally incorporates new knowledge and also reduces the size of the knowledge base considerably with each episode. Examples are given to demonstrate the behaviour of the proposed scheme.
The suggested cumulative learning scheme would be useful in mining data streams.
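The 'If P Then D Unless C' semantics can be sketched directly; the bird/penguin example and the three-valued handling of an unknown censor are illustrative assumptions:

```python
def eval_cpr(p, c=None):
    """Evaluate a censored production rule 'If P Then D Unless C'.
    c=None models the case where the censor's truth is unknown or
    too costly to establish; the rule then defaults to concluding D."""
    if not p:
        return None       # premise P fails: the rule does not fire
    if c:
        return False      # censor C holds: polarity of D flips to ~D
    return True           # C false or unknown: conclude D

# 'If bird Then flies Unless penguin'
r_default = eval_cpr(True)           # censor unknown: conclude flies
r_censored = eval_cpr(True, True)    # penguin: conclude does not fly
r_nofire = eval_cpr(False)           # not a bird: rule does not fire
```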
Abstract: Most fuzzy clustering algorithms have shortcomings,
e.g. they are not able to detect clusters with non-convex
shapes, the number of clusters must be known a priori, and they
suffer from numerical problems such as sensitivity to
initialization. This paper studies the synergistic combination of
a hierarchical, graph-theoretic, minimal-spanning-tree-based
clustering algorithm with the partitional Gath-Geva fuzzy
clustering algorithm. The aim of this hybridization is to increase
the robustness and consistency of the clustering results and to
decrease the number of heuristically defined parameters of these
algorithms, thereby decreasing the influence of the user on the
clustering results. For the analysis of the resulting fuzzy
clusters, a new tool based on a fuzzy similarity measure is
presented. The calculated similarities of the clusters can be used
for hierarchical clustering of the resulting fuzzy clusters,
information that is useful for cluster merging and for the
visualization of the clustering results. As the examples used to
illustrate the operation of the new algorithm show, the proposed
algorithm can detect clusters of arbitrary shape and does not
suffer from the numerical problems of the classical Gath-Geva
fuzzy clustering algorithm.
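The graph-theoretic half of the hybridization, MST-based clustering, can be sketched as follows: build the minimal spanning tree with Kruskal's algorithm and cut its longest edges to split the data into components. The toy 2D points and the fixed cluster count are illustrative assumptions, and the Gath-Geva refinement step is omitted:

```python
import numpy as np

def mst_clusters(points, n_clusters):
    """Cut the (n_clusters - 1) longest edges of the minimal
    spanning tree; the connected components are the clusters."""
    n = len(points)
    edges = sorted((float(np.linalg.norm(points[i] - points[j])), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(x):                     # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for w, i, j in edges:            # Kruskal: grow the MST
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))
    # keep all but the (n_clusters - 1) longest MST edges
    keep = sorted(mst)[:-(n_clusters - 1)] if n_clusters > 1 else mst
    parent = list(range(n))
    for w, i, j in keep:
        parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    label = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [label[r] for r in roots]

pts = np.array([[0, 0], [0, 1], [1, 0],
                [10, 10], [10, 11], [11, 10]], float)
labels = mst_clusters(pts, 2)
```

Because only edge lengths matter, the components can have arbitrary shape, which is exactly the property the hybrid algorithm inherits.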
Abstract: In this paper, a robust-statistics-based filter to remove salt and pepper noise from digital images is presented. The algorithm first detects the corrupted pixels, since impulse noise affects only certain pixels in the image while the remaining pixels are uncorrupted. The corrupted pixels are then replaced by an estimated value computed by the proposed robust-statistics-based filter. The proposed method performs well in removing low- to medium-density impulse noise with detail preservation up to a noise density of 70%, compared to the standard median filter, weighted median filter, recursive weighted median filter, progressive switching median filter, signal-dependent rank-ordered mean filter, adaptive median filter, and the recently proposed decision-based algorithm. The visual and quantitative results show that the proposed algorithm outperforms these filters in restoring the original image, with superior preservation of edges and better suppression of impulse noise.
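The detect-then-estimate structure of such filters can be sketched as below; note that for the replacement step this sketch uses a plain median of the uncorrupted 3x3 neighbours, not the paper's robust-statistics estimator:

```python
import numpy as np

def remove_salt_pepper(img):
    """Detect pixels at the extreme values 0/255 (impulse noise)
    and replace each with the median of the uncorrupted pixels in
    its 3x3 neighbourhood; clean pixels are left untouched."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] in (0, 255):            # corrupted pixel
                ys = slice(max(y - 1, 0), y + 2)
                xs = slice(max(x - 1, 0), x + 2)
                win = img[ys, xs]
                good = win[(win != 0) & (win != 255)]
                if good.size:                    # uncorrupted neighbours
                    out[y, x] = np.median(good)
    return out.astype(np.uint8)

clean = np.full((8, 8), 100, np.uint8)
noisy = clean.copy()
noisy[2, 3] = 255                                # salt impulse
noisy[5, 5] = 0                                  # pepper impulse
restored = remove_salt_pepper(noisy)
```

Skipping uncorrupted pixels entirely is what preserves detail: only the detected impulses are re-estimated.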
Abstract: Finding synchronizing sequences for finite automata is an important problem in many practical applications (part orienters in industry, the reset problem in biocomputing theory, network issues, etc.). The problem of finding the shortest synchronizing sequence is NP-hard, so polynomial algorithms can probably work only as heuristics. In this paper we propose two versions of polynomial algorithms which work better than the well-known Eppstein Greedy and Cycle algorithms.
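For reference, the greedy baseline can be sketched as follows: while more than one state survives, a BFS in the pair automaton finds a shortest word merging some pair of current states, which is appended to the reset word. This is a heuristic in the spirit of Eppstein's Greedy algorithm (it assumes the automaton is synchronizing), and the Cerny automaton used to exercise it is an illustrative choice:

```python
from collections import deque

def apply_word(delta, s, word):
    """Run the automaton from state s on the given input word."""
    for a in word:
        s = delta[s][a]
    return s

def greedy_synchronize(states, alphabet, delta):
    """Greedy heuristic: repeatedly merge some pair of surviving
    states via a shortest merging word found by BFS over state pairs."""
    current = set(states)
    word = []
    while len(current) > 1:
        starts = [frozenset(pr) for pr in
                  ((p, q) for p in current for q in current if p < q)]
        frontier = deque((s, []) for s in starts)
        seen = set(starts)
        merge = None
        while frontier:
            pair, w = frontier.popleft()
            if len(pair) == 1:          # the two states have merged
                merge = w
                break
            p, q = tuple(pair)
            for a in alphabet:
                nxt = frozenset({delta[p][a], delta[q][a]})
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, w + [a]))
        word += merge                   # assumes a merging word exists
        current = {apply_word(delta, s, merge) for s in current}
    return word

# Cerny automaton C4: 'a' cycles the states, 'b' maps 3 to 0
delta = {0: {'a': 1, 'b': 0}, 1: {'a': 2, 'b': 1},
         2: {'a': 3, 'b': 2}, 3: {'a': 0, 'b': 0}}
word = greedy_synchronize([0, 1, 2, 3], ['a', 'b'], delta)
final = {apply_word(delta, s, word) for s in [0, 1, 2, 3]}
```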
Abstract: A realistic 3D face model is desired in various
applications such as face recognition, games, avatars, and
animations. Construction of a 3D face model consists of 1)
building a face shape model and 2) rendering the face shape model.
Thus, building a realistic 3D face shape model is an essential
step toward a realistic 3D face model. Recently, the 3D morphable
model has been successfully introduced to deal with the variety of
human face shapes. The 3D dense correspondence problem must first
be resolved in order to construct a realistic 3D dense morphable
face shape model. Several approaches to the 3D dense
correspondence problem in 3D face modeling have been proposed
previously; among them, optical-flow-based algorithms and TPS
(Thin Plate Spline) based algorithms are representative.
Optical-flow-based algorithms require texture information of
faces, which is sensitive to variations in illumination. In the
TPS-based algorithms proposed so far, the TPS process is performed
on a 2D projection of the 3D face data in cylindrical coordinates,
not directly on the 3D face data, so errors due to distortion of
the data during the 2D TPS process may be inevitable.
In this paper, we propose a new 3D dense correspondence algorithm
for 3D dense morphable face shape modeling. The proposed algorithm
does not need texture information and applies TPS directly to the
3D face data. The construction procedures show that the proposed
algorithm constructs a realistic 3D morphable face model reliably
and quickly.
Abstract: Tablet computers and Multifunctional Hardcopy Devices (MHDs) are common devices in daily life, yet few scientific studies of them have been published. Tablet computers are straightforward to use, whereas MHDs are comparatively difficult to use. Thus, to assist users of different levels, we propose combining these two devices to achieve a straightforward intelligent user interface (UI) and versatile What You See Is What You Get (WYSIWYG) document management and production. Our approach is to design an intelligent, user-dependent UI for an MHD using a tablet computer. Furthermore, we propose hardware interconnection and versatile intelligent software between these two devices. In this study, we first provide a state-of-the-art survey on MHDs, tablet computers, and their interconnections. Second, we provide a comparative UI survey of two state-of-the-art MHDs together with a proposal of a novel UI for MHDs, evaluated using Jakob Nielsen's Ten Usability Heuristics.
Abstract: Heuristics-based search methodologies normally search a
space of possible solutions for a "satisfactory" solution based on
"hints" estimated from problem-specific knowledge. Research
communities use different types of such methodologies.
Unfortunately, most of the time these hints are immature and can
hinder these methodologies through premature convergence. This is
due to a decrease of diversity in the search space that leads to a
total implosion and ultimately to fitness stagnation of the
population. In this paper, a novel Decision Maturity Framework
(DMF) is introduced as a solution to this problem. The framework
improves the decision on the direction of the search by maturing
the hints sufficiently before using them. Ideas from this
framework are injected into the particle swarm optimization
methodology. Results were obtained under both static and dynamic
environments. They show that decision maturity prevents premature
convergence to a high degree.
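For context, a plain particle swarm optimizer, in which the hints (personal and global bests) are used immediately rather than matured first, can be sketched as below; the sphere objective and all parameter values are illustrative assumptions, and the DMF layer itself is not implemented:

```python
import numpy as np

rng = np.random.default_rng(7)

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO: each particle is pulled toward its personal
    best and the global best; these 'hints' steer the search and,
    if immature, can cause premature convergence."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

best, best_val = pso(lambda p: float(np.sum(p ** 2)))
```

A maturity framework would sit between the hint updates and the velocity update, delaying the use of `pbest`/`g` until they are deemed reliable.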
Abstract: Due to short product life cycles, an increasing variety
of products, and short cycles of leap innovations, manufacturing
companies have to increase the flexibility of their factory
structures. Flexibility of factory structures is based on defined
factory planning processes in which product, process, and resource
data of various partial domains have to be considered. Factory
planning processes can thus be characterized as iterative,
interdisciplinary, and participative processes [1]. To support the
interdisciplinary and participative character of planning
processes, a federative factory data management (FFDM) is
described as a holistic solution. FFDM is already implemented in
the form of a prototype, and the interim results of its
development are shown in this paper. The principle is to extract
product, process, and resource data from documents of the various
partial domains and provide them as web services on a server. The
described data can then be requested by the factory planner using
an FFDM browser.
Abstract: Developing a university course schedule is difficult,
owing to the limitations of the resources available. The process
is made even harder by different faculties or departments having
different ways of stating their schedule requirements. The person
in charge of taking the schedule requirements and turning them
into a proper course schedule is not only burdened with the task
of allocating appropriate classes and times to lecturers and
students, but also needs to understand the schedule requirements.
Therefore, a scheduling support system named SATA was developed to
assist ICRESS in the course scheduling process. SATA has been in
use for several semesters and the results have been encouraging.
It won a bronze medal in the 2008 Invention, Innovation and Design
competition (IID-08) and was submitted for patenting in October
2008.
Abstract: Component-based software engineering provides an
opportunity for better quality and increased productivity in
software development through reusable software components [10].
One of the most critical aspects of the quality of a software
system is its performance. The systematic application of software
performance engineering techniques throughout the development
process can help to identify design alternatives that preserve
desirable qualities such as extensibility and reusability while
meeting performance objectives [1]. In the present scenario,
software engineering methodologies strongly focus on the
functionality of the system, while applying a "fix-it-later"
approach to software performance aspects [3]. As a result, lengthy
fine-tuning, expensive extra hardware, or even redesigns are
necessary for the system to meet its performance requirements. In
this paper, we propose a design-based, implementation-independent
performance prediction approach to reduce the overhead incurred in
the later phases of developing a performance-guaranteed software
product, with the help of the Unified Modeling Language (UML).
Abstract: Sign language is used by deaf and hard-of-hearing people for communication. Automatic sign language recognition is a challenging research area, since sign language is often the only way of communication for deaf people. Sign language comprises different visual actions made by the signer using the hands, the face, and the torso to convey meaning. To exploit these different aspects of signs, we combine different groups of features extracted from image frames recorded directly by a stationary camera. We combine the features at two levels, employing three techniques. At the feature level, an early feature combination can be performed by concatenating and weighting different feature groups, or by concatenating feature groups over time and using LDA to choose the most discriminant elements. At the model level, a late fusion of differently trained models can be carried out by a log-linear model combination. In this paper, we investigate these three combination techniques in an automatic sign language recognition system and show that the recognition rate can be significantly improved.
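The model-level log-linear combination can be sketched as a weighted sum of per-class log-probabilities followed by renormalization; the two hypothetical models ("hand" and "face" features), the 3-class vocabulary, and the weights are illustrative assumptions:

```python
import numpy as np

def log_linear_combine(model_scores, weights):
    """Late fusion: combine per-class log-probabilities from
    several models as a weighted sum, then renormalise so the
    result is again a probability distribution."""
    s = sum(w * np.asarray(m) for m, w in zip(model_scores, weights))
    p = np.exp(s - s.max())          # subtract max for stability
    return p / p.sum()

# hypothetical per-class log-probabilities from a hand-feature
# model and a face-feature model over a 3-sign vocabulary
hand = np.log([0.6, 0.3, 0.1])
face = np.log([0.5, 0.2, 0.3])
post = log_linear_combine([hand, face], [0.7, 0.3])
pred = int(post.argmax())
```

The weights control how much each differently trained model contributes; in practice they would be tuned on held-out data.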
Abstract: Social, mobile, and information-aggregation capabilities
inside the business environment need to converge to reach the next
step of collaboration and to enhance interaction and innovation.
The following article is based on the "Assemblage" concept, seen
as a framework for formalizing new user interfaces and
applications. The area of research is the energy social business
environment, especially energy smart grids, which are considered
the functional and technical foundations of the coming revolution
of the energy sector. The assemblages are modeled by means of
mereology and simplicial complexes. The objective is to offer
end-users new attention-focusing and decision-making tools.
Abstract: In this paper, we present a new learning algorithm for
anomaly-based network intrusion detection using an improved
self-adaptive naïve Bayesian tree (NBTree), which induces a hybrid
of a decision tree and a naïve Bayesian classifier. The proposed
approach balances detection rates across different attack types
while keeping false positives at an acceptable level. On complex
and dynamic large intrusion detection datasets, the detection
accuracy of the naïve Bayesian classifier does not scale up as
well as that of a decision tree, and it has been shown in other
problem domains that the naïve Bayesian tree improves
classification rates on large datasets. In a naïve Bayesian tree,
internal nodes split as in a regular decision tree, but the leaves
contain naïve Bayesian classifiers. Experimental results on the
KDD99 benchmark network intrusion detection dataset demonstrate
that this new approach scales up the detection rates for different
attack types and reduces false positives in network intrusion
detection.
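The hybrid structure, regular decision splits with naïve Bayesian leaves, can be sketched with a single split (a stump); the Gaussian class model, the toy data, and the hand-picked split are illustrative simplifications of the tree that NBTree actually induces:

```python
import numpy as np

rng = np.random.default_rng(3)

class GaussianNB:
    """Naïve Bayes leaf: one Gaussian per class per feature."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.sd = {c: X[y == c].std(axis=0) + 1e-6 for c in self.classes}
        self.prior = {c: float((y == c).mean()) for c in self.classes}
        return self
    def predict(self, x):
        def loglik(c):
            z = (x - self.mu[c]) / self.sd[c]
            return (np.log(self.prior[c]) - 0.5 * np.sum(z ** 2)
                    - np.sum(np.log(self.sd[c])))
        return int(max(self.classes, key=loglik))

def nbtree_stump(X, y, feat, thr):
    """One decision split (as in a regular tree) with naïve
    Bayesian classifiers fitted at the two leaves."""
    left = X[:, feat] <= thr
    leaves = (GaussianNB().fit(X[left], y[left]),
              GaussianNB().fit(X[~left], y[~left]))
    return lambda x: leaves[0 if x[feat] <= thr else 1].predict(x)

# toy XOR-like data: the class boundary differs on each side of
# feature 0, so neither a pure stump nor a single NB suffices
X = np.vstack([rng.normal([-1, -2], 0.5, (30, 2)),   # left, class 0
               rng.normal([-1,  2], 0.5, (30, 2)),   # left, class 1
               rng.normal([ 1,  2], 0.5, (30, 2)),   # right, class 0
               rng.normal([ 1, -2], 0.5, (30, 2))])  # right, class 1
y = np.array([0] * 30 + [1] * 30 + [0] * 30 + [1] * 30)
clf = nbtree_stump(X, y, feat=0, thr=0.0)
```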
Abstract: Memory forensics is important in digital investigation.
The analysis is based on the data stored in physical memory, which
involves both memory management and processing time. However,
current forensic tools do not consider efficiency in terms of
storage management and processing time. This paper shows the high
redundancy of data found in physical memory, which causes
inefficiency in processing time and memory management. The
experiment was done using the Borland C compiler on Windows XP
with 512 MB of physical memory.
Abstract: The main idea behind in-network aggregation is that,
rather than sending individual data items from sensors to sinks,
multiple data items are aggregated as they are forwarded by the
sensor network. Existing sensor network data aggregation
techniques assume that the nodes are preprogrammed and send data
to a central sink for offline querying and analysis. This approach
has two major drawbacks. First, the system behavior is
preprogrammed and cannot be modified on the fly. Second, the
increased energy wastage due to communication overhead decreases
the overall system lifetime. Thus, energy conservation is of prime
consideration in sensor network protocols in order to maximize the
network's operational lifetime. In this paper, we give an
energy-efficient approach to query processing by implementing new
optimization techniques applied to in-network aggregation. We
first discuss earlier approaches to sensor data management and
highlight their disadvantages. We then present our approach,
"Energy Efficient Indexed Aggregation" (EEIA), and evaluate it
through several simulations to prove its efficiency, competence,
and effectiveness.
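The basic idea, forwarding one partial aggregate per link instead of raw readings, can be sketched over a small routing tree; the tree shape, the readings, and the sum/count aggregate (answering an AVG query) are illustrative assumptions, and the paper's indexing optimizations are not shown:

```python
def aggregate(tree, readings, node):
    """In-network aggregation: each node combines its own reading
    with the partial aggregates of its children, so a single
    (sum, count) summary travels up each link instead of every
    raw reading."""
    total, count = readings[node], 1
    for child in tree.get(node, []):
        t, c = aggregate(tree, readings, child)
        total += t
        count += c
    return total, count

# sink node 0 with two relay nodes, each serving two leaf sensors
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
readings = {0: 20, 1: 22, 2: 21, 3: 25, 4: 19, 5: 23, 6: 24}
total, count = aggregate(tree, readings, 0)
avg = total / count                  # the answer to an AVG query
```

Each link carries a constant-size summary regardless of subtree size, which is the source of the energy savings the abstract describes.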