Abstract: Recent widespread use of information and
communication technology has greatly changed information security
risks that businesses and institutions encounter. Accordingly, in
order to ensure security and confidence in electronic trading, it has
become important for organizations to take competent information
security measures that provide international confidence that
sensitive information is secure. Against this backdrop, the approach
to information security checking has become an important issue, one
believed to be common to all countries. The purpose of this paper is
to introduce Korea's new information security checking program and to
propose comprehensive information security countermeasures suited to
domestic circumstances, covering physical equipment, security
management and technology, and the operation of security checks for
securing the services of ISPs (Internet Service Providers), IDCs
(Internet Data Centers), and e-commerce sites (shopping malls, etc.).
Abstract: The kinematics of manipulators is a central problem in the automatic control of robot manipulators. This paper presents the theoretical background for the kinematic analysis of the 5-DOF Lynx-6 educational robot arm. The kinematics problem is defined as the transformation from the Cartesian space to the joint space and vice versa. The Denavit-Hartenberg (D-H) representation is used to model the robot links and joints in this study. Both forward and inverse kinematics solutions for this educational manipulator are presented, and an effective method is suggested to reduce the number of multiple solutions in inverse kinematics. A visual software package, named MSG, is also developed for testing the motional characteristics of the Lynx-6 robot arm. The kinematics solutions of the software package were found to be identical to the robot arm's physical motional behavior.
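As an illustration only (not the MSG package or the authors' implementation), a D-H forward-kinematics chain can be sketched as below; all names and the example link parameters are our own.

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform as a 4x4 matrix
    (row-major nested lists): Rz(theta) * Tz(d) * Tx(a) * Rx(alpha)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params):
    """Chain the link transforms; dh_params is a list of
    (theta, d, a, alpha) tuples, one per joint."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for params in dh_params:
        T = mat_mul(T, dh_transform(*params))
    return T
```

For a planar two-link arm with unit link lengths, joint angles of 90 and 0 degrees place the end effector at (0, 2), which the chained transform reproduces.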
Abstract: Information systems with incomplete attribute
values and fuzzy decisions commonly exist in practical problems. On
the basis of the variable precision rough set model for incomplete
information systems and the rough set model for incomplete and fuzzy
decision information systems, the variable precision rough set model
for incomplete and fuzzy decision information systems is constructed,
which generalizes both of these models. A knowledge reduction method
and a heuristic algorithm, built on the method and theory of
precision reduction, are proposed.
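For orientation, the variable precision idea (Ziarko-style majority inclusion) can be sketched for a plain, complete information system; this is a generic illustration, not the paper's incomplete/fuzzy construction, and all names are our own.

```python
def vprs_approximations(equiv_classes, target, beta):
    """Variable-precision rough approximations of `target` (a set of
    objects) given a partition `equiv_classes` (list of sets).
    A class joins the beta-lower approximation when the fraction of
    its members inside `target` is at least beta, and the beta-upper
    approximation when that fraction exceeds 1 - beta."""
    lower, upper = set(), set()
    for cls in equiv_classes:
        overlap = len(cls & target) / len(cls)
        if overlap >= beta:
            lower |= cls
        if overlap > 1 - beta:
            upper |= cls
    return lower, upper
```

With beta = 1 this collapses to the classical rough lower/upper approximations; lowering beta tolerates a controlled fraction of misclassified objects.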
Abstract: Many natural language expressions are ambiguous and
need to draw on other sources of information to be interpreted.
Interpreting the word تعاون as a noun or a verb, for example,
depends on the presence of contextual cues. To interpret words we
need to be able to discriminate between different usages. This paper
proposes a hybrid of rule-based and machine learning methods for
tagging Arabic words. Owing to the particularity of the Arabic word,
which may be composed of a stem plus affixes and clitics, a small
number of rules dominates the performance (affixes include
inflectional markers for tense, gender and number; clitics include
some prepositions, conjunctions and others). Tagging is closely
related to the notion of word class used in syntax. The method is
based firstly on rules (that consider the post-position, the ending
of a word, and patterns), and the anomalies are then corrected by
adopting a memory-based learning (MBL) method. Memory-based learning
is an efficient method to integrate various sources of information
and to handle exceptional data in natural language processing tasks.
Secondly, the exceptional cases of the rules are checked, and more
information is made available to the learner for treating those
cases. To evaluate the proposed method, a number of experiments have
been run in order to assess the contribution of the various
information sources to learning.
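As an illustration of the memory-based step only (not the authors' tagger, rules, or features), an MBL classifier is essentially k-nearest-neighbour lookup over stored examples with a feature-overlap similarity; the example features below are invented.

```python
def mbl_tag(memory, features, k=3):
    """Toy memory-based learner: `memory` is a list of
    (feature_dict, tag) pairs; the query `features` receives the
    majority tag among its k most similar stored examples, where
    similarity counts matching feature values (overlap metric)."""
    def overlap(stored):
        return sum(stored.get(name) == value
                   for name, value in features.items())
    ranked = sorted(memory, key=lambda ex: overlap(ex[0]), reverse=True)
    top_tags = [tag for _, tag in ranked[:k]]
    return max(set(top_tags), key=top_tags.count)
```

Because the stored examples are kept verbatim, exceptional cases survive in memory instead of being averaged away, which is why MBL handles exceptional data well.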
Abstract: This paper describes the design and results of FROID,
an outbound intrusion detection system built with agent technology
and supported by an attacker-centric ontology. The prototype
features a misuse-based detection mechanism that identifies remote
attack tools in execution. Misuse signatures composed of attributes
selected through entropy analysis of outgoing traffic streams and
process runtime data are derived from execution variants of attack
programs. The core of the architecture is a mesh of self-contained
detection cells organized non-hierarchically that group agents in a
functional fashion. The experiments show performance gains when
the ontology is enabled as well as an increase in accuracy achieved
when correlation cells combine detection evidence received from
independent detection cells.
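The entropy analysis used for attribute selection can be illustrated generically (this is a standard Shannon-entropy sketch, not FROID's actual pipeline; the traffic-attribute framing is our own):

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of an
    attribute's observed values, e.g. a field sampled from outgoing
    traffic; low-entropy attributes are near-constant across a tool's
    execution variants and make stable signature components."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())
```

Ranking candidate attributes by entropy then lets a signature keep the stable (low-entropy) ones and discard the noisy ones.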
Abstract: Group key management is an important functional
building block for any secure multicast architecture.
It has therefore been extensively studied in the literature.
In this paper we present relevant group key management
protocols. Then, we compare them against some pertinent
performance criteria.
Abstract: Fractional Fourier Transform is a powerful tool,
which is a generalization of the classical Fourier Transform. This
paper provides a mathematical relation between the span in the
Fractional Fourier domain and the amplitude and phase functions of the signal,
which is further used to study the variation of quality factor with
different values of the transform order. It is seen that with the
increase in the number of transients in the signal, the deviation of
average Fractional Fourier span from the frequency bandwidth
increases. Also, with the increase in the transient nature of the signal,
the optimum value of transform order can be estimated based on the
quality factor variation, and this value is found to be very close to
that for which one can obtain the most compact representation. With
the entire mathematical analysis and experimentation, we consolidate
the fact that the Fractional Fourier Transform gives more compact
representations than the Fourier transform for a number of transform
orders.
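For reference, the fractional Fourier transform of order a is conventionally defined as follows (the standard textbook definition, quoted for context rather than taken from this paper):

```latex
X_a(u) = \int_{-\infty}^{\infty} K_\alpha(u,t)\, x(t)\, \mathrm{d}t,
\qquad \alpha = \frac{a\pi}{2}, \qquad
K_\alpha(u,t) = \sqrt{\frac{1 - j\cot\alpha}{2\pi}}\,
\exp\!\left( j\,\frac{u^2 + t^2}{2}\cot\alpha - j\,u t \csc\alpha \right)
\quad (\alpha \neq n\pi),
```

which reduces to the ordinary Fourier transform at a = 1 and to the identity at a = 0, so sweeping the transform order a interpolates continuously between time and frequency representations.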
Abstract: In this paper, a model of self-organizing spiking neural networks is introduced and applied to the mobile robot environment representation and path planning problem. A network of spike-response-model neurons with a recurrent architecture is used to create the robot's internal representation of the surrounding environment. The overall activity of the network simulates a self-organizing system with unsupervised learning. A modified A* algorithm is used to find the best path between the starting and goal points using this internal representation. This method performs well for both known and unknown environments.
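For context, plain A* on an occupancy grid can be sketched as below; this is the textbook algorithm, not the authors' modification or their spiking-network representation, and the grid encoding is our own.

```python
import heapq
from itertools import count

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle)
    with a Manhattan-distance heuristic; returns the path as a list
    of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = count()                      # tie-breaker for the heap
    open_heap = [(h(start), next(tie), start)]
    g_best = {start: 0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, _, cell = heapq.heappop(open_heap)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:               # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nb not in closed):
                ng = g_best[cell] + 1
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    parent[nb] = cell
                    heapq.heappush(open_heap, (ng + h(nb), next(tie), nb))
    return None
```

In the paper the cost and adjacency information would come from the network's internal environment representation rather than a hand-built grid.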
Abstract: This paper describes a practical approach to design
and develop a hybrid learning with acceleration feedback control
(HLC) scheme for input tracking and end-point vibration suppression
of flexible manipulator systems. Initially, a collocated
proportional-derivative (PD) control scheme using hub-angle and hub-velocity
feedback is developed for control of rigid-body motion of the system.
This is then extended to incorporate a further hybrid control scheme
of the collocated PD control and iterative learning control with
acceleration feedback using genetic algorithms (GAs) to optimize the
learning parameters. Experimental results of the response of the
manipulator with the control schemes are presented in the time and
frequency domains. The performance of the HLC is assessed in terms
of input tracking, level of vibration reduction at resonance modes and
robustness with various payloads.
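The collocated PD portion of the scheme can be sketched on a toy plant (a unit-inertia double integrator standing in for the rigid-body hub dynamics); the gains, plant, and names below are illustrative assumptions, not the authors' tuned controller.

```python
def pd_step(kp, kd, error, prev_error, dt):
    """One sample of a collocated PD law: u = Kp*e + Kd*de/dt."""
    return kp * error + kd * (error - prev_error) / dt

def simulate_hub(kp, kd, target, steps=2000, dt=0.001):
    """Toy rigid-body hub (unit inertia) driven by the PD law toward
    a target hub angle via Euler integration; returns final angle."""
    angle, velocity = 0.0, 0.0
    prev_err = target            # derivative term starts at zero
    for _ in range(steps):
        err = target - angle
        torque = pd_step(kp, kd, err, prev_err, dt)
        prev_err = err
        velocity += torque * dt
        angle += velocity * dt
    return angle
```

With kp = 100 and kd = 20 the toy loop is critically damped and settles on the target; in the paper this rigid-body loop is then wrapped by the iterative learning layer with acceleration feedback.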
Abstract: Current OCR technology does not allow accurate
recognition of small text images, such as those found
in web images. Our goal is to investigate new approaches to
recognize very low resolution text images containing antialiased
character shapes.
This paper presents a preliminary study on the variability of
such characters and the feasibility of discriminating them by
using geometrical features. In the first stage we analyze the
distribution of these features. In the second stage we present a
study of their discriminative power for recognizing isolated
characters, using various rendering methods and font
properties. Finally, we present the results of our
evaluation tests, leading to our conclusions and future work.
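As a generic illustration of geometrical features on a binarized glyph (the specific features studied in the paper are not reproduced; these two are our own simple examples):

```python
def glyph_features(bitmap):
    """Simple geometrical features of a binarized character bitmap
    (list of rows of 0/1): ink density within the bounding box and
    the bounding-box aspect ratio (width / height)."""
    rows = [r for r, row in enumerate(bitmap) if any(row)]
    cols = [c for c in range(len(bitmap[0]))
            if any(row[c] for row in bitmap)]
    r0, r1, c0, c1 = min(rows), max(rows), min(cols), max(cols)
    height, width = r1 - r0 + 1, c1 - c0 + 1
    ink = sum(sum(row) for row in bitmap)
    return {"density": ink / (height * width),
            "aspect": width / height}
```

Collecting such feature vectors over many renderings is what allows the first-stage distribution analysis described above.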
Abstract: Online Communities are an example of socially-aware,
self-organising, complex adaptive computing systems.
The multi-agent systems (MAS) paradigm coordinated by
self-organisation mechanisms has been used as an effective
way for the simulation and modeling of such systems. In this
paper, we propose a model for simulating an online health
community using a situated multi-agent system approach,
governed by the co-evolution of the social and spatial
organisations of the agents.
Abstract: Resource-constrained project scheduling is an NP-hard
optimisation problem. There are many different heuristic
strategies for shifting activities in time when resource requirements
exceed the available amounts. These strategies are frequently based
on priorities of activities. In this paper, we assume that a suitable
heuristic has been chosen to decide which activities should be
performed immediately and which should be postponed and
investigate the resource-constrained project scheduling problem
(RCPSP) from the implementation point of view. We propose an
efficient routine that, instead of shifting the activities, extends
their durations. This makes it possible to break a duration down into
active and sleeping subintervals. Then we can apply the classical Critical
Path Method that needs only polynomial running time. This
algorithm can simply be adapted for multiproject scheduling with
limited resources.
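The classical Critical Path Method invoked above can be sketched as a forward pass over the precedence graph (this shows only plain CPM, not the paper's duration-extension routine; all names are our own):

```python
def critical_path(durations, predecessors):
    """Classical CPM forward pass: earliest start times for each
    activity, given `durations` (name -> duration) and
    `predecessors` (name -> list of predecessor names). Activities
    must form a DAG; returns (earliest_start, makespan)."""
    earliest = {}
    def start_time(a):
        if a not in earliest:
            earliest[a] = max((start_time(p) + durations[p]
                               for p in predecessors.get(a, [])),
                              default=0)
        return earliest[a]
    for a in durations:
        start_time(a)
    makespan = max(earliest[a] + durations[a] for a in durations)
    return earliest, makespan
```

Both passes run in time polynomial in the number of activities and precedence arcs, which is the property the paper exploits after folding the resource conflicts into extended durations.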
Abstract: This paper presents a novel approach for representing
the spatio-temporal topology of the camera network with overlapping
and non-overlapping fields of view (FOVs). The topology is
determined by tracking moving objects and establishing object
correspondence across multiple cameras. To track people successfully
in multiple camera views, we used the Merge-Split (MS) approach for
object occlusion in a single camera and a grid-based approach for
extracting accurate object features. In addition, we considered the
appearance of people and the transition time between entry and exit
zones for tracking objects across blind regions of multiple cameras
with non-overlapping FOVs. The main contribution of this paper is to
estimate transition times between various entry and exit zones, and to
graphically represent the camera topology as an undirected weighted
graph using the transition probabilities.
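The graphical representation can be illustrated with a minimal sketch that turns observed zone-to-zone transitions into an undirected weighted graph (this omits the paper's transition-time and appearance modeling; all names are our own):

```python
from collections import defaultdict

def build_topology(transitions):
    """Build an undirected weighted camera-topology graph from
    observed transitions: `transitions` is a list of
    (zone_a, zone_b) pairs; edge weights are empirical transition
    probabilities (pair count / total transition count)."""
    counts = defaultdict(int)
    for a, b in transitions:
        counts[frozenset((a, b))] += 1   # undirected edge
    total = len(transitions)
    return {tuple(sorted(edge)): n / total
            for edge, n in counts.items()}
```

Edges between entry/exit zones of different cameras with non-overlapping FOVs then encode the blind-region connectivity.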
Abstract: We propose a novel graphical technique (SVision) for
intrusion detection, which pictures the network as a community of
hosts independently roaming in a 3D space defined by the set of
services that they use. The aim of SVision is to graphically cluster
the hosts into normal and abnormal ones, highlighting only the ones
that are considered as a threat to the network. Our experimental
results using DARPA 1999 and 2000 intrusion detection and
evaluation datasets show the proposed technique as a good candidate
for the detection of various threats to the network such as vertical
and horizontal scanning, Denial of Service (DoS), and Distributed
DoS (DDoS) attacks.
Abstract: In this paper, a modification of the Levenberg-Marquardt algorithm for MLP neural network learning is proposed. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. An example is given to show the usefulness of this method. Finally, a simulation verifies the results of the proposed method.
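For context, the plain (unmodified) Levenberg-Marquardt step is a damped Gauss-Newton update; the one-parameter sketch below stands in for full MLP training and is not the paper's modified algorithm.

```python
def levenberg_marquardt_1d(xs, ys, w=0.0, mu=1.0, iters=50):
    """Plain Levenberg-Marquardt for the one-parameter model y = w*x:
    damped step dw = J^T r / (J^T J + mu), with the damping mu
    halved after an accepted step and doubled after a rejected one."""
    def sse(w_):
        return sum((y - w_ * x) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        # residuals r_i = y_i - w*x_i; d r_i / d w = -x_i
        jtj = sum(x * x for x in xs)
        jtr = sum(x * (y - w * x) for x, y in zip(xs, ys))
        step = jtr / (jtj + mu)
        if sse(w + step) < sse(w):
            w, mu = w + step, mu * 0.5   # accept, trust model more
        else:
            mu *= 2.0                    # reject, fall back to gradient
    return w
```

Large mu makes the step behave like small-step gradient descent, small mu like Gauss-Newton; adapting mu between these regimes is what the oscillation-reducing modification in the paper also targets.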
Abstract: Term Extraction, a key data preparation step in Text
Mining, extracts the terms, i.e. relevant collocations of words,
attached to specific concepts (e.g. genetic algorithms and decision
trees are terms associated with the concept "Machine Learning"). In
this paper, the task of extracting interesting collocations is achieved
through a supervised learning algorithm, exploiting a few
collocations manually labelled as interesting/not interesting. From
these examples, the ROGER algorithm learns a numerical function,
inducing some ranking on the collocations. This ranking is optimized
using genetic algorithms, maximizing the trade-off between the false
positive and true positive rates (Area Under the ROC curve). This
approach uses a particular representation for the word collocations,
namely the vector of values corresponding to the standard statistical
interestingness measures attached to this collocation. As this
representation is general (over corpora and natural languages),
generality tests were performed by applying the ranking
function learned from an English corpus in biology to a French
corpus of curricula vitae, and vice versa, showing the good
robustness of the approach compared to the state-of-the-art Support
Vector Machine (SVM).
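The optimization target, the Area Under the ROC curve, can be computed from any ranking via the rank-sum statistic; this sketch only evaluates AUC for given scores and is not the ROGER learning algorithm itself.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive example is scored
    above a randomly chosen negative one (ties count one half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because AUC depends only on the ordering of the scores, it is a natural fitness function for a genetic algorithm evolving a ranking function, as done in the paper.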
Abstract: In this paper we propose a computational model for the representation and processing of morpho-phonological phenomena in a natural language, such as Modern Greek. We aim at a unified treatment of inflection, compounding, and word-internal phonological changes, in a model that is used for both analysis and generation. After discussing certain difficulties caused by well-known finite-state approaches, such as Koskenniemi's two-level model [7], when applied to a computational treatment of compounding, we argue that a morphology-based model provides a more adequate account of word-internal phenomena. Contrary to the finite-state approaches, which cannot handle hierarchical word constituency in a satisfactory way, we propose a unification-based word grammar, as the nucleus of our strategy, which takes into consideration word representations that are based on affixation and [stem stem] or [stem word] compounds. In our formalism, feature-passing operations are formulated with the use of the unification device, and phonological rules modeling the correspondence between lexical and surface forms apply at morpheme boundaries. Throughout the paper, examples from Modern Greek illustrate our approach. Morpheme structures, stress, and morphologically conditioned phoneme changes are analyzed and generated in a principled way.
Abstract: Natural Language Understanding (NLU) systems will not be widely deployed unless they are technically mature and cost-effective to develop. Cost-effective development hinges on the availability of tools and techniques enabling the rapid production of NLU applications with minimal human resources. Further, these tools and techniques should allow quick development of applications in a user-friendly way and should be easy to upgrade in order to continuously follow evolving technologies and standards. This paper presents a visual tool for the structuring and editing of dialog forms, the key element driving conversation in NLU applications based on IBM technology. The main focus is on the basic component used to describe human-machine interactions of this kind, the Dialogue Manager. In essence, a tool that enables the visual representation of the Dialogue Manager, mainly during the implementation phase, is described.
Abstract: In this paper we present a combined
hashing/watermarking method for image authentication. A robust
image hash, invariant to legitimate modifications, but fragile to
illegitimate modifications is generated from the local image
characteristics. To increase security of the system the watermark is
generated using the image hash as a key. Quantization Index
Modulation (QIM) of DCT coefficients is used for watermark embedding.
Watermark detection is performed without use of the original image.
Experimental results demonstrate the effectiveness of the presented
method in terms of robustness and fragility.
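The QIM embedding step can be sketched on a single coefficient with two uniform scalar quantizers offset by half a step (a generic QIM illustration; the paper's hash-keyed watermark generation and DCT pipeline are not reproduced):

```python
def qim_embed(coeff, bit, delta):
    """Quantization Index Modulation: quantize a coefficient onto the
    lattice for `bit`, the two lattices being offset by delta/2."""
    offset = bit * delta / 2.0
    return round((coeff - offset) / delta) * delta + offset

def qim_detect(coeff, delta):
    """Recover the embedded bit as the lattice nearest to the
    (possibly perturbed) coefficient; no original image needed."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1
```

Detection works blind (without the original image) and tolerates perturbations smaller than delta/4, which is the robustness/fragility trade-off the step size delta controls.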
Abstract: In this paper, a way of hiding a text message in a gray-scale image (steganography) is presented. The method first finds the binary value of each character of the text message, and then finds the dark (black) places of the gray image by converting the original image to a binary image and labeling each object of the image using 8-connectivity. These images are then converted to RGB images in order to locate the dark places, because in this way each sequence of gray levels maps to an RGB color and the dark level of the gray image can be identified; if the gray image is very light, the histogram must be adjusted manually so that only dark places are found. In the final stage, each group of 8 dark-place pixels is treated as carrying one byte, and the bits of each character's binary value are put in the low bit of those pixels, increasing the security of the basic steganography method (LSB).
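Only the final LSB embedding step is sketched below; the dark-region selection (binarization, 8-connectivity labeling, RGB conversion) is not reproduced, and the flat pixel list stands in for the selected dark-place pixels. All names are our own.

```python
def embed_lsb(pixels, message):
    """Embed an ASCII message in the least-significant bits of a flat
    list of 8-bit pixel values; 8 pixels carry one character,
    most-significant bit first."""
    bits = [(ord(ch) >> i) & 1
            for ch in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for carrier")
    stego = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stego + pixels[len(bits):]

def extract_lsb(pixels, length):
    """Read back `length` characters from the pixel LSBs."""
    chars = []
    for i in range(length):
        byte = 0
        for b in pixels[8 * i: 8 * i + 8]:
            byte = (byte << 1) | (b & 1)
        chars.append(chr(byte))
    return "".join(chars)
```

Restricting the carrier to dark regions, as the paper proposes, changes only which pixels feed this routine; each pixel value changes by at most one gray level.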