Abstract: The network traffic data used to design intrusion detection systems are typically large, contain much irrelevant information, and carry only limited and ambiguous information about users' activities. We study these problems and propose a two-phase approach to intrusion detection design. In the first phase, we develop a correlation-based feature selection algorithm to remove worthless information from the original high-dimensional database. In the second phase, we design an intrusion detection method that addresses the uncertainty caused by limited and ambiguous information. In the experiments, we use six UCI databases and the DARPA KDD99 intrusion detection data set as evaluation tools. Empirical studies indicate that our feature selection algorithm is capable of reducing the size of the data set, and that our intrusion detection method achieves better performance than the participating intrusion detectors.
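The abstract does not give the algorithm's details; as a rough illustration of one common correlation-based (CFS-style) selection criterion, the following Python sketch scores feature subsets by feature-class and feature-feature correlations and grows the subset with a greedy forward search. The merit formula, limits and synthetic data are assumptions, not the authors' method.

```python
# Minimal correlation-based feature selection sketch (CFS-style merit),
# illustrative only: the paper's exact criterion is not given in the abstract.
import numpy as np

def cfs_merit(X, y, selected):
    """Merit = k*r_cf / sqrt(k + k*(k-1)*r_ff) for the chosen feature subset."""
    k = len(selected)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in selected])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(selected) for b in selected[i + 1:]])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

def forward_select(X, y, max_features=10):
    """Greedy forward search that keeps adding the feature with the best merit."""
    remaining, selected = list(range(X.shape[1])), []
    while remaining and len(selected) < max_features:
        scored = [(cfs_merit(X, y, selected + [j]), j) for j in remaining]
        best_merit, best_j = max(scored)
        if selected and best_merit <= cfs_merit(X, y, selected):
            break  # no improvement, stop
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy usage on synthetic data: the class label depends on two of twenty features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 3] + X[:, 7] > 0).astype(float)
print(forward_select(X, y, max_features=5))
```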
Abstract: This paper concerns the application of a novel data-interpretation technique for classifying measurements of plasma columns in Tokamak reactors for nuclear fusion applications. The proposed method exploits several concepts derived from soft computing theory. In particular, Artificial Neural Networks and Multi-Class Support Vector Machines are used to classify the magnetic variables that determine the shape and position of the plasma, with reduced computational complexity. The proposed technique is used to analyze simulated databases of plasma equilibria based on the ITER geometry configuration. As well as demonstrating the
successful recovery of scalar equilibrium parameters, we show that
the technique can yield practical advantages compared with earlier
methods.
Abstract: Cryptographic algorithms play a crucial role in the
information society by providing protection from unauthorized
access to sensitive data. It is clear that information technology will become increasingly pervasive; hence we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These
new environments and applications will present new security
challenges, and there is no doubt that cryptographic algorithms and
protocols will form part of the solution. The efficiency of a public-key cryptosystem is mainly measured in terms of computational overhead, key size, and bandwidth. In particular, the RSA algorithm is used in many applications to provide security. Although the security of RSA is beyond doubt, the evolution of computing power has caused the necessary key length to grow. The fact that most smart-card chips cannot process keys exceeding 1024 bits shows that an alternative is needed. NTRU is such an alternative: a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials, which allows NTRU to achieve high speeds with minimal computing power. NTRU (Nth-degree Truncated Polynomial Ring Unit) is the first secure public-key cryptosystem not based on the factorization or discrete logarithm problems, meaning that even an adversary with substantial computational resources and time should not be able to break the key. Multi-party communication and the requirement of optimal resource utilization have created a present-day demand for applications that need security enforcement techniques and can be enhanced with high-end computing. This has prompted us to develop high-performance NTRU schemes using approaches such as high-end computing hardware. Peer-to-peer (P2P) and enterprise grids have proven to be one approach for developing high-end computing systems. By utilizing them, one can improve the
performance of NTRU through parallel execution. In this paper we
propose and develop an application for NTRU using enterprise grid
middleware called Alchemi. An analysis and comparison of its
performance for various text files is presented.
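Alchemi is a .NET-based grid middleware, so the paper's parallel implementation is not reproduced here. The following Python sketch only illustrates the core NTRU ring operation the abstract refers to, cyclic ("star") multiplication of polynomials in Z[x]/(x^N - 1) with coefficients reduced modulo q; the parameter values and random polynomials are toy choices for illustration.

```python
# Illustrative sketch of the core NTRU operation: cyclic ("star") convolution
# of polynomials in the ring Z[x]/(x^N - 1), coefficients reduced mod q.
# Parameters here are toy values, not the paper's.
import numpy as np

def star_multiply(a, b, N, q):
    """Multiply two degree-<N polynomials modulo x^N - 1, coefficients mod q."""
    c = np.zeros(N, dtype=np.int64)
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]   # wrap around: x^N == 1
    return c % q

# Toy usage: N, q and the polynomials are purely illustrative.
N, q = 11, 32
r = np.random.randint(-1, 2, size=N)   # small "blinding" polynomial
h = np.random.randint(0, q, size=N)    # public key polynomial
print(star_multiply(r, h, N, q))
```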
Abstract: This paper presents the design features of a rescue robot named CEO Mission II. Its body is a tracked-wheel design with double front flippers for climbing over collapsed structures and rough terrain. A 125 cm long, 5-joint mechanical arm installed on the robot body is used not only for surveillance from the top view but also for easier and faster access to victims to obtain their vital signs. Two cameras and sensors for detecting vital signs are mounted at the tip of the multi-joint mechanical arm. A third camera at the back of the robot is used for driving control. The hardware and software of the system that controls and monitors the rescue robot are explained. The control system handles robot locomotion, the 5-joint mechanical arm, and turning devices on and off. The monitoring system gathers information from 7 distance sensors, IR temperature sensors, 3 CCD cameras, a voice sensor, robot wheel encoders, yaw/pitch/roll angle sensors, a laser range finder and 8 spare A/D inputs. All sensor and control data are communicated with a remote control station via IEEE 802.11b Wi-Fi. The audio and video data are compressed and sent via a separate IEEE 802.11g Wi-Fi transmitter for real-time response. At the remote control station, the robot locomotion and the mechanical arm are controlled by joystick. Moreover, a user-friendly GUI control program based on a click-and-drag method is developed to easily control the movement of the arm. The robot's traveling map is computed from the wheel encoder information and the yaw/pitch data, and a 2D obstacle map is plotted from the laser range finder data. The concept and design of this robot can be adapted to suit many other applications. The robot won the Best Technique award at the Thailand Rescue Robot Championship 2006, and all testing results were satisfactory.
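A minimal dead-reckoning sketch of how a traveling map can be accumulated from wheel-encoder distance increments and a yaw reading, as outlined in the abstract above; the function name, units and sensor interface are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch: accumulate (x, y) positions from encoder distance increments
# and yaw angles; this stands in for the robot's traveling-map computation.
import math

def dead_reckon(increments):
    """increments: list of (delta_distance_m, yaw_rad); returns visited (x, y) points."""
    x, y, path = 0.0, 0.0, [(0.0, 0.0)]
    for d, yaw in increments:
        x += d * math.cos(yaw)
        y += d * math.sin(yaw)
        path.append((x, y))
    return path

# Example: drive 1 m forward, then 1 m at a 90-degree heading.
print(dead_reckon([(1.0, 0.0), (1.0, math.pi / 2)]))
```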
Abstract: This paper proposes a novel system for monitoring the
health of underground pipelines. Some of these pipelines transport
dangerous contents and any damage incurred might have catastrophic
consequences. However, most of this damage is unintentional and usually results from surrounding construction activities. To prevent such damage, monitoring systems are indispensable. This paper focuses on acoustically recognizing road cutters, since their use precedes most construction activities in modern cities. Acoustic recognition can be achieved by installing a distributed computing sensor network along the pipelines and using smart sensors to "listen" for potential threats and raise an alarm if a real threat is detected. For efficient pipeline monitoring, a novel monitoring approach is proposed. Principal Component Analysis (PCA) was studied and applied: eigenvalues were regarded as a signature characterizing a sound sample and were thus used as the feature vector for sound recognition, while the denoising ability of PCA makes the approach robust to noise interference. A one-class SVM was used as the classifier. On-site experimental results show that the proposed PCA- and SVM-based acoustic recognition system is very effective, with a low tendency to raise false alarms.
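As a rough illustration of the pipeline described above (PCA-derived eigenvalue features feeding a one-class SVM), the following Python sketch uses scikit-learn; the frame length, number of components, nu value and the synthetic signals are illustrative assumptions, since the paper's exact eigenvalue signature is not given in the abstract.

```python
# Rough sketch: PCA eigenvalues of short sound frames as features, one-class SVM
# as the detector. All parameters and signals are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

def frame_features(signal, frame_len=256, n_components=8):
    """Split a 1-D signal into frames and use PCA eigenvalues as the feature vector."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    pca = PCA(n_components=n_components).fit(frames)
    return pca.explained_variance_    # eigenvalues of the frame covariance

rng = np.random.default_rng(0)
cutter_clips = [rng.normal(size=4096) for _ in range(20)]      # training sounds
features = np.array([frame_features(clip) for clip in cutter_clips])

detector = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(features)
test_clip = rng.normal(size=4096)
print(detector.predict([frame_features(test_clip)]))   # +1 = cutter-like, -1 = other
```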
Abstract: The work reported in this paper is motivated by the fact that there is a need to apply autonomic computing concepts to parallel computing systems. Advancing on prior work based on intelligent cores [36], a swarm-array computing approach, this paper focuses on 'Intelligent agents' another swarm-array computing approach in which the task to be executed on a parallel computing core is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and is seamlessly transferred between cores in the event of a predicted failure, thereby achieving self-ware objectives of autonomic computing. The feasibility of the proposed swarm-array computing approach is validated on a multi-agent simulator.
Abstract: A parallel computational fluid dynamics code has been
developed for the study of the aerodynamic heating problem in hypersonic
flows. The code employs the 3D Navier-Stokes equations as the basic
governing equations to simulate the laminar hypersonic flow. The cell
centered finite volume method based on structured grid is applied for
spatial discretization. The AUSMPW+ scheme is used for the inviscid
fluxes, and the MUSCL approach is used for higher order spatial
accuracy. The implicit LU-SGS scheme is applied for time integration
to accelerate the convergence of computations in steady flows. A
parallel programming method based on MPI is employed to shorten
the computing time. The validity of the code is demonstrated by
comparing the numerical results with experimental data for a hypersonic flow field around a blunt body.
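The abstract only states that an MPI-based parallel method is used to shorten computing time; the following mpi4py sketch illustrates the generic domain-decomposition pattern (slab partitioning with ghost-cell exchange between neighbouring ranks) rather than the paper's actual flux computation, which is replaced here by a placeholder stencil update.

```python
# Minimal sketch of MPI domain decomposition with halo (ghost-cell) exchange.
# The averaging update stands in for the real finite-volume flux computation.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                       # cells owned by this rank (illustrative)
u = np.zeros(n_local + 2)           # +2 ghost cells
u[1:-1] = rank                      # dummy initial data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(10):
    # Exchange ghost cells with neighbouring ranks.
    comm.Sendrecv(u[1:2], dest=left, sendtag=0,
                  recvbuf=u[-1:], source=right, recvtag=0)
    comm.Sendrecv(u[-2:-1], dest=right, sendtag=1,
                  recvbuf=u[0:1], source=left, recvtag=1)
    # Placeholder "flux" update on interior cells.
    u[1:-1] = 0.5 * (u[:-2] + u[2:])
```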
Abstract: In this research work, investigations are carried out on
Continuous Wave (CW) Nd:YAG laser welding system after
preliminary experimentation to understand the influencing parameters
associated with laser welding of AISI 304. The experimental
procedure involves a series of laser welding trials on AISI 304
stainless steel sheets with various combinations of process parameters
such as beam power, welding speed and beam incident angle. An
industrial 2 kW CW Nd:YAG laser system, available at Welding
Research Institute (WRI), BHEL Tiruchirappalli, is used for
conducting the welding trials for this research. After proper tuning of the laser beam, laser welding experiments are conducted on AISI 304
grade sheets to evaluate the influence of various input parameters on
weld bead geometry i.e. bead width (BW) and depth of penetration
(DOP). From the laser welding results, it is noticed that beam power and welding speed are the two parameters most influencing the depth and width of the bead. A three-dimensional finite element simulation of the high-density heat source has been performed for the laser welding process using the finite element code ANSYS to predict the temperature profile produced by the laser beam heat source on AISI 304 stainless steel sheets. The temperature-dependent material properties of AISI 304 stainless steel, which have a great influence on the computed temperature profiles, are taken into account in the simulation. The latent heat of fusion is accounted for through the thermal enthalpy of the material to handle the phase transition. A Gaussian distribution of heat flux from a moving heat source with a conical shape is used for analyzing the temperature profiles. Experimental and simulated weld bead profiles are analyzed for the stainless steel material at different beam powers, welding speeds and beam incident angles. The simulation results are compared with the experimental data, and the numerical (FEM) results are found to be in good agreement with the experiments, with an overall error estimated to be within ±6%.
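The exact heat-source expression is not given in the abstract; the following sketch uses one common Gaussian surface heat-flux form, q(r) = 3*eta*P/(pi*r0^2) * exp(-3*r^2/r0^2), for a source moving along x, with illustrative values for power, absorption efficiency, beam radius and travel speed.

```python
# Hedged sketch of a Gaussian surface heat-flux model for a moving laser source.
# The conical volumetric term used in the paper is not reproduced here.
import numpy as np

def gaussian_flux(x, y, t, P=2000.0, eta=0.7, r0=0.4e-3, v=10e-3):
    """Heat flux [W/m^2] at point (x, y) for a source moving along x at speed v."""
    r2 = (x - v * t) ** 2 + y ** 2
    return 3.0 * eta * P / (np.pi * r0 ** 2) * np.exp(-3.0 * r2 / r0 ** 2)

# Example: flux directly under the beam centre after 1 s of travel.
print(gaussian_flux(10e-3, 0.0, 1.0))
```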
Abstract: The hospital and the health-care center of a community, as places for people's life care and health care, must provide more and better services for patients and residents. After establishing an Electronic Medical Record (EMR) system in the hospital, which is a necessity, providing pervasive services is a further step. Our objective in this paper is to use pervasive computing in a healthcare case study based on an EMR database, coordinating application services over the network to form a service environment for medical and health care. Our method also categorizes the hospital into three types of spaces: public, private, and isolated. Although there are many projects on using pervasive computing in healthcare, they concentrate on disease recognition, designing smart clothes, or providing services only for the patient. The proposed method is implemented in a hospital. The
obtained results show that it is suitable for our purpose.
Abstract: The service sector continues to grow and the percentage
of GDP accounted for by service industries keeps increasing. The
growth and importance of service to an economy is not just a
phenomenon of advanced economies; services now account for the majority of the world's gross domestic product. However, the performance evaluation process in new service development generally involves uncertain and imprecise data. This paper presents a 2-tuple fuzzy linguistic computing approach to dealing with heterogeneous information and information loss during the integration of subjective evaluations. The proposed method, based on a group decision-making scenario, assists business managers in measuring the performance of new service development; it handles the integration of heterogeneous information and effectively avoids information loss.
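As a rough sketch of the 2-tuple linguistic representation (in the style of Herrera and Martinez) that underlies such approaches, the following Python example shows the forward and inverse transformations on an illustrative five-label scale; the label set and ratings are assumptions, not the paper's data.

```python
# Minimal 2-tuple linguistic representation sketch; labels and ratings are illustrative.
LABELS = ["very poor", "poor", "fair", "good", "very good"]   # s_0 .. s_4

def to_two_tuple(beta):
    """Delta: map an aggregated value beta in [0, g] to (label, symbolic translation)."""
    i = int(round(beta))
    return LABELS[i], round(beta - i, 3)

def from_two_tuple(label, alpha):
    """Inverse Delta: map a 2-tuple back to its numeric value."""
    return LABELS.index(label) + alpha

# Aggregate three expert ratings (indices into LABELS) by arithmetic mean,
# then express the result as a 2-tuple so no information is lost by rounding.
ratings = [3, 4, 2]
beta = sum(ratings) / len(ratings)
print(to_two_tuple(beta))        # ('good', 0.0) for beta = 3.0
```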
Abstract: Speedups from mapping four real-life DSP applications onto an embedded system-on-chip that couples coarse-grained reconfigurable logic with an instruction-set processor are presented. The reconfigurable logic is realized by a 2-Dimensional Array of Processing Elements. A design flow for improving application performance is proposed. Critical software parts, called kernels, are accelerated on the Coarse-Grained Reconfigurable Array. The kernels are detected by profiling the source code. For mapping the detected kernels onto the reconfigurable logic, a priority-based mapping algorithm has been developed. Two 4x4 array architectures, which differ in their interconnection structure among the Processing Elements, are considered. Experiments on eight different instances of a generic system show that significant overall application speedups are achieved for the four applications. The performance improvements range from 1.86 to 3.67, with an average value of 2.53, compared with an all-software execution. These speedups are quite close to the maximum theoretical speedups imposed by Amdahl's law.
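For reference, the Amdahl bound mentioned above follows from S = 1 / ((1 - f) + f / s), where f is the fraction of execution time spent in the accelerated kernels and s is their speedup; the values used below are illustrative, not taken from the paper.

```python
# Hedged check of reported speedups against Amdahl's law; f and s are illustrative.
def amdahl(f, s):
    """Overall speedup when a fraction f of the runtime is accelerated by factor s."""
    return 1.0 / ((1.0 - f) + f / s)

# Example: 75% of execution time in kernels running 10x faster on the array.
print(round(amdahl(0.75, 10.0), 2))   # ~3.08, an upper bound on overall speedup
```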
Abstract: In this work, we study the impact of dynamically changing link slowdowns on the stability properties of packet-switched networks under the Adversarial Queueing Theory framework. In particular, we consider the Adversarial, Quasi-Static Slowdown Queueing Theory model, where each link slowdown may take values in the two-valued set of integers {1, D} with D > 1, which remain fixed for a long time, under a (w, p)-adversary. In this framework, we present an innovative systematic construction for the estimation of adversarial injection rate lower bounds which, if exceeded, cause instability in networks that use the LIS (Longest-In-System) protocol for contention resolution. In addition, we show that a network that uses the LIS protocol for contention resolution may see its instability bound drop to injection rates p > 0 when the network size and the high slowdown D take large values. This is the best instability lower bound known for LIS networks.
Abstract: The heuristic decision rules used for project
scheduling will vary depending upon the project's size, complexity,
duration, personnel, and owner requirements. The concept of project
complexity has received little detailed attention. The need to
differentiate between easy and hard problem instances and the
interest in isolating the fundamental factors that determine the
computing effort required by these procedures inspired a number of
researchers to develop various complexity measures.
In this study, the most common measures of project complexity are
presented. A new measure of project complexity is developed. The
main advantage of the proposed measure is that it considers size, shape and logic characteristics, time characteristics, resource demands and availability characteristics, as well as the number of critical activities and critical paths. The sensitivity of the proposed measure to the complexity of project networks has been tested and evaluated against the other complexity measures on the fifty project networks considered in this study. The developed measure showed greater sensitivity to changes in the network data and gives accurate quantified results when comparing the complexities of networks.
Abstract: This paper discusses the combination of the EM algorithm and the Bootstrap approach applied to improve the satellite image fusion process. This novel satellite image fusion method, based on the estimation-theoretic EM algorithm and reinforced by the Bootstrap approach, was successfully implemented and tested. The sensor images are first segmented by a Bayesian segmentation method to determine a joint region map for the fused image. Then, we use the EM algorithm in conjunction with the Bootstrap approach to develop the Bootstrap EM fusion algorithm, thus producing the fused target image. In this research, we propose to estimate the statistical parameters from the iterative equations of the EM algorithm using representative Bootstrap samples of the images, with the sizes of those samples determined by a new criterion called the 'hybrid criterion'. The results of our work show that using the Bootstrap EM (BEM) approach in image fusion improves the estimated parameters, which in turn improves the quality of the fused image, and reduces the computing time during the fusion process.
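The abstract does not give the BEM equations; the following Python sketch only illustrates the generic pattern of running EM on a Bootstrap resample of the data, using a two-component 1-D Gaussian mixture as a stand-in for the image model. All data and parameters are illustrative.

```python
# Illustrative sketch: EM for a two-component 1-D Gaussian mixture applied to a
# Bootstrap resample of the data (not the paper's BEM algorithm).
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Basic EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample
        dens = np.stack([pi[k] / (sigma[k] * np.sqrt(2 * np.pi)) *
                         np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                         for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: update mixture weights, means and standard deviations
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-6
        pi = nk / len(x)
    return pi, mu, sigma

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(50, 5, 500), rng.normal(150, 10, 500)])
boot = rng.choice(pixels, size=400, replace=True)   # Bootstrap resample
print(em_gmm_1d(boot))
```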
Abstract: The Watson-Crick automaton was recently introduced as a computational model in the DNA computing framework. It works on tapes consisting of double-stranded sequences of symbols. Symbols placed on corresponding cells of the double-stranded sequences are related by a complementarity relation. In this paper, we investigate a variation of Watson-Crick automata in which both heads read the tape in reverse directions. They are called reverse Watson-Crick finite automata (RWKFA). We show that all of the following four classes, i.e., simple, 1-limited, all-final, and all-final-and-simple, are equal to the non-restricted version of RWKFA.
Abstract: The groundlessness of applying probabilistic-statistical methods is especially evident at an early stage of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the available information is fuzzy, limited, and uncertain; at these diagnosing stages, the efficiency of applying the new soft computing technology, using fuzzy logic and neural network methods, is shown. Multiple linear and nonlinear models (regression equations) obtained on the basis of statistical fuzzy data are trained with high accuracy. When sufficient information is available, it is proposed to use a recurrent algorithm for identifying the GTE technical condition from measurements of the input and output parameters of the multiple linear and nonlinear generalized models in the presence of measurement noise (a new recursive least squares method (LSM)). As an application of the given technique, the technical condition of an in-service D30KU-154 aviation engine was estimated at an altitude of H = 10600 m.
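As a rough illustration of the recursive least squares idea the abstract refers to, the following Python sketch identifies the parameters of a linear model from noisy input/output measurements using the standard RLS update with a forgetting factor; the model, noise level and dimensions are illustrative assumptions, not the engine model used in the paper.

```python
# Minimal recursive least squares (RLS) sketch for identifying y = theta^T x
# from noisy measurements; all data below are illustrative.
import numpy as np

def rls_step(theta, P, x, y, lam=0.99):
    """One RLS update with forgetting factor lam."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # parameter update
    P = (P - np.outer(k, Px)) / lam      # covariance update
    return theta, P

rng = np.random.default_rng(1)
true_theta = np.array([2.0, -1.0, 0.5])
theta, P = np.zeros(3), np.eye(3) * 1e3
for _ in range(200):
    x = rng.normal(size=3)
    y = x @ true_theta + 0.05 * rng.normal()   # noisy output measurement
    theta, P = rls_step(theta, P, x, y)
print(theta)   # converges towards true_theta
```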
Abstract: Since the majority of faults are found in a few of a system's modules, there is a need to investigate the modules that are affected more severely than others, and proper maintenance needs to be done on time, especially for critical applications. In this paper, we have applied different predictor models to NASA's public-domain defect dataset, coded in the Perl programming language. Different machine learning algorithms belonging to the different learner categories of the WEKA project, including a Mamdani-based Fuzzy Inference System and a Neuro-fuzzy based system, have been evaluated for modeling maintenance severity, i.e., the impact of fault severity. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). The results show that the Neuro-fuzzy based model provides relatively better prediction accuracy than the other models and hence can be used for maintenance severity prediction of the software.
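For reference, the two reported error measures are MAE = mean(|actual - predicted|) and RMSE = sqrt(mean((actual - predicted)^2)); the small Python example below computes both on illustrative severity values.

```python
# Quick reference for the two error measures reported above; sample values are illustrative.
import numpy as np

def mae(actual, predicted):
    return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted)))

def rmse(actual, predicted):
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2))

actual    = [3, 1, 4, 2, 5]
predicted = [2.5, 1.2, 3.8, 2.4, 4.6]
print(mae(actual, predicted), rmse(actual, predicted))
```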
Abstract: Academic digital libraries emerged as a result of advances in computing and information systems technologies and have been introduced in universities and to the public. As a result, moving in parallel with current technology in the learning and research environment offers a myriad of advantages, especially to students, academicians, and researchers. This is due to dramatic changes in the learning environment brought about by digital library systems, which have had a spectacular impact on the way these communities perform their study and research. This paper presents a survey of current criteria for evaluating academic digital libraries' performance. The goal is to discuss the criteria applied so far for academic digital library evaluation in the context of user-centered design. Although this paper does not comprehensively cover all previous research on evaluating academic digital libraries, it can serve as a guide to understanding the evaluation criteria that are widely applied.
Abstract: Grid computing is growing rapidly in distributed heterogeneous systems for utilizing and sharing large-scale resources to solve complex scientific problems. Scheduling is a central topic for achieving high performance in grid environments; it aims to find a suitable allocation of resources for each job. A typical problem that arises during this task is the scheduling decision: utilizing processors effectively so as to minimize the tardiness of a job when it is scheduled. This paper therefore addresses the problem by developing a general framework for grid scheduling that uses dynamic information and an ant colony optimization algorithm to improve the scheduling decision. The performance of various dispatching rules, such as First Come First Served (FCFS), Earliest Due Date (EDD), Earliest Release Date (ERD), and Ant Colony Optimization (ACO), is compared. Moreover, the benefit of using Ant Colony Optimization to improve the performance of grid scheduling is also discussed. It is found that the scheduling system using an Ant Colony Optimization algorithm can efficiently and effectively allocate jobs to proper resources.
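As a small illustration of how dispatching rules such as those compared above differ in the tardiness they produce, the following Python sketch evaluates FCFS and EDD orderings of a toy single-resource job list (ERD would simply sort by release date); the job data are illustrative and the ACO scheduler itself is not reproduced here.

```python
# Toy comparison of two dispatching rules by total tardiness on one resource.
def total_tardiness(jobs):
    """jobs: list of (release, processing, due); processed in the given order."""
    t, tardiness = 0, 0
    for release, proc, due in jobs:
        t = max(t, release) + proc
        tardiness += max(0, t - due)
    return tardiness

jobs = [(0, 3, 9), (0, 5, 6), (0, 2, 4)]              # (release, processing, due)
fcfs = sorted(jobs, key=lambda j: j[0])               # First Come First Served
edd  = sorted(jobs, key=lambda j: j[2])               # Earliest Due Date
print("FCFS tardiness:", total_tardiness(fcfs))
print("EDD  tardiness:", total_tardiness(edd))
```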
Abstract: Detection, feature extraction and pose estimation of people in images and video are made challenging by the variability of human appearance, the complexity of natural scenes and the high dimensionality of articulated body models; they have also been important fields in Image, Signal and Vision Computing in recent years. In this paper, a system that classifies people in 2D images into four body types is proposed and tested. The system extracts the size of the person from the image and assigns one of four categories: tall fat, short fat, tall thin and short thin. The fat or thin decision is obtained from the human body region extracted from the image. The system also extracts the dimensions of the human body, such as length and width, and shows them in the output.