Abstract: A novel behavioral detection framework is proposed for detecting zero-day buffer overflow exploits by means of network behavioral signatures, instead of the signature-based or anomaly-based detection solutions currently available in IDPS techniques. First, we present a detection model that uses a shadow honeypot. Our system performs online processing of network attacks and generates a behavioral detection profile. The detection profile is a dataset of 112 metrics describing the exact behavior of malware in the network. In this paper we present examples of generating behavioral signatures for two attacks: a buffer overflow exploit on an FTP server and the well-known Conficker worm. We visualize the important aspects by showing the differences between valid behavior and the attacks. Based on these metrics, we can detect attacks with a very high probability of success; the detection process is, however, very expensive.
Abstract: Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals, such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretical techniques, in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism is a study of what happens when independent agents act selfishly and of how to control them to maximize the overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. The comparisons are recorded against some well-known techniques, such as greedy, branch-and-bound, game-theoretical auctions, and genetic algorithms.
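A minimal sketch of the auction idea behind such a mechanism: each site's agent submits a bid for hosting a replica of an object, the bid reflecting the access cost it would save, and the mechanism awards the replica to the highest bidder. This is a generic sealed-bid illustration with a Vickrey-style (second-price) payment rule, not the paper's exact protocol; the agent names and bid values are invented for the example.

```python
def run_auction(bids):
    """bids: {agent: bid}. Returns (winner, price), charging the
    second-highest bid (Vickrey style), which makes truthful bidding
    a dominant strategy for selfish agents."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Hypothetical agents bid their true savings from hosting a replica:
savings = {"siteA": 40, "siteB": 75, "siteC": 60}
winner, price = run_auction(savings)  # siteB wins, pays 60
```

The second-price rule is one standard way to "control" selfish agents: since the winner's payment does not depend on its own bid, overstating savings cannot help it.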
Abstract: An adaptive dynamic cerebellar model articulation controller (DCMAC) neural network for solving prediction and identification problems is proposed in this paper. The proposed DCMAC is superior to the conventional cerebellar model articulation controller (CMAC) neural network in its efficient learning mechanism, guaranteed system stability, and dynamic response. A recurrent network is embedded in the DCMAC by adding feedback connections in the association memory space so that the DCMAC captures the dynamic response, with the feedback units acting as memory elements. The dynamic gradient descent method is adopted to adjust the DCMAC parameters online. Moreover, an analytical method based on a Lyapunov function is proposed to determine the learning rates of the DCMAC, so that variable optimal learning rates are derived to achieve the most rapid convergence of the identification error. Finally, the adaptive DCMAC is applied in two computer simulations. Simulation results show that an accurate identification response and superior dynamic performance can be obtained because of the powerful online learning capability of the proposed DCMAC.
Abstract: Fault tolerance is critical in many of today's large computer systems. This paper focuses on improving fault tolerance through testing. In particular, it concentrates on memory faults: how to access the editable part of a process's memory space and how this part is affected. A special Software Fault Injection Technique (SFIT) is proposed for this purpose. This is done by sequentially scanning the memory of the target process and trying to edit the maximum number of bytes inside that memory. The technique was implemented and tested on a group of programs in software packages such as jet-audio, Notepad, Microsoft Word, Microsoft Excel, and Microsoft Outlook. The results from the test sample process indicate that the size of the scanned area depends on several factors: process size, process type, and the virtual memory size of the machine under test. The results show that increasing the process size increases the scanned memory space. They also show that input-output processes have a larger scanned area than other processes. Increasing the virtual memory size also affects the size of the scanned area, but only up to a certain limit.
Abstract: This paper presents software that implements several chaotic generators, in both continuous and discrete time. The software allows the user to obtain the different signals using different parameters and initial condition values, and it displays the critical parameter for each model. All of these models are capable of encrypting information, which the software also demonstrates.
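As a minimal sketch of a discrete-time chaotic generator of the kind such software typically includes, the logistic map can be iterated from a parameter and an initial condition; the values r = 4.0 and x0 = 0.2 below are illustrative choices, not taken from the paper.

```python
def logistic_map(r, x0, n):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) and return the orbit."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

# For r = 4 the map is chaotic on (0, 1): two orbits started from
# nearly identical initial conditions diverge rapidly, which is the
# sensitivity that makes such generators usable for encryption.
a = logistic_map(4.0, 0.2, 50)
b = logistic_map(4.0, 0.2000001, 50)
divergence = max(abs(x - y) for x, y in zip(a, b))
```

The critical parameter mentioned in the abstract corresponds, for this model, to the value of r at which the orbit becomes chaotic.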
Abstract: Spatial outliers in remotely sensed imagery are observed quantities showing unusual values compared to their neighboring pixel values. Various methods exist for detecting spatial outliers based on spatial autocorrelation in statistics and data mining. These methods may be applied to detecting forest fire pixels in MODIS imagery from NASA's AQUA satellite, because forest fire detection can be framed as finding spatial outliers using the spatial variation of brightness temperature. This point is what distinguishes our approach from traditional fire detection methods. In this paper, we propose a graph-based forest fire detection algorithm based on spatial outlier detection methods and test the proposed algorithm to evaluate its applicability. For this, the ordinary scatter plot and Moran's scatter plot were used. To evaluate the proposed algorithm, the results were compared with the MODIS fire product provided by the NASA MODIS Science Team, and the comparison showed the potential of the proposed algorithm for detecting fire pixels.
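The Moran-scatter-plot idea behind this kind of detection can be sketched as follows: each pixel's standardized value is compared against the standardized mean of its 4-neighbors, and a pixel far from its neighborhood (a hot pixel among cool ones) is flagged as a spatial outlier. The grid values and threshold below are illustrative toy data, not MODIS brightness temperatures, and the flagging rule is a simplification of the paper's algorithm.

```python
def spatial_outliers(grid, threshold=2.0):
    """Flag pixels whose standardized value differs strongly from the
    standardized mean of their 4-neighborhood (Moran scatter plot idea)."""
    rows, cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    std = var ** 0.5 or 1.0          # avoid division by zero on flat grids
    flagged = []
    for i in range(rows):
        for j in range(cols):
            nbrs = [grid[x][y]
                    for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= x < rows and 0 <= y < cols]
            z = (grid[i][j] - mean) / std
            z_nbr = (sum(nbrs) / len(nbrs) - mean) / std
            # a large gap between a pixel and its neighborhood -> outlier
            if abs(z - z_nbr) > threshold:
                flagged.append((i, j))
    return flagged

# A single "hot" pixel (candidate fire) in an otherwise uniform scene:
grid = [[300] * 5 for _ in range(5)]
grid[2][2] = 400
hot_pixels = spatial_outliers(grid)  # -> [(2, 2)]
```

In the Moran scatter plot, such pixels fall in the off-diagonal quadrants (high value, low neighborhood average), which is exactly the signature a fire pixel leaves in the brightness-temperature field.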
Abstract: Ontologies play an important role in semantic web applications; they are often developed by different groups and continue to evolve over time. The knowledge in ontologies changes so rapidly that applications become outdated if they continue to use old versions, or unstable if they jump to new versions. Temporal frames using frame versioning and slot versioning are used to take care of the dynamic nature of ontologies. The paper proposes new tags and a restructured OWL format enabling applications to work with either the old or the new version of an ontology. The Gene Ontology, a very dynamic ontology, has been used as a case study to explain the OWL Ontology with Temporal Tags.
Abstract: The IVE toolkit has been created for facilitating research, education and development in the field of virtual storytelling and computer games. Primarily, the toolkit is intended for modelling action selection mechanisms of virtual humans, investigating level-of-detail AI techniques for large virtual environments, and exploring joint behaviour and the role-passing technique (Sec. V). Additionally, the toolkit can be used as AI middleware without any changes. The main strength of IVE is that it serves for prototyping both the AI and the virtual worlds themselves. The purpose of this paper is to describe IVE's features in general and to present our current work, including an educational game, on this platform. Keywords: AI middleware, simulation, virtual world.
Abstract: Image edge detection is one of the most important parts of image processing. In this paper, a new fuzzy-technique-based method is used to improve digital image edge detection. In this method, a 3x3 mask is employed to process each pixel based on its neighborhood. Each pixel is considered a fuzzy input, and by examining fuzzy rules over its neighborhood, edge pixels are identified; standard image processing algorithms then display the edges more clearly. This method shows significant improvement compared to other edge detection methods (e.g., Sobel, Canny).
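The general idea can be sketched as follows: intensity differences inside each pixel's 3x3 neighborhood are mapped through a membership function ("the difference is large"), and a pixel whose neighborhood fires the edge rule strongly enough is marked as an edge. The triangular membership shape and the thresholds below are illustrative assumptions; the abstract does not specify the paper's actual rules.

```python
def edge_membership(diff, low=10, high=50):
    """Fuzzy degree to which an intensity difference is 'large'
    (a simple triangular/ramp membership function)."""
    if diff <= low:
        return 0.0
    if diff >= high:
        return 1.0
    return (diff - low) / (high - low)

def fuzzy_edges(img, fire=0.5):
    """Mark interior pixels whose 3x3 neighborhood contains a strong
    intensity jump, using the max-firing fuzzy rule."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # rule "some neighbor differs strongly from the centre":
            # take the max membership over the eight neighbors
            degree = max(
                edge_membership(abs(img[i][j] - img[i + di][j + dj]))
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0))
            out[i][j] = 255 if degree >= fire else 0
    return out

# A vertical step edge between a dark (0) and a bright (200) region:
img = [[0, 0, 200, 200] for _ in range(4)]
edges = fuzzy_edges(img)  # interior pixels along the step are marked
```

Unlike a fixed-threshold gradient operator, the membership degree varies smoothly with contrast, which is what lets fuzzy rules grade weak and strong edges differently.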
Abstract: IP networks are evolving from pure data communication infrastructure toward supporting many real-time applications, such as video conferencing and IP telephony, which impose stringent Quality of Service (QoS) requirements. A fundamental issue in QoS routing is to find a path between a source-destination pair that satisfies two or more end-to-end constraints, a problem known to be NP-hard or NP-complete. In this context, we present an algorithm, Multi Constraint Path Problem Version 3 (MCPv3), in which all constraints are approximated and a feasible path is returned much more quickly. We present another algorithm, Delay Coerced Multi Constrained Routing (DCMCR), which coerces one constraint and approximates the remaining constraints. Our algorithm returns a feasible path, if one exists, in polynomial time between a source-destination pair whose first weight is satisfied by the first constraint and whose every other weight is bounded by the remaining constraints within a predefined approximation factor (a). We present our experimental results with different topologies and network conditions.
Abstract: MM-Path, an acronym for Method/Message Path, describes the dynamic interactions between methods in object-oriented systems. This paper discusses the classification of MM-Paths based on the characteristics of object-oriented software; we categorize MM-Paths according to their generation reasons, effect scope, and composition. A formalized representation of MM-Path is also proposed, which considers the influence of state on the response method sequences of messages. Moreover, an automatic MM-Path generation approach based on UML Statechart diagrams is presented, which solves the difficulties in identifying and generating MM-Paths. As a result, this work provides a solid foundation for further research on test case generation based on MM-Path.
Abstract: The robustness of color-based signatures in the presence of a selection of representative distortions is investigated. Considered are five signatures that have been developed and evaluated within a new modular framework. Two signatures presented in this work are directly derived from histograms gathered from video frames. The other three signatures are based on temporal information by computing difference histograms between adjacent frames. In order to obtain objective and reproducible results, the evaluations are conducted based on several randomly assembled test sets. These test sets are extracted from a video repository that contains a wide range of broadcast content including documentaries, sports, news, movies, etc. Overall, the experimental results show the adequacy of color-histogram-based signatures for video fingerprinting applications and indicate which type of signature should be preferred in the presence of certain distortions.
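A temporal signature of the kind described, built from difference histograms between adjacent frames, can be sketched minimally as follows. Gray-level histograms with a small bin count stand in for the paper's color histograms, and the toy "frames" are invented for the example.

```python
def histogram(frame, bins=4, max_val=256):
    """Coarse intensity histogram of a frame (list of pixel rows)."""
    hist = [0] * bins
    for row in frame:
        for v in row:
            hist[v * bins // max_val] += 1
    return hist

def difference_signature(frames, bins=4):
    """One value per adjacent frame pair: the L1 distance between
    their histograms. Static content yields 0; cuts or big changes
    yield large values."""
    hists = [histogram(f, bins) for f in frames]
    return [sum(abs(a - b) for a, b in zip(h1, h2))
            for h1, h2 in zip(hists, hists[1:])]

# Two identical dark frames followed by a much brighter frame:
dark = [[10, 10], [10, 10]]
bright = [[200, 200], [200, 200]]
sig = difference_signature([dark, dark, bright])  # -> [0, 8]
```

Because the signature encodes changes rather than absolute colors, it tends to survive global distortions (brightness shifts, mild recompression) better than a per-frame histogram alone, which is the motivation for the temporal signatures in the abstract.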
Abstract: A Decentralized Tuple Space (DTS) implements the tuple space model among a series of decentralized hosts and provides a logical global shared tuple repository. Replication has been introduced to alleviate the performance problem incurred by remote tuple access. In this paper, we propose a replication approach for DTS that allows replication policies to self-adapt. Accesses from users or other nodes are monitored and collected to inform the decision making. The replication policy may be changed if better performance is expected. The experiments show that this approach suitably adjusts the replication policies while incurring negligible overhead.
Abstract: This paper investigates the encryption efficiency of RC6 block cipher application to digital images, providing a new mathematical measure for encryption efficiency, which we will call the encryption quality instead of visual inspection, The encryption quality of RC6 block cipher is investigated among its several design parameters such as word size, number of rounds, and secret key length and the optimal choices for the best values of such design parameters are given. Also, the security analysis of RC6 block cipher for digital images is investigated from strict cryptographic viewpoint. The security estimations of RC6 block cipher for digital images against brute-force, statistical, and differential attacks are explored. Experiments are made to test the security of RC6 block cipher for digital images against all aforementioned types of attacks. Experiments and results verify and prove that RC6 block cipher is highly secure for real-time image encryption from cryptographic viewpoint. Thorough experimental tests are carried out with detailed analysis, demonstrating the high security of RC6 block cipher algorithm. So, RC6 block cipher can be considered to be a real-time secure symmetric encryption for digital images.
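One common way to formalize an "encryption quality" measure of this kind is the average absolute deviation, over all gray levels, between the histogram of the plain image and that of the cipher image; the sketch below uses that definition as an assumption, since the abstract does not state the paper's exact formula, and the tiny "images" are invented toy data.

```python
def histogram256(img):
    """Gray-level histogram (256 levels) of an image given as rows."""
    hist = [0] * 256
    for row in img:
        for v in row:
            hist[v] += 1
    return hist

def encryption_quality(plain, cipher):
    """Average per-level histogram deviation between cipher and plain
    image; higher means the cipher image's levels deviate more from
    the plain image's, i.e. better concealment of the histogram."""
    hp, hc = histogram256(plain), histogram256(cipher)
    return sum(abs(a - b) for a, b in zip(hc, hp)) / 256.0

# A flat plain image vs. a toy "cipher" image with spread-out values:
plain = [[100] * 4 for _ in range(4)]
cipher = [[(i * 4 + j) * 16 % 256 for j in range(4)] for i in range(4)]
quality = encryption_quality(plain, cipher)  # -> 0.125
```

Evaluating this measure while varying word size, round count, and key length is the kind of parameter study the abstract describes.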
Abstract: Software reusability is a primary attribute of software quality. There are metrics for identifying the quality of reusable components, but the function that uses these metrics to determine the reusability of software components is still not clear. If identified in the design phase, or even in the coding phase, these metrics can help reduce rework by improving the quality of reuse of the component, and hence improve productivity due to a probabilistic increase in the reuse level. In this paper, we have devised a framework of metrics that uses McCabe's Cyclomatic Complexity Measure for complexity measurement, the Regularity Metric, Halstead's Software Science Indicator for volume indication, the Reuse Frequency metric, and the Coupling Metric values of a software component as input attributes to calculate its reusability. A comparative analysis of fuzzy, neuro-fuzzy, and fuzzy-GA approaches is performed to evaluate the reusability of software components, and the fuzzy-GA results outperform the other approaches. The developed reusability model has produced high-precision results, as expected by the human experts.
Abstract: The use of neural networks in recognition applications is generally constrained by the inflexibility of their parameters after the training phase: no adaptation is accommodated for input variations that have any influence on the network parameters. In this work, attempts were made to design a neural network that includes an additional mechanism to adjust the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second tolerates output generation dynamically, with tuning capability for any newly applied input; this tuning comes in the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold-setting mechanism, while the second implements an auxiliary net with a traditional architecture that performs dynamic adjustment of the threshold values of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs are quite satisfactory. The supportive-layer approach achieved over 90% recognition rate, while the multiple-network technique shows a more effective and acceptable level of recognition; however, this is achieved at the price of network complexity and computation time. Recognition generalization may be further improved by combining the proposed structures with additional intelligent capabilities, at the cost of further advanced learning phases.
Abstract: The Genetic Algorithm (GA) is one of the most important methods used to solve combinatorial optimization problems, and many researchers have tried to improve the GA with different methods and operations in order to find the optimal solution within a reasonable time. This paper proposes an improved GA (IGA) in which a new crossover operation, a population reformulation operation, a multi-mutation operation, a partial local optimal mutation operation, and a rearrangement operation are used to solve the Traveling Salesman Problem. The proposed IGA was then compared with three GAs that use different crossover operations and mutations. The results of this comparison show that the IGA can achieve better solutions in less time.
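A plain GA baseline for the TSP, of the kind such an improved algorithm is measured against, can be sketched as follows: a population of tours evolved with truncation selection, order crossover (OX), and swap mutation. The paper's improved operators (population reformulation, multi-mutation, etc.) are not reproduced here; the distance matrix and all parameter values are illustrative.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def order_crossover(p1, p2, rng):
    """OX: copy a random slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_tsp(dist, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        next_pop = pop[:2]                      # elitism: keep the best two
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)    # truncation selection
            child = order_crossover(p1, p2, rng)
            if rng.random() < 0.2:              # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=lambda t: tour_length(t, dist))
    return best, tour_length(best, dist)

# Four cities on a unit square; the optimal closed tour has length 4.
dist = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
best_tour, best_len = ga_tsp(dist)
```

Improvements of the kind the abstract lists attack exactly the weak points visible here: a single crossover and a single mutation operator, which limit how the population explores the tour space.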
Abstract: The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) has been implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false-positive rate and a high true-positive rate, and takes less learning time than using the full feature set of the dataset with the same algorithm.
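The feature-selection step can be sketched as follows: the information gain of each (discrete) feature with respect to the Normal/Attack label is computed, so that low-IG features can be dropped before clustering. The toy dataset below is invented for the example; the paper works on NSL-KDD features.

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    total = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

def information_gain(feature_values, labels):
    """IG = H(labels) - expected H(labels | feature value)."""
    base = entropy(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        remainder += len(subset) / len(labels) * entropy(subset)
    return base - remainder

# Toy data: feature A perfectly predicts the label, feature B is noise.
labels = ["normal", "normal", "attack", "attack"]
feat_a = [0, 0, 1, 1]
feat_b = [0, 1, 0, 1]
ig_a = information_gain(feat_a, labels)  # -> 1.0 bit
ig_b = information_gain(feat_b, labels)  # -> 0.0 bits
```

Ranking features by IG and keeping only the top-scoring ones is what reduces the dimensionality before the K-means step described in the abstract.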
Abstract: Transforms, which are lossy algorithms, are mostly used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give higher data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE, and FCG. The results table shows the computational complexity of these three algorithms. We also introduce a new performance parameter, Average Fractional Change in Speech Sample (AFCSS). Our FCG algorithm gives far better performance than the others in terms of mean absolute error, AFCSS, and complexity.
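The codebook-training step of the LBG (Linde-Buzo-Gray) algorithm used in VQ compression can be sketched as follows: the codebook is grown by splitting each codeword into a perturbed pair and refined by nearest-neighbor/centroid (Lloyd) passes. Scalar training "vectors" are used here for brevity, and the sample values are invented; real speech VQ quantizes vectors of consecutive samples.

```python
def lbg(samples, codebook_size, epsilon=0.01, iters=20):
    """Train a VQ codebook on scalar samples via splitting + Lloyd
    refinement (the LBG scheme). codebook_size should be a power of 2."""
    codebook = [sum(samples) / len(samples)]      # start: global centroid
    while len(codebook) < codebook_size:
        # split every codeword into a slightly perturbed pair
        codebook = [c * (1 + s) for c in codebook
                    for s in (epsilon, -epsilon)]
        for _ in range(iters):                    # Lloyd refinement
            cells = [[] for _ in codebook]
            for x in samples:
                nearest = min(range(len(codebook)),
                              key=lambda k: abs(x - codebook[k]))
                cells[nearest].append(x)
            # move each codeword to the centroid of its cell
            codebook = [sum(cell) / len(cell) if cell else c
                        for cell, c in zip(cells, codebook)]
    return sorted(codebook)

# Two well-separated clusters of sample values:
samples = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
codebook = lbg(samples, 2)  # codewords settle near 1.0 and 5.0
```

Compression then consists of transmitting, for each input vector, only the index of its nearest codeword, which is where VQ gains its rate advantage over per-sample coding.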
Abstract: This paper proposes a novel improvement of a forecasting approach based on time-invariant fuzzy time series. In contrast to traditional forecasting methods, fuzzy time series can also be applied to problems in which the historical data are linguistic values. It is shown that the proposed time-invariant method improves the performance of the forecasting process. Further, the effect of using different numbers of fuzzy sets is tested as well. As in most of the cited papers, the historical enrollment of the University of Alabama is used in this study to illustrate the forecasting process. Subsequently, the performance of the proposed method is compared with that of existing time-invariant fuzzy time series models on the basis of forecasting accuracy. The comparison reveals a certain performance superiority of the proposed method over the methods described in the literature.