Abstract: Intrusion detection is a mechanism used to protect a
system and analyse and predict the behaviours of system users. An
ideal intrusion detection system is hard to achieve due to
nonlinearity and irrelevant or redundant features. This study
introduces a new anomaly-based intrusion detection model. The
suggested model is based on particle swarm optimisation and
nonlinear, multi-class and multi-kernel support vector machines.
Particle swarm optimisation is used for feature selection by applying
a new formula to update the position and the velocity of a particle;
the support vector machine is used as a classifier. The proposed
model is tested and compared with other methods on the KDD
CUP 1999 dataset. The results indicate that this new method achieves
better accuracy rates than previous methods.
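The abstract mentions a new formula for updating particle position and velocity but does not reproduce it, so the sketch below shows the canonical PSO update that such a modification would build on; all parameter values here are illustrative defaults, not the paper's.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update step (the paper modifies this rule; its
    modification is not given in the abstract, so the standard form is shown)."""
    new_pos, new_vel = [], []
    for x, v, pb in zip(positions, velocities, pbest):
        r1, r2 = random.random(), random.random()
        # velocity = inertia + cognitive pull (personal best) + social pull (global best)
        v_new = [w * vi + c1 * r1 * (pbi - xi) + c2 * r2 * (gbi - xi)
                 for xi, vi, pbi, gbi in zip(x, v, pb, gbest)]
        new_vel.append(v_new)
        new_pos.append([xi + vi for xi, vi in zip(x, v_new)])
    return new_pos, new_vel
```

For feature selection, each position component is typically squashed through a sigmoid and thresholded to decide whether the corresponding feature is kept.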
Abstract: In this paper we use the properties of the co-occurrence
matrix for finding parallel lines in binary pictures for fingerprint
identification. In our proposed algorithm, we reduce the noise by
filtering the fingerprint images and then convert the fingerprint
images to binary images using a proper threshold. Next, we divide
the binary images into some regions having parallel lines in the same
direction. The lines in each region have a specific angle that can be
used for comparison. This method is simple, performs the
comparison step quickly and is robust in the presence of noise.
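A minimal sketch of the underlying idea (not the authors' exact algorithm): a co-occurrence matrix of a binary image counts pixel-pair values at a fixed displacement, and a region of parallel lines yields a strongly diagonal matrix when the displacement follows the line direction.

```python
def cooccurrence(binary, dx, dy):
    """2x2 co-occurrence matrix of a binary image for displacement (dx, dy):
    C[a][b] counts pixel pairs where the first pixel is a and the shifted one is b."""
    h, w = len(binary), len(binary[0])
    C = [[0, 0], [0, 0]]
    for yy in range(h):
        for xx in range(w):
            y2, x2 = yy + dy, xx + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                C[binary[yy][xx]][binary[y2][x2]] += 1
    return C
```

For a region of vertical stripes, a vertical displacement produces only diagonal counts (each pixel co-occurs with its own value), while a horizontal displacement concentrates mass off the diagonal; comparing matrices over several displacement angles recovers a region's dominant line direction.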
Abstract: Cumulative conformance count (CCC) charts are
widely used in process monitoring of high-yield manufacturing.
Recently, it has been found that a variable sampling interval (VSI)
scheme can further enhance the efficiency of the standard CCC
charts. The average time to signal (ATS) a shift in the defect rate
has become the traditional measure of the efficiency of a chart with
the VSI scheme. Determining the ATS is frequently a difficult and tedious
task. A simple method based on a finite Markov Chain approach for
modeling the ATS is developed. In addition, numerical results are
given.
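As a generic illustration of the finite-Markov-chain approach (a sketch under standard absorbing-chain assumptions, not the paper's specific chart model): with Q holding the transition probabilities among the transient (non-signal) chart states and h[i] the sampling interval attached to state i under VSI, the expected times to signal solve (I - Q) t = h.

```python
def average_time_to_signal(Q, intervals):
    """Expected time to signal from each transient state, from (I - Q) t = h.
    Q[i][j]: transition probability between non-signal states i -> j;
    intervals[i]: sampling interval spent in state i (the VSI feature)."""
    n = len(Q)
    # augmented matrix [I - Q | h], solved by Gauss-Jordan elimination
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [float(intervals[i])]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))  # partial pivoting
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]
```

For a single in-control state with stay probability q and interval h, this reduces to the familiar ATS = h / (1 - q).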
Abstract: Researchers have been applying artificial/computational intelligence (AI/CI) methods to computer games. In this research field, further research is required to compare AI/CI methods with respect to each game application. In this paper, we report our experimental results on the comparison of evolution strategy (ES), genetic algorithm (GA) and their hybrids, applied to evolving controller agents for Mario AI. GA revealed its advantage in our experiment, whereas the expected ability of ES to exploit (fine-tune) solutions was not clearly observed. The blend crossover operator and the mutation operator of GA might contribute well to exploring the vast search space.
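For reference, the blend crossover mentioned above (commonly known as BLX-alpha) can be sketched as follows; the alpha value is an illustrative default, not the paper's setting.

```python
import random

def blx_alpha(parent1, parent2, alpha=0.5):
    """BLX-alpha blend crossover: each child gene is drawn uniformly from the
    parents' interval widened by alpha * (interval width) on each side."""
    child = []
    for a, b in zip(parent1, parent2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child
```

The widening by alpha lets children fall outside the parents' hypercube, which is one reason blend crossover explores a continuous search space well.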
Abstract: This paper presents an enhanced frame-based video coding scheme. The input source video to the enhanced frame-based video encoder consists of a rectangular-size video and the shapes of arbitrarily-shaped objects on video frames. The rectangular frame texture is encoded by a conventional frame-based coding technique and the video object's shape is encoded using contour-based vertex coding. It is possible to achieve several useful content-based functionalities by utilizing the shape information in the bitstream at the cost of a very small overhead to the bitrate.
Abstract: The volume of XML data exchange is explosively
increasing, and the need for efficient mechanisms of XML data
management is vital. Many XML storage models have been proposed
for storing XML DTD-independent documents in relational database
systems. Benchmarking is the best way to highlight pros and cons of
different approaches. In this study, we use a common benchmarking
scheme, known as XMark, to compare the most cited and newly
proposed DTD-independent methods in terms of logical reads,
physical I/O, CPU time and duration. We show the effect of the label
path, of extracting values and storing them in another table, and of
the type of join needed for each method's query answering.
Abstract: Nowadays, a significant number of commercial and governmental organisations such as museums, cultural organisations, libraries and commercial enterprises invest intensively in new technologies for image digitisation, digital libraries, image archiving and retrieval. Hence image authorisation, authentication and security have become a prime need. In this paper, we present a semi-fragile watermarking scheme for color images. The method converts the host image into the YIQ color space, followed by the application of the orthogonal dual domains of the DCT and DWT transforms. The DCT helps to separate relevant from irrelevant image content to generate salient image features. The DWT has excellent spatial localisation, which aids in spatial tamper characterisation. Thus an image-adaptive watermark is generated based on image features, which allows sharp detection of microscopic changes to locate modifications in the image. Further, the scheme utilises a multipurpose watermark consisting of a soft authenticator watermark and a chrominance watermark, which has been shown to be fragile to certain predefined processing, such as intentional fabrication or forgery of the image, and robust to other incidental attacks caused in the communication channel.
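The first step of the scheme, conversion into the YIQ color space, uses the standard NTSC transform; a minimal sketch (the subsequent DCT/DWT stages are omitted here):

```python
def rgb_to_yiq(r, g, b):
    """Standard NTSC RGB -> YIQ conversion (components in [0, 1])."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    i = 0.596 * r - 0.274 * g - 0.322 * b   # in-phase chrominance
    q = 0.211 * r - 0.523 * g + 0.312 * b   # quadrature chrominance
    return y, i, q
```

Separating luminance (Y) from chrominance (I, Q) is what lets the scheme place an authenticator watermark and a chrominance watermark in different components.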
Abstract: In this era of technology, fueled by the pervasive usage of the internet, security is a prime concern. The number of new attacks by so-called "bots", which are automated programs, is increasing at an alarming rate. They are most likely to attack online registration systems. A technology called "CAPTCHA" (Completely Automated Public Turing test to tell Computers and Humans Apart) exists which can differentiate between automated programs and humans and prevent replay attacks. Traditionally, CAPTCHAs have been implemented with a challenge that involves recognizing textual images and reproducing the text. We propose an approach where a visual challenge has to be read out, from which randomly selected keywords are used to verify the correctness of the spoken text and in turn detect the presence of a human. This is supplemented with a speaker recognition system which can also identify the speaker. Thus, this framework fulfills both objectives: it can determine whether the user is a human and, if so, verify the user's identity.
Abstract: Image compression is one of the most important
applications of digital image processing. Advanced medical imaging
requires storage of large quantities of digitized clinical data. Due to
the constrained bandwidth and storage capacity, however, a medical
image must be compressed before transmission and storage. There
are two types of compression methods, lossless and lossy. In a
lossless compression method the original image is recovered without
any distortion; in a lossy compression method, the reconstructed
images contain some distortion. The Discrete Cosine Transform (DCT)
and Fractal Image Compression (FIC) are lossy compression methods.
This work shows that lossy compression methods can be chosen for
medical image compression without significant degradation of the
image quality. In this work DCT and Fractal Compression using
Partitioned Iterated Function Systems (PIFS) are applied on different
modalities of images like CT Scan, Ultrasound, Angiogram, X-ray
and mammogram. Approximately 20 images are considered in each
modality and the average values of compression ratio and Peak
Signal to Noise Ratio (PSNR) are computed and studied. The quality
of the reconstructed image is assessed by the PSNR values. Based on
the results it can be concluded that the DCT has higher PSNR values
and FIC has higher compression ratio. Hence in medical image
compression, DCT can be used wherever picture quality is preferred,
and FIC wherever compression of images for storage and
transmission is the priority, without losing picture quality
diagnostically.
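The PSNR figure of merit used above can be computed as follows; this is a minimal sketch for 8-bit images, treating each image as a flat sequence of pixel values.

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-size images,
    given as flat sequences of pixel values."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

Higher PSNR means less distortion, which is why the DCT's higher PSNR in this study translates to better reconstructed picture quality than FIC at comparable settings.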
Abstract: The selection of a particular type of mustard plant
for plantation depends on its productivity (pod yield) at
maturity. The growth of a mustard plant depends on several
parameters of the plant: shoot length, number of leaves,
number of roots, root length, and so on. As the plant grows,
some leaves may fall and new leaves may emerge, so the number
of leaves cannot be used to develop a relationship with the
seed weight at the mature stage of the plant. It is also not
possible to count the roots or measure the root length of a
mustard plant at the growing stage, as this would harm the
plant, whose roots go ever deeper into the soil. Only the
shoot length, which increases over time, can be measured at
different time instances. Weather parameters such as maximum
and minimum humidity, rainfall, and maximum and minimum
temperature may affect the growth of the plant. Pollution,
water, soil, distance and crop management may also be dominant
factors in the growth of the plant and its productivity.
Considering all these parameters, the growth of the plant is
very uncertain, so a fuzzy environment can be adopted for the
prediction of the shoot length at maturity. Fuzzification of
the data is based on certain membership functions. Here an
effort has been made to fuzzify the original data using the
Gaussian, triangular, s-, trapezoidal and L-functions. All the
fuzzified data are then defuzzified to recover the normal
form. Finally, error analysis (calculation of the forecasting
error and the average error) indicates which membership
function is appropriate for fuzzifying the data and is used to
predict the shoot length at maturity. The result is also
verified using residual analysis (absolute residual, maximum
of absolute residual, mean absolute residual, mean of mean
absolute residual, median of absolute residual and standard
deviation).
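Three of the membership functions named above can be sketched as follows; the parameter names (centre, width, feet) are generic, as the paper's actual parameter choices are not given in the abstract.

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian membership: 1 at the centre c, decaying with width sigma."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def triangular_mf(x, a, b, c):
    """Triangular membership: feet at a and c, peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal_mf(x, a, b, c, d):
    """Trapezoidal membership: rises over [a, b], flat at 1 over [b, c],
    falls over [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)
```

Fuzzifying a shoot-length measurement means evaluating such a function to get its membership degree in a linguistic class (e.g. "tall"); defuzzification then maps the fuzzy result back to a crisp predicted length.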
Abstract: In this paper, we present an innovative scheme of
blindly extracting message bits from an image distorted by an attack.
Support Vector Machine (SVM) is used to nonlinearly classify the
bits of the embedded message. Traditionally, a hard decoder is used
with the assumption that the underlying modeling of the Discrete
Cosine Transform (DCT) coefficients does not appreciably change.
In case of an attack, the distribution of the image coefficients is
heavily altered. The distribution of the sufficient statistics at the
receiving end corresponding to the antipodal signals overlap and a
simple hard decoder fails to classify them properly. We
consider message retrieval of the antipodal signal as a binary
classification problem. Machine learning techniques such as SVM
are used to retrieve the message when a certain specific class of
attacks is most probable. In order to validate the SVM-based
decoding scheme, we
have taken Gaussian noise as a test case. We generate a data set using
125 images and 25 different keys. The polynomial kernel of the SVM
achieved 100 percent accuracy on the test data.
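The paper's decoder is an SVM with a polynomial kernel; as a self-contained illustration with the same kernelized decision form f(x) = sum_i alpha_i y_i K(x_i, x), the sketch below trains a kernel perceptron on antipodal (+1/-1) labels. This is a stand-in to show the decision machinery, not the authors' trained SVM decoder.

```python
def poly_kernel(x, z, degree=3, coef0=1.0):
    """Polynomial kernel K(x, z) = (<x, z> + coef0) ** degree."""
    return (sum(a * b for a, b in zip(x, z)) + coef0) ** degree

def kernel_perceptron(X, y, kernel, epochs=20):
    """Train a kernelized perceptron on antipodal labels (+1 / -1) and return
    a predictor with the SVM-style form sign(sum_i alpha_i y_i K(x_i, x))."""
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for j, (xj, yj) in enumerate(zip(X, y)):
            f = sum(a * yi * kernel(xi, xj) for a, yi, xi in zip(alpha, y, X))
            if yj * f <= 0:          # misclassified (or on the boundary)
                alpha[j] += 1.0
    return lambda x: 1 if sum(a * yi * kernel(xi, x)
                              for a, yi, xi in zip(alpha, y, X)) > 0 else -1
```

A soft decoder of this kind replaces the hard threshold on the sufficient statistic with a learned nonlinear boundary, which is what lets it cope with attack-distorted coefficient distributions.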
Abstract: The evaluation of question answering systems is a major research area that needs much attention. Before the rise of domain-oriented question answering systems based on natural language understanding and reasoning, evaluation was never a problem, as information retrieval-based metrics were readily available for use. However, when question answering systems began to be more domain-specific, evaluation became a real issue. This is especially true when understanding and reasoning are required to cater for a wider variety of questions and at the same time achieve higher-quality responses. The research in this paper discusses the inappropriateness of the existing measures for response quality evaluation, and in a later part, the call for new standard measures and the related considerations are brought forward. As a short-term solution for evaluating the response quality of heterogeneous systems, and to demonstrate the challenges in evaluating systems of a different nature, this research presents a black-box approach using observation, a classification scheme and a scoring mechanism to assess and rank three example systems (i.e. AnswerBus, START and NaLURI).
Abstract: One of the most important capabilities expected from an
ERP system is to manage user/administrator manual documents
dynamically. Since an ERP package is frequently changed during its
implementation at customer sites, it is often necessary to add new
documents and/or apply required changes to existing documents in
order to cover new or changed capabilities. Worse, since
these changes occur continuously, the corresponding documents
must be updated dynamically; otherwise, implementing the ERP
package in the organization incurs serious risks. In this paper, we
propose a new architecture which is based on the agent oriented
vision and supplies the dynamic document generation expected from
ERP systems using several independent but cooperative agents.
Besides the dynamic document generation that is the main issue of
this paper, the presented architecture also addresses some aspects of
the intelligence and learning capabilities expected in ERP systems.
Abstract: Field Association (FA) terms are a limited set of discriminating terms that give us the knowledge to identify document fields; they are effective in document classification, similar file retrieval and passage retrieval. But the problem lies in the lack of an effective method to automatically extract relevant Arabic FA terms to build a comprehensive dictionary. Moreover, all previous studies are based on FA terms in English and Japanese, and the extension of FA terms to other languages such as Arabic would definitely strengthen further research. This paper presents a new method to extract Arabic FA terms from domain-specific corpora using part-of-speech (POS) pattern rules and corpora comparison. Experimental evaluation is carried out for 14 different fields using 251 MB of domain-specific corpora obtained from Arabic Wikipedia dumps and Alhyah news, selecting an average of 2,825 FA terms (single and compound) per field. From the experimental results, recall and precision are 84% and 79% respectively. Therefore, this method selects a higher number of relevant Arabic FA terms at high precision and recall.
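The POS pattern-rule step can be sketched as matching tag sequences over a tagged token stream; the tag set and patterns below are illustrative (and shown with English tokens for readability), not the paper's actual Arabic rule set.

```python
def match_pos_patterns(tagged_tokens, patterns):
    """Collect candidate FA terms whose POS tag sequence matches a pattern.
    tagged_tokens: list of (word, tag); patterns: tuples of tags,
    e.g. ("NOUN", "NOUN") for compound terms."""
    found = []
    for pattern in patterns:
        plen = len(pattern)
        for i in range(len(tagged_tokens) - plen + 1):
            window = tagged_tokens[i:i + plen]
            if tuple(tag for _, tag in window) == pattern:
                found.append(" ".join(word for word, _ in window))
    return found
```

In the paper's pipeline, candidates extracted this way would then be filtered by corpora comparison, keeping terms far more frequent in the domain corpus than in a general one.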
Abstract: Linearization of graph embedding has emerged
as an effective dimensionality reduction technique in pattern
recognition. However, it may not be optimal for nonlinearly
distributed real-world data, such as face images, due to its linear
nature. So, a kernelization of graph embedding is proposed as a
dimensionality reduction technique for face recognition. In order to
further boost the recognition capability of the proposed technique,
Fisher's criterion is adopted in the objective function for better data
discrimination. The proposed technique is able to characterize the
underlying intra-class structure as well as the inter-class separability.
Experimental results on FRGC database validate the effectiveness of
the proposed technique as a feature descriptor.
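A kernelized embedding typically operates on a double-centred Gram (kernel) matrix; a minimal sketch of that standard preprocessing step (only one ingredient of the authors' method, whose objective also incorporates Fisher's criterion):

```python
def center_gram(K):
    """Double-centre a symmetric Gram matrix, a standard step before
    eigen-decomposition in kernelized embeddings:
    Kc = K - 1K/n - K1/n + 1K1/n^2."""
    n = len(K)
    row = [sum(K[i]) / n for i in range(n)]    # per-row means
    total = sum(row) / n                       # grand mean
    return [[K[i][j] - row[i] - row[j] + total for j in range(n)]
            for i in range(n)]
```

Centring makes the implicit feature vectors zero-mean in the kernel-induced space, so the subsequent eigen-problem measures scatter around the data mean.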
Abstract: Intuitively, we expect
distributed software development to increase the risks associated with
achieving cost, schedule, and quality goals. To compound this
problem, agile software development (ASD) insists that one of the
main ingredients of its success is the cohesive communication
attributed to collocation of the development team. The following
study identified
the degree of communication richness needed to achieve comparable
software quality (reduce pre-release defects) between distributed and
collocated teams. This paper explores the relevancy of
communication richness in various development phases and its
impact on quality. Through examination of a large distributed agile
development project, this investigation seeks to understand the levels
of communication required within each ASD phase to produce
comparable quality results achieved by collocated teams. Obviously,
a multitude of factors affects the outcome of software projects.
However, within distributed agile software development teams, the
mode of communication is one of the critical components required to
achieve team cohesiveness and effectiveness. As such, this study
constructs a distributed agile communication model (DAC-M) for
potential application to similar distributed agile development efforts
using the measurement of the suitable level of communication. The
results of the study show that less rich communication methods, in
the appropriate phase, might be satisfactory to achieve equivalent
quality in distributed ASD efforts.
Abstract: The applications of numbers are so widespread that there is much scope for study. The styles of writing numbers are diverse and come in a variety of forms, sizes and fonts. Identification of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), machine-printed or handwritten characters/numerals are recognized. There are plentiful approaches that deal with the problem of detecting numerals/characters, depending on the sort of features extracted and the different ways of extracting them. This paper proposes a recognition scheme for handwritten Hindi (Devanagari) numerals, the most widely used in the Indian subcontinent. Our work focuses on a local feature extraction technique: a method using the 16-segment display concept, with features extracted from halftoned and binary images of isolated numerals. These feature vectors are fed to a neural classifier model that has been trained to recognize a Hindi numeral. The prototype of the system has been tested on a variety of numeral images. Experimental results show a recognition rate of 98% for halftoned images compared to 95% for binary images.
Abstract: Visual inputs are one of the key sources from which
humans perceive the environment and 'understand' what is
happening. Artificial systems perceive the visual inputs as digital
images. The images need to be processed and analysed. Within the
human brain, processing of visual inputs and subsequent
development of perception is one of its major functionalities. In this
paper we present part of our research project, which aims at the
development of an artificial model for visual perception (or
'understanding') based on the human perceptive and cognitive
systems. We propose a new model for perception from visual inputs
and a way of understaning or interpreting images using the model.
We demonstrate the implementation and use of the model with a real
image data set.
Abstract: Due to the coexistence of different Radio Access
Technologies (RATs), Next Generation Wireless Networks (NGWN)
are predicted to be heterogeneous in nature. The coexistence of
different RATs requires a need for Common Radio Resource
Management (CRRM) to support the provision of Quality of Service
(QoS) and the efficient utilization of radio resources. RAT selection
algorithms are part of the CRRM algorithms. Simply put, their role
is to verify whether an incoming call will fit into a heterogeneous
wireless network, and to decide which of the available RATs is most
suitable for the needs of the incoming call before admitting it.
Guaranteeing the QoS requirements of all accepted calls while at
the same time providing the most efficient utilization of the
available radio resources is the goal of a RAT selection algorithm.
Conventional call admission control algorithms are designed for
homogeneous wireless networks and do not provide a solution that
fits the heterogeneous wireless networks that NGWN represent.
Therefore, there is a need to develop RAT selection algorithms for
heterogeneous wireless networks. In this paper, we propose an
approach for RAT selection which includes receiving different
criteria, assessing and making decisions, then selecting the most
suitable RAT for incoming calls. A comprehensive survey of
different RAT selection algorithms for heterogeneous wireless
networks is also presented.
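The "receiving criteria, assessing and making decisions" step described above could be sketched as a weighted-scoring selection over feasible RATs; every RAT name, attribute value and weight below is hypothetical, chosen only to illustrate the flow.

```python
def select_rat(call_requirements, rats, weights):
    """Pick the feasible RAT with the highest weighted score.
    call_requirements: minimum attribute values the call needs;
    rats: {name: {attribute: value}}; weights: per-attribute importance.
    All names and numbers are illustrative, not from the paper."""
    best, best_score = None, float('-inf')
    for name, attrs in rats.items():
        # feasibility: the RAT must meet every minimum requirement of the call
        if any(attrs[k] < minimum for k, minimum in call_requirements.items()):
            continue
        score = sum(weights[k] * attrs[k] for k in weights)
        if score > best_score:
            best, best_score = name, score
    return best
```

Splitting the decision into a feasibility check (QoS guarantees) and a scored ranking (resource efficiency) mirrors the two goals the abstract assigns to a RAT selection algorithm.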
Abstract: An adaptive Fuzzy Inference Perceptual model has
been proposed for watermarking of digital images. The model
depends on the human visual characteristics of image sub-regions in
the frequency multi-resolution wavelet domain. In the proposed
model, a multi-variable fuzzy-based architecture has been designed to
produce a perceptual membership degree for both the candidate
embedding sub-regions and the watermark embedding strength factor.
Different sizes of benchmark images with different sizes of
watermarks have been applied to the model. Several experimental
attacks have been applied, such as JPEG compression, noise and
rotation, to ensure the robustness of the scheme. In addition, the
model has been compared with different watermarking schemes. The
proposed model showed its robustness to attacks and at the same time
achieved a high level of imperceptibility.