Abstract: The next stage of the home networking environment is
expected to be ubiquitous, with every object equipped with an
RFID (Radio Frequency Identification) tag. To fully support such
an environment, home networking middleware should be able to
recommend home services based on a user's interests and to
efficiently manage information about users' service usage profiles.
We therefore consider USN (Ubiquitous Sensor Network) technology,
which recognizes and manages an appliance's state information
(location, capabilities, and so on) through connected RFID tags. The
Intelligent Multi-Agent Middleware (IMAM) architecture was
proposed to intelligently manage mobile RFID-based home
networking and to automatically supply information about home
services that match a user's interests. Evaluation results for
IMAM's personalization services using Bayesian networks and
decision trees are presented.
Abstract: This paper focuses on a critical component of situational awareness (SA), the neural control of constant-depth flight of an autonomous underwater vehicle (AUV). Constant-depth flight is a challenging but important task for AUVs seeking a high level of autonomy under adverse conditions. Within the SA strategy, we propose a multirate neural control of an AUV trajectory for a nontrivial mid-to-small-size AUV "r2D4" stochastic model. This control system has been demonstrated and evaluated by simulation of diving maneuvers using the Simulink software package. The simulation results show that the chosen AUV model is stable in the presence of noise, and suggest that the proposed technique will be useful for fast SA of similar AUV systems in real-time search-and-rescue operations.
Abstract: Wavelet neural networks (WNNs) have emerged as a vital alternative to the vastly studied multilayer perceptrons (MLPs) since their first implementation. In this paper, we applied various clustering algorithms, namely K-means (KM), Fuzzy C-means (FCM), symmetry-based K-means (SBKM), symmetry-based Fuzzy C-means (SBFCM), and modified point-symmetry-based K-means (MPKM), to choose the translation parameters of a WNN. These modified WNNs were then applied to heterogeneous cancer classification using benchmark microarray data and compared against a conventional WNN with random initialization. Experimental results showed that a WNN classifier with the MPKM algorithm is more precise than the conventional WNN as well as the WNNs with the other clustering algorithms.
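As a rough illustration of the abstract's idea, plain K-means can pick a WNN's translation parameters from the data instead of random initialization. This is a minimal sketch: the 1-D toy data, the value of k, and the iteration count are illustrative assumptions, not the paper's microarray setup or its symmetry-based variants.

```python
# Hedged sketch: choosing WNN translation parameters with plain K-means
# instead of random initialization. Data, k, and iters are illustrative.
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm on 1-D points; returns the k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        # keep the old centroid if a cluster happens to be empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Each centroid becomes the translation (center) of one wavelon.
data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95, 2.0, 2.1, 1.9]
translations = kmeans(data, k=3)
print(translations)
```

The symmetry-based variants in the paper change only the distance function used in the assignment step.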
Abstract: The Model for Knowledge Base of Computational Objects
(KBCO model) has been successfully applied to represent human
knowledge in domains such as plane geometry, physics, and calculus.
However, the original model cannot easily be applied in the field
of inorganic chemistry because of domain-specific knowledge
problems. The aim of this article is therefore to describe how we
extend the Computational Object (Com-Object) in the KBCO model,
its kinds of facts, its problem models, and its inference algorithms
to develop a program for solving problems in inorganic chemistry.
Our purpose is to develop an application that can help students
study inorganic chemistry at school. The application was built
successfully using Maple, C#, and WPF technology. It solves
problems automatically and gives human-readable solutions that
agree with those written by students and teachers.
Abstract: Missing data is a persistent problem in almost all
areas of empirical research. The missing data must be treated very
carefully, as data plays a fundamental role in every analysis.
Improper treatment can distort the analysis or generate biased results.
In this paper, we compare and contrast various imputation techniques
on missing data sets and make an empirical evaluation of these
methods with a view to constructing quality software models. Our
empirical study is based on two public NASA data sets, KC4 and
KC1, whose complete subsets of 125 and 2107 cases, respectively,
contain no missing values. These data sets were used to create
Missing at Random (MAR) data. Listwise Deletion (LD), Mean
Substitution (MS), interpolation, regression with an error term, and
Expectation-Maximization (EM) approaches were then used to
compare the effects of the various techniques.
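Two of the simpler strategies named above, listwise deletion and mean substitution, can be sketched in a few lines; the toy rows are illustrative, not the KC4/KC1 data.

```python
# Hedged sketch of two of the compared imputation strategies:
# listwise deletion and mean substitution. The rows are a toy example.
def listwise_deletion(rows):
    """Drop every row that contains a missing value (None)."""
    return [r for r in rows if None not in r]

def mean_substitution(rows):
    """Replace each missing value with the column mean of observed values."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) /
             sum(1 for v in c if v is not None) for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(r)]
            for r in rows]

rows = [[1.0, 2.0], [3.0, None], [5.0, 6.0]]
print(listwise_deletion(rows))   # the row containing None is removed
print(mean_substitution(rows))   # None -> column mean (here 4.0)
```

Regression-based and EM imputation instead model each missing value from the other variables, which is why they tend to distort the analysis less.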
Abstract: In this work, we present an automatic vehicle detection
system for airborne videos using combined features. We propose a
pixel-wise classification method for vehicle detection using Dynamic
Bayesian Networks. Although the classification is performed pixel-wise,
relations among neighboring pixels in a region are preserved in the
feature extraction process. The main novelty of the detection scheme is
that the extracted combined features comprise not only pixel-level
information but also region-level information. Afterwards, the
detected vehicles are tracked using an efficient Kalman filter with
dynamic particle sampling. Experiments were conducted on a wide
variety of airborne videos. The proposed framework assumes no prior
information about camera height, orientation, or target object
size. The results demonstrate
flexibility and good generalization abilities of the proposed method on
a challenging dataset.
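The tracking step rests on the standard Kalman predict/update cycle; a minimal scalar sketch (illustrative noise values, without the paper's dynamic particle sampling) looks like this:

```python
# Hedged sketch: a 1-D Kalman filter tracking a detected vehicle's
# position. q and r are illustrative process/measurement noise values;
# the paper's tracker additionally uses dynamic particle sampling.
def kalman_track(measurements, q=0.1, r=1.0):
    """Scalar random-walk Kalman filter; returns filtered positions."""
    x, p = measurements[0], 1.0    # initial state estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                  # predict: variance grows by process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the measurement residual
        p = (1 - k) * p            # posterior variance
        estimates.append(x)
    return estimates

est = kalman_track([10.0, 10.4, 11.1, 11.3, 12.0])
print(est)
```

In practice the state would be a position-velocity vector per vehicle, with one filter instance per detected object.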
Abstract: The Artificial Bee Colony (ABC) algorithm is a relatively new swarm intelligence technique for clustering. It produces higher
quality clusters than other population-based algorithms, but suffers from poor energy efficiency, inconsistent cluster quality, and typically slower convergence. Inspired by the energy-saving foraging behavior of natural honey bees, this paper presents a Quality and Quantity Aware Artificial Bee Colony (Q2ABC) algorithm to improve the cluster quality, energy efficiency, and convergence speed of the original ABC. To evaluate the performance of the Q2ABC algorithm, experiments were conducted on a suite of ten benchmark UCI datasets. The results demonstrate that Q2ABC outperformed the ABC and K-means algorithms in the quality of the clusters delivered.
Abstract: In biological and biomedical research, motif-finding tools are important for locating regulatory elements in DNA sequences. Many such tools are available, and they often yield position weight matrices and significance indicators. These indicators, p-values and E-values, describe the likelihood that a motif alignment is generated by the background process, and the expected number of occurrences of the motif in the data set, respectively. The various tools often estimate these indicators differently, so they are not directly comparable. One approach for comparing motifs from different tools is to compute the E-value as the product of the p-value and the number of possible alignments in the data set. In this paper we explore the combinatorics of the motif alignment models OOPS, ZOOPS, and ANR, and propose a generic algorithm for computing the number of possible combinations accurately. We also show that using the wrong alignment model can give E-values that diverge significantly from their true values.
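The counting idea above can be sketched for the two simpler models. The OOPS and ZOOPS counts below are the standard ones (each sequence of length L contributes L - w + 1 candidate sites, plus one "no site" option under ZOOPS); the ANR count derived in the paper is more involved and is omitted. The sequence lengths, motif width, and p-value are illustrative.

```python
# Hedged sketch of E-value = p-value * number of possible motif alignments,
# for the standard OOPS and ZOOPS counts (ANR is more involved and omitted).
def oops_alignments(seq_lengths, w):
    """One Occurrence Per Sequence: each sequence gives L - w + 1 sites."""
    n = 1
    for L in seq_lengths:
        n *= L - w + 1
    return n

def zoops_alignments(seq_lengths, w):
    """Zero Or One Occurrence Per Sequence: one extra 'no site' choice each."""
    n = 1
    for L in seq_lengths:
        n *= L - w + 2
    return n

def e_value(p_value, n_alignments):
    return p_value * n_alignments

lengths = [100, 120, 80]   # illustrative sequence lengths
w = 8                      # illustrative motif width
print(oops_alignments(lengths, w))    # 93 * 113 * 73 candidate alignments
print(zoops_alignments(lengths, w))   # 94 * 114 * 74 candidate alignments
```

Using the ZOOPS count when the motif was found under OOPS (or vice versa) changes the multiplier, which is exactly how the wrong alignment model skews the E-value.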
Abstract: A gene network gives the knowledge of the regulatory
relationships among the genes. Each gene has its activators and
inhibitors that regulate its expression positively and negatively
respectively. Genes themselves are believed to act as activators and
inhibitors of other genes. They can even activate one set of genes and
inhibit another set. Identifying gene networks is one of the most
crucial and challenging problems in Bioinformatics. Most work done
so far either assumes that there is no time delay in gene regulation or
there is a constant time delay. We here propose a Dynamic Time-
Lagged Correlation Based Method (DTCBM) to learn the gene
networks, which uses time-lagged correlation to find the potential
gene interactions, and then uses a post-processing stage to remove
false gene interactions due to common parents, and finally uses dynamic
correlation thresholds for each gene to construct the gene network.
DTCBM finds correlations between gene expression signals shifted in
time, and therefore takes into consideration the multiple time-delay
relationships among the genes. Our method is implemented in
MATLAB, and experimental results on Saccharomyces cerevisiae gene
expression data, together with comparisons with other methods,
indicate that it performs better.
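The core step, correlating a regulator's profile against time-shifted copies of a target's profile, can be sketched as follows; the toy profiles and the maximum lag are illustrative assumptions, not the paper's DTCBM thresholds or post-processing.

```python
# Hedged sketch of time-lagged correlation between two gene expression
# profiles, the building block DTCBM uses to propose regulator->target
# interactions. Profiles and max_lag are toy values.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def best_lag(regulator, target, max_lag=3):
    """Return (lag, corr) maximizing |corr(reg[t], target[t + lag])|."""
    scored = []
    for lag in range(max_lag + 1):
        x = regulator[:len(regulator) - lag] if lag else regulator
        y = target[lag:]
        scored.append((lag, pearson(x, y)))
    return max(scored, key=lambda t: abs(t[1]))

reg = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0]
tgt = [5.0, 5.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0]  # follows reg after 2 steps
print(best_lag(reg, tgt))
```

DTCBM then keeps only interactions whose best-lag correlation passes a per-gene dynamic threshold and prunes false edges explained by common parents.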
Abstract: Robustness is one of the primary performance criteria for an Intelligent Video Surveillance (IVS) system. One of the key factors in enhancing the robustness of dynamic video analysis is providing accurate and reliable means for shadow detection. If left undetected, shadow pixels may result in incorrect object tracking and classification, as they tend to distort localization and measurement information. Most of the algorithms proposed in the literature are computationally expensive, some to the extent of equalling the computational requirements of motion detection itself. In this paper, the homogeneity property of shadows is explored in a novel way for shadow detection. An adaptive division-image analysis (which highlights the homogeneity property of shadows), followed by a relatively simple projection-histogram analysis for penumbra suppression, is the key novelty of our approach.
Abstract: Texture classification is an important image processing
task with a broad application range. Many different techniques for
texture classification have been explored. Using sparse approximation
as a feature extraction method for texture classification is a relatively
new approach, and Skretting et al. recently presented the Frame
Texture Classification Method (FTCM), showing very good results on
classical texture images. As an extension of that work, the FTCM is
here tested on a real-world application: the detection of abnormalities
in mammograms. Some extensions to the original FTCM that are
useful in certain applications are implemented: two different smoothing
techniques and a vector augmentation technique. Both the detection of
microcalcifications (as a primary detection technique and as the last
stage of a detection scheme) and the detection of soft-tissue lesions
in mammograms are explored. All the results are interesting, and especially the results
using FTCM on regions of interest as the last stage in a detection
scheme for microcalcifications are promising.
Abstract: Modern queueing theory is one of the most powerful
tools for the quantitative and qualitative analysis of communication systems, computer networks, transportation systems, and many other technical systems. This paper is devoted to the analysis of queueing
systems arising in network theory and communications theory
(so-called open queueing networks). The authors present a theorem
on the law of the iterated logarithm (LIL) for the queue length of customers in an open
queueing network, and its application to a mathematical model of
an open message switching system.
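For orientation, a classical scalar LIL statement for a queue-length process Q(t) has the following shape; the centering and variance terms here are generic placeholders, and the exact constants and conditions in the paper's theorem may differ.

```latex
% Generic LIL template (placeholders, not the paper's exact theorem):
%   Q(t)     - queue length at time t
%   \bar{q}  - long-run drift of the queue-length process
%   \sigma^2 - variance parameter of the underlying process
\limsup_{t \to \infty}
  \frac{Q(t) - \bar{q}\, t}{\sqrt{2 \sigma^{2}\, t \ln \ln t}} = 1
  \quad \text{a.s.}
```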
Abstract: The advances in multimedia and networking technologies
have created opportunities for Internet pirates, who can easily
copy multimedia contents and illegally distribute them on the Internet,
thus violating the legal rights of content owners. This paper describes
how a simple and well-known watermarking procedure based on a
spread spectrum method and a watermark recovery by correlation can
be improved to effectively and adaptively protect MPEG-2 videos
distributed on the Internet. In fact, the procedure, in its simplest
form, is vulnerable to a variety of attacks. However, its security
and robustness have been increased, and its behavior has been
made adaptive with respect to the video terminals used to open
the videos and the network transactions carried out to deliver them
to buyers. Such adaptive behavior enables the proposed procedure
to embed watermarks efficiently, and this characteristic makes the
procedure well suited to web contexts, where watermarks, usually
generated from fingerprinting codes, have to be inserted into the
distributed videos "on the fly", i.e. during the purchase
transactions.
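The basic spread-spectrum embedding and correlation detection that the paper builds on can be sketched as follows; the additive rule, the strength alpha, and the threshold are illustrative choices, not the paper's adaptive MPEG-2 procedure.

```python
# Hedged sketch of basic spread-spectrum watermarking: add a keyed
# pseudo-random +/-1 sequence to host coefficients, detect by correlating
# with that sequence. alpha and threshold are illustrative values.
import random

def embed(coeffs, key, alpha=0.5):
    rng = random.Random(key)
    mark = [rng.choice((-1.0, 1.0)) for _ in coeffs]
    return [c + alpha * m for c, m in zip(coeffs, mark)]

def detect(coeffs, key, threshold=0.25):
    rng = random.Random(key)
    mark = [rng.choice((-1.0, 1.0)) for _ in coeffs]
    corr = sum(c * m for c, m in zip(coeffs, mark)) / len(coeffs)
    return corr > threshold

host = [random.Random(1).uniform(-1, 1) for _ in range(1000)]
marked = embed(host, key=42)
print(detect(marked, key=42))   # watermarked copy: correlation near alpha
print(detect(host, key=42))     # unmarked copy: correlation near zero
```

The vulnerability the paper addresses is precisely that this simple form breaks under attacks; its contribution is making the embedding adaptive to the terminal and the delivery transaction.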
Abstract: Text data mining is a process of exploratory data
analysis. Classification maps data into predefined groups or classes.
It is often referred to as supervised learning because the classes are
determined before examining the data. This paper describes a
proposed radial basis function classifier that performs comparative
cross-validation against an existing radial basis function classifier.
The feasibility and the benefits of the proposed approach are
demonstrated by means of a data mining problem: direct marketing,
which has become an important application field of data mining.
Comparative cross-validation involves estimating accuracy by either
stratified k-fold cross-validation or equivalent repeated random
subsampling. While the proposed method may have high bias, its
performance (accuracy estimation in our case) may be poor due to
high variance. Thus the accuracy with the proposed radial basis
function classifier was lower than with the existing one. However,
there is a smaller improvement in runtime and a larger improvement
in precision and recall. In the proposed method, both classification
accuracy and prediction accuracy are determined, and the prediction
accuracy is comparatively high.
Abstract: An active suspension system has been proposed to
improve ride comfort. A quarter-car 2-degree-of-freedom (DOF)
system is designed and constructed on the basis of the concept of a
four-wheel independent suspension to simulate the actions of an
active vehicle suspension system. The purpose of a suspension
system is to support the vehicle body and increase ride comfort. The
aim of the work described in the paper was to illustrate the
application of fuzzy logic technique to the control of a continuously
damping automotive suspension system. Ride comfort is improved
by reducing the car-body acceleration caused by road disturbances,
both from smooth roads and from real road roughness.
The paper describes also the model and controller used in the
study and discusses the vehicle response results obtained from a
range of road input simulations. In the conclusion, a comparison of
active suspension fuzzy control and Proportional-Integral-
Derivative (PID) control is shown using MATLAB simulations.
Abstract: This paper describes an enhanced cookie-based
method for counting the visitors of web sites by using a web log
processing system that aims to cope with the ambitious goal of
creating countrywide statistics about the browsing practices of real
human individuals. The focus is on describing a new, more
efficient way of detecting the human beings behind web users by
placing different identifiers on the client computers. We briefly
introduce our processing system, designed to handle the massive
amount of data records continuously gathered from the most
important content providers in Hungary. We conclude by showing
statistics over different time spans that compare the efficiency of
multiple visitor counting methods with the one presented here,
along with some interesting charts about content providers and web
usage based on real data recorded in 2007.
Abstract: In this paper, we present a method named Signal Level
Matrix (SLM) which can improve the accuracy and stability of active
RFID indoor positioning systems. Considering both accuracy and
cost, we deploy the readers in a uniform distribution to set up and
separate the overlapping signal coverage areas, in order to achieve
a preliminary location estimate. Then, based on the proposed SLM
concept and on the characteristic that signal strength attenuates as
distance increases, the system cross-examines the distribution of
adjacent signals to locate users more accurately. The experimental
results indicate that the adaptive positioning method proposed in
this paper effectively improves the accuracy and stability of the
positioning system.
Abstract: The recent growth of multimedia transmission over
wireless communication systems creates the challenge of protecting
the data from loss due to wireless channel effects. Images are
corrupted by noise and fading when transmitted over a wireless
channel. Since an image is transmitted block by block, severe
fading can damage entire image blocks. The aim of this paper
arises from the need to enhance digital images at the wireless
receiver side. A proposed Boundary Interpolation (BI) algorithm
using wavelets is adapted here to reconstruct a lost block of the
image at the receiver, based on the correlation between the lost
block and its neighbors. A new technique combining the
wavelet-based BI algorithm with a pixel interleaver has also been
implemented. The pixel interleaver distributes the pixels of the
original image to new positions before transmission, so a block
lost in the wireless channel affects only isolated individual
pixels. The lost pixels can then be recovered at the receiver side
using the wavelet-based BI algorithm. The results show that the
proposed combination of the wavelet-based BI algorithm with a
pixel interleaver performs better in terms of MSE and PSNR.
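The interleaving idea can be sketched independently of the wavelet step: a keyed permutation scatters each transmitted block across the image, so one lost block becomes isolated missing pixels. The toy image, block size, and key are illustrative.

```python
# Hedged sketch of pixel interleaving: a keyed shuffle before transmission
# turns one lost contiguous block into scattered single-pixel losses that
# neighbor-based interpolation can recover. Sizes and key are toy values.
import random

def permutation(n, key):
    idx = list(range(n))
    random.Random(key).shuffle(idx)
    return idx

def interleave(pixels, key):
    perm = permutation(len(pixels), key)
    return [pixels[i] for i in perm]

def deinterleave(pixels, key):
    perm = permutation(len(pixels), key)
    out = [None] * len(pixels)
    for j, i in enumerate(perm):
        out[i] = pixels[j]
    return out

img = list(range(64))            # a toy 8x8 "image", flattened
tx = interleave(img, key=7)
tx[16:24] = [None] * 8           # one transmitted block is lost
rx = deinterleave(tx, key=7)
lost = [i for i, p in enumerate(rx) if p is None]
print(lost)                      # scattered positions, not one run
```

In the full scheme, each scattered `None` would then be filled by the wavelet-based boundary interpolation from its surviving neighbors.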
Abstract: This paper describes a probabilistic method for
three-dimensional object recognition using a shared pool of surface
signatures. This technique uses flatness, orientation, and convexity
signatures that encode the surface of a free-form object into three
discriminative vectors, and then creates a shared pool of data by
clustering the signatures using a distance function. This method
applies Bayes' rule in the recognition process, and it is extensible
to a large collection of three-dimensional objects.
Abstract: QoS routing aims to find paths between senders and
receivers that satisfy the QoS requirements of the application while
using network resources efficiently; the underlying routing
algorithm must be able to find low-cost paths that satisfy the given
QoS constraints. The problem of finding least-cost constrained
routes is known to be NP-hard, and several algorithms have been
proposed to find near-optimal solutions. But these heuristics or algorithms either
impose relationships among the link metrics to reduce the complexity
of the problem which may limit the general applicability of the
heuristic, or are too costly in terms of execution time to be applicable
to large networks. In this paper, we analyzed two algorithms namely
Characterized Delay Constrained Routing (CDCR) and Optimized
Delay Constrained Routing (ODCR). The CDCR algorithm takes an
approach to delay-constrained routing that captures the trade-off
between cost minimization and the risk level regarding the delay
constraint. The ODCR algorithm uses an adaptive path weight
function together with an additional constraint imposed on the path
cost to restrict the search space, and hence finds a near-optimal
solution in much less time.
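For context, the delay-constrained least-cost (DCLC) problem that both CDCR and ODCR approximate can be stated as a tiny exhaustive search; this sketch is an illustrative baseline on a toy graph, not an implementation of either algorithm.

```python
# Hedged sketch of the DCLC problem both CDCR and ODCR approximate.
# Exhaustive search is used only because the toy graph is tiny; the
# graph and delay bound are illustrative, not from the paper.
def dclc(graph, src, dst, delay_bound):
    """Return (cost, path) of the cheapest src->dst path with delay <= bound."""
    best = (float("inf"), None)
    def dfs(node, cost, delay, path):
        nonlocal best
        if delay > delay_bound or cost >= best[0]:
            return                      # prune infeasible / dominated branches
        if node == dst:
            best = (cost, path)
            return
        for nxt, (c, d) in graph[node].items():
            if nxt not in path:         # keep paths simple (loop-free)
                dfs(nxt, cost + c, delay + d, path + [nxt])
    dfs(src, 0, 0, [src])
    return best

# edges: node -> {neighbor: (cost, delay)}
graph = {
    "A": {"B": (1, 5), "C": (4, 1)},
    "B": {"D": (1, 5)},
    "C": {"D": (4, 1)},
    "D": {},
}
print(dclc(graph, "A", "D", delay_bound=6))   # the cheap path is too slow
```

With the bound at 6, the cheapest path A-B-D (cost 2, delay 10) is infeasible and the search returns A-C-D instead, which is exactly the cost-versus-delay trade-off the two analyzed algorithms navigate heuristically on large networks.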