Abstract: Many studies address collision detection between real and virtual objects in 3D space. In general, these techniques require considerable computing power, so much of the existing work relies on cloud computing, network computing, or distributed computing. For this reason, this paper proposes a novel fast 3D collision detection algorithm between real and virtual objects based on 2D intersection areas. The proposed algorithm uses four cameras and a coarse-and-fine strategy to improve both the accuracy and the speed of collision detection. In the coarse step, the system examines the intersection area between the real and virtual object silhouettes in every camera view; the result of this step is the set of virtual sensors that may be in collision in 3D space. In the fine step, the system decides collisions accurately in 3D space using the visual hull algorithm. The performance of the algorithm is verified by comparison with an existing algorithm. We believe the proposed algorithm can benefit research and application fields such as HCI, augmented reality, and intelligent spaces.
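The coarse step can be illustrated with a small sketch. Assuming the silhouettes are available as per-camera binary masks (the mask arrays, sensor ids, and the `coarse_collision_candidates` helper below are illustrative, not the paper's implementation), a virtual sensor survives the coarse test only if its silhouette overlaps the real object's silhouette in every view:

```python
import numpy as np

def coarse_collision_candidates(real_masks, virtual_masks_per_sensor):
    """Coarse step: a virtual sensor remains a collision candidate only if
    its silhouette overlaps the real object's silhouette in every camera view.

    real_masks: list of boolean arrays, one per camera (real-object silhouette)
    virtual_masks_per_sensor: dict sensor_id -> list of boolean arrays
    """
    candidates = []
    for sensor_id, vmasks in virtual_masks_per_sensor.items():
        # intersection must be non-empty in all views simultaneously
        if all(np.any(r & v) for r, v in zip(real_masks, vmasks)):
            candidates.append(sensor_id)
    return candidates
```

Only the surviving sensor indices would then be passed to the fine, visual-hull-based 3D test.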
Abstract: We propose a multi-agent based utilitarian approach
to model and understand information flows in social networks that
lead to Pareto optimal informational exchanges. We model the
individual expected utility function of the agents to reflect the net
value of information received. We show how this model, adapted
from a theorem by Karl Borch dealing with an actuarial Risk
Exchange concept in the Insurance industry, can be used for social
network analysis. We develop a utilitarian framework that allows us
to interpret Pareto optimal exchanges of value as potential
information flows, while achieving a maximization of a sum of
expected utilities of information of the group of agents. We examine
some interesting conditions on the utility function under which the
flows are optimal. We illustrate the promise of this new approach to
attach economic value to information in networks with a synthetic
example.
Abstract: In this paper, the Differential Evolution (DE) algorithm, a promising evolutionary algorithm, is applied to train Radial Basis Function (RBF) networks, including the automatic configuration of the network architecture. Classification tasks on the Iris, Wine, New-thyroid, and Glass data sets are conducted to measure the performance of the networks. Compared with the standard RBF training algorithm in the Matlab neural network toolbox, DE achieves more rational architectures for RBF networks, and the resulting networks consequently show strong generalization ability.
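A minimal sketch of the common DE/rand/1/bin variant illustrates the mutation, crossover, and selection loop (the abstract does not specify the exact DE scheme or parameters, so the operators and settings below are assumptions):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference vector,
    apply binomial crossover, then keep the trial only if it is no worse."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct individuals different from i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                     # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()
```

For RBF training, the decision vector would encode centers, widths, and output weights; here the sketch is shown on a plain function minimization.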
Abstract: Web 2.0 (social networking, blogging and online
forums) can serve as a data source for social science research because
it contains a vast amount of information from many different users.
The volume of this information has been growing rapidly and forms a
network of heterogeneous data, which makes relevant content difficult
to find and therefore of limited use. We have proposed a novel
theoretical model for gathering and processing data from Web 2.0 that
reflects the semantic content of web pages more faithfully. This article
deals with the analysis part of the model and its use for the content
analysis of blogs. The introductory part of the article describes the
methodology for gathering and processing data from blogs; the next
part focuses on the evaluation and content analysis of blogs that write
about a specific trend.
Abstract: In this paper, a new reverse converter for the moduli set {2^n, 2^n−1, 2^(n−1)−1} is presented. We improve a previously introduced conversion algorithm to derive an efficient hardware design for the reverse converter. The hardware architecture of the proposed converter is based on carry-save adders and regular binary adders, without requiring modular adders. The presented design is faster than the most recently introduced reverse converter for the moduli set {2^n, 2^n−1, 2^(n−1)−1}, and it also outperforms the reverse converters for the recently introduced moduli set {2^(n+1)−1, 2^n, 2^n−1}.
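As a software reference check (not the proposed carry-save hardware architecture), reverse conversion for this moduli set can be verified with the Chinese Remainder Theorem, since 2^n, 2^n−1 and 2^(n−1)−1 are pairwise coprime; the function names below are illustrative:

```python
from math import prod

def crt_reverse_convert(residues, moduli):
    """Reconstruct X from its residues via the Chinese Remainder Theorem.
    The moduli must be pairwise coprime, as {2^n, 2^n-1, 2^(n-1)-1} is."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse mod m
    return x % M

def moduli_set(n):
    """The paper's moduli set for a given n."""
    return [2 ** n, 2 ** n - 1, 2 ** (n - 1) - 1]
```

A hardware converter computes the same mapping, but with carry-save and binary adders instead of general modular arithmetic.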
Abstract: In this paper the application of a neuro-fuzzy system to the equalization of channel distortion is considered. The structure and operation algorithm of the neuro-fuzzy equalizer are described. Using a neuro-fuzzy equalizer in digital signal transmission reduces both the training time of the parameters and the complexity of the network. A simulation of the neuro-fuzzy equalizer is performed, and the obtained results confirm the effectiveness of neuro-fuzzy technology in channel equalization.
Abstract: This paper proposes a view-point-insensitive human
pose recognition system using a neural network. The recognition
system consists of a silhouette image capturing module, a data-driven
database, and a neural network. The advantages of our system are,
first, that it can capture multiple-view-point silhouette images of a 3D
human model automatically; this automatic capture module reduces
the time-consuming task of database construction. Second, we develop
a large feature database to offer view-point insensitivity in pose
recognition. Third, we use a neural network to recognize human poses
from multiple views, because every pose produces similar feature
patterns across models, even though each model has a different
appearance and view-point. To construct the database, we create 3D
human models using 3D modeling tools. Contour shape is used to
convert each silhouette image into a 12-dimensional feature vector.
This extraction is processed semi-automatically, with the benefit that
capturing images and converting them to silhouettes in a real capture
environment becomes unnecessary. We demonstrate the effectiveness
of our approach with experiments in a virtual environment.
Abstract: Lurking behavior is common in information-seeking oriented communities. Turning lurkers into contributors can help virtual communities gain competitive advantages. Based on the ecological cognition framework, this study proposes a model to examine the antecedents of lurking behavior in information-seeking oriented virtual communities. This study argues that the desires for emotional support, information support, performance-approach, performance-avoidance, mastery-approach, mastery-avoidance, ability trust, benevolence trust, and integrity trust affect lurking behavior. This study offers an approach to understanding the determinants of lurking behavior in online contexts.
Abstract: Linear discriminant analysis (LDA) can be
conveniently generalized into a nonlinear form, kernel LDA (KLDA),
by using kernel functions. However, KLDA leads to a generalized
eigenvalue problem that is often singular. To avoid this complication,
this paper proposes an iterative algorithm for two-class KLDA. The
proposed KLDA is used as a nonlinear discriminant classifier, and
experiments show that its performance is comparable with that of
SVM.
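A rough sketch of a two-class kernel discriminant illustrates the setting; here the (often singular) within-class scatter is handled by ridge regularization rather than the paper's iterative scheme, and the class and parameter names below are invented for illustration:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between row-sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelFisher:
    """Two-class kernel Fisher discriminant sketch with ridge-regularized
    within-class scatter (lam), not the paper's iterative algorithm."""
    def __init__(self, gamma=1.0, lam=1e-3):
        self.gamma, self.lam = gamma, lam

    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        n = len(y)
        N = np.zeros((n, n))
        means = {}
        for c in (0, 1):
            Kc = K[:, y == c]
            nc = Kc.shape[1]
            means[c] = Kc.mean(1)
            N += Kc @ (np.eye(nc) - 1 / nc) @ Kc.T   # within-class scatter
        self.alpha = np.linalg.solve(N + self.lam * np.eye(n),
                                     means[1] - means[0])
        self.thresh = 0.5 * (self.alpha @ means[0] + self.alpha @ means[1])
        return self

    def predict(self, Xnew):
        proj = rbf_kernel(Xnew, self.X, self.gamma) @ self.alpha
        return (proj > self.thresh).astype(int)
```

The regularizer makes the scatter matrix invertible; the paper's iterative algorithm avoids the singularity differently.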
Abstract: This paper proposes a technique to protect against
email bombing. The technique employs two statistical approaches,
Naïve Bayes (NB) and neural networks, to show that it is possible to
differentiate between good and bad traffic and thereby protect against
email bombing attacks. The neural networks and Naïve Bayes
classifiers can be trained on many email messages that include both
input and output data for legitimate and non-legitimate emails. The
input to the model includes the contents of the message body, the
subject, and the headers; this information is used to determine whether
an email is normal or an attack. Preliminary tests suggest that Naïve
Bayes can be trained to identify accurately which emails represent
attacks.
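A minimal multinomial Naïve Bayes filter over tokenized message text sketches the statistical part of this approach (the whitespace tokenization, Laplace smoothing, and class labels below are assumptions, not the paper's exact setup):

```python
import math
from collections import Counter

class NaiveBayesMailFilter:
    """Minimal multinomial Naive Bayes sketch for attack/legitimate email,
    trained on whitespace-tokenized text with Laplace smoothing."""
    def fit(self, texts, labels):
        self.counts = {lbl: Counter() for lbl in set(labels)}
        self.priors = Counter(labels)
        for text, label in zip(texts, labels):
            self.counts[label].update(text.lower().split())
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, text):
        scores = {}
        for label, words in self.counts.items():
            total = sum(words.values())
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.priors[label] / sum(self.priors.values()))
            for w in text.lower().split():
                score += math.log((words[w] + 1) / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)
```

In practice the body, subject, and header fields would each contribute tokens to the same bag-of-words representation.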
Abstract: In this work a novel approach to color image
segmentation is discussed, using higher-order entropy as a textural
feature for determining thresholds over a two-dimensional image
histogram. A similar approach is applied to achieve multi-level
thresholding in both grayscale and color images. The paper discusses
two methods of color image segmentation using RGB as the standard
processing space. The threshold for segmentation is decided by
maximizing the conditional entropy in the two-dimensional histograms
of the color image separated into three grayscale images of R, G and
B. The features are first developed independently for the three (R, G,
B) channels and then combined to obtain the color-component
segmentation. Considering local maxima instead of the global
maximum of the conditional entropy yields multiple thresholds for the
same image, which forms the basis for multilevel thresholding.
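The entropy-maximization criterion can be sketched in its simpler one-dimensional form. The paper works with a 2-D conditional-entropy histogram; the 1-D Kapur-style analogue below, applied per color channel, is only an illustration:

```python
import numpy as np

def max_entropy_threshold(channel, bins=256):
    """1-D maximum-entropy (Kapur-style) threshold for one channel:
    pick t maximizing the sum of entropies of the two histogram halves."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1        # class-conditional distributions
        h = -(q0[q0 > 0] @ np.log(q0[q0 > 0])) \
            - (q1[q1 > 0] @ np.log(q1[q1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

Keeping local maxima of `h` instead of only the global one would yield the multiple thresholds used for multilevel segmentation.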
Abstract: Current tools for data migration between document-oriented
and relational databases have several disadvantages. We
propose a new approach for data migration between document-oriented
and relational databases. During data migration, the relational
schema of the target (relational database) is automatically created
from a collection of XML documents. The proposed approach is
verified on data migration between the document-oriented database
IBM Lotus Notes/Domino and a relational database implemented in
the relational database management system (RDBMS) MySQL.
Abstract: Classification is one of the primary themes in
computational biology. The accuracy of classification strongly
depends on the quality of a dataset, and we need a method to
evaluate this quality. In this paper, we propose a new graphical
analysis method using the 'Membership-Deviation Graph (MDG)' for
analyzing the quality of a dataset. The MDG represents the degree of
membership and the deviations of the instances of a class in the
dataset. The result of the MDG analysis is used to understand specific
features and to select the best feature for classification.
Abstract: In this paper we present a new method for coin
identification. The proposed method adopts a hybrid scheme using the
Eigenvalues of the covariance matrix, the Circular Hough Transform
(CHT) and Bresenham's circle algorithm. The statistical and
geometrical properties of the small and large Eigenvalues of the
covariance matrix of a set of edge pixels over a connected region of
support are explored for the purpose of circular object detection. A
sparse matrix technique is used to perform the CHT: since sparse
matrices omit zero elements and store only a small number of
non-zero elements, they save matrix storage space and computational
time. A neighborhood suppression scheme is used to find the valid
Hough peaks. The accurate positions of the circumference pixels are
identified using a raster scan algorithm that exploits geometrical
symmetry. After finding circular objects, the proposed method uses the
texture on the surface of the coins, called textons, which are unique
properties of coins and refer to the fundamental micro-structures in
generic natural images. The method has been tested on several
real-world images, including coin and non-coin images, and its
performance is also evaluated with respect to noise-withstanding
capability.
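The sparse-accumulator CHT idea can be sketched with a Python dict as the sparse accumulator, for a single known radius (a simplification of the full multi-radius transform; the function name and angle sampling below are illustrative):

```python
import numpy as np

def sparse_hough_circle(edge_points, radius, n_angles=64):
    """Circular Hough transform sketch with a sparse (dict) accumulator:
    each edge pixel votes for candidate centres lying at `radius` from it;
    the most-voted cell is returned as the detected centre."""
    acc = {}
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for x, y in edge_points:
        for t in thetas:
            c = (int(round(x - radius * np.cos(t))),
                 int(round(y - radius * np.sin(t))))
            acc[c] = acc.get(c, 0) + 1          # only touched cells are stored
    return max(acc, key=acc.get)
```

The dict stores only non-zero accumulator cells, which is the storage and speed advantage the sparse-matrix technique exploits; neighborhood suppression would then be applied around the strongest peaks.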
Abstract: We present an analysis of the spatial patterns of
generic disease spread simulated by a stochastic long-range correlation
SIR model, in which individuals can be infected at long distances
following a power-law distribution. We integrated various tools,
namely perimeter, circularity, fractal dimension, and aggregation
index, to characterize and investigate spatial pattern formation. Our
primary goal was to understand, for a given model of interest, which
tool has an advantage over the others and to what extent. We found
that perimeter and circularity give information only in the case of
strong correlation, while the fractal dimension and aggregation index
exhibit the growth rule of pattern formation depending on the degree
of the correlation exponent (β). The aggregation index is used as an
alternative way to describe the degree of the pathogenic ratio (α). This
study may provide a useful approach to characterizing and analyzing
the pattern formation of epidemic spreading.
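A toy version of such a long-range SIR process can be sketched on a lattice. The lattice size, recovery time, and Pareto-based distance kernel below are assumptions chosen for illustration, not the paper's model parameters:

```python
import numpy as np

def simulate_longrange_sir(L=40, beta=2.0, attempts=3, t_rec=3, steps=30, seed=0):
    """Toy stochastic SIR on an L x L lattice with long-range infection:
    each infected site makes `attempts` infection trials per step at a
    distance d >= 1 drawn from a heavy-tailed (power-law-like) Pareto
    distribution, and recovers after t_rec steps.
    Returns the final grid (0 = S, 1 = I, 2 = R)."""
    rng = np.random.default_rng(seed)
    state = np.zeros((L, L), np.int8)
    clock = np.zeros((L, L), np.int8)
    state[L // 2, L // 2] = 1                    # single initial infective
    for _ in range(steps):
        for i, j in np.argwhere(state == 1):
            for _ in range(attempts):
                d = rng.pareto(beta - 1) + 1     # tail exponent ~ beta
                ang = rng.uniform(0, 2 * np.pi)
                x = int(i + d * np.cos(ang)) % L
                y = int(j + d * np.sin(ang)) % L
                if state[x, y] == 0:
                    state[x, y] = 1
        clock[state == 1] += 1
        state[(state == 1) & (clock >= t_rec)] = 2
    return state
```

Pattern tools such as perimeter, fractal dimension, or the aggregation index would then be computed on the infected/recovered clusters of this grid.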
Abstract: We propose a solution for the version control of
documents in web services, in particular a new approach designed for
XML documents. The approach is applied in a centralized repository,
which coexists with other repositories in a decentralized system. To
implement the activities of this approach in a standard model we use
ECA active rules, and we show how Event-Condition-Action (ECA)
rules have been incorporated as a mechanism for the version control
of documents. Integrating ECA rules is motivated by the fact that they
provide clear declarative semantics and induce an immediate
operational realization in the system without the need for human
intervention.
Abstract: In pattern recognition applications, low-level
segmentation and high-level object recognition are generally
considered two separate steps. This paper presents a method that
bridges the gap between low-level and high-level object recognition.
It is based on a Bayesian network representation and a network
propagation algorithm; at the low level it uses the hierarchical
structure of quadratic spline wavelet image bases. The method is
demonstrated on a simple circuit-diagram component identification
problem.
Abstract: This paper is concerned with a motion recognition approach based on fuzzy WP (Wavelet Packet) feature extraction from Vicon physical data sets. For this purpose, we use an efficient fuzzy mutual-information-based WP transform for feature extraction; the method estimates the required mutual information using a novel approach based on fuzzy membership functions. The physical action data set includes 10 normal and 10 aggressive physical actions that measure human activity, collected from 10 subjects using the Vicon 3D tracker. The experiments cover running, sitting, and walking among the various physical activities. The experimental results show that the presented feature extraction approach achieves good recognition performance.
Abstract: Real-time 3D applications have to guarantee
interactive rendering speed, so the number of polygons that can be
rendered is limited by the performance of the graphics hardware and
algorithms. In general, rendering performance increases drastically
when only the dynamic 3D models, which are far fewer than the static
ones, are handled. Since the shapes and colors of static objects do not
change while the viewing direction is fixed, their rendered information
can be reused. We render huge numbers of polygons that conventional
techniques cannot handle in real time by using an image of the static
objects and merging it with the rendering result of the dynamic
objects. Performance would normally degrade whenever the static-object
image must be updated, which involves removing a static object
that starts to move and re-rendering the other static objects overlapped
by the moving ones; based on the visibility of the object beginning to
move, we can skip this updating process. As a result, we improve
rendering performance and reduce the variation in rendering speed
between frames. The proposed method renders a total of 200,000,000
polygons, of which 500,000 are dynamic and the rest static, at about
100 frames per second.
Abstract: A neural network's performance can be measured by efficiency and accuracy. The major disadvantages of the neural network approach are that the generalization capability of neural networks is often significantly low, and that tuning the weights of the network to yield an accurate model of a highly complex and nonlinear system may take a very long time. This paper presents a novel neuro-fuzzy architecture based on the Extended Kalman filter. To test the performance and applicability of the proposed neuro-fuzzy model, a simulation study of a nonlinear complex dynamic system is carried out. The proposed method can be applied to on-line incremental adaptive learning for the prediction of financial time series. A benchmark case study is used to demonstrate that the proposed model is a superior neuro-fuzzy modeling technique.