Abstract: This paper proposes a three-dimensional motion capture and feedback system, based on the Kinect device, for learners of the flying disc throwing action. Compared with conventional 3-D motion capture systems, Kinect has the advantages of low cost and easy system development and operation. A novice flying disc learner is trained to keep the arm movement at a steady height, to twist the waist, and to stretch the elbow according to the waist angle. The proposed system captures the learner's body movement, checks the skeleton positions in the pre-motion, motion, and post-motion phases in several ways, and displays feedback messages to refine the learner's actions.
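The posture checks described above reduce to comparing joint angles computed from captured skeleton coordinates. A minimal sketch (not the paper's actual criteria; the joint positions and the 180-degree target below are hypothetical) of how an elbow-stretch angle could be derived from three Kinect joints:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3-D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # clamp against floating-point drift before acos
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# Hypothetical shoulder, elbow and wrist positions (metres).
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.3, 1.4, 0.0), (0.6, 1.4, 0.0)
stretch = joint_angle(shoulder, elbow, wrist)  # a straight arm gives 180 degrees
```

The same routine applies to the waist-twist check by substituting hip and shoulder joints.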
Abstract: This paper gives a novel approach to real-time speed estimation of multiple traffic vehicles using fuzzy logic and image processing techniques with a proper arrangement of camera parameters. The described algorithm consists of several important steps. First, the background is estimated by computing the median over a time window of specific frames. Second, the foreground is extracted using a fuzzy similarity approach (FSA) between the estimated background pixels and the pixels of the current frame, which contains both foreground and background. Third, the traffic lanes are divided into two parts, one for each direction of travel, for parallel processing. Finally, the speeds of the vehicles are estimated by a Maximum a Posteriori (MAP) estimator. True ground speed is determined using infrared sensors for three different vehicles; compared with this ground truth, the proposed algorithm achieves an accuracy of ±0.74 km/h.
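The first two steps, median background estimation and foreground extraction, can be sketched as follows; for brevity the fuzzy similarity approach (FSA) is replaced here by a crisp difference threshold `tau`, which is an assumption, not the paper's method:

```python
import statistics

def median_background(frames):
    """Per-pixel temporal median over a window of grayscale frames."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[statistics.median(f[r][c] for f in frames) for c in range(w)]
            for r in range(h)]

def foreground_mask(frame, background, tau=30):
    """Crisp stand-in for the fuzzy similarity test: mark a pixel as
    foreground (1) when it differs from the background by more than tau."""
    return [[1 if abs(p - b) > tau else 0 for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

With the temporal median, a pixel occupied by the road in most frames keeps its background value even when vehicles pass through it occasionally.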
Abstract: For communication between human and computer in an interactive computing environment, gesture recognition has been studied vigorously. Accordingly, many studies have proposed efficient recognition algorithms using images captured by 2D cameras. However, these methods have a limitation: the extracted features cannot fully represent the object in the real world. Although many studies have used 3D features instead of 2D features for more accurate gesture recognition, problems such as the processing time needed to generate 3D objects remain unsolved in related research. We therefore propose a method to extract 3D features combined with 3D object reconstruction. This method uses a modified GPU-based visual hull generation algorithm that disables unnecessary processes, such as texture calculation, to generate three kinds of 3D projection maps as the 3D features: the nearest boundary, the farthest boundary, and the thickness of the object projected onto the base plane. In the experimental results, we present the results of the proposed method on eight human postures (T shape, both hands up, right hand up, left hand up, hands front, stand, sit, and bend) and compare the computational time of the proposed method with that of previous methods.
Abstract: In recent decades, a number of robust fuzzy clustering algorithms have been proposed to partition data sets affected by noise and outliers. Robust fuzzy C-means (robust-FCM) is certainly one of the best known among these algorithms. In robust-FCM, noise is modeled as a separate cluster and is characterized by a prototype that has a constant distance δ from all data points. The distance δ determines the boundary of the noise cluster and is therefore a critical parameter of the algorithm. Though some approaches have been proposed to automatically determine the most suitable δ for a specific application, no efficient and fully satisfactory solution exists to date. The aim of this paper is to propose a novel method to compute the optimal δ based on the analysis of the distribution of the percentage of objects assigned to the noise cluster in repeated executions of robust-FCM with decreasing values of δ. The extremely encouraging results obtained on some data sets found in the literature are shown and discussed.
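For reference, the noise cluster described above follows the standard noise-clustering formulation, in which a point's membership in the noise cluster grows as its distances to all good prototypes exceed δ. A minimal sketch of the membership update (this is the classical robust-FCM update, not the paper's δ-selection procedure; distances are assumed positive):

```python
def noise_memberships(dists, delta, m=2.0):
    """Membership of one point in c good clusters plus the noise cluster,
    given its positive distances to the c prototypes, the noise distance
    delta, and the fuzzifier m."""
    p = 2.0 / (m - 1.0)
    u = [1.0 / (sum((d / dk) ** p for dk in dists) + (d / delta) ** p)
         for d in dists]
    return u, 1.0 - sum(u)  # good-cluster memberships, noise membership
```

A point far from every prototype relative to δ receives a noise membership close to 1, which is exactly the quantity whose distribution the proposed method analyzes over decreasing δ.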
Abstract: In this paper a method of modeling text for Polish is discussed. The method is aimed at transforming continuous input text into a text consisting of sentences in a so-called canonical form, characterized, among other properties, by a complete structure and the absence of anaphora and ellipses. The transformation is lossless with respect to the content of the text being transformed. The modeling method has been worked out for the needs of the Thetos system, which translates Polish written texts into the Polish sign language. We believe that the method can also be used in various applications that deal with natural language, e.g. in a text summary generator for Polish.
Abstract: Multiple sequence alignment is a fundamental part of many bioinformatics applications, such as phylogenetic analysis. Many alignment methods have been proposed. Each method gives a different result for the same data set and consequently generates a different phylogenetic tree; hence, the chosen alignment method affects the resulting tree. However, the literature contains no evaluation of multiple alignment methods based on the comparison of their phylogenetic trees. This work evaluates the following eight aligners: ClustalX, T-Coffee, SAGA, MUSCLE, MAFFT, DIALIGN, ProbCons, and Align-m, based on the phylogenetic trees (test trees) they produce on a given data set. The Neighbor-Joining method is used to estimate the trees. Three criteria, namely the dNNI, the dRF, and the Id_Tree, are established to test the ability of the different alignment methods to produce a test tree close to the reference tree
(true tree). Results show that the method which produces the most accurate alignment gives the test tree nearest to the reference tree. MUSCLE outperforms all aligners with respect to the three criteria and for all data sets, performing particularly well when sequence identities are within 10-20%; it is followed by T-Coffee at lower sequence identities. At sequence identities of 30% and above, the tree scores of all methods become similar.
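Of the three criteria, the dRF is the classical Robinson-Foulds distance, which counts bipartitions present in one tree but not the other. A minimal sketch over precomputed splits (the 4-taxon trees and their split sets below are illustrative; parsing Newick trees into splits is omitted):

```python
def rf_distance(splits_a, splits_b):
    """Robinson-Foulds (dRF) distance: the number of bipartitions found
    in one tree but not in the other (symmetric difference)."""
    return len(splits_a ^ splits_b)

# One side of each internal edge for two hypothetical 4-taxon trees.
t1 = {frozenset({"A", "B"}), frozenset({"C", "D"})}   # ((A,B),(C,D))
t2 = {frozenset({"A", "C"}), frozenset({"B", "D"})}   # ((A,C),(B,D))
```

Two identical trees give dRF = 0, so an aligner whose test tree minimizes this distance to the true tree is preferred under this criterion.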
Abstract: Human-computer interaction has progressed considerably from the traditional modes of interaction. Vision-based interfaces are a revolutionary technology, allowing interaction through human actions and gestures. Researchers have developed numerous accurate techniques; however, with few exceptions, these techniques have not been evaluated using standard HCI techniques. In this paper we present a comprehensive framework to address this issue. Our evaluation of a computer vision application shows that, in addition to accuracy, it is vital to address human factors.
Abstract: We consider different types of aggregation operators, such as the heavy ordered weighted averaging (HOWA) operator and the fuzzy ordered weighted averaging (FOWA) operator. We introduce a new extension of the OWA operator called the fuzzy heavy ordered weighted averaging (FHOWA) operator. The main characteristic of this aggregation operator is that it deals with uncertain information, represented in the form of fuzzy numbers (FNs), within the HOWA operator. We develop the basic concepts of this operator and study some of its properties. We also develop a wide range of families of FHOWA operators, such as the fuzzy push-up allocation, the fuzzy push-down allocation, the fuzzy median allocation, and the fuzzy uniform allocation.
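The abstract does not spell out the FHOWA computation, but combining the FOWA and HOWA definitions suggests the following sketch, under two stated assumptions: arguments are triangular fuzzy numbers (a, b, c) reordered by their modal value b, and, as in the heavy OWA, the weight sum may lie anywhere between 1 and n:

```python
def fhowa(fuzzy_args, weights):
    """FHOWA sketch: reorder triangular fuzzy numbers (a, b, c) by their
    modal value b in descending order, then form the weighted sum.
    As in the heavy OWA, the weights need only satisfy 1 <= sum(w) <= n."""
    n = len(fuzzy_args)
    assert 1.0 <= sum(weights) <= n
    ordered = sorted(fuzzy_args, key=lambda t: t[1], reverse=True)
    return tuple(sum(w * t[i] for w, t in zip(weights, ordered))
                 for i in range(3))
```

With sum(w) = 1 this reduces to the ordinary FOWA; pushing the weight sum toward n gives the "heavy" total-style aggregations the families above allocate in different ways.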
Abstract: This paper proposes a scheduling scheme that uses feedback control to reduce the response time of aperiodic tasks with soft real-time constraints. We design an algorithm based on the proposed scheduling scheme and the Total Bandwidth Server (TBS), a conventional server technique for scheduling aperiodic tasks. We then describe the feedback controller of the algorithm and give methods for tuning the control parameters. A simulation study demonstrates that the algorithm can reduce the mean response time by up to 26% compared with TBS, in exchange for a small number of deadline misses.
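The baseline TBS deadline-assignment rule that the proposed algorithm builds on can be sketched as follows (the feedback controller itself is not described in the abstract and is not reproduced here):

```python
def tbs_deadlines(jobs, Us):
    """Total Bandwidth Server rule: each aperiodic job k, released at r_k
    with execution time C_k, receives the deadline
        d_k = max(r_k, d_{k-1}) + C_k / Us,
    where Us is the processor bandwidth reserved for the server."""
    d_prev, deadlines = 0.0, []
    for r, C in jobs:  # jobs as (release_time, execution_time), in order
        d_prev = max(r, d_prev) + C / Us
        deadlines.append(d_prev)
    return deadlines
```

The jobs are then scheduled by EDF alongside the periodic tasks; the feedback-controlled variant in the paper adjusts this assignment to shorten mean response time.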
Abstract: The development of many measurement and inspection systems for products based on real-time image processing cannot be carried out entirely in a laboratory, due to the size or the temperature of the manufactured products. Such systems must be developed in successive phases. First, the system is installed in the production line with only an operational service to acquire images of the products and other complementary signals. Next, a recording service for the images and signals must be developed and integrated into the system. Only after a large set of product images is available can the development of the real-time image processing algorithms for measurement or inspection of the products be accomplished under realistic conditions. Finally, the recording service is turned off or removed, and the system operates only with the real-time services for the acquisition and processing of the images. This article presents a systematic performance evaluation of the image compression algorithms currently available to implement a real-time recording service. The results allow a trade-off to be established between the reduction (compression) of the image size and the CPU time required to achieve that compression level.
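Such a performance evaluation can be reproduced in miniature with a general-purpose codec; the sketch below uses zlib as a stand-in for the image compression algorithms evaluated in the article and records a (level, compression ratio, CPU time) triple per setting:

```python
import time
import zlib

def benchmark_levels(data, levels=range(1, 10)):
    """Measure compression ratio and CPU time of zlib at each level:
    the ratio/time trade-off curve a real-time recording service must
    balance against the frame rate."""
    results = []
    for lvl in levels:
        t0 = time.process_time()
        packed = zlib.compress(data, lvl)
        results.append((lvl, len(data) / len(packed),
                        time.process_time() - t0))
    return results

frame = bytes(range(256)) * 4096          # ~1 MB stand-in for one image
curve = benchmark_levels(frame)
```

A recording service would pick the highest level whose per-frame CPU time still fits within the acquisition period.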
Abstract: CIM is the standard formalism for modeling management
information developed by the Distributed Management Task
Force (DMTF) in the context of its WBEM proposal, designed to
provide a conceptual view of the managed environment. In this
paper, we propose the inclusion of formal knowledge representation
techniques, based on Description Logics (DLs) and the Web Ontology
Language (OWL), in CIM-based conceptual modeling, and then we
examine the benefits of such a decision. The proposal is specified
as a CIM metamodel level mapping to a highly expressive subset
of DLs capable of capturing all the semantics of the models. The
paper shows how the proposed mapping provides CIM diagrams with
precise semantics and can be used for automatic reasoning about the
management information models, as a design aid, by means of new-generation
CASE tools, thanks to the use of state-of-the-art automatic
reasoning systems that support the proposed logic and use algorithms
that are sound and complete with respect to the semantics. Such a
CASE tool framework has been developed by the authors and its
architecture is also introduced. The proposed formalization is not
only useful at design time, but also at run time through the use of
rational autonomous agents, in response to a need recently recognized
by the DMTF.
Abstract: This paper presents a constrained valley detection algorithm. The intent is to find valleys in a map for path planning that enables a robot or a vehicle to move safely. The constraints on a valley are a desired width and a desired depth, which ensure sufficient space for movement when a vehicle passes through the valley. We propose an algorithm to find valleys satisfying these two-dimensional constraints. The merit of our algorithm is that no pre-processing or post-processing is necessary to eliminate undesired small valleys. The algorithm is validated through simulation using digitized elevation data.
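A 1-D analogue of the width/depth constraint can be sketched as follows (the paper's actual 2-D algorithm is not given in the abstract; the fixed `rim` reference elevation here is an assumption). Note how the width threshold itself discards undesired small valleys, with no separate pre- or post-processing:

```python
def find_valleys(profile, rim, min_depth, min_width):
    """Return (start, end) index pairs of contiguous runs of an elevation
    profile that lie at least min_depth below the rim elevation and span
    at least min_width samples."""
    valleys, start = [], None
    for i, z in enumerate(profile):
        if rim - z >= min_depth:        # deep enough at this sample
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_width:
                valleys.append((start, i - 1))
            start = None
    if start is not None and len(profile) - start >= min_width:
        valleys.append((start, len(profile) - 1))
    return valleys
```

Runs that are deep but too narrow, or wide but too shallow, never enter the result, so no cleanup pass is needed.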
Abstract: Optical burst switching (OBS) has been proposed to realize the next-generation Internet based on wavelength division multiplexing (WDM) network technologies. In OBS, burst contention is one of the major problems, and deflection routing has been designed to resolve it. However, deflection routing becomes less able to prevent burst contentions as the network load grows. In this paper, we introduce flow rate control methods to reduce burst contentions. We propose new flow rate control methods based on the leaky bucket algorithm and deflection routing, namely the separate leaky bucket deflection method and the dynamic leaky bucket deflection method. In the proposed methods, the edge nodes that generate data bursts carry out the flow rate control protocols. To verify the effectiveness of flow rate control in OBS networks, we show through computer simulations that the proposed methods improve network utilization and reduce the burst loss probability.
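The leaky bucket algorithm underlying both proposed methods can be sketched as a token-based shaper at the edge node (the separate/dynamic deflection logic is not reproduced; `rate` and `capacity` are hypothetical parameters):

```python
class LeakyBucket:
    """Leaky bucket used as a traffic shaper: a burst is admitted on the
    primary route only when enough tokens have accumulated at the
    configured rate; otherwise it is held back or deflected."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum bucket content
        self.tokens = capacity
        self.last = 0.0

    def admit(self, now, burst_size):
        # replenish tokens for the elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if burst_size <= self.tokens:
            self.tokens -= burst_size
            return True             # send burst on the primary route
        return False                # hold or deflect the burst
```

Lowering `rate` at the edge nodes throttles the burst flow into the core, which is the mechanism by which the proposed methods reduce contentions under high load.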
Abstract: With the increasing number of on-chip components and the critical requirement for processing power, the Chip Multiprocessor (CMP) has gained wide acceptance in both academia and industry during the last decade. However, conventional bus-based on-chip communication schemes suffer from very high communication delay and low scalability in large-scale systems. The Network-on-Chip (NoC) has been proposed to solve the bottleneck of parallel on-chip communications by applying different network topologies which separate the communication phase from the computation phase. Observing that the memory bandwidth of the communication between on-chip components and off-chip memory has become a critical problem even in NoC-based systems, in this paper we propose a novel 3D NoC with on-chip Dynamic Random Access Memory (DRAM) in which different layers are dedicated to different functionalities such as processors, cache, or memory. Results show that, by using our proposed architecture, the average link utilization is reduced by 10.25% for SPLASH-2 workloads, and our proposed design requires 1.12% fewer execution cycles than the traditional design on average.
Abstract: This research focuses on developing a new segmentation method, called the trend-based segmentation method (TBSM), to improve forecasting models. Generally, piece-wise linear representation (PLR) can find pairs of trading points well for time series data, but it performs poorly for stock forecasting in a complicated stock environment, because stock prices exhibit multiple trading trends. Taking these trends into account when generating trading signals improves the precision of the forecasting model. Therefore, TBSM combined with an SVR model is used to detect the trading points of various Taiwanese and American stocks under different trend tendencies. The experimental results show that our trading system is more profitable and can be implemented in the real-time stock market.
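The abstract does not define TBSM precisely; a minimal, hypothetical stand-in for trend-based segmentation is to split the price series at trend reversals, with segment boundaries as candidate trading points (the SVR forecasting stage is omitted):

```python
def trend_segments(prices):
    """Split a price series into maximal runs of a single trend direction
    (up or down); the boundary indices are candidate trading points."""
    segments, start = [], 0
    for i in range(1, len(prices) - 1):
        prev_up = prices[i] >= prices[i - 1]
        next_up = prices[i + 1] >= prices[i]
        if prev_up != next_up:          # trend reversal at index i
            segments.append((start, i))
            start = i
    segments.append((start, len(prices) - 1))
    return segments
```

In contrast to a fixed-resolution PLR, each segment here follows one trend for its whole extent, which is the property the method exploits for trading signals.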
Abstract: This paper presents a new approach for automatic document categorization. Exploiting the logical structure of the document, our approach assigns an HTML document to one or more categories (thesis, paper, call for papers, email, ...). Using a set of training documents, our approach generates a set of rules used to categorize new documents. The flexibility of the approach is achieved by associating a weight with each rule, representing its importance in discriminating between the possible categories. This weight is dynamically modified at each new document categorization. Experiments with the proposed approach provide satisfactory results.
Abstract: Content-based Image Retrieval (CBIR) aims at searching image databases for specific images that are similar to a given query image, based on matching of features derived from the image content. This paper focuses on a low-dimensional color-based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique. The cluster (region) mode is then used as the representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on the R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. Alternatively, the images in the database are clustered based on region feature similarity using Euclidean distance, and only the representative (centroid) features of these clusters are indexed using the R*-tree, thus improving efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The results of these methods are compared. A JAVA-based query engine supporting query-by-example is built to retrieve images by color.
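The final retrieval step, matching a query's representative color against stored region representatives by Euclidean distance, can be sketched with a linear scan standing in for the R*-tree index (the RGB cluster modes below are hypothetical mean shift outputs):

```python
import math

def nearest_regions(query_color, representatives, k=2):
    """Rank stored region representatives by Euclidean distance to the
    query color in 3-D color space; a linear-scan stand-in for an
    R*-tree nearest-neighbor query."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(query_color, c)))
    return sorted(representatives, key=dist)[:k]

# Hypothetical RGB cluster modes produced by mean shift for three regions.
reps = [(200, 30, 30), (30, 200, 30), (30, 30, 200)]
top = nearest_regions((220, 40, 25), reps, k=1)
```

The point of the R*-tree is that it answers this query without scanning every representative, which matters once the database holds many region descriptors.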
Abstract: In this paper, all variables are assumed to be integer and positive. In this method, the objective function may be either maximized or minimized, while the constraints are always expressed as less-than-or-equal inequalities. The method selects a suitable pair of inequalities and eliminates one of the variables; by repeating this step, one finally obtains a single inequality in (n-m+1) unknowns, where, in this final inequality, m is the number of constraints and n is the number of decision variables.
Abstract: Since the driving speed and control accuracy of commercial optical disk drives are increasing significantly, an efficient controller is needed to monitor the track-seeking and track-following operations of the servo system in order to achieve the desired data extraction response. The nonlinear behaviors of the actuator and servo system of the optical disk drive influence the laser spot positioning. Here, a model-free fuzzy control scheme is employed to design the track-seeking servo controller for a d.c.-motor-driven optical disk drive system. In addition, the sliding mode control strategy is introduced into the fuzzy control structure to construct a 1-D adaptive fuzzy rule intelligent controller, simplifying the implementation and improving the control performance. Experimental results show that the steady-state error of track seeking using this fuzzy controller can be maintained within the track width (1.6 μm). The controller can be used in both track-seeking and track-following servo control operations.
Abstract: The fuzzy fingerprint vault is a recently developed cryptographic construct, based on the polynomial reconstruction problem, that secures critical data with fingerprint data. However, previous research is not applicable to fingerprints having few minutiae, since it uses a fixed degree of the polynomial without considering the number of fingerprint minutiae. To solve this problem, we use an adaptive polynomial degree that considers the number of minutiae extracted from each user. We also apply multiple polynomials to avoid the possible degradation in security of a simple solution (i.e., using a low-degree polynomial). Based on the experimental results, our method can make a possible attack 2^192 times more difficult than using a low-degree polynomial, as well as verify users having few minutiae.
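The polynomial reconstruction underlying the vault can be sketched as follows: the secret is hidden in the coefficients, genuine minutiae provide points on the polynomial, and enough genuine points recover it by Lagrange interpolation. Chaff points, the adaptive degree, and the multiple-polynomial scheme are omitted; the field size, coefficients, and minutiae values below are illustrative only:

```python
P = 65537  # small prime field, for illustration only

def eval_poly(coeffs, x):
    """Evaluate a polynomial (coefficients low to high) modulo P."""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def lagrange_at_zero(points):
    """Recover p(0) (here, the secret constant term) from degree+1
    genuine points by Lagrange interpolation modulo the prime P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return total

secret_coeffs = [1234, 56, 7]          # degree-2 polynomial hiding the key
minutiae_x = [3, 8, 21]                # hypothetical quantized minutiae
points = [(x, eval_poly(secret_coeffs, x)) for x in minutiae_x]
recovered = lagrange_at_zero(points)
```

An attacker who cannot tell genuine points from chaff must guess a consistent subset; raising the degree (as the adaptive scheme does when minutiae allow) raises the number of points needed and hence the attack cost.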