Abstract: The kinematics of manipulators is a central problem in the automatic control of robot manipulators. This paper presents the theoretical background for the kinematic analysis of the 5-DOF Lynx-6 educational robot arm. The kinematics problem is defined as the transformation from Cartesian space to joint space and vice versa. The Denavit-Hartenberg (D-H) representation is used to model the robot's links and joints in this study. Both forward and inverse kinematics solutions for this educational manipulator are presented, and an effective method is suggested to reduce the number of multiple solutions in the inverse kinematics. A visual software package, named MSG, is also developed for testing the motion characteristics of the Lynx-6 robot arm. The kinematics solutions of the software package were found to be identical to the physical motion behavior of the robot arm.
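The D-H representation referenced above builds one homogeneous transform per link from the four D-H parameters and chains them for the forward kinematics. Below is a minimal NumPy sketch of this standard construction; the parameter table at the end is a placeholder, not the actual Lynx-6 D-H table.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain the per-link transforms; dh_params rows are (d, a, alpha)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # end-effector pose in the base frame

# Placeholder D-H table (NOT the actual Lynx-6 parameters).
params = [(0.07, 0.0, np.pi / 2), (0.0, 0.12, 0.0),
          (0.0, 0.12, 0.0), (0.0, 0.0, np.pi / 2), (0.13, 0.0, 0.0)]
print(forward_kinematics([0.1, 0.2, -0.3, 0.4, 0.0], params))
```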
Abstract: In this paper we propose a new approach for flexible document categorization according to document type or genre rather than topic. Our approach implements two homogeneous classifiers: a contextual classifier and a logical classifier. The contextual classifier is based on the document URL, whereas the logical classifier uses the logical structure of the document to perform the categorization. The final categorization is obtained by combining the contextual and logical categorizations. In our approach, each document is assigned to all predefined categories with different membership degrees. Our experiments demonstrate that our approach outperforms other genre categorization approaches.
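As a rough illustration of the final combination step, the sketch below fuses the two classifiers' per-category membership degrees with a simple weighted average. The linear fusion rule and the weight are assumptions made for illustration, not the combination rule used in the paper.

```python
def combine_memberships(contextual, logical, w_ctx=0.5):
    """Fuse two per-category membership vectors into a final one.

    `contextual` and `logical` map category -> degree in [0, 1]; the
    linear weighting is an illustrative choice, not the paper's rule.
    """
    return {cat: w_ctx * contextual[cat] + (1.0 - w_ctx) * logical[cat]
            for cat in contextual}

ctx = {"faq": 0.7, "blog": 0.2, "spec": 0.1}   # from the URL-based classifier
log = {"faq": 0.5, "blog": 0.4, "spec": 0.1}   # from the structure-based one
print(combine_memberships(ctx, log))  # every category keeps a degree
```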
Abstract: This paper proposes a method that discovers sequential patterns corresponding to a user's interests from sequential data. The method expresses these interests as constraint patterns, which can define relationships among the attributes of the items composing the data. The method recursively decomposes the constraint patterns into constraint subpatterns and evaluates the subpatterns in order to efficiently discover sequential patterns satisfying the constraint patterns. The paper also applies the method to sequential data composed of stock price indexes and verifies its effectiveness by comparing it with a method that does not use constraint patterns.
Abstract: Software and applications are subject to serious and damaging security threats, and these threats are increasing as a result of the growing number of potential vulnerabilities. Security testing is an indispensable process for validating software security requirements and identifying security-related vulnerabilities. In this paper we analyze and compare different available vulnerability testing techniques against predefined criteria using the analytic hierarchy process (AHP). We selected five testing techniques: source code analysis, fault code injection, robustness testing, stress testing, and penetration testing. These techniques were evaluated against five criteria: cost, thoroughness, ease of use, effectiveness, and efficiency. The outcome of the study helps researchers, testers, and developers understand the effectiveness of each technique in its respective domain. The study also compares the inner workings of the testing techniques against the selected criteria to achieve optimal testing results.
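For readers unfamiliar with AHP, the core computation derives criterion weights from a pairwise comparison matrix via its principal eigenvector, plus a consistency check. The NumPy sketch below shows that step; the comparison judgments in the matrix are invented for illustration and are not the values elicited in the study.

```python
import numpy as np

# Illustrative pairwise comparison matrix over the five criteria
# (cost, thoroughness, ease of use, effectiveness, efficiency);
# the judgments below are made up, not those from the study.
A = np.array([
    [1,   3,   5,   1/2, 2],
    [1/3, 1,   3,   1/4, 1/2],
    [1/5, 1/3, 1,   1/6, 1/4],
    [2,   4,   6,   1,   3],
    [1/2, 2,   4,   1/3, 1],
], dtype=float)

# Priority vector = principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty's consistency index/ratio (random index RI = 1.12 for n = 5).
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
print("weights:", np.round(w, 3), "CR:", round(CI / 1.12, 3))
```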
Abstract: The inherent flexibility of XML in both structure and semantics makes mining XML data a complex task, with more challenges than traditional association rule mining in relational databases. In this paper, we propose a new model for the effective extraction of generalized association rules from an XML document collection. We use frequent subtree mining techniques directly in the discovery process and do not discard the tree structure of the data in the final rules. The frequent subtrees, found with respect to a user-provided support threshold, are split into complement subtrees to form the rules. We describe our model in multiple steps, from data preparation to rule generation.
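As a rough illustration of the final rule-generation step, the sketch below splits a frequent subtree into an antecedent and its complement and keeps rules above a confidence threshold. Subtrees are simplified here to sets of parent-child edges, which drops ordering; this representation and the confidence threshold are stand-ins, not the paper's actual subtree encoding.

```python
from itertools import combinations

def rules_from_subtree(tree_edges, support, min_conf=0.6):
    """Split a frequent subtree (simplified to a frozenset of
    parent->child edges) into antecedent / complement pairs and keep
    rules whose confidence = supp(tree) / supp(antecedent) is high."""
    rules = []
    edges = sorted(tree_edges)
    for r in range(1, len(edges)):
        for antecedent in combinations(edges, r):
            ant = frozenset(antecedent)
            if ant in support:  # antecedent must itself be frequent
                conf = support[frozenset(edges)] / support[ant]
                if conf >= min_conf:
                    rules.append((ant, frozenset(edges) - ant, conf))
    return rules

support = {
    frozenset({("book", "title")}): 90,
    frozenset({("book", "title"), ("book", "author")}): 60,
}
print(rules_from_subtree({("book", "title"), ("book", "author")}, support))
```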
Abstract: This paper presents an automatic feature recognition method based on center-surround difference detection and fuzzy logic that can be applied to ground-penetrating radar (GPR) image processing. Using the center-surround difference method, salient local image regions are extracted from GPR images as features of the detected objects, and a fuzzy logic strategy is then used to match the detected features against features in a template database. In this way, object detection, which is the key problem in GPR image processing, is converted into two steps: feature extraction and matching. Together, these techniques make the system able to deal with changes in scale, antenna, and noise. Experimental results also show that the system achieves a higher feature detection rate when using GPR to image subsurface structures.
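A common way to realize a center-surround difference is a difference of Gaussians: subtract a coarse blur (the surround) from a fine blur (the center) and keep the large responses. The SciPy sketch below shows this generic form; the scales and threshold are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(image, center_sigma=1.0, surround_sigma=8.0):
    """Center-surround difference as a difference of Gaussians: a
    fine-scale (center) blur minus a coarse-scale (surround) blur
    highlights locally salient regions such as hyperbolic GPR echoes."""
    img = image.astype(float)
    center = gaussian_filter(img, center_sigma)
    surround = gaussian_filter(img, surround_sigma)
    saliency = np.abs(center - surround)
    return saliency / (saliency.max() + 1e-12)  # normalize to [0, 1]

# Salient regions can then be thresholded and passed to the fuzzy matcher.
bscan = np.random.rand(256, 256)          # stand-in for a real GPR B-scan
mask = center_surround_saliency(bscan) > 0.5
```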
Abstract: Single nucleotide polymorphisms (SNPs) hold much promise as a basis for disease-gene association studies. However, research is limited by the cost of genotyping the tremendous number of SNPs. Therefore, it is important to identify a small subset of informative SNPs, the so-called tag SNPs. This subset consists of SNPs selected from the genotypes that accurately represent the rest of the SNPs. Furthermore, an effective evaluation method is needed to assess the prediction accuracy of a set of tag SNPs. In this paper, a genetic algorithm (GA) is applied to the tag SNP selection problem, and the K-nearest neighbor (K-NN) algorithm serves as the prediction method for evaluating tag SNP sets. The experimental data were taken from the HapMap project and consist of genotype data rather than haplotype data. The proposed method consistently identified tag SNPs with considerably better prediction accuracy than methods from the literature, while the number of tag SNPs identified was smaller than in the other methods. The run time of the proposed method was also much shorter than that of the SVM/STSA method at the same accuracy.
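To make the GA/K-NN pairing concrete, the sketch below shows one plausible fitness function: given a candidate tag set, predict every non-tag genotype of every sample by majority vote among its k nearest neighbors in tag-SNP space, and score the fraction predicted correctly. The Hamming distance, the value of k, and the leave-one-out scheme are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def knn_prediction_accuracy(genotypes, tag_idx, k=5):
    """Fitness of a candidate tag set: predict each non-tag SNP of each
    sample from its k nearest neighbors in tag-SNP space (leave-one-out),
    and return the fraction of genotypes predicted correctly."""
    n_samples, n_snps = genotypes.shape
    tags = genotypes[:, tag_idx]
    rest = [j for j in range(n_snps) if j not in set(tag_idx)]
    correct = total = 0
    for i in range(n_samples):
        # Hamming distance to every other sample on the tag SNPs only.
        dist = (tags != tags[i]).sum(axis=1)
        dist[i] = n_snps + 1                       # exclude the sample itself
        nn = np.argsort(dist)[:k]
        for j in rest:
            votes = np.bincount(genotypes[nn, j])  # majority vote of neighbors
            correct += int(np.argmax(votes) == genotypes[i, j])
            total += 1
    return correct / total

# Genotypes coded 0/1/2; a GA would evolve `tag_idx` to maximize this score.
G = np.random.randint(0, 3, size=(60, 40))
print(knn_prediction_accuracy(G, tag_idx=[0, 5, 12, 33]))
```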
Abstract: Serial Analysis of Gene Expression (SAGE) is a powerful quantification technique for generating cell or tissue gene expression data. The gene expression profile of a cell or tissue in several different states is difficult for biologists to analyze because of the large number of genes typically involved. However, feature selection from machine learning can successfully reduce this problem: it reduces the number of features (genes) in a given SAGE data set and retains only the relevant genes. In this study, we used a genetic algorithm to implement feature selection and evaluated the classification accuracy of the selected features with the K-nearest neighbor method. To validate the proposed method, we used two SAGE data sets for testing. The results show that the number of features in the original SAGE data sets can be significantly reduced while achieving higher classification accuracy.
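This abstract and the previous one both pair a GA with K-NN scoring; complementing the fitness sketch above, the toy GA loop below evolves binary gene masks (one bit per gene) with truncation selection, one-point crossover, and bit-flip mutation. The population size, rates, and the dummy fitness are placeholders; in practice `fitness` would be the K-NN classification accuracy on the selected genes.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_feature_selection(fitness, n_genes, pop_size=30, n_keep=50,
                         generations=40, p_mut=0.02):
    """Toy GA over binary gene masks; `fitness(mask)` should score the
    features selected by `mask` (e.g. K-NN classification accuracy)."""
    pop = rng.random((pop_size, n_genes)) < n_keep / n_genes
    for _ in range(generations):
        scores = np.array([fitness(m) for m in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]           # truncation selection
        cut = rng.integers(1, n_genes, size=pop_size // 2)
        kids = np.array([np.concatenate((parents[i % len(parents)][:c],
                                         parents[(i + 1) % len(parents)][c:]))
                         for i, c in enumerate(cut)])   # one-point crossover
        kids ^= rng.random(kids.shape) < p_mut          # bit-flip mutation
        pop = np.vstack((parents, kids))
    scores = np.array([fitness(m) for m in pop])
    return pop[int(np.argmax(scores))]

# Dummy fitness preferring ~20 genes; a stand-in for K-NN accuracy.
best_mask = ga_feature_selection(lambda m: -abs(int(m.sum()) - 20), n_genes=200)
print(best_mask.sum(), "genes selected")
```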
Abstract: This paper deals with dynamic load balancing using PVM. In a distributed environment, load balancing and heterogeneity are critical issues that must be examined in depth to achieve optimal results and efficiency. Various techniques are used to distribute the load dynamically among different nodes and to deal with heterogeneity. These techniques take different approaches, with process migration as the basic concept in various optimized forms. However, process migration is not an easy job; it imposes a considerable burden and processing effort in order to track each process on the nodes. We propose a dynamic load balancing technique in which the application itself intelligently balances the load among different nodes, resulting in efficient use of the system with none of the overhead of process migration. It also provides a simple solution to the problem of load balancing in a heterogeneous environment.
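One way to balance load at the application level, without migrating running processes, is to weigh each task assignment by the node's relative speed and projected backlog. The sketch below illustrates that idea with a min-heap of projected finish times; the speed factors stand in for heterogeneity handling and are assumptions, not the paper's scheduling policy. In the PVM setting, the chosen assignment would be realized by spawning the task on that host.

```python
import heapq

def balance(tasks, nodes):
    """Application-level load balancing without process migration:
    each task is assigned, at submission time, to the node whose
    projected finish time is earliest. `nodes` maps name -> relative
    speed, which models heterogeneity."""
    heap = [(0.0, name) for name in nodes]    # (projected busy time, node)
    heapq.heapify(heap)
    plan = []
    for task_cost in tasks:
        busy, name = heapq.heappop(heap)
        busy += task_cost / nodes[name]       # faster node -> less added time
        plan.append((name, task_cost))
        heapq.heappush(heap, (busy, name))
    return plan

print(balance([4, 2, 7, 1, 5, 3], {"fast": 2.0, "slow": 1.0}))
```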
Abstract: Wavelet transforms are multiresolution decompositions that can be used to analyze signals and images. Image compression is one of the major applications of wavelet transforms in image processing, and wavelets are considered one of the most powerful methods for achieving a high compression ratio. However, the implementation is very time-consuming. On the other hand, parallel computing technologies provide an efficient means of performing wavelet-based image compression. In this paper, we propose a parallel wavelet compression algorithm based on quadtrees. We implement the algorithm using MatlabMPI (a parallel, message-passing version of Matlab), compute its isoefficiency function, and show that it is scalable. Our experimental results also confirm the efficiency of the algorithm.
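The sequential core of such a compressor is decompose, threshold, reconstruct. The sketch below shows that core with PyWavelets (the paper itself uses MatlabMPI, for which there is no direct Python analogue); the quadtree parallelization would split the image into quadrants and run this step per quadrant in separate processes. The wavelet, level, and kept-coefficient fraction are illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets

def compress(image, wavelet="haar", level=3, keep=0.05):
    """Sequential core of wavelet compression: decompose, keep only the
    largest `keep` fraction of coefficients, reconstruct."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0           # hard thresholding
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

img = np.random.rand(128, 128)
print("mean reconstruction error:", np.abs(compress(img) - img).mean())
```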
Abstract: This research proposes a model of a network security system that aims to prevent the production system in a data center from being attacked by intruders. Conceptually, we introduce a decoy system as part of the security system for luring intrusions, and couple a network intrusion detection system (NIDS) with the decoy system to perform intrusion prevention. When the NIDS detects intrusion activity, it signals a redirection module to redirect all malicious traffic to the decoy system instead, and hence the production system remains protected and safe. In a normal situation, however, traffic is simply forwarded to the production system as usual. Furthermore, we assess the performance of the model with various bandwidths, packet sizes, and inter-attack intervals (attack frequencies).
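The decision logic of the redirection module can be summarized in a few lines: once the NIDS flags a source, all of its subsequent traffic is steered to the decoy. The sketch below is a minimal illustration of that control flow only; the addresses and blacklist mechanism are invented, and a real deployment would rewrite flows at the network layer rather than in Python.

```python
PRODUCTION = "10.0.0.10"
DECOY = "10.0.0.99"   # decoy host; both addresses are made up

blacklist = set()

def on_nids_alert(src_ip):
    """Called by the NIDS when it flags a flow as an intrusion."""
    blacklist.add(src_ip)

def route(packet):
    """Redirection module: malicious sources are transparently steered
    to the decoy; everything else is forwarded to production as usual."""
    dst = DECOY if packet["src"] in blacklist else PRODUCTION
    return {**packet, "dst": dst}

on_nids_alert("203.0.113.7")
print(route({"src": "203.0.113.7", "payload": b"exploit"}))   # -> decoy
print(route({"src": "198.51.100.2", "payload": b"GET /"}))    # -> production
```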
Abstract: Service identification is one of the main activities in modeling a service-oriented solution, and errors made during identification can therefore flow down through detailed design and implementation activities, necessitating multiple iterations, especially when building composite applications. Different strategies exist for identifying candidate services, each with its own benefits and trade-offs. The approach presented in this paper proposes selective identification of services, based on in-depth business process analysis coupled with use case analysis, existing asset analysis, and goal-service modeling. The article emphasizes the key activities needed for analysis and service identification in order to build an optimized service-oriented architecture. In contrast to other approaches, this article presents best practices and steps, wherever appropriate, to address the vagueness involved in service identification.
Abstract: In this paper, we present an algorithm for computing a Schur factorization of a real nonsymmetric matrix with ordered diagonal blocks, such that the upper-left blocks contain the eigenvalues of largest magnitude. In particular, in the case of multiple eigenvalues, when the matrix is non-diagonalizable, we construct the invariant subspaces with a few additional heuristic techniques. Numerical results show the stability and accuracy of the algorithm.
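Ordered real Schur forms of this kind can be explored with SciPy, whose `schur` routine accepts a sort callback (backed by LAPACK's eigenvalue reordering) that moves selected eigenvalues to the upper-left blocks. The sketch below orders by magnitude against a threshold; it illustrates the factorization being discussed, not the paper's own algorithm, and the matrix and threshold are made up.

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],    # defective: eigenvalue 3 is repeated
              [0.0, 0.0, -5.0]])

tau = 3.5  # move eigenvalues with |lambda| > tau to the top-left block

# For output='real' the sort callable receives (Re(lambda), Im(lambda)).
T, Z, sdim = schur(A, output="real",
                   sort=lambda re, im: np.hypot(re, im) > tau)

print(np.diag(T))     # largest-magnitude eigenvalue now leads the diagonal
print(sdim)           # dimension of the dominant invariant subspace
Q = Z[:, :sdim]       # orthonormal basis for that invariant subspace
```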
Abstract: This paper presents the theoretical background and the real implementation of an automated computer vision system for flower, fruit, and vegetable processing, covering harvesting, cutting, packaging, classification, and fumigation tasks. The considerations and implementation issues presented in this work can be applied to a wide range of varieties of flowers, fruits, and vegetables, although some of them are especially relevant due to the great number of units manipulated and processed each year around the world. The computer vision algorithms developed in this work are shown in detail and can easily be extended to other applications. Special attention is given to electromagnetic compatibility in order to avoid noisy images. Furthermore, real experimentation has been carried out to validate the developed application. In particular, the tests show that the method is robust and achieves a high success rate in object characterization.
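Classification pipelines of this kind typically start with color segmentation followed by per-object measurements. The OpenCV sketch below is a toy version of that first stage, assuming reddish produce against a dark background; the HSV thresholds, minimum area, and synthetic test frame are all invented for illustration and are not the calibrated values from the paper.

```python
import cv2
import numpy as np

def characterize_objects(bgr_image):
    """Toy color-based characterization: segment reddish regions in HSV
    and report area and bounding box per object."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 80, 60), (12, 255, 255))   # low red hues
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [(cv2.contourArea(c), cv2.boundingRect(c))
            for c in contours if cv2.contourArea(c) > 200]

frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[60:120, 80:160] = (0, 0, 200)   # synthetic red patch as a stand-in
print(characterize_objects(frame))    # [(area, (x, y, w, h))]
```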
Abstract: This paper proposes an auto-classification algorithm for Web pages using data mining techniques. We consider the problem of discovering association rules between terms in a set of Web pages belonging to a category in a search engine database, and present an auto-classification algorithm for solving this problem that is fundamentally based on the Apriori algorithm. The proposed technique has two phases. The first is a training phase, in which human experts determine the categories of different Web pages and a supervised data mining algorithm combines these categories with appropriately weighted index terms according to the highest-support rules among the most frequent words. The second is the categorization phase, in which a web crawler traverses the World Wide Web to build a database categorized according to the results of the data mining approach. This database contains URLs and their categories.
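The Apriori-style heart of the training phase is counting how often terms co-occur across a category's pages and keeping the combinations whose support clears a threshold. The sketch below does this for term pairs only, skipping the usual level-wise pruning from frequent single terms; the sample pages and threshold are invented.

```python
from itertools import combinations
from collections import Counter

def frequent_term_pairs(pages, min_support=0.5):
    """Tiny Apriori-style step: find term pairs whose support (fraction
    of a category's training pages containing both terms) meets the
    threshold; such rules link weighted index terms to the category."""
    n = len(pages)
    counts = Counter()
    for terms in pages:
        for pair in combinations(sorted(set(terms)), 2):
            counts[pair] += 1
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

sports_pages = [["match", "team", "score"],
                ["team", "coach", "score"],
                ["match", "team", "league"]]
print(frequent_term_pairs(sports_pages))  # e.g. ('match', 'team'): 0.67
```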
Abstract: QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while efficiently using network resources; the underlying routing algorithm must be able to find low-cost paths that satisfy the given QoS constraints. The problem of finding a least-cost route subject to such constraints is known to be NP-complete, and several algorithms have been proposed to find a near-optimal solution. However, these heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit the general applicability of the heuristic, or are too costly in terms of execution time to be applicable to large networks. In this paper, we present an algorithm that finds a near-optimal solution fast, which we name Optimized Delay-Constrained Routing (ODCR). It uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution much more quickly.
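To fix ideas, the sketch below shows a generic delay-constrained search of the same flavor: a Dijkstra-style exploration of a combined cost/delay weight that prunes any partial path whose accumulated delay exceeds the bound. The specific weight blend and pruning rule are illustrative assumptions; they are not the ODCR weight function itself.

```python
import heapq

def delay_constrained_path(graph, src, dst, max_delay, alpha=0.5):
    """Heuristic sketch (not the paper's exact ODCR): search on the
    combined weight alpha*cost + (1-alpha)*delay and prune any partial
    path whose delay exceeds the bound. graph[u] maps v -> (cost, delay)."""
    heap = [(0.0, 0.0, 0.0, src, [src])]  # (weight, cost, delay, node, path)
    best = {}
    while heap:
        w, c, d, u, path = heapq.heappop(heap)
        if u == dst:
            return path, c, d
        if best.get(u, float("inf")) <= w:
            continue
        best[u] = w
        for v, (ec, ed) in graph[u].items():
            if d + ed <= max_delay:       # delay constraint prunes the search
                heapq.heappush(heap, (w + alpha * ec + (1 - alpha) * ed,
                                      c + ec, d + ed, v, path + [v]))
    return None

g = {"s": {"a": (1, 5), "b": (4, 1)}, "a": {"t": (1, 5)},
     "b": {"t": (4, 1)}, "t": {}}
print(delay_constrained_path(g, "s", "t", max_delay=4))  # cheap path violates the bound
```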
Abstract: Over the past few years, a number of efforts have been made to build parallel processing systems that utilize the idle power of the LANs and PCs available in many homes and corporations. The main advantage of these approaches is that they provide cheap parallel processing environments for those who cannot afford the expense of supercomputers and parallel processing hardware. However, most of the solutions provided are not very flexible in their use of available resources and are difficult to install and set up.

In this paper, a multi-level web-based parallel processing system (MWPS) is designed (see appendix). MWPS is based on the idea of volunteer computing; it is very flexible, easy to set up, and easy to use. MWPS allows three types of subscribers: simple volunteers (single computers), super volunteers (full networks), and end users. All of these entities are coordinated transparently through a secure web site. Volunteer nodes provide the processing power needed by the system's end users. There is no limit on the number of volunteer nodes, so the system can grow indefinitely. Both volunteers and system users must register and subscribe. Once they subscribe, each entity is provided with the appropriate MWPS components, which are very easy to install.

Super volunteer nodes are provided with special components that make it possible to delegate some of the load to their inner nodes. These inner nodes may in turn delegate some of the load to lower-level inner nodes, and so on. It is the responsibility of the parent super nodes to coordinate the delegation process and deliver the results back to the user.

MWPS uses a simple behavior-based scheduler that takes into consideration the current load and previous behavior of processing nodes. Nodes that fulfill their contracts within the expected time earn a high degree of trust; nodes that fail to satisfy their contracts earn a lower degree of trust.

MWPS is based on the .NET framework and provides the minimal level of security expected in distributed processing environments. Users and processing nodes are fully authenticated, and communications and messages between nodes are secured. The system has been implemented in C#.

MWPS may be used by any group of people or companies to establish a parallel processing or grid environment.
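The behavior-based scheduler described above can be pictured as simple trust bookkeeping: meeting a contract deadline raises a node's trust, missing one lowers it, and new work goes to the most trusted lightly loaded node. The Python sketch below (the system itself is written in C#) uses invented update constants; the paper does not specify them.

```python
class Node:
    def __init__(self, name):
        self.name, self.trust, self.load = name, 0.5, 0

def report(node, met_deadline):
    """Update trust from contract outcomes; failures are punished harder."""
    delta = 0.1 if met_deadline else -0.2
    node.trust = min(1.0, max(0.0, node.trust + delta))

def pick(nodes):
    """Favor trusted nodes, discounted by their current load."""
    return max(nodes, key=lambda n: n.trust / (1 + n.load))

a, b = Node("a"), Node("b")
report(a, True)
report(b, False)
print(pick([a, b]).name)   # -> "a"
```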
Abstract: This paper discusses the potential and importance of applying data mining to Web log mining and points out some problems with conventional search engines. It then offers an improved algorithm based on the original AprioriAll algorithm, which has been widely used in Web log mining. The new algorithm adds the User ID property to every step of producing the candidate set and every step of scanning the database, using it to decide whether an item in the candidate set should be put into the large set that will be used to produce the next candidate set. At the same time, in order to reduce the number of database scans, the new algorithm uses the Apriori property to limit the size of the candidate set as soon as it is produced. Test results show that the improved algorithm has lower time and space complexity, suppresses noise better, and fits the capacity of memory.
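The sketch below illustrates the kind of User ID check the improved algorithm applies: in the first scan, a page enters the large set only if enough distinct users support it, so repeated visits by one user do not inflate support. The log format and threshold are invented for illustration; the full algorithm applies the same check at every candidate generation and scan step.

```python
from collections import defaultdict

def large_one_sequences(log, min_support):
    """First pass of an AprioriAll-style scan over Web log records of
    the form (user_id, page): a page enters the large set only if it is
    supported by enough *distinct users*, which is the User ID check
    the improved algorithm applies at every candidate/scan step."""
    users_per_page = defaultdict(set)
    for user_id, page in log:
        users_per_page[page].add(user_id)
    n_users = len({u for u, _ in log})
    return {p for p, users in users_per_page.items()
            if len(users) / n_users >= min_support}

log = [(1, "/home"), (1, "/cart"), (2, "/home"),
       (2, "/faq"), (3, "/home"), (3, "/cart")]
print(large_one_sequences(log, min_support=0.6))  # {'/home', '/cart'}
```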
Abstract: The latest Geographic Information System (GIS) technology makes it possible to administer the spatial components of daily "business objects" in the corporate database and to apply suitable geographic analysis efficiently in a desktop-focused application. Wireless Internet technology can be used to transfer spatial data from server to client and vice versa. However, wireless Internet suffers from system bottlenecks that can make the data transfer process inefficient, because of the large volume of spatial data. Optimizing the transfer and retrieval of data is therefore an essential issue that must be considered, and an appropriate choice between the R-tree and quadtree spatial data indexing methods can optimize the process. With the rapid proliferation of these databases in the past decade, extensive research has been conducted on the design of efficient data structures to enable fast spatial searching, and commercial database vendors like Oracle have started implementing these spatial indexes to cater to large and diverse GIS applications. This paper focuses on the decision to choose between R-tree and quadtree spatial indexing using an Oracle spatial database in a mobile GIS application. Under our experimental conditions, using the quadtree and R-tree spatial data indexing methods in a single spatial database saved up to 42.5% of the time.
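To show what a quadtree index buys for window queries, the sketch below implements a minimal point quadtree whose query skips any cell disjoint from the search rectangle; a production system would instead rely on the built-in quadtree or R-tree indexes of Oracle Spatial. The capacity, coordinates, and test points are made up.

```python
class QuadTree:
    """Minimal point quadtree for illustration. Each node splits its
    square into four children once it overflows its capacity."""
    def __init__(self, x, y, size, cap=4):
        self.x, self.y, self.size, self.cap = x, y, size, cap
        self.points, self.kids = [], None

    def insert(self, px, py):
        if self.kids is None:
            self.points.append((px, py))
            if len(self.points) > self.cap and self.size > 1e-9:
                h = self.size / 2
                self.kids = [QuadTree(self.x + dx * h, self.y + dy * h, h,
                                      self.cap)
                             for dx in (0, 1) for dy in (0, 1)]
                pts, self.points = self.points, []
                for qx, qy in pts:
                    self.insert(qx, qy)
            return
        h = self.size / 2
        k = (2 if px >= self.x + h else 0) + (1 if py >= self.y + h else 0)
        self.kids[k].insert(px, py)

    def query(self, x0, y0, x1, y1):
        """Return points inside the window [x0,x1] x [y0,y1], skipping
        any cell that is disjoint from the window."""
        if x1 < self.x or x0 > self.x + self.size or \
           y1 < self.y or y0 > self.y + self.size:
            return []
        hits = [(px, py) for px, py in self.points
                if x0 <= px <= x1 and y0 <= py <= y1]
        for kid in self.kids or []:
            hits += kid.query(x0, y0, x1, y1)
        return hits

qt = QuadTree(0, 0, 100)
for p in [(10, 10), (12, 14), (80, 80), (55, 20), (11, 12), (9, 13)]:
    qt.insert(*p)
print(qt.query(5, 5, 20, 20))
```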
Abstract: Multiparty voice over IP (MVoIP) systems allow a group of people to communicate freely with one another via the Internet, and have many applications such as online gaming, teleconferencing, and online stock trading. Peertalk is a peer-to-peer MVoIP system that is more feasible than existing approaches such as P2P overlay multicast and coupled distributed processing. Since stream mixing and distribution are done by the peers, the system is vulnerable to major security threats such as node misbehavior, eavesdropping, Sybil attacks, denial of service (DoS), call tampering, and man-in-the-middle attacks. To thwart these security threats, a security framework called PEERTS (PEEred Reputed Trustworthy System for Peertalk) is implemented so that efficient and secure communication can be carried out between peers.