Abstract: This paper presents the theoretical background and
a real implementation of an automated computer system that introduces machine vision into flower, fruit and vegetable processing for harvesting, cutting, packaging, classification, or fumigation tasks. The considerations and implementation issues presented in this work can be applied to a wide range of flower, fruit and vegetable varieties, although some are especially relevant due to the large number of units manipulated and processed worldwide each year. The computer vision algorithms developed in this
work are shown in detail, and can be easily extended to other
applications. Special attention is given to electromagnetic compatibility in order to avoid noisy images. Furthermore, real experiments have been carried out to validate the developed application. In particular, the tests show that the method is robust and achieves a high success rate in object characterization.
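As an illustration of the kind of object characterization step such a pipeline can include, the following Python sketch (illustrative only, not the authors' algorithm; the file name and thresholds are assumptions) segments objects by Otsu thresholding with OpenCV and computes simple shape features:

```python
import cv2  # OpenCV
import numpy as np

img = cv2.imread("sample_fruit.png")            # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Otsu's method picks a global threshold automatically
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if area > 100:                               # ignore specks/noise
        circularity = 4 * np.pi * area / perimeter ** 2  # 1.0 = perfect circle
        print(area, round(circularity, 2))       # simple features for a classifier
```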
Abstract: In this paper, we present an algorithm for computing a Schur factorization of a real nonsymmetric matrix with ordered diagonal blocks, such that the upper left blocks contain the eigenvalues of largest magnitude. In the case of multiple eigenvalues, when the matrix is not diagonalizable, we construct the invariant subspaces with a few additional heuristic techniques; numerical results show the stability and accuracy of the algorithm.
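For readers who want to experiment with eigenvalue-ordered Schur forms, the following sketch uses SciPy's reordering facility. It is not the authors' algorithm: for simplicity it uses the complex Schur form and an assumed magnitude threshold, which yields a partition (selected eigenvalues in the upper-left block) rather than a full ordering.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

tau = 1.0  # illustrative magnitude threshold
# sort callable selects eigenvalues to move to the upper-left block
T, Z, sdim = schur(A, output="complex", sort=lambda x: abs(x) > tau)

print(np.abs(np.diag(T)))                   # leading sdim entries exceed tau
print(np.allclose(Z @ T @ Z.conj().T, A))   # factorization reproduces A
```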
Abstract: Service identification is one of the main activities in
the modeling of a service-oriented solution, and therefore errors
made during identification can flow down through detailed design
and implementation activities that may necessitate multiple
iterations, especially when building composite applications. Different strategies exist for identifying candidate services, each with its own benefits and trade-offs. The approach presented in this paper proposes a selective service identification approach based on in-depth business process analysis coupled with use case and existing asset analysis and goal-service modeling. This article clearly emphasizes the key activities needed for the analysis and service identification required to build an optimized service-oriented architecture. In contrast to other approaches, this article notes best practices and steps, wherever appropriate, to point out the ambiguity involved in service identification.
Abstract: This research work proposes a model of network security systems that aims to prevent a production system in a data center from being attacked by intrusions. Conceptually, we introduce a decoy system as part of the security system for luring intrusions, and apply a network intrusion detection system (NIDS), coupled with the decoy system, to perform intrusion prevention. When the NIDS detects intrusion activity, it signals a redirection module to redirect all malicious traffic to the decoy system instead, and hence the production system is protected and safe. In a normal situation, however, traffic is simply forwarded to the production system as usual. Furthermore, we assess the performance of the model with various bandwidths, packet sizes and inter-attack intervals (attack frequencies).
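A minimal sketch of the redirection idea (illustrative only, not the paper's module; the decoy address is an assumption) could install a NAT rule when the NIDS raises an alert, rewriting the attacker's traffic toward the decoy:

```python
import subprocess

DECOY_IP = "10.0.0.99"  # assumed decoy address

def redirect_to_decoy(attacker_ip):
    """Rewrite the attacker's traffic to the decoy with a DNAT rule."""
    subprocess.run(
        ["iptables", "-t", "nat", "-A", "PREROUTING",
         "-s", attacker_ip, "-j", "DNAT", "--to-destination", DECOY_IP],
        check=True)

# called from the NIDS alert handler, e.g. redirect_to_decoy("192.0.2.7")
```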
Abstract: This paper deals with dynamic load balancing using PVM. In a distributed environment, load balancing and heterogeneity are critical issues that must be examined in depth in order to achieve optimal results and efficiency. Various techniques are used to distribute the load dynamically among different nodes and to deal with heterogeneity. These techniques take different approaches in which process migration is the basic concept, offered in different optimized flavors. However, process migration is not an easy job; it imposes a heavy burden and considerable processing effort in order to track each process on the nodes. We propose a dynamic load balancing technique in which the application itself intelligently balances the load among different nodes, resulting in efficient use of the system with none of the overhead of process migration. It also provides a simple solution to the problem of load balancing in a heterogeneous environment.
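As a sketch of the idea, assuming the application tracks per-node load and a speed factor for heterogeneity (the node names and metrics below are hypothetical), the master can simply assign each new task to the effectively least-loaded node instead of migrating processes:

```python
# Application-level load balancing without process migration:
# effective load = queued work scaled down by the node's relative speed.
nodes = {"node1": {"load": 0.8, "speed": 1.0},
         "node2": {"load": 0.3, "speed": 0.5},
         "node3": {"load": 0.5, "speed": 2.0}}

def pick_node(nodes):
    """Choose the node with the lowest speed-adjusted load."""
    return min(nodes, key=lambda n: nodes[n]["load"] / nodes[n]["speed"])

print(pick_node(nodes))  # node3: fast node with moderate load
```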
Abstract: Serial Analysis of Gene Expression (SAGE) is a powerful quantification technique for generating cell or tissue gene expression
data. The gene expression profile of a cell or tissue in several different states is difficult for biologists to analyze because of the large number of genes typically involved. However, feature selection in machine learning can effectively reduce this problem: it reduces the features (genes) in specific SAGE data and retains only the relevant genes. In this study, we used a genetic algorithm to implement feature selection and evaluated the classification accuracy of the selected features with the K-nearest
neighbor method. In order to validate the proposed method, we used
two SAGE data sets for testing. The results show that the number of features of the original SAGE data sets can be significantly reduced while higher classification accuracy is achieved.
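A minimal sketch of the overall scheme, with random stand-in data and illustrative GA settings rather than the paper's exact operators, might look like this:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 200))                  # stand-in for a SAGE matrix
y = rng.integers(0, 2, 60)                 # stand-in class labels

def score(mask):
    """Fitness: 3-fold K-NN accuracy on the selected genes."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.1   # sparse random feature masks
for _ in range(10):                        # a few GA generations
    fit = np.array([score(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-10:]]   # truncation selection
    cut = rng.integers(1, X.shape[1], 10)
    kids = np.array([np.concatenate((parents[i][:c], parents[(i + 1) % 10][c:]))
                     for i, c in enumerate(cut)])   # one-point crossover
    kids ^= rng.random(kids.shape) < 0.01           # bit-flip mutation
    pop = np.vstack((parents, kids))

best = pop[np.argmax([score(ind) for ind in pop])]
print(best.sum(), "genes selected, accuracy", round(score(best), 3))
```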
Abstract: Single nucleotide polymorphisms (SNPs) hold much promise as a basis for disease-gene association studies. However, research is limited by the cost of genotyping the tremendous number of SNPs. Therefore, it is important to identify a small subset of informative SNPs, the so-called tag SNPs. This subset consists of selected SNPs of the genotypes and accurately represents the rest of the SNPs. Furthermore, an effective evaluation method is needed to evaluate the prediction accuracy of a set of tag SNPs. In this paper, a genetic algorithm (GA) is applied to tag SNP problems, and the K-nearest neighbor (K-NN) method serves as the prediction method for tag SNP selection. The experimental data were taken from the HapMap project; they consist of genotype data rather than haplotype data. The proposed method consistently identified tag SNPs with considerably better prediction accuracy than methods from the literature, while the number of tag SNPs identified was smaller than that of the other methods. The run time of the proposed method was also much shorter than that of the SVM/STSA method at the same accuracy.
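The evaluation idea can be sketched as follows, with random stand-in genotypes rather than HapMap data and a hypothetical tag set: each non-tag SNP is predicted from the tag SNPs with K-NN, and the mean accuracy is reported.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
G = rng.integers(0, 3, (90, 40))          # genotypes: samples x SNPs (0/1/2)
tags = [2, 11, 25]                        # hypothetical tag SNP indices
rest = [j for j in range(G.shape[1]) if j not in tags]

train, test = np.arange(60), np.arange(60, 90)
acc = []
for j in rest:
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(G[train][:, tags], G[train, j])   # predict SNP j from the tags
    acc.append((knn.predict(G[test][:, tags]) == G[test, j]).mean())
print("mean prediction accuracy:", round(float(np.mean(acc)), 3))
```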
Abstract: This paper presents an automatic feature recognition
method based on center-surround difference detection and fuzzy logic that can be applied in ground-penetrating radar (GPR) image processing. Using the center-surround difference method, salient local image regions are extracted from the GPR images as features of the detected objects. A fuzzy logic strategy is then used to match the detected features against the features in a template database. In this way, the object detection problem, which is the key problem in GPR image processing, is converted into two steps: feature extraction and matching. Together, these techniques give the system the ability to deal with changes in scale and antenna, and with noise. Experimental results also show that the system achieves a higher feature detection rate when using GPR to image subsurface structures.
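The center-surround step can be sketched as a difference of two Gaussian smoothings at different scales (the scales and threshold below are illustrative, not the paper's parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(image, center_sigma=1.0, surround_sigma=8.0):
    """Fine-scale 'center' minus coarse-scale 'surround' highlights
    regions that differ from their local neighborhood."""
    center = gaussian_filter(image.astype(float), center_sigma)
    surround = gaussian_filter(image.astype(float), surround_sigma)
    return np.abs(center - surround)

gpr = np.random.default_rng(0).random((128, 128))   # stand-in for a GPR B-scan
saliency = center_surround(gpr)
mask = saliency > saliency.mean() + 2 * saliency.std()  # candidate features
print(mask.sum(), "salient pixels")
```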
Abstract: The inherent flexibility of XML in both structure and semantics makes mining XML data a complex task, with more challenges than traditional association rule mining in relational databases. In this paper, we propose a new model for the effective extraction of generalized association rules from an XML document collection. We use frequent subtree mining techniques directly in the discovery process and do not ignore the tree structure of the data in the final rules. The frequent subtrees, found under the user-provided support, are split into complement subtrees to form the rules. We explain our model in multiple steps, from data preparation to rule generation.
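The split step yields rules whose strength can be scored exactly as in classical association rule mining: the rule's confidence is the support of the whole frequent subtree divided by the support of its antecedent part. A tiny sketch with made-up support counts (the tuple encoding of subtrees is an illustration, not the paper's representation):

```python
# supports of an antecedent subtree and of the whole frequent subtree,
# counted over a hypothetical document collection
support = {("book", "author"): 40,
           ("book", "author", "publisher"): 28}

def confidence(whole, antecedent):
    """confidence(antecedent => complement) = sup(whole) / sup(antecedent)"""
    return support[whole] / support[antecedent]

# rule: documents containing <book><author> also contain <publisher>
print(confidence(("book", "author", "publisher"), ("book", "author")))  # 0.7
```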
Abstract: Software and applications are subject to serious and damaging security threats, and these threats are increasing as a result of the growing number of potential vulnerabilities. Security testing is an indispensable process for validating software security requirements and identifying security-related vulnerabilities. In this paper we analyze and compare different available vulnerability testing techniques against predefined criteria using the analytic hierarchy process (AHP). We selected five testing techniques: source code analysis, fault code injection, robustness testing, stress testing, and penetration testing. These techniques have been evaluated against five criteria: cost, thoroughness, ease of use, effectiveness, and efficiency. The outcome of the study helps researchers, testers and developers understand the effectiveness of each technique in its respective domain. The study also helps compare the inner workings of the testing techniques against the selected criteria to achieve optimum testing results.
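In AHP, weights are derived from a pairwise comparison matrix via its principal eigenvector, and the consistency of the judgments is checked with a consistency ratio. The sketch below uses a made-up comparison matrix on one criterion, not the paper's data:

```python
import numpy as np

# Hypothetical pairwise comparison of the five testing techniques on one
# criterion, using Saaty's 1-9 scale; A[i, j] = how much technique i is
# preferred over technique j (reciprocal matrix).
A = np.array([
    [1,   3,   5,   4,   2],
    [1/3, 1,   3,   2,   1/2],
    [1/5, 1/3, 1,   1/2, 1/4],
    [1/4, 1/2, 2,   1,   1/3],
    [1/2, 2,   4,   3,   1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)               # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority weights, sum to 1

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)      # consistency index
ri = 1.12                                 # Saaty's random index for n = 5
print("weights:", w.round(3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable
```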
Abstract: This paper proposes a method that discovers sequential patterns corresponding to a user's interests from sequential data. The method expresses the interests as constraint patterns, which can define relationships among the attributes of the items composing the data. The method recursively decomposes the constraint patterns into constraint subpatterns and evaluates the subpatterns in order to efficiently discover sequential patterns satisfying the constraint patterns. The paper also applies the method to sequential data composed of stock price indexes and verifies its effectiveness by comparing it with a method that does not use constraint patterns.
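One simple way to read "constraint pattern" is as an ordered list of per-item attribute predicates. The following sketch is an interpretation with made-up stock data, not the paper's formalism; it checks whether a sequence satisfies such a pattern:

```python
def matches(sequence, pattern):
    """True if `sequence` contains items satisfying the pattern's
    per-item constraints in order (not necessarily contiguously)."""
    i = 0
    for item in sequence:
        if i < len(pattern) and pattern[i](item):
            i += 1
    return i == len(pattern)

# Hypothetical stock-index pattern: a rise of more than 1% followed,
# later in the sequence, by a fall of more than 1%.
data = [{"change": +1.5}, {"change": +0.2}, {"change": -2.0}]
pattern = [lambda it: it["change"] > 1.0, lambda it: it["change"] < -1.0]
print(matches(data, pattern))  # True
```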
Abstract: In this paper we propose a new approach for flexible document categorization according to document type or genre instead of topic. Our approach implements two homogeneous classifiers: a contextual classifier and a logical classifier. The contextual classifier is based on the document URL, whereas the logical classifier uses the logical structure of the document to perform the categorization. The final categorization is obtained by combining the contextual and logical categorizations. In our approach, each document is assigned to all predefined categories with different membership degrees. Our experiments demonstrate that our approach performs better than other genre categorization approaches.
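The combination step can be sketched as a weighted merge of the two classifiers' membership degrees (the weights and genre names below are illustrative, not the paper's values):

```python
def combine(contextual, logical, w_ctx=0.4, w_log=0.6):
    """Weighted average of per-category membership degrees."""
    cats = set(contextual) | set(logical)
    return {c: w_ctx * contextual.get(c, 0.0) + w_log * logical.get(c, 0.0)
            for c in cats}

ctx = {"blog": 0.7, "news": 0.2}      # from the URL-based classifier
log = {"blog": 0.5, "faq": 0.4}       # from the logical-structure classifier
print(combine(ctx, log))              # fuzzy membership per genre
```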
Abstract: The kinematics of manipulators is a central problem in the automatic control of robot manipulators. This paper presents the theoretical background for the kinematic analysis of the 5-DOF Lynx-6 educational robot arm. The kinematics problem is defined as the transformation from Cartesian space to joint space and vice versa. The Denavit-Hartenberg (D-H) representation is used to model the robot links and joints in this study. Both forward and inverse kinematics solutions for this educational manipulator are presented, and an effective method is suggested to reduce the number of multiple solutions in inverse kinematics. A visual software package, named MSG, is also developed for testing the motion characteristics of the Lynx-6 robot arm. The kinematics solutions of the software package were found to be identical to the robot arm's physical motion behavior.
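The forward kinematics side of such an analysis reduces to chaining the standard D-H link transforms. The sketch below uses an illustrative D-H table, not the actual Lynx-6 parameters:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links for standard
    Denavit-Hartenberg parameters (angles in radians)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.,       sa,       ca,      d],
        [0.,       0.,       0.,     1.],
    ])

def forward_kinematics(dh_rows):
    """Chain the link transforms; returns the base-to-end-effector pose."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Illustrative (not the actual Lynx-6) D-H rows: (theta, d, a, alpha)
params = [(0.1, 0.07, 0.0, np.pi / 2), (0.5, 0.0, 0.12, 0.0)]
print(forward_kinematics(params)[:3, 3])  # end-effector position
```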
Abstract: The growing volume of information on the Internet creates an increasing need to develop new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods. This combination preserves the precision of ranking without losing speed. Our approach
exploits natural language processing techniques for extracting
phrases and stemming words. An ontology-based conceptual method is then used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be carried out along various dimensions. The annotated documents and the expanded query are then processed to compute the relevance degree using statistical methods. The outstanding
features of our approach are (1) combining conceptual, statistical
and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to perform the expansion based on a weighted combination of different conceptual relationships, and (5) allowing variable document vector dimensions. A ranking system called ORank has been developed to implement and test the proposed model. Test results are presented at the end of the paper.
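The expansion step can be sketched as spread activation over a weighted concept graph; the graph, decay factor, and threshold below are illustrative assumptions, not ORank's actual implementation:

```python
def spread_activation(graph, seeds, decay=0.5, threshold=0.1, max_hops=2):
    """Propagate activation from query concepts to related concepts
    over a dict-of-dicts weighted ontology graph."""
    activation = dict(seeds)            # concept -> activation level
    frontier = dict(seeds)
    for _ in range(max_hops):
        nxt = {}
        for node, act in frontier.items():
            for nbr, weight in graph.get(node, {}).items():
                a = act * weight * decay
                if a >= threshold:      # prune weak activations
                    nxt[nbr] = max(nxt.get(nbr, 0.0), a)
                    activation[nbr] = max(activation.get(nbr, 0.0), a)
        frontier = nxt
    return activation                   # expanded query terms with weights

ontology = {"car": {"vehicle": 0.9, "engine": 0.7}, "vehicle": {"transport": 0.8}}
print(spread_activation(ontology, {"car": 1.0}))
```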
Abstract: Data warehousing tools have become very popular, and many of them have moved to Web-based user interfaces to make the tools easier to access and use. The next step is to enable these tools to be used within a portal framework. The portal framework consists of pages with several small windows that contain individual data warehouse query results. Several issues need to be considered when designing the architecture for a portal-enabled data warehouse query tool. Some require special techniques to overcome the limitations imposed by the nature of data warehouse queries. Issues such as single sign-on, query result caching and sharing, customization, scheduling and authorization need to be considered. This paper discusses these issues and suggests an architecture to support data warehouse queries within Web portal frameworks.
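For instance, query result caching and sharing can be sketched as a TTL cache keyed by the normalized query text, so identical queries from different portal windows reuse one result set (the key scheme and TTL below are assumptions, not the paper's design):

```python
import hashlib
import time

class QueryCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, rows)

    def _key(self, sql):
        # normalize whitespace/case so equivalent queries share one entry
        return hashlib.sha256(" ".join(sql.split()).lower().encode()).hexdigest()

    def get(self, sql):
        entry = self.store.get(self._key(sql))
        if entry and entry[0] > time.time():
            return entry[1]
        return None        # miss or expired: run the warehouse query

    def put(self, sql, rows):
        self.store[self._key(sql)] = (time.time() + self.ttl, rows)
```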
Abstract: The fault-proneness of a software module is the probability that the module contains faults. Different techniques have been proposed to predict the fault-proneness of modules, including statistical methods, machine learning techniques, neural network techniques and clustering techniques. The aim of the proposed study is to explore whether requirement metrics (available early in the lifecycle), code metrics (available late in the lifecycle), and the combination of the two can be used to identify fault-prone modules using a genetic algorithm. This approach has been tested with real defect datasets from NASA software projects written in C. The results show that the fusion of requirement and code metrics is the best prediction model for detecting faults, compared with the commonly used code-based model.
Abstract: A novel method of individual-level adaptive mutation rate control, called the rank-scaled mutation rate, is introduced for genetic algorithms. The rank-scaled mutation rate controlled genetic algorithm varies the mutation parameters based on the rank of each individual within the population, so the distribution of the fitness of the population is taken into consideration in forming the new mutation rates. The best fit mutate at the lowest rate and the least fit mutate at the highest rate. The complexity of the algorithm is of the order of an individual adaptation scheme and is lower than that of a self-adaptation scheme. The proposed algorithm is tested on two common problems, namely, numerical optimization of a function and the traveling salesman problem. The results show that the proposed algorithm outperforms both fixed and deterministic mutation rate schemes. It is best suited for problems with several locally optimal solutions that do not demand excessive mutation rates.
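Assuming a linear scaling of mutation rates between a minimum and a maximum (the exact scaling in the paper may differ), the rank-based assignment can be sketched as:

```python
import numpy as np

def rank_scaled_rates(fitness, p_min=0.001, p_max=0.1):
    """Per-individual mutation rates: the best individual gets p_min,
    the worst gets p_max, linear in rank (higher fitness = better)."""
    order = np.argsort(-np.asarray(fitness))   # rank 0 = fittest
    n = len(fitness)
    rates = np.empty(n)
    for rank, idx in enumerate(order):
        rates[idx] = p_min + (p_max - p_min) * rank / (n - 1)
    return rates

print(rank_scaled_rates([0.9, 0.1, 0.5]))  # fittest -> 0.001, least fit -> 0.1
```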
Abstract: Heuristic and metaheuristic approaches are among the most robust search techniques for large-scale search spaces. They are especially relevant when the combinatorial nature of a large-scale search space prevents solving the problem by classical computational methods, i.e., when the problem is NP-complete. In this research, the winner determination problem in combinatorial auctions is formulated and, after assessing earlier heuristic functions, solved using a genetic algorithm; we show that this new method performs better than other heuristics such as simulated annealing and a greedy approach.
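The GA side of winner determination hinges on a fitness function that rewards revenue and rejects item conflicts. The sketch below uses made-up bids, and the zero-fitness penalty for infeasible chromosomes is one simple choice among several:

```python
# each bid: (set of items requested, offered price)
bids = [({"A", "B"}, 10.0), ({"B", "C"}, 8.0), ({"C"}, 5.0)]

def fitness(chromosome):
    """Chromosome has one bit per bid; revenue of the accepted bids,
    or 0.0 if any item would be sold twice."""
    won_items, revenue = set(), 0.0
    for gene, (items, price) in zip(chromosome, bids):
        if gene:
            if won_items & items:      # item conflict: infeasible
                return 0.0
            won_items |= items
            revenue += price
    return revenue

print(fitness([1, 0, 1]))  # bids 0 and 2 are compatible -> 15.0
```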
Abstract: Due to memory leaks, valuable system memory often gets wasted and is denied to other processes, thereby degrading computational performance. If an application's memory usage exceeds the virtual memory size, it can lead to a system crash. Current memory leak detection techniques for clusters are reactive and display the memory leak information only after the process has finished executing (they detect a memory leak only after it occurs).
This paper presents a Dynamic Memory Monitoring Agent (DMMA) technique. The DMMA framework performs dynamic memory leak detection, detecting leaks while the application is still in its execution phase. When DMMA identifies a memory leak in any process in the cluster, it informs the end users so that they can take corrective action, and it also submits the affected process to a healthy node in the system, thus providing reliable service to the user. DMMA maintains information about the memory consumption of executing processes and, based on this information and critical states, can improve the reliability and effectiveness of cluster computing.
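The monitoring idea can be sketched with the third-party psutil library (an illustration in the spirit of DMMA, not the paper's implementation): a process whose resident memory grows across every sampled interval is flagged as a leak candidate.

```python
import time
import psutil  # third-party process-inspection library

def looks_leaky(pid, samples=5, interval=2.0):
    """Sample the process's resident set size and flag strictly
    monotonic growth across all intervals as a leak candidate."""
    proc = psutil.Process(pid)
    rss = [proc.memory_info().rss]
    for _ in range(samples - 1):
        time.sleep(interval)
        rss.append(proc.memory_info().rss)
    return all(b > a for a, b in zip(rss, rss[1:]))

# e.g. if looks_leaky(1234): notify users / resubmit work to a healthy node
```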
Abstract: Resource discovery in Grids is critical for efficient resource allocation and management. The heterogeneous nature and dynamic availability of resources make resource discovery a challenging task. As the number of nodes increases from tens to thousands, scalability becomes essential. Peer-to-Peer (P2P) techniques, on the other hand, enable effective implementation of scalable services and applications. In this paper we propose a model for resource discovery in the Condor middleware using the four-axis framework defined for the P2P approach. The proposed model enhances Condor to incorporate the functionality of a P2P system, thus aiming to make Condor more scalable, flexible, reliable and robust.
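Condor's matchmaking rests on ClassAd-style attribute matching, which any discovery layer must support at scale. A toy sketch of that matching step, with illustrative attribute names rather than real ClassAd syntax:

```python
def matches(request, offer):
    """A resource offer matches when every constraint in the request
    holds for the offer's attributes (missing attributes fail)."""
    return all(pred(offer.get(attr)) for attr, pred in request.items())

offer = {"Arch": "X86_64", "Memory": 8192, "LoadAvg": 0.3}
request = {"Arch": lambda v: v == "X86_64",
           "Memory": lambda v: v is not None and v >= 4096}
print(matches(request, offer))  # True
```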