Abstract: Real-time hand tracking is a challenging task in many
computer vision applications such as gesture recognition. This paper
proposes a robust method for hand tracking in complex environments
using Mean-shift analysis and a Kalman filter in conjunction with a 3D
depth map. The depth information, obtained by passive stereo measurement
based on cross-correlation and the known calibration data of the
cameras, resolves the overlap between the hands and the face.
Mean-shift analysis uses the gradient of the Bhattacharyya
coefficient as a similarity function to derive the hand candidate
most similar to a given hand target model. A Kalman
filter is then used to estimate the position of the hand target. The results
of hand tracking, tested on various video sequences, are robust to
changes in shape as well as partial occlusion.
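The combination the abstract describes, mean-shift driven by a Bhattacharyya similarity plus a Kalman filter for position estimation, can be sketched as below. The 1-D constant-velocity state, the histogram sizes, and all noise settings are illustrative assumptions, not the paper's actual implementation.

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return sum(math.sqrt(pu * qu) for pu, qu in zip(p, q))

def mean_shift_step(positions, bins, target_hist, candidate_hist):
    """One mean-shift iteration: each pixel is weighted by sqrt(q_u / p_u)
    for its histogram bin, pulling the window toward target-like pixels."""
    num = den = 0.0
    for x, b in zip(positions, bins):
        if candidate_hist[b] > 0:
            w = math.sqrt(target_hist[b] / candidate_hist[b])
            num += w * x
            den += w
    return num / den if den else None

class Kalman1D:
    """Constant-velocity Kalman filter for one hand coordinate (assumed model)."""
    def __init__(self, x0, q=0.01, r=1.0):
        self.x = [x0, 0.0]                      # position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
        self.q, self.r = q, r                   # process / measurement noise

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + dt * v, v]
        (p00, p01), (p10, p11) = self.P
        self.P = [[p00 + dt * (p01 + p10) + dt * dt * p11 + self.q,
                   p01 + dt * p11],
                  [p10 + dt * p11, p11 + self.q]]
        return self.x[0]

    def update(self, z):
        s = self.P[0][0] + self.r               # innovation variance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        y = z - self.x[0]                       # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        (p00, p01), (p10, p11) = self.P
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]
```

In a full tracker, the mean-shift step would run to convergence on each frame and its result would be fed to `update`, with `predict` supplying the search window for the next frame.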
Abstract: Sequential pattern mining is a challenging task in the data mining area, with many applications. One of those applications is mining patterns from weblogs. In recent times, weblogs have become highly dynamic, and some entries may become obsolete over time. In addition, users may frequently change the threshold value during the data mining process until the required output is acquired or interesting rules are mined. Some of the recently proposed algorithms for mining weblogs build the tree with two scans and always consume large amounts of time and space. In this paper, we build a Revised PLWAP with Non-frequent Items (RePLNI-tree) with a single scan for all items. While mining sequential patterns, the links related to the non-frequent items are not considered. Hence, it is not required to delete or maintain the information of nodes while revising the tree for mining updated transactions. The algorithm supports both incremental and interactive mining. It is not required to re-compute the patterns each time the weblog is updated or the minimum support is changed. The performance of the proposed tree is better even when the size of the incremental database is more than 50% of the existing one. For evaluation purposes, we have used a benchmark weblog dataset and found that the performance of the proposed tree is encouraging compared to some recently proposed approaches.
Abstract: Assessing an autistic child in elementary school is a
difficult task that must be fully thought out, and teachers should
be aware of the many challenges they face,
especially the behavioral problems of autistic children. Hence there
arises a need for developing contemporary artificial intelligence (AI)
techniques to help diagnose and detect autistic
people.
In this research, we propose an expert-system
architecture that combines Cognitive Maps (CM) with Case-Based
Reasoning (CBR) in order to reduce the time and cost of the
traditional diagnosis process for the early detection of
autistic children. The teacher enters the child's information,
which is analyzed by the CM module. The reasoning processor then
translates the output into a case so that the current problem can be solved
by the CBR module. We implement a prototype of the model as a
proof of concept using Java and MySQL.
This provides a new hybrid approach that achieves new
synergies and improves problem-solving capabilities in AI. We
predict that it will reduce time, cost, and the number of human errors,
and make expertise available to more people who want to
serve autistic children and their families.
Abstract: The fault detection and diagnosis of complicated
production processes is one of the essential tasks needed to run a process
safely and with good final product quality. Unexpected events occurring in
the process may have a serious impact on it. In this work, a
triangular representation of process measurement data obtained
on-line is evaluated using a simulated process. The effect of using
linear and nonlinear reduced spaces is also tested. Their diagnosis
performance is demonstrated using multivariate fault data. It is
shown that the diagnosis method based on the nonlinear technique produces
more reliable results and outperforms the linear method. The use of
an appropriate reduced space yields better diagnosis performance. The
presented diagnosis framework differs from existing ones in that it
attempts to extract the fault pattern in the reduced space, not in the
original process variable space. The use of a reduced model space helps
to mitigate the sensitivity of the fault pattern to noise.
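The core idea of diagnosing in a reduced space rather than the original variable space can be illustrated with a linear (PCA) reduced model: normal operating data defines the subspace, and a fault is flagged when a sample's residual from that subspace exceeds a control limit. The three synthetic variables, the single retained component, and the empirical limit below are assumptions for illustration; the paper's triangular representation and nonlinear reduction are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Normal operating data: 3 correlated process variables (synthetic stand-in).
t = rng.normal(size=(200, 1))
X = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(200, 3))

# Build the reduced (principal-component) model space from normal data.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:1].T                    # retain 1 PC: the reduced model space

def spe(x):
    """Squared prediction error: distance of a sample from the reduced space."""
    xc = x - mean
    resid = xc - P @ (P.T @ xc)
    return float(resid @ resid)

limit = np.percentile([spe(x) for x in X], 99)   # empirical control limit
fault = np.array([1.0, 2.0, 5.0])                # breaks the correlation structure
```

A sample that obeys the learned correlation structure has a small SPE, while the `fault` sample, whose third variable violates it, lands far outside the reduced space even though each individual value is unremarkable.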
Abstract: This paper develops an unscented grid-based filter
and a smoother for accurate nonlinear modeling and analysis
of time series. The filter uses unscented deterministic sampling
during both the time and measurement updating phases, to approximate
directly the distributions of the latent state variable. A
complementary grid smoother is also developed to enable computation
of the likelihood. This allows us to formulate an expectation
maximisation algorithm for maximum likelihood estimation of
the state noise and the observation noise. Empirical investigations
show that the proposed unscented grid filter/smoother compares
favourably to other similar filters on nonlinear estimation tasks.
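The unscented machinery the abstract builds on rests on the unscented transform: a small deterministic set of sigma points is propagated through the nonlinearity and re-averaged to approximate the transformed mean and covariance. A minimal sketch, with an assumed scaling parameter kappa rather than the paper's exact weighting, is:

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Deterministic sigma points and weights for the unscented transform."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # matrix square root, scaled
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)                  # weights sum to 1
    return np.array(pts), w

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate (mean, cov) through a nonlinearity f via sigma points."""
    pts, w = sigma_points(mean, cov, kappa)
    ys = np.array([f(p) for p in pts])
    y_mean = w @ ys
    d = ys - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear function the transform is exact, which is a convenient sanity check; for nonlinear state and measurement functions it captures the mean and covariance to second order without computing derivatives.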
Abstract: This paper is concerned with the application of a vision control algorithm to a robot's point-placement task along a discontinuous trajectory caused by obstacles. The presented vision control algorithm consists of four models: the robot kinematic model, the vision system model, the parameter estimation model, and the robot joint angle estimation model. When the robot moves toward a target along a discontinuous trajectory, several types of obstacles appear in two obstacle regions. This study investigates how these changes affect the presented vision control algorithm. The practicality of the vision control algorithm is demonstrated experimentally by performing the robot's point-placement task along a trajectory made discontinuous by obstacles.
Abstract: This paper presents an algorithm that extends the rapidly-exploring random tree (RRT) framework to deal with changes in the task environment. This algorithm, called the Retrieval RRT Strategy (RRS), combines a support vector machine (SVM) with RRT and plans the robot motion in the presence of changes in the surrounding environment. The algorithm consists of two levels. At the first level, the SVM is built and selects a proper path from the bank of RRTs for a given environment. At the second level, a real path is planned by the RRT planners for the given environment. The suggested method is applied to the control of KUKA™, a commercial 6-DOF robot manipulator, and its feasibility and efficiency are demonstrated via the co-simulation of MATLAB™ and RecurDyn™.
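The RRT planner at the second level can be sketched as below: a tree is grown from the start by repeatedly steering its nearest node toward random samples (with a small goal bias) until the goal region is reached. The 2-D workspace, circular obstacles, and all parameters are assumptions for illustration; the SVM-based retrieval level is not reproduced.

```python
import math
import random

def rrt(start, goal, obstacles, n_iter=2000, step=0.5, goal_tol=0.5, seed=0):
    """Minimal 2-D RRT: grow a tree toward random samples until near the goal.
    obstacles is a list of (cx, cy, radius) circles."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}

    def collides(p):
        return any((p[0] - ox) ** 2 + (p[1] - oy) ** 2 < r * r
                   for ox, oy, r in obstacles)

    for _ in range(n_iter):
        # 10% goal bias, otherwise sample the (assumed) 10x10 workspace.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        i = min(range(len(nodes)),
                key=lambda k: (nodes[k][0] - sample[0]) ** 2
                            + (nodes[k][1] - sample[1]) ** 2)
        nx, ny = nodes[i]
        d = math.hypot(sample[0] - nx, sample[1] - ny) or 1e-9
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        if collides(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < goal_tol:
            path = [len(nodes) - 1]           # backtrack to the root
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return [nodes[k] for k in reversed(path)]
    return None
```

In the RRS setting, many such trees would be grown offline to form the bank from which the SVM retrieves a candidate for the current environment.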
Abstract: Liver segmentation is the first significant step in
liver diagnosis from computed tomography (CT). It separates the liver
structure from the other abdominal organs. Sophisticated filtering techniques
are indispensable for proper segmentation. In this paper, we
employ 3D anisotropic diffusion as a preprocessing step. While
removing image noise, this technique preserves the significant parts
of the image, typically edges, lines, or other details that are important
for the interpretation of the image. The segmentation task is performed
by thresholding with automatic threshold-value selection, and
finally the false liver regions are eliminated using 3D connected-component analysis.
The results show that by employing 3D anisotropic filtering,
better liver segmentation can be achieved even though a simple
segmentation technique is used.
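The preprocessing step is classically the Perona-Malik scheme: each voxel diffuses toward its neighbours, but the conductance falls to zero across strong edges so they survive the smoothing. A 2-D sketch with assumed parameters (the paper's version is 3-D, and the threshold below stands in for its automatic selection) is:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.2, lam=0.2):
    """Perona-Malik diffusion: iteratively average each pixel with its
    neighbours, but let the conductance g fall to zero across strong edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u        # differences to the 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

def segment(img, thresh=0.5):
    """Simple global threshold standing in for automatic threshold selection."""
    return img > thresh
```

On a noisy step image, the flat regions are smoothed while the step itself is preserved, which is exactly why a simple threshold works well afterwards.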
Abstract: A product goes through various processes in a production flow, also known as an assembly line in manufacturing process management. Toyota created a new concept for the manufacturing industry, known as the lean concept; today it is the leading model in manufacturing plants across the globe. The linear walking-worker assembly line is a flexible assembly system in which each worker travels down the line carrying out each assembly task at each station, so that each worker accomplishes the assembly of a unit from start to finish. This paper attempts to combine the flexibility of the walking worker with lean principles in order to quantify the benefits of applying the shop-floor principles of lean management.
Abstract: LabVIEW and SIMULINK are two of the most widely used
graphical programming environments for designing digital signal
processing and control systems. Unlike conventional text-based
programming languages such as C, C++, and MATLAB, graphical
programming involves block-based code development, allowing a
more efficient mechanism for building and analyzing control systems. In
this paper, a LabVIEW environment is employed as a
graphical user interface for monitoring the operation of a controlled
distillation column, visualizing both the closed-loop performance
and the user-selected control conditions, while the column dynamics
are modeled in the SIMULINK environment. This tool has
been applied to the PID-based decoupled control of a binary
distillation column. By means of such an integrated environment, the
control designer is able to monitor and control the plant behavior and
optimize the response when both the quality improvement of the
distillation products and the operating-efficiency tasks are
considered.
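Independent of either environment, the PID loop that the LabVIEW front end monitors can be illustrated with a discrete PID controller closed around a first-order plant standing in for one composition loop of the column. The plant model and the gains are assumptions for illustration only.

```python
class PID:
    """Discrete positional-form PID controller (gains are illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt            # rectangular integration
        deriv = (err - self.prev_err) / self.dt   # backward difference
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate(pid, setpoint, tau=5.0, dt=0.1, steps=600):
    """Close the loop around a first-order plant x' = (u - x) / tau,
    a stand-in for one composition loop of the column."""
    x = 0.0
    for _ in range(steps):
        u = pid.step(setpoint, x)
        x += dt * (u - x) / tau                   # explicit Euler step
    return x
```

The integral term drives the steady-state error to zero, which is the behavior one would watch on the closed-loop performance display described in the abstract.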
Abstract: This research presents a fuzzy multi-objective model
for a machine selection problem in the flexible manufacturing system
of a tire company. The two main objectives are minimization of the
average machine error and minimization of the total setup time.
Conventionally, the working team uses trial and error to select a
pressing machine for each task, owing to the complexity and constraints
of the problem, so both objectives may not be satisfied. Moreover, trial
and error takes a long time to reach a final decision. Therefore, in
this research a preemptive fuzzy goal programming model is developed
for solving this multi-objective problem. The proposed model can
obtain appropriate results with which the decision maker (DM) is
satisfied for both objectives. Besides, alternative choices can easily be
generated by varying the satisfaction level. Additionally, decision
time can be reduced by using the model, which includes all
constraints of the system, to generate the solutions. A numerical
example is also presented to illustrate the effectiveness of the proposed
model.
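The preemptive idea, satisfying fuzzy goals in priority order, can be sketched with linear membership functions and a lexicographic ranking. The machine data, the membership ranges, and the simple enumeration below are made-up stand-ins; the paper's model handles full task sequences and system constraints.

```python
# Hypothetical machine data: average error and setup time per machine.
machines = {
    "M1": {"error": 0.02, "setup": 30.0},
    "M2": {"error": 0.05, "setup": 10.0},
    "M3": {"error": 0.03, "setup": 15.0},
}

def membership(value, best, worst):
    """Linear fuzzy membership: 1 at the best value, 0 at the worst."""
    if value <= best:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - best)

def preemptive_select(machines, err_range=(0.02, 0.05), setup_range=(10.0, 30.0)):
    """Lexicographically maximize satisfaction: the error goal preempts
    the setup-time goal, mirroring the preemptive priority structure."""
    scored = []
    for name, m in machines.items():
        mu_err = membership(m["error"], *err_range)
        mu_set = membership(m["setup"], *setup_range)
        scored.append((mu_err, mu_set, name))
    scored.sort(reverse=True)   # tuple order = priority order of the goals
    return scored[0][2]
```

Varying the membership ranges plays the role of varying the satisfaction level in the abstract: it shifts which machine wins without changing the priority structure.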
Abstract: Text categorization - the assignment of natural language documents to one or more predefined categories based on their semantic content - is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses the use of multilayer neural network initialization with a decision tree classifier for improving text categorization accuracy. An adaptation of the algorithm is proposed in which a decision tree, from the root node to a final leaf, is used to initialize the multilayer neural network. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with a multilayer neural network initialized by the traditional random method and with decision tree classifiers.
Abstract: This article describes an automatic Web page filtering system. It is an open and dynamic system based on a multi-agent architecture. The system is built from a set of agents, each having a precise filtering task to carry out (the filtering process is broken up into several elementary treatments, each producing a partial solution). New criteria can be added to the system without stopping its execution or modifying its environment. We aim to show the applicability and adaptability of the multi-agent approach to automatic filtering of network information. In practice, most existing filtering systems are based on modular design approaches that are limited to centralized applications whose role is to resolve static data-flow problems, whereas Web page filtering systems are characterized by a data flow that varies dynamically.
Abstract: Deciding the numerous parameters involved in
designing a competent artificial neural network is a complicated task.
The existence of several options for selecting an appropriate
architecture for a neural network adds to this complexity, especially
when different applications of heterogeneous natures are concerned.
Two completely different applications, one in engineering and one in
medical science, were selected in the present study: prediction of
workpiece surface roughness in ultrasonic-vibration-assisted turning,
and papillomavirus oncogenicity. Several neural network
architectures with different parameters were developed for each
application and the results were compared. It was illustrated in this
paper that some applications such as the first one mentioned above
are apt to be modeled by a single network with sufficient accuracy,
whereas others such as the second application can be best modeled
by different expert networks for different ranges of output.
Development of knowledge about the essentials of neural networks
for different applications is regarded as the cornerstone of
multidisciplinary network design programs to be developed as a
means of reducing inconsistencies and the burden of user
intervention.
Abstract: Processing data by computer and performing
reasoning tasks is an important aim in computer science, and the Semantic
Web is one step toward it. The use of ontologies to semantically
enrich information is the current trend. Huge amounts of
domain-specific, unstructured on-line data need to be expressed in a
machine-understandable and semantically searchable format.
Currently, users are often forced to search manually through the results
returned by keyword-based search services. They also want to use
their native languages to express what they are searching for. In this paper, an
ontology-based automated question answering system for the software
test document domain is presented. The system allows users to enter
a question about the domain in natural language and
returns the exact answer to the question. Conversion of the natural
language question into an ontology-based query is the challenging
part of the system. To achieve this, a new algorithm for
converting free text into an ontology-based search engine query
is proposed. The algorithm is based on identifying the appropriate
question type and parsing the words of the question sentence.
Abstract: A new approach to determining the machine layout in a flexible manufacturing cell and finding the feasible configuration of the robot that achieves minimum cycle time is presented in this paper. The input/output location and the optimal robot configuration are obtained for all sequences of the robot's work tasks within a specified period of time. A more realistic approach is presented that models the problem in the robot joint space. The problem is formulated as a nonlinear optimization problem and solved using a Sequential Quadratic Programming algorithm.
Abstract: The authors have been developing several models
based on artificial neural networks, linear regression, Box-Jenkins
methodology, and ARIMA models to predict tourism time series.
The time series consists of the "Monthly Number of Guest
Nights in the Hotels" of one region. Several comparisons between the
different types of models have been carried out, as well as between the features
used at the input of the models. The artificial neural network
(ANN) models have consistently performed among the
best. Usually the feed-forward architecture was used, owing to
its wide application and good results. In this paper the authors
compare different ANN architectures using
exactly the same input. The traditional feed-forward
architecture, the cascade-forward architecture, a recurrent Elman architecture, and
a radial basis architecture are discussed and compared on the
task of predicting the mentioned time series.
Abstract: The human skull is known to exhibit numerous sexually dimorphic traits. Estimation of sex is a challenging task, especially when only part of a skull is brought for medicolegal investigation. The present research was planned to evaluate the sexing potential of the dimensions of the foramen magnum in forensic identification by craniometric analysis. The length and breadth of the foramen magnum were measured using Vernier calipers, and the area of the foramen magnum was calculated. The length, breadth, and area of the foramen magnum were found to be larger in males than in females. A sexual dimorphism index was calculated to estimate the sexing potential of each variable. The observations suggest that craniometric analysis of the foramen magnum has limited utility in the estimation of sex during the examination of a skull and its parts.
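For concreteness, the two derived quantities in the abstract can be computed as below. The elliptical area estimate and the percent-form dimorphism index are common conventions in the craniometric literature, not necessarily the exact formulas used in this study.

```python
import math

def foramen_area(length_mm, breadth_mm):
    """Elliptical estimate of foramen magnum area: pi * (L/2) * (B/2)."""
    return math.pi * (length_mm / 2.0) * (breadth_mm / 2.0)

def dimorphism_index(male_mean, female_mean):
    """Sexual dimorphism index as the percent excess of the male mean
    over the female mean (one common definition)."""
    return (male_mean / female_mean - 1.0) * 100.0
```

A larger index for a variable indicates greater separation between the sexes and hence greater sexing potential for that measurement.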
Abstract: This paper describes futures trading and aims to
design a trading strategy for speculators. The problem is formulated as
a decision-making task and solved as such. The solution of the
task leads to complex mathematical problems, and approximations
of the decision making are required. Two kinds of approximation are
used in the paper: Monte Carlo simulation for the multi-step prediction, and
iteration spread in time for the optimization. The solution is applied to real-market data, and the results of the off-line experiments are
presented.
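The Monte Carlo approximation of the multi-step prediction amounts to averaging over simulated paths. The drift/volatility random-walk price model and all parameter values below are assumptions for illustration, not the paper's market model.

```python
import random

def monte_carlo_forecast(price0, drift, vol, steps, n_paths=5000, seed=0):
    """Multi-step prediction by Monte Carlo: simulate many random-walk
    price paths and average their terminal values."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        p = price0
        for _ in range(steps):
            p += drift + vol * rng.gauss(0.0, 1.0)   # one-step price change
        total += p
    return total / n_paths
```

The same machinery extends to expected payoffs of a trading rule by replacing the terminal price with the rule's profit along each path.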
Abstract: Prospective readers can quickly determine whether a document is relevant to their information need if the significant phrases (or keyphrases) in this document are provided. Although keyphrases are useful, not many documents have keyphrases assigned to them, and manually assigning keyphrases to existing documents is costly. Therefore, there is a need for automatic keyphrase extraction. This paper introduces a new domain-independent keyphrase extraction algorithm. The algorithm approaches the problem of keyphrase extraction as a classification task, and uses a combination of statistical and computational linguistics techniques, a new set of attributes, and a new machine learning method to distinguish keyphrases from non-keyphrases. The experiments indicate that this algorithm performs better than other keyphrase extraction tools and that it significantly outperforms Microsoft Word 2000's AutoSummarize feature. The domain independence of this algorithm has also been confirmed in our experiments.
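To make the classification framing concrete, the sketch below generates candidate phrases and scores them with two classic attributes, phrase frequency and first-occurrence position. The stopword list and the hand-written linear score are toy stand-ins for the paper's actual attribute set and learning method.

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "to", "and", "in", "is", "for", "on", "this"}

def candidate_phrases(text, max_len=3):
    """Candidate keyphrases: word n-grams that do not start or end
    with a stopword, each paired with its starting word index."""
    words = re.findall(r"[a-z]+", text.lower())
    cands = []
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            gram = words[i:i + n]
            if gram[0] in STOP or gram[-1] in STOP:
                continue
            cands.append((" ".join(gram), i))
    return cands

def score_candidates(text):
    """Toy scorer: frequency weighted by how early the phrase first occurs
    (real systems learn this combination from labeled training documents)."""
    cands = candidate_phrases(text)
    n_words = max(1, len(re.findall(r"[a-z]+", text.lower())))
    freq = Counter(p for p, _ in cands)
    first = {}
    for p, i in cands:
        first.setdefault(p, i)
    return {p: freq[p] * (1.0 - first[p] / n_words) for p in freq}
```

In a learned extractor, these attribute values would feed a classifier trained on documents with known keyphrases rather than a fixed formula.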