Abstract: In this paper, a vision-based system is used to control an
industrial 3P Cartesian robot. The vision system recognizes the target
and controls the robot by acquiring images from the environment and
processing them. In the first stage, the images are converted to
grayscale; objects and noise are then separated and identified by
thresholding, the detected objects are stored in different frames, and
the main object is recognized. This information is used to drive the
robot toward the target. A vision system can also be an appropriate
tool for measuring the errors of a robot, and such an experimental
test is conducted for the 3P robot. Finally, the international
standard ANSI/RIA R15.05-2 is used to evaluate the path-related
characteristics of the robot, and an experimental test is carried out
to evaluate the performance of the proposed method.
Abstract: An increasingly dynamic and complex environment poses huge challenges to production enterprises, especially with regard to logistics. The Logistic Operating Curve Theory, developed at the Institute of Production Systems and Logistics (IFA) of the Leibniz University of Hanover, is a recognized approach to describing logistic interactions; nevertheless, it reaches its limits when it comes to dynamic aspects. In order to facilitate a timely and optimal logistic positioning, a method is developed for quickly and reliably identifying dynamic processing states.
Abstract: Medical studies often require different methods for
parameter selection as a second step of processing, after the
database has been designed and filled with information. One common
task is the selection of fields that act as risk factors using
well-known methods, in order to find the most relevant risk factors
and to establish a possible hierarchy between them. Different methods
are available for this purpose, one of the best known being binary
logistic regression. We present the mathematical principles of this
method and a practical example of using it to analyse the influence
of 10 different psychiatric diagnoses on 4 different types of
offences (in a database of 289 psychiatric patients involved in
different types of offences). Finally, we make some observations
about the relation between the risk-factor hierarchy established
through binary logistic regression and the individual risks, as well
as the results of the Chi-squared test. We show that the hierarchy
built using binary logistic regression does not agree with the direct
ordering of risk factors, even though it would be natural to assume
this hypothesis is always true.
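A minimal numerical sketch of the kind of binary logistic regression fit described above; the data here is synthetic, and the three binary "risk factors" and their effect sizes are hypothetical stand-ins for the paper's diagnostic fields, not the actual study data:

```python
import numpy as np

# Synthetic stand-in for the paper's data: binary risk factors
# (e.g. psychiatric diagnoses) predicting a binary outcome (offence type).
rng = np.random.default_rng(0)
n = 500
X = rng.integers(0, 2, size=(n, 3)).astype(float)   # three binary risk factors
true_beta = np.array([1.5, 0.5, -1.0])              # assumed effect sizes
logits = -0.5 + X @ true_beta
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Fit logistic regression by gradient ascent on the log-likelihood.
Xb = np.hstack([np.ones((n, 1)), X])                # add intercept column
beta = np.zeros(4)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(Xb @ beta)))
    beta += 0.01 * Xb.T @ (y - p) / n               # gradient step

odds_ratios = np.exp(beta[1:])                      # exp(coef) = odds ratio
print("coefficients:", beta.round(2))
print("odds ratios:", odds_ratios.round(2))
```

The fitted odds ratios give the multiplicative change in the odds of the outcome per risk factor, which is the quantity a risk-factor hierarchy is usually built from.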
Abstract: The world has entered the 21st century. Computer graphics
and digital camera technology are prevalent, and high-resolution
displays and printers are available. High-resolution images are
therefore needed in order to produce high-quality display images and
high-quality prints. However, since high-resolution images are not
usually provided, there is a need to magnify the original images. One
common difficulty in previous magnification techniques is preserving
details, i.e. edges, while at the same time smoothing the data so as
not to introduce spurious artefacts; a definitive solution to this
problem is still an open issue. In this paper, an image magnification
method using adaptive interpolation by pixel-level data-dependent
geometrical shapes is proposed that tries to take into account
information about the edges (sharp luminance variations) and the
smoothness of the image. It calculates a threshold, classifies
interpolation regions in the form of geometrical shapes, and then
assigns suitable values to the undefined pixels inside each
interpolation region while preserving the sharp luminance variations
and smoothness at the same time. The results of the proposed
technique have been compared qualitatively and quantitatively with
five other techniques. The qualitative results show that the proposed
method clearly outperforms Nearest Neighbouring (NN), bilinear (BL),
and bicubic (BC) interpolation, while the quantitative results are
competitive and consistent with NN, BL, BC, and the others.
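For reference, one of the baselines mentioned above, bilinear (BL) magnification, can be sketched in a few lines; the 2x factor and the tiny test image are illustrative assumptions, and this is the classical baseline rather than the paper's adaptive shape-based method:

```python
import numpy as np

def bilinear_magnify(img, factor=2):
    """Magnify a 2-D grayscale array by bilinear interpolation.

    This is the classical BL baseline, not the adaptive
    data-dependent method proposed in the paper."""
    h, w = img.shape
    H, W = h * factor, w * factor
    # Map each output pixel back to fractional source coordinates.
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Weighted average of the four surrounding source pixels.
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])

small = np.array([[0.0, 1.0], [1.0, 0.0]])
big = bilinear_magnify(small, factor=2)
print(big.round(2))
```

Because every new pixel is a plain weighted average, edges are blurred; that smoothing-versus-detail tension is exactly the difficulty the adaptive method targets.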
Abstract: Two algorithms are proposed to reduce the storage requirements of mammogram images. The input image goes through a shrinking process that converts the 16-bit images to 8 bits using a pixel-depth conversion algorithm, followed by an enhancement process. The performance of the algorithms is evaluated objectively and subjectively. A 50% reduction in size is obtained with no loss of significant data in the breast region.
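A minimal sketch of the pixel-depth shrinking step, assuming a simple min-max rescaling; the paper's exact conversion and enhancement algorithms are not specified here, so this mapping is an illustrative placeholder:

```python
import numpy as np

def depth_convert_16_to_8(img16):
    """Rescale a 16-bit image to 8 bits via min-max normalisation.

    Illustrative pixel-depth conversion; the paper's own algorithm
    may use a different mapping (e.g. window/level)."""
    img16 = img16.astype(np.float64)
    lo, hi = img16.min(), img16.max()
    if hi == lo:                       # flat image: avoid divide-by-zero
        return np.zeros(img16.shape, dtype=np.uint8)
    scaled = (img16 - lo) / (hi - lo) * 255.0
    return np.round(scaled).astype(np.uint8)

mammo = np.array([[0, 1000], [30000, 65535]], dtype=np.uint16)
out = depth_convert_16_to_8(mammo)
print(out)   # storage halves: 1 byte per pixel instead of 2
```

Going from `uint16` to `uint8` directly accounts for the 50% size reduction reported in the abstract.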
Abstract: Computing and maintaining network structures for efficient
data aggregation incurs high overhead for dynamic events
where the set of nodes sensing an event changes with time. Moreover,
structured approaches are sensitive to the waiting time that is used
by nodes to wait for packets from their children before forwarding
the packet to the sink. An optimal routing and data aggregation
scheme for wireless sensor networks is proposed in this paper. We
propose Tree on DAG (ToD), a semistructured approach that uses
Dynamic Forwarding on an implicitly constructed structure composed
of multiple shortest path trees to support network scalability. The key
principle behind ToD is that adjacent nodes in a graph will have
low stretch in at least one of these trees, thus resulting in early
aggregation of packets. Based on simulations on a 2,000-node Mica2-
based network, we conclude that efficient aggregation in large-scale
networks can be achieved by our semistructured approach.
Abstract: Field mapping of an active volcano, particularly in the
Torrid Zone, is usually hampered by problems such as steep terrain
and poor atmospheric conditions. In this paper we present a simple
solution to this problem by combining Synthetic Aperture Radar (SAR)
and geostatistical methods. With this combination, we could reduce
the speckle effect in the SAR data and then estimate the roughness
distribution of the pyroclastic flow deposits. The main purpose of
this study is to accurately detect the spatial distribution of new
pyroclastic flow deposits, termed the P-zone, using the β° data from two
RADARSAT-1 SAR level-0 data. Single scene of Hyperion data and
field observation were used for cross-validation of the SAR results.
Mt. Merapi in central Java, Indonesia, was chosen as a study site and
the eruptions in May-June 2006 were examined. The P-zones were
found in the western and southern flanks. The area size and the longest
flow distance were calculated as 2.3 km2 and 6.8 km, respectively. The
grain size variation of the P-zone was mapped in detail from fine to
coarse deposits relative to the C-band wavelength of 5.6 cm.
Abstract: One of the main environmental problems affecting extensive areas of the world is soil salinity. Traditional data collection methods are neither sufficient for addressing this important environmental problem nor accurate enough for soil studies. Remote sensing data can overcome most of these problems. Although satellite images are commonly used in such studies, the best calibration between the data and the real situation still needs to be found for each specific area. The Neyshaboor area, in north-east Iran, was selected as the study site for this research. Landsat satellite images of this area were used to prepare suitable learning samples for processing and classifying the images. 300 locations were selected randomly in the area to collect soil samples, and 273 locations were finally retained for further laboratory work and image processing analysis. The electrical conductivity of all samples was measured. Six reflective bands of ETM+ satellite images of the study area taken in 2002 were used for soil salinity classification. The classification was carried out using common algorithms based on the best band composition. The results showed that reflective bands 7, 3, 4, and 1 form the best band composition for preparing the color composite images. We also found that hybrid classification is a suitable method for identifying and delineating the different salinity classes in the area.
Abstract: Microwave energy is a superior alternative to several other thermal treatments. Extraction techniques are widely employed for the isolation of bioactive compounds and vegetable oils from oil seeds. Among the different and newly available techniques, microwave pretreatment of seeds is a simple and desirable method for the production of high-quality vegetable oils. Microwave pretreatment for oil extraction has many advantages, as follows: improved oil extraction yield and quality, direct extraction capability, lower energy consumption, faster processing time, and reduced solvent levels compared with conventional methods. It also allows for better retention and availability of desirable nutraceuticals, such as phytosterols, tocopherols, canolol, and phenolic compounds, in extracted oils such as rapeseed oil. This can be a new step towards producing nutritional vegetable oils with improved shelf life because of their high antioxidant content.
Abstract: The Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM, in terms of the bit error rate (BER) metric, are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical binary maximum likelihood detection using a matched filter driven by On-Off Keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially at very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
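A minimal Monte-Carlo sketch of the classical OOK matched-filter baseline in AWGN that the SVM detector is compared against; the amplitude, noise level, and bit count are illustrative assumptions, and the SVM and fading-channel parts of the paper are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits = 200_000
A = 1.0                      # "on" amplitude for OOK
sigma = 0.4                  # AWGN standard deviation at the receiver

bits = rng.integers(0, 2, n_bits)
received = bits * A + rng.normal(0.0, sigma, n_bits)

# Maximum likelihood decision for equiprobable OOK in AWGN after
# matched filtering: threshold the sampled output at A/2.
decisions = (received > A / 2).astype(int)
ber = np.mean(decisions != bits)
print(f"simulated BER: {ber:.4f}")
```

The theoretical BER for this detector is Q(A / (2σ)), so the simulation above should land near Q(1.25) ≈ 0.106; an SVM-based detector would be trained on labelled received samples instead of using the fixed threshold.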
Abstract: Waste is produced at all stages of human activity, with
composition and volume that vary according to consumption practices
and production methods. Significant harm to the environment is
associated with the volume of generated material as well as with the
improper disposal of solid waste, whose negative effects are noticed
more frequently in the long term. Solving this problem constitutes a
challenge to government, industry, and society, because it involves
economic, social, and environmental factors and, especially, the
awareness of the population in general. The main concerns focus on
the impact waste can have on human health and on the environment
(soil, water, air, and landscape). Hazardous waste, produced mainly
by industry, is particularly worrisome because, when improperly
managed, it becomes a serious threat to the environment. In view of
this issue, this study aimed to evaluate the solid waste management
system of a company that co-processes industrial waste, in order to
propose improvements to the management of reject generation in a
specific step of the blending production process.
Abstract: This paper presents a formant-tracking linear prediction
(FTLP) model for speech processing in noise. The main focus of this
work is the detection of formant trajectories based on Hidden Markov
Models (HMM), for improved formant estimation in noise. The approach
proposed in this paper provides a systematic framework for modelling
and utilizing a time sequence of peaks that satisfies continuity
constraints on the parameters; the peaks themselves are modelled by
the LP parameters. The formant-tracking LP model estimation is
composed of three stages: (1) a pre-cleaning multi-band spectral
subtraction stage to reduce the effect of residual noise on formants;
(2) an estimation stage, where an initial estimate of the LP model of
speech is obtained for each frame; and (3) formant classification
using probability models of formants and Viterbi decoders. The
evaluation results for the estimation of the formant-tracking LP
model, tested against a Gaussian white noise background, demonstrate
that the proposed combination of the initial noise reduction stage
with formant tracking and variable-order LPC analysis results in a
significant reduction in errors and distortions. The performance was
evaluated on noisy natural vowels extracted from international French
and English vocabulary speech signals at an SNR of 10 dB. In each
case, the estimated formants are compared to reference formants.
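The per-frame LP estimation in stage (2) can be sketched with the standard autocorrelation method and Levinson-Durbin recursion; the synthetic one-resonance "vowel" frame below is an illustrative assumption, and the HMM/Viterbi formant classification of stage (3) is not shown:

```python
import numpy as np

def lpc(signal, order):
    """Estimate LP coefficients by the autocorrelation method
    (Levinson-Durbin recursion). Returns a with a[0] = 1, so the
    prediction-error filter is A(z) = sum_k a[k] z^-k."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    a = np.zeros(order + 1); a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coeff
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]      # order update
        err *= (1.0 - k * k)
    return a

# Synthetic vowel-like frame: one decaying resonance near 500 Hz at 8 kHz.
fs = 8000.0
t = np.arange(400) / fs
frame = np.exp(-t * 60) * np.cos(2 * np.pi * 500 * t)
a = lpc(frame, order=2)

# The complex pole angle of A(z) gives the resonance (formant) frequency.
roots = np.roots(a)
formant_hz = abs(np.angle(roots[0])) * fs / (2 * np.pi)
print(f"estimated resonance: {formant_hz:.0f} Hz")
```

In the full method this per-frame estimate would be run at variable LPC orders after spectral subtraction, with the candidate peaks handed to the HMM-based tracker.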
Abstract: This work presents a matched field processing (MFP)
algorithm based on the Dopplerlet transform for estimating the motion
parameters of a sound source moving along a straight line at constant
speed. The algorithm uses a piecewise strategy, which significantly
reduces the computational burden. Monte Carlo simulation results and
an experimental result are presented to verify the effectiveness of
the proposed algorithm.
Abstract: Multi-Layer Perceptron (MLP) neural networks have been
very successful in a number of signal processing applications. In
this work we study the possibilities of, and the difficulties
encountered in, applying MLP neural networks to the prediction of
daily solar radiation data. We used the Polak-Ribière algorithm for
training the neural networks. A comparison, in terms of statistical
indicators, with a linear model commonly used in the literature is
also performed, and the obtained results show that the neural
networks are more efficient and give the best results.
Abstract: Users now expect a higher level of DSP (Digital Signal
Processing) software quality than ever before. Prevention and
detection of defects are critical elements of software quality
assurance. In this paper, principles and rules for the prevention and
detection of defects are suggested; they are not universal
guidelines, but they are useful for both novice and experienced DSP
software developers.
Abstract: The main aim of this research is to investigate a novel technique for implementing a more natural and intelligent conversation system. Conversation systems are designed to converse like a human as much as their intelligence allows, and we can sometimes think of them as the embodiment of Turing's vision. Such systems usually return predetermined answers in a predetermined order, but real conversations abound with uncertainties of various kinds. This research focuses on an integrated natural language processing approach, which includes an integrated knowledge-base construction module, a conversation understanding and generation module, and a state manager module. We discuss the effectiveness of this approach based on an experiment.
Abstract: Mobile agents have motivated the creation of a new
methodology for parallel computing. We introduce a methodology for
the creation of parallel applications on the network. The proposed
mobile-agent parallel processing framework uses multiple Java mobile
agents, each of which can travel to a specified machine in the
network to perform its tasks. We also introduce the concept of a
master agent, a Java object capable of implementing a particular task
of the target application; the master agent dynamically assigns tasks
to the mobile agents. We have developed and tested a prototype
application: Mobile Agent Based Parallel Computing. Boosted by the
inherent benefits of using Java and mobile agents, our proposed
methodology breaks the barriers between environments and could
potentially exploit, in a parallel manner, all the available
computational resources on the network. This paper elaborates on the
performance issues of mobile agents for parallel computing.
Abstract: The growing volume of information on the Internet creates
an increasing need for new (semi-)automatic methods for retrieving
documents and ranking them according to their relevance to the user
query. In this paper, after a brief review of ranking models, a new
ontology-based approach for ranking HTML documents is proposed and
evaluated in various circumstances. Our approach is a combination of
conceptual, statistical, and linguistic methods; this combination
preserves the precision of ranking without losing speed. Our approach
exploits natural language processing techniques to extract phrases
from documents and the query and to stem words. An ontology-based
conceptual method is then used to annotate documents and expand the
query. To expand a query, the spread activation algorithm is improved
so that the expansion can be done flexibly and along various aspects.
The annotated documents and the expanded query are processed to
compute the relevance degree using statistical methods. The
outstanding features of our approach are: (1) combining conceptual,
statistical, and linguistic features of documents; (2) expanding the
query with its related concepts before comparing it to documents; (3)
extracting and using both words and phrases to compute the relevance
degree; (4) improving the spread activation algorithm to perform the
expansion based on a weighted combination of different conceptual
relationships; and (5) allowing variable document vector dimensions.
A ranking system called ORank has been developed to implement and
test the proposed model. The test results are included at the end of
the paper.
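The spread-activation query expansion mentioned above can be sketched as activation fading over a weighted concept graph; the tiny ontology fragment, decay factor, and hop limit below are illustrative assumptions, not ORank's actual parameters:

```python
from collections import defaultdict

def spread_activation(ontology, seeds, decay=0.5, max_hops=2):
    """Expand a query by spreading activation over a concept graph.

    `ontology` maps a concept to its weighted neighbours; `seeds` are
    the query concepts with initial activation 1.0. Activation fades
    by `decay` per hop, mimicking the weighted-relationship expansion
    sketched in the abstract (weights here are illustrative)."""
    activation = defaultdict(float)
    frontier = {c: 1.0 for c in seeds}
    for _ in range(max_hops):
        next_frontier = defaultdict(float)
        for concept, act in frontier.items():
            activation[concept] = max(activation[concept], act)
            for neigh, weight in ontology.get(concept, []):
                next_frontier[neigh] += act * weight * decay
        frontier = next_frontier
    for concept, act in frontier.items():
        activation[concept] = max(activation[concept], act)
    return dict(activation)

# Tiny hypothetical ontology fragment: concept -> [(neighbour, weight)].
onto = {
    "car": [("vehicle", 0.9), ("engine", 0.7)],
    "vehicle": [("transport", 0.8)],
}
expanded = spread_activation(onto, ["car"])
print(sorted(expanded.items()))
```

The activation values act as weights for the expanded query terms, so directly related concepts contribute more to the relevance degree than distant ones.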
Abstract: Due to the ever-growing number of publications about
protein-protein interactions, information extraction from text is
increasingly recognized as one of the crucial technologies in
bioinformatics. This paper presents a Protein Interaction Extraction
System from biomedical abstracts using a Link Grammar parser
(PIELG). PIELG uses the linkage given by the Link Grammar parser to
carry out a case-based analysis of the contents of various syntactic
roles, as well as their linguistically significant and meaningful
combinations. The system uses phrasal-prepositional verb patterns to
overcome problems with preposition combinations. The recall and
precision are 74.4% and 62.65%, respectively. Experimental
comparisons with two other state-of-the-art extraction systems
indicate that PIELG achieves better performance. For further
evaluation, the system is augmented with a graphical package
(Cytoscape) for extracting protein interaction information from
sequence databases. The results show that the performance is
remarkably promising.
Abstract: In this paper a new approach to face recognition is
presented that achieves a double dimension reduction, making the
system computationally efficient, giving better recognition results,
and outperforming the common DCT technique for face recognition. In
pattern recognition techniques, the discriminative information of an
image increases with resolution up to a certain extent; consequently,
face recognition results change with the face image resolution and
are optimal at a certain resolution level. In the proposed face
recognition model, an image decimation algorithm is first applied to
the face image to reduce its dimensions to the resolution level that
provides the best recognition results. Owing to its computational
speed and feature extraction potential, the Discrete Cosine Transform
(DCT) is then applied to the face image. A subset of DCT
coefficients, from low to mid frequencies, that represents the face
adequately and provides the best recognition results is retained. A
tradeoff between the decimation factor, the number of DCT
coefficients retained, and the recognition rate with minimum
computation is obtained. Preprocessing of the image is carried out to
increase its robustness against variations in pose and illumination
level. This new model has been tested on different databases,
including the ORL, Yale, and EME color databases.
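The low-to-mid-frequency coefficient selection described above can be sketched with an orthonormal 2-D DCT; the 32x32 random "face", the 6x6 corner of retained coefficients, and the square-corner selection rule are illustrative assumptions (the paper tunes this subset for recognition rate):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II built from the 1-D DCT matrix."""
    N = block.shape[0]
    n, k = np.meshgrid(np.arange(N), np.arange(N))   # n: sample, k: frequency
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)                       # DC row normalisation
    return C @ block @ C.T

def low_freq_features(face, keep=6):
    """Keep the DCT coefficients in the top-left keep x keep corner
    (low to mid frequencies) as a compact face descriptor."""
    coeffs = dct2(face)
    return coeffs[:keep, :keep].ravel()

face = np.random.default_rng(2).random((32, 32))     # stand-in face image
feat = low_freq_features(face, keep=6)
print(feat.shape)   # 36 coefficients instead of 1024 pixels
```

Because the transform is orthonormal, energy is preserved and most of it concentrates in that top-left corner for natural images, which is why a small coefficient subset can still represent the face adequately.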