Abstract: The imprecision of manufacturing processes means that a part can never be realized in an absolutely exact way with respect to its dimensional specifications. It is therefore necessary to ensure that the realized product remains strictly within tolerance intervals compatible with the correct functioning of the parts. In this paper we present an approach based on combining two theories with different characteristics: fuzzy systems and Petri nets. This tool is proposed to model and control quality in an assembly system. A robust control scheme for a mechanical assembly process is presented as an application. This controller must then keep the parts within their specification intervals in spite of variations. It also illustrates how the technique reacts when the product quality is high, medium, or low.
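As a hedged illustration of the fuzzy-grading side of such a controller (the Petri net layer is omitted, and the membership functions below are illustrative assumptions, not the paper's), a minimal sketch in Python:

```python
# Minimal fuzzy quality-grading sketch; membership shapes are illustrative assumptions.
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def quality_grades(measured, nominal, tol):
    """Grade a measured dimension against its tolerance interval nominal +/- tol."""
    d = abs(measured - nominal) / tol            # normalized deviation: 0 = perfect, 1 = at the limit
    return {
        "high":   triangular(d, -0.5, 0.0, 0.5),
        "medium": triangular(d, 0.25, 0.6, 0.95),
        "low":    triangular(d, 0.8, 1.5, 2.2),
    }

print(quality_grades(10.005, nominal=10.0, tol=0.05))   # predominantly "high" quality
```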
Abstract: Because of its excellent properties, the SPIHT algorithm, which is based on traditional wavelet transform theory, has attracted much attention, but it also has shortcomings. Combining recent progress in the wavelet domain with human visual characteristics, we propose an improved SPIHT algorithm based on human visual characteristics, building on an analysis of the SPIHT algorithm. Experiments indicate that coding speed and quality are well enhanced compared with the original SPIHT algorithm, and that the quality obtained when the transmission is cut off is also improved.
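As a hedged sketch of the kind of visual weighting alluded to above (PyWavelets is assumed to be available, the per-level weights are illustrative, and the SPIHT coder itself is omitted):

```python
import numpy as np
import pywt

def hvs_weighted_coeffs(image, wavelet="bior4.4", levels=3):
    """Scale wavelet detail subbands by a crude visual-importance weight before coding."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    weighted = [coeffs[0]]                               # keep the approximation band unchanged
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        w = 1.0 / lvl                                    # hypothetical: coarser details weighted higher
        weighted.append((cH * w, cV * w, cD * 0.5 * w))  # diagonal details attenuated further
    return weighted

coeffs = hvs_weighted_coeffs(np.random.rand(256, 256))
```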
Abstract: Efficient preprocessing is essential for the automatic recognition of handwritten documents. In this paper, techniques for segmenting words in handwritten Arabic text are presented. First, connected components (CCs) are extracted and the distances between different components are analyzed. The statistical distribution of these distances is then obtained to determine an optimal threshold for word segmentation. In addition, an improved projection-based method is employed for baseline detection. The proposed method has been successfully tested on the IFN/ENIT database, consisting of 26,459 Arabic words handwritten by 411 different writers, and the results are promising and very encouraging, allowing more accurate detection of the baseline and segmentation of words for further recognition.
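A simplified sketch of the gap-based grouping described above, assuming SciPy is available; the fixed percentile below stands in for the paper's statistical analysis of the distance distribution:

```python
import numpy as np
from scipy import ndimage

def segment_words(binary_line):
    """Group connected components of one text line into words by horizontal gaps."""
    labels, n = ndimage.label(binary_line)               # extract connected components
    boxes = ndimage.find_objects(labels)                 # bounding slices per component
    boxes = sorted(boxes, key=lambda s: s[1].start)      # order components along the line
    gaps = [boxes[i + 1][1].start - boxes[i][1].stop for i in range(len(boxes) - 1)]
    if not gaps:
        return [boxes]
    thresh = np.percentile(gaps, 75)                     # hypothetical stand-in threshold
    words, current = [], [boxes[0]]
    for gap, box in zip(gaps, boxes[1:]):
        if gap > thresh:                                 # a large gap starts a new word
            words.append(current)
            current = []
        current.append(box)
    words.append(current)
    return words
```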
Abstract: Image compression using artificial neural networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks trained with the back-propagation algorithm, which uses steepest descent for error minimization, are popular, widely adopted, and directly applied to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images and high-intensity images. When these images are compressed using a back-propagation network, the network takes a long time to converge, because a given image may contain a number of distinct gray levels with only narrow differences from their neighborhood pixels. If the gray levels of the pixels in an image and their neighbors are mapped in such a way that the difference in gray level between neighboring pixels is minimized, then both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
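The CDF-based pixel mapping described above amounts to a histogram-equalization style remapping; a minimal NumPy sketch (independent of the network itself) might be:

```python
import numpy as np

def cdf_map(image):
    """Remap 8-bit gray levels through the image's empirical cumulative distribution function."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                       # normalize the CDF to [0, 1]
    return np.round(cdf[image] * 255).astype(np.uint8)   # mapped pixels fed to the BP network

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
mapped = cdf_map(img)
```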
Abstract: Sickness absence represents a major economic and social issue. Analysis of sick leave data is a recurrent challenge to analysts because of the complexity of the data structure, which is often time dependent, highly skewed and clumped at zero. Ignoring these features when making statistical inference is likely to be inefficient and misguided. Traditional approaches do not address these problems. In this study, we discuss modelling methodologies in terms of statistical techniques for addressing the difficulties with sick leave data. We also introduce and demonstrate a new method by performing a longitudinal assessment of long-term absenteeism, using as a working example a large register dataset from the Helsinki Health Study of municipal employees in Finland covering the period 1990-1999. We present a comparative study on model selection and a critical analysis of the temporal trends and the occurrence and degree of long-term sickness absences among municipal employees. The strengths of this working example include the large sample size over a long follow-up period, providing strong evidence in support of the new model. Our main goal is to propose a way to select an appropriate model, to introduce a new methodology for analysing sickness absence data, and to demonstrate the model's applicability to complicated longitudinal data.
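As a generic, hedged illustration of how zero-clumped and skewed absence data are often handled (a simple two-part model with scikit-learn on synthetic data; this is not the paper's new methodology):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                        # hypothetical covariates (e.g. age, grade, year)
days = np.where(rng.random(500) < 0.7, 0,            # ~70% of employees: no long-term absence
                rng.lognormal(mean=2.0, sigma=1.0, size=500))

# Part 1: probability of any long-term sickness absence.
any_absence = (days > 0).astype(int)
part1 = LogisticRegression().fit(X, any_absence)

# Part 2: length of absence, modelled on the log scale, for those with an absence.
pos = days > 0
part2 = LinearRegression().fit(X[pos], np.log(days[pos]))

expected_days = part1.predict_proba(X)[:, 1] * np.exp(part2.predict(X))
```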
Abstract: In this paper, a novel multi-join algorithm for joining multiple relations is introduced. The algorithm is based on a hash-based join algorithm of two relations to produce a double index. This is done by scanning the two relations once; but instead of moving the records into buckets, a double index is built. This eliminates the collisions that can occur in a full hash algorithm. The double index is divided into join buckets of similar categories from the two relations. The algorithm then joins buckets with similar keys to produce joined buckets. This leads, at the end, to a complete join index of the two relations without actually joining the actual relations. The time complexity required to build the join index of two categories is O(m log m), where m is the size of each category, giving a total time complexity of O(n log m) for all buckets. The join index is used to materialize the joined relation if required; otherwise, it is used along with the join indices of other relations to build a lattice for multi-join operations with minimal I/O requirements. The lattice of join indices can be fitted into main memory to reduce the time complexity of the multi-join algorithm.
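A heavily simplified, hedged sketch of building such a join index in Python (the relation layout and names are hypothetical):

```python
from collections import defaultdict

def build_join_index(r, s, key_r=0, key_s=0):
    """Return a join index of (row-id in r, row-id in s) pairs, bucketed by join key."""
    buckets = defaultdict(lambda: ([], []))        # the 'double index': two row-id lists per key
    for i, row in enumerate(r):
        buckets[row[key_r]][0].append(i)
    for j, row in enumerate(s):
        buckets[row[key_s]][1].append(j)
    # Join buckets with matching keys into a join index (no actual tuples are materialized).
    return {k: [(i, j) for i in ids_r for j in ids_s]
            for k, (ids_r, ids_s) in buckets.items() if ids_r and ids_s}

R = [(1, "a"), (2, "b"), (2, "c")]
S = [(2, "x"), (3, "y")]
print(build_join_index(R, S))   # {2: [(1, 0), (2, 0)]}
```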
Abstract: This article presents an integrated method for detecting steganographic content embedded by new, unknown programs. The method is based on data mining and aggregated hypothesis testing. The article contains the theoretical basics used to deploy the proposed detection system and a description of the improvements proposed to the basic system idea. The main results of the experiments and implementation details are then collected and described. Finally, example results of the tests are presented.
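As one hedged illustration of aggregated hypothesis testing (Fisher's method via SciPy; the per-feature tests themselves are only stubbed here and are not the paper's detector):

```python
import numpy as np
from scipy import stats

def aggregate_stego_evidence(p_values, alpha=0.01):
    """Combine per-feature p-values into one detection decision with Fisher's method."""
    stat, p_combined = stats.combine_pvalues(p_values, method="fisher")
    return p_combined < alpha                      # hypothetical significance level

# Hypothetical p-values from several independent statistical features of one file.
print(aggregate_stego_evidence(np.array([0.04, 0.08, 0.02, 0.20])))
```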
Abstract: The design of a pattern classifier includes an attempt
to select, among a set of possible features, a minimum subset of
weakly correlated features that better discriminate the pattern classes.
This is usually a difficult task in practice, normally requiring the
application of heuristic knowledge about the specific problem
domain. The selection and quality of the features representing each
pattern have a considerable bearing on the success of subsequent
pattern classification. Feature extraction is the process of deriving
new features from the original features in order to reduce the cost of
feature measurement, increase classifier efficiency, and allow higher
classification accuracy. Many current feature extraction techniques
involve linear transformations of the original pattern vectors to new
vectors of lower dimensionality. While this is useful for data
visualization and increasing classification efficiency, it does not
necessarily reduce the number of features that must be measured
since each new feature may be a linear combination of all of the
features in the original pattern vector. In this paper a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach each feature value is first normalized by a linear equation and then scaled by the associated weight prior to training, testing, and classification. A k-nearest-neighbor (kNN) classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be greatly reduced.
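A compact, hedged sketch of the weight-vector idea (a plain generational GA with scikit-learn's kNN; the operators, parameters, and dataset are illustrative, not the paper's configuration):

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import minmax_scale

X, y = load_wine(return_X_y=True)
X = minmax_scale(X)                                   # linear normalization of each feature
rng = np.random.default_rng(0)

def fitness(w):
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X * w, y, cv=3).mean()   # kNN evaluates one weight vector

pop = rng.random((20, X.shape[1]))                    # population of feature-weight vectors
for _ in range(30):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the best half
    children = (parents[rng.integers(0, 10, 10)] + parents[rng.integers(0, 10, 10)]) / 2
    children += rng.normal(0, 0.05, children.shape)   # mutation
    pop = np.vstack([parents, np.clip(children, 0, 1)])

best = pop[np.argmax([fitness(w) for w in pop])]
print("near-zero weights (features effectively dropped):", int(np.sum(best < 0.05)))
```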
Abstract: This paper presents methodology from machine learning approaches for a short-term rain forecasting system. Decision Tree, Artificial Neural Network (ANN), and Support Vector Machine (SVM) techniques were applied to develop classification and prediction models for rainfall forecasts. The goals of this presentation are to demonstrate (1) how feature selection can be used to identify the relationships between rainfall occurrences and other weather conditions and (2) what models can be developed and deployed for predicting accurate rainfall estimates to support decisions to launch cloud seeding operations in the northeastern part of Thailand. Datasets were collected during 2004-2006 from the Chalermprakiat Royal Rain Making Research Center at Hua Hin, Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making Research Center at Pimai, Nakhon Ratchasima, and the Thai Meteorological Department (TMD). A total of 179 records with 57 features were merged and matched by unique date. There are three main parts in this work. Firstly, a decision tree induction algorithm (C4.5) was used to classify the rain status as either rain or no-rain. The overall accuracy of the classification tree reaches 94.41% with five-fold cross-validation. The C4.5 algorithm was also used to classify the rain amount into three classes, no-rain (0-0.1 mm), few-rain (0.1-10 mm), and moderate-rain (>10 mm), and the overall accuracy of the classification tree reaches 62.57%. Secondly, an ANN was applied to predict the rainfall amount, and the root mean square error (RMSE) was used to measure the training and testing errors of the ANN. It is found that the ANN yields a lower RMSE of 0.171 for daily rainfall estimates, compared to next-day and next-2-day estimation. Thirdly, the ANN and SVM techniques were also used to classify the rain amount into the three classes no-rain, few-rain, and moderate-rain as above. The results achieved 68.15% and 69.10% overall accuracy of same-day prediction for the ANN and SVM models, respectively. The obtained results illustrate the comparison of the predictive power of different methods for rainfall estimation.
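A hedged sketch of the classification part of such a pipeline with scikit-learn (CART stands in for C4.5 here, and the feature matrix is a synthetic placeholder, not the actual TMD variables):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(179, 57))                      # placeholder for the 179 x 57 feature matrix
rain_mm = np.abs(rng.normal(scale=8, size=179))     # placeholder rain amounts
y = np.digitize(rain_mm, bins=[0.1, 10.0])          # 0: no-rain, 1: few-rain, 2: moderate-rain

tree = DecisionTreeClassifier(max_depth=5)          # CART as a stand-in for C4.5
svm = SVC(kernel="rbf", C=1.0)
print("tree:", cross_val_score(tree, X, y, cv=5).mean())   # five-fold cross-validation
print("svm :", cross_val_score(svm, X, y, cv=5).mean())
```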
Abstract: The rapid development of new technologies and the emergence of increasingly sophisticated open communication systems create a new challenge for protecting digital content from piracy. Digital watermarking is a recent research axis and a new technique suggested as a solution to these problems. This technique consists in inserting identification information (a watermark) into digital data (audio, video, images, databases...) in an invisible and indelible manner, in such a way as not to degrade the original medium's quality. Moreover, we must be able to correctly extract the watermark despite the deterioration of the watermarked medium (i.e., attacks). In this paper we propose a system for watermarking satellite images. We chose to embed the watermark in the frequency domain, specifically in the discrete wavelet transform (DWT). We applied our algorithm to satellite images of central Tunisia. The experiments show satisfying results. In addition, our algorithm shows strong resistance against different attacks, notably compression (JPEG, JPEG2000), filtering, histogram manipulation, and geometric distortions such as rotation, cropping, and scaling.
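A minimal additive DWT-embedding sketch with PyWavelets (the subband choice and strength alpha are illustrative assumptions, not the paper's scheme):

```python
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, alpha=5.0):
    """Additively embed a +/-1 watermark into the horizontal detail band of a 1-level DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    w = watermark_bits.reshape(cH.shape)             # +/-1 pattern, same size as the band
    cH_marked = cH + alpha * w                       # hypothetical embedding strength alpha
    return pywt.idwt2((cA, (cH_marked, cV, cD)), "haar")

img = np.random.rand(128, 128) * 255
bits = np.sign(np.random.rand(64 * 64) - 0.5)
marked = embed_watermark(img, bits)
```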
Abstract: Localization is one of the critical issues in the field of
robot navigation. With an accurate estimate of the robot pose, robots will be capable of navigating in the environment autonomously and efficiently. In this paper, a hybrid Distributed Vision System (DVS)
for robot localization is presented. The presented approach integrates odometry data from the robot and images captured by overhead cameras installed in the environment to help reduce the likelihood of localization failures due to effects of illumination, encoder accumulated errors, and low-quality range data. An odometry-based motion model is applied to predict robot poses, and robot images captured by the overhead cameras are then used to update the pose estimates with an HSV histogram-based measurement model. Experimental results show that the presented approach can localize robots in a global world coordinate system with localization errors within 100 mm.
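A hedged fragment of the two ingredients named above, an odometry motion model for prediction and an HSV-histogram similarity for the measurement update (OpenCV conventions; the fusion logic and noise handling are simplified assumptions):

```python
import numpy as np
import cv2

def predict_pose(pose, d_trans, d_rot):
    """Odometry-based motion model: advance (x, y, theta) by an incremental motion."""
    x, y, th = pose
    th += d_rot
    return np.array([x + d_trans * np.cos(th), y + d_trans * np.sin(th), th])

def measurement_likelihood(robot_patch, template_patch):
    """Compare HSV hue-saturation histograms of the detected robot and a stored template."""
    def hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        return cv2.normalize(h, None).flatten()
    return cv2.compareHist(hist(robot_patch), hist(template_patch), cv2.HISTCMP_CORREL)

patch = (np.random.rand(40, 40, 3) * 255).astype(np.uint8)
print(measurement_likelihood(patch, patch))          # identical patches give correlation 1.0
```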
Abstract: Skin color based tracking techniques often assume a
static skin color model obtained either from an offline set of library
images or the first few frames of a video stream. These models can perform poorly in the presence of changing lighting or imaging conditions. We propose an adaptive skin color model based on the Gaussian mixture model to handle the changing conditions. Initial estimates of the number and weights of skin color clusters are obtained using a modified form of the general Expectation-Maximization (EM) algorithm. The model adapts to changes in imaging conditions and refines the model parameters dynamically using spatial and temporal constraints. Experimental results show that the method can be used to effectively track hand and face regions.
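A minimal adaptive-GMM sketch with scikit-learn (warm-starting each frame's EM fit from the previous parameters stands in for the paper's spatial and temporal refinement):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def init_skin_model(skin_pixels, n_components=3):
    """Fit an initial skin-color GMM (pixels given as an N x 3 array of color vectors)."""
    return GaussianMixture(n_components=n_components, covariance_type="full").fit(skin_pixels)

def adapt_skin_model(model, new_skin_pixels):
    """Re-fit on the new frame's skin pixels, warm-started from the previous parameters."""
    updated = GaussianMixture(n_components=model.n_components,
                              covariance_type="full",
                              means_init=model.means_,
                              weights_init=model.weights_)
    return updated.fit(new_skin_pixels)

frame1 = np.random.rand(1000, 3)
frame2 = np.random.rand(800, 3)
gmm = init_skin_model(frame1)
gmm = adapt_skin_model(gmm, frame2)                  # model drifts with imaging conditions
```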
Abstract: Most scientific programs have large input and output
data sets that require out-of-core programming or use virtual memory
management (VMM). Out-of-core programming is very error-prone
and tedious; as a result, it is generally avoided. However, in many instances, VMM is not an effective approach because it often results in substantial performance reduction. In contrast, compiler-driven I/O management allows a program's data sets to be retrieved in parts,
called blocks or tiles. Comanche (COmpiler MANaged caCHE) is a
compiler combined with a user level runtime system that can be used
to replace standard VMM for out-of-core programs. We describe
Comanche and demonstrate on a number of representative problems
that it substantially outperforms VMM. Significantly, our system
does not require any special services from the operating system and
does not require modification of the operating system kernel.
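A tiny illustration of the block/tile idea the abstract contrasts with VMM (plain NumPy memmap; this is not the actual Comanche runtime):

```python
import numpy as np

# Hypothetical out-of-core array stored on disk; only one tile is resident at a time.
data = np.memmap("big_array.dat", dtype=np.float64, mode="w+", shape=(4096, 4096))

tile = 1024
total = 0.0
for i in range(0, data.shape[0], tile):
    for j in range(0, data.shape[1], tile):
        block = np.asarray(data[i:i + tile, j:j + tile])  # fetch one tile into memory
        total += block.sum()                              # compute on the resident tile only
print(total)
```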
Abstract: This paper presents a new approach to tackle the problem of recognizing machine-printed Arabic text. Because of the difficulty of recognizing cursive Arabic words, the text has to be normalized and segmented before the recognition stage. The new scheme for recognizing Arabic characters relies on a classifier composed of multiple parallel neural networks. The classifier has two phases. The first phase categorizes the input character into one of eight groups. The second phase classifies the character into one of the Arabic character classes within that group. The system achieves a high recognition rate.
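A hedged two-stage sketch of the group-then-class idea with scikit-learn MLPs (the eight groups, the group mapping, and the feature vectors are synthetic placeholders):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))                    # placeholder character feature vectors
char_class = rng.integers(0, 28, size=2000)        # 28 character classes (illustrative)
group = char_class // 4                            # hypothetical mapping into 8 groups

# Phase 1: one network decides which of the eight groups the character belongs to.
phase1 = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, group)

# Phase 2: one network per group decides the character class within that group.
phase2 = {g: MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
             .fit(X[group == g], char_class[group == g]) for g in range(8)}

def recognize(x):
    g = phase1.predict(x.reshape(1, -1))[0]
    return phase2[g].predict(x.reshape(1, -1))[0]

print(recognize(X[0]), char_class[0])
```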
Abstract: Panoramic view generation has always offered novel and distinct challenges in the field of image processing. Panoramic view generation is the construction of a larger mosaic image of the desired view from a set of partial images. This paper presents a solution to one of the problems of image seascape formation, where some of the partial images are color and others are grayscale. The simplest solution would be to convert all image parts into grayscale images and fuse them to obtain a grayscale image panorama. But in a multihued world, obtaining a colored seascape will always be preferred. This can be achieved by picking colors from the color parts and injecting them into the grayscale parts of the seascape. So the grayscale image parts are first colored with the help of the color image parts, and then these parts are fused to construct the seascape image.
The problem of coloring grayscale images has no exact solution. In the proposed technique of panoramic view generation, the job of transferring color traits from a reference color image to a grayscale image is done by a palette-based method. In this technique, the color palette is prepared using pixel windows of a certain size taken from the color image parts. The grayscale image part is then divided into pixel windows of the same size. For every window of the grayscale image part the palette is searched and equivalent color values are found, which can be used to color the grayscale window. For palette preparation we have used the RGB color space and Kekre's LUV color space. Kekre's LUV color space gives better coloring quality. The search time through the color palette is improved over exhaustive search using Kekre's fast search technique.
After coloring the grayscale image pieces, the next job is fusion of all these pieces to obtain the panoramic view. For similarity estimation between partial images the correlation coefficient is used.
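A simplified sketch of the palette idea (exhaustive nearest-window search in RGB; Kekre's LUV color space and Kekre's fast search technique are not reproduced here):

```python
import numpy as np

def build_palette(color_img, win=2):
    """Build a palette of (gray window -> mean color) pairs from the color image parts."""
    gray = color_img.mean(axis=2)
    keys, colors = [], []
    for i in range(0, gray.shape[0] - win, win):
        for j in range(0, gray.shape[1] - win, win):
            keys.append(gray[i:i + win, j:j + win].ravel())
            colors.append(color_img[i:i + win, j:j + win].reshape(-1, 3).mean(axis=0))
    return np.array(keys), np.array(colors)

def colorize(gray_img, keys, colors, win=2):
    """Color each grayscale window with the color of the closest palette window."""
    out = np.zeros(gray_img.shape + (3,))
    for i in range(0, gray_img.shape[0] - win, win):
        for j in range(0, gray_img.shape[1] - win, win):
            patch = gray_img[i:i + win, j:j + win].ravel()
            best = np.argmin(((keys - patch) ** 2).sum(axis=1))   # exhaustive palette search
            out[i:i + win, j:j + win] = colors[best]
    return out
```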
Abstract: This paper reviews various approaches that have been
used for the modeling and simulation of large-scale engineering
systems and determines their appropriateness in the development of a
RICS modeling and simulation tool. Bond graphs, linear graphs,
block diagrams, differential and difference equations, modeling
languages, cellular automata and agents are reviewed. This tool should be based on a linear graph representation and support symbolic programming, functional programming, the development of non-causal models and the incorporation of decentralized approaches.
Abstract: Here, a new idea to speed up the operation of complex-valued time delay neural networks is presented. The whole data are collected together in one long vector and then tested as a single input pattern. The proposed fast complex-valued time delay neural networks use cross-correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically that the number of computation steps required by the presented fast complex-valued time delay neural networks is less than that needed by classical time delay neural networks. Simulation results using MATLAB confirm the theoretical computations.
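The frequency-domain trick itself can be illustrated in a few lines of NumPy (real-valued here for brevity, and the network around it is omitted):

```python
import numpy as np

def fft_cross_correlation(signal, weights):
    """Cross-correlate a long input vector with a weight kernel via the FFT."""
    n = len(signal) + len(weights) - 1                 # zero-pad to avoid circular wrap-around
    spec = np.fft.fft(signal, n) * np.conj(np.fft.fft(weights, n))
    return np.real(np.fft.ifft(spec))[:len(signal) - len(weights) + 1]

x = np.random.rand(4096)                               # the whole data collected as one long vector
w = np.random.rand(32)                                 # input weights of one neuron
fast = fft_cross_correlation(x, w)
print(np.allclose(fast, np.correlate(x, w, mode="valid")))   # matches the direct sliding dot product
```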
Abstract: This paper presents a useful sub-pixel image
registration method using line segments and a sub-pixel edge detector.
In this approach, straight line segments are first extracted from gray
images at the pixel level before applying the sub-pixel edge detector.
Next, all sub-pixel line edges are mapped onto the orientation-distance
parameter space to solve for line correspondence between images.
Finally, the registration parameters with sub-pixel accuracy are
analytically solved via two linear least-square problems. The present
approach can be applied to various fields where fast registration with
sub-pixel accuracy is required. To illustrate, the present approach is
applied to the inspection of printed circuits on a flat panel. A numerical example shows that the present approach is effective and accurate
when target images contain a sufficient number of line segments,
which is true in many industrial problems.
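A hedged sketch of the parameter-space step: with lines in (theta, rho) form and correspondences already established, rotation follows from the orientation differences and translation from a small linear least-squares problem (the sub-pixel line extraction and matching stages are omitted):

```python
import numpy as np

def register_from_lines(lines_a, lines_b):
    """Estimate rotation and translation from matched lines given as (theta, rho) pairs.

    A line is x*cos(theta) + y*sin(theta) = rho.  Under rotation alpha and translation t,
    theta' = theta + alpha and rho' = rho + t_x*cos(theta') + t_y*sin(theta').
    """
    th_a, rho_a = lines_a[:, 0], lines_a[:, 1]
    th_b, rho_b = lines_b[:, 0], lines_b[:, 1]
    alpha = np.angle(np.mean(np.exp(1j * (th_b - th_a))))        # average orientation change
    A = np.column_stack([np.cos(th_b), np.sin(th_b)])
    t, *_ = np.linalg.lstsq(A, rho_b - rho_a, rcond=None)        # least-squares translation
    return alpha, t

# Synthetic check: rotate by 0.1 rad and translate by (2.0, -1.0).
th = np.array([0.2, 1.1, 2.0, 2.7])
rho = np.array([5.0, 3.0, 7.0, 1.0])
th2 = th + 0.1
rho2 = rho + 2.0 * np.cos(th2) - 1.0 * np.sin(th2)
print(register_from_lines(np.column_stack([th, rho]), np.column_stack([th2, rho2])))
```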
Abstract: An application framework provides a reusable design
and implementation for a family of software systems. If the
framework contains defects, the defects will be passed on to the
applications developed from the framework. Framework defects are
hard to discover at the time the framework is instantiated. Therefore,
it is important to remove all defects before instantiating the
framework. In this paper, two measures for the adequacy of an
object-oriented system-based testing technique are introduced. The
measures assess the usefulness and uniqueness of the testing
technique. The two measures are applied to experimentally compare
the adequacy of two testing techniques introduced to test object-oriented frameworks at the system level. The two considered testing
techniques are the New Framework Test Approach and Testing
Frameworks Through Hooks (TFTH). The techniques are also
compared analytically in terms of their coverage power of object-oriented aspects. The comparison study results show that the TFTH
technique is better than the New Framework Test Approach in terms
of usefulness degree, uniqueness degree, and coverage power.
Abstract: None of the process models in software development has addressed the performance evaluation and modeling of software systems; likewise, uncertainty exists in information systems because of the inherent nature of requirements, and this may cause further challenges in the software development process. By defining an extended version of UML (Fuzzy-UML), functional requirements of the software that are defined with uncertainty can be supported. In this study, the behavioral description of uncertain information systems with the aid of fuzzy state diagrams is central; moreover, the role of behavioral diagrams in F-UML in the software performance modeling process is investigated. To achieve this aim, a fuzzy sub-profile is used.