Abstract: This paper proposes an innovative approach to the Connection Admission Control (CAC) problem. Starting from an abstract network model, the CAC problem is formulated in a technology-independent fashion, allowing the proposed concepts to be applied to any wireless or wired domain. The proposed CAC is decoupled from the other Resource Management procedures, but cooperates with them in order to guarantee the desired QoS requirements. Moreover, it is based on suitable performance measurements which, by using proper predictors, allow the domain dynamics to be forecast in the near future. Finally, the proposed CAC control scheme is based on a feedback loop aiming at maximizing a suitable performance index accounting for the domain throughput, whilst respecting a set of constraints accounting for the QoS requirements.
Abstract: In this study a K-Means-like clustering technique with a hierarchical initial set (HKM) has been implemented. The goal of this study is to show that clustering document sets improves precision in information retrieval systems, as previously demonstrated by Bellot & El-Beze for the French language. A comparison is made between the traditional information retrieval system and the clustered one, and the effect of increasing the number of clusters on precision is also studied. The indexing technique is Term Frequency * Inverse Document Frequency (TF * IDF). It has been found that Hierarchical K-Means-like clustering (HKM) with 3 clusters over 242 Arabic abstract documents from the Saudi Arabian National Computer Conference yields significantly better results than a traditional information retrieval system without clustering. Additionally, it has been found that increasing the number of clusters further is not necessary to improve precision.
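A minimal sketch of the indexing-plus-clustering pipeline described in the abstract above, assuming scikit-learn and a hypothetical list of abstract strings; plain K-Means with 3 clusters stands in for the HKM variant, whose hierarchical initial set is not reproduced here.

    # TF * IDF indexing followed by K-Means clustering with 3 clusters (sketch).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    documents = ["first abstract ...", "second abstract ...", "third abstract ..."]  # hypothetical corpus

    vectorizer = TfidfVectorizer()            # TF * IDF term weighting
    X = vectorizer.fit_transform(documents)

    # Plain K-Means stands in for HKM (K-Means with a hierarchical initial set).
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)                     # cluster assignment per document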
Abstract: This paper investigates the inverse problem of determining the unknown time-dependent leading coefficient in a parabolic equation using the usual conditions of the direct problem and an additional condition. An algorithm is developed for solving the inverse problem numerically using the technique of space decomposition in a reproducing kernel space. The leading coefficient is obtained by solving a lower triangular linear system. Numerical experiments are presented to show the efficiency of the proposed method.
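Since the abstract does not write out the equation, the following is only a representative (assumed) formulation of this class of inverse problems: recover the positive time-dependent coefficient a(t) from the data of the direct problem plus an overdetermination condition at an interior point x_0.

    \begin{aligned}
      u_t &= a(t)\,u_{xx} + f(x,t), && 0 < x < 1,\ 0 < t \le T,\\
      u(x,0) &= \varphi(x), \quad u(0,t) = \mu_1(t), \quad u(1,t) = \mu_2(t),\\
      u(x_0,t) &= E(t) && \text{(additional condition).}
    \end{aligned}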
Abstract: The NGN (Next Generation Network), which can
provide advanced multimedia services over an all-IP based network, has been the subject of much attention for years. While there have
been tremendous efforts to develop its architecture and protocols, especially for IMS, which is a key technology of the NGN, it is far
from being widely deployed. However, efforts to create an advanced
signaling infrastructure realizing many requirements have resulted in a
large number of functional components and interactions between those
components. Thus, the carriers are trying to explore effective ways to
deploy IMS while offering value-added services. As one such
approach, we have proposed a self-organizing IMS. A self-organizing
IMS enables IMS functional components and corresponding physical
nodes to adapt dynamically and automatically based on conditions such as network load and available system resources, while continuing IMS
operation. To realize this, service continuity for users is an important
requirement when a reconfiguration occurs during operation. In this
paper, we propose a mechanism that provides service continuity to users, focus on its implementation, and describe a performance evaluation in terms of the number of control signals and the processing time during reconfiguration.
Abstract: A fair share objective has recently been included in the goal-oriented parallel computer job scheduling policy. However, the previous work only presented the overall scheduling performance, so an analysis of the per-user performance of the policy is still lacking. In this work, the per-user fair share performance under the Tradeoff-fs(Tx:avgX) policy is evaluated in detail. A basic fair-share priority backfill policy, namely RelShare(1d), is also studied. The performance of all policies is collected using an event-driven simulator with three real job traces as input. The experimental results show that high-demand users usually benefit under most policies because their jobs are large or because they have many jobs. In the large-job case, a single executed job may result in over-share during that period. In the other case, the jobs may be backfilled, which benefits their performance. However, users with a mixture of jobs may suffer because, while their smaller jobs are executing, the priority of their remaining jobs is lowered. Further analysis does not show any significant impact from users with many jobs or from users with a large runtime approximation error.
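Since the exact Tradeoff-fs(Tx:avgX) priority formula is not spelled out above, the sketch below only illustrates the general fair-share idea it builds on: a user's waiting jobs are penalized in proportion to how far that user's recent usage exceeds an equal target share (all names and weights are hypothetical).

    # Generic fair-share priority sketch (not the Tradeoff-fs(Tx:avgX) formula).
    def fair_share_priority(wait_time, user_usage, total_usage, n_users, weight=100.0):
        target_share = 1.0 / n_users                              # equal share per user
        used_share = user_usage / total_usage if total_usage > 0 else 0.0
        over_share = max(0.0, used_share - target_share)          # how far the user is over-share
        return wait_time - weight * over_share                    # higher value = scheduled sooner

    # Example: a high-demand user who already consumed 60% of the machine hours.
    print(fair_share_priority(wait_time=120.0, user_usage=600.0, total_usage=1000.0, n_users=5))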
Abstract: Cloud Computing (CC) has become one of the most talked-about emerging technologies, providing powerful computing and large storage environments through the use of the Internet. Cloud computing provides different dynamically scalable computing resources as a service. It brings economic benefits to individuals and businesses that adopt the technology. In theory, the adoption of cloud computing reduces capital and operational expenditure on information technology. For this to become a reality, several challenges need to be solved while at the same time addressing the concerns that consumers have about cloud computing. This paper looks at Cloud Computing in general, then highlights its challenges, and finally suggests solutions to some of them.
Abstract: Knowing the geometric pose of products in a manufacturing line before robot manipulation is required and makes overall shape measurement less time-consuming. To achieve this, shape representation and object matching information are needed. Objects are compared via their descriptors, which are conceptually subtracted from each other to form a scalar metric. The smaller the metric value, the closer the objects are considered to be. Rotating an object from its static pose in some direction changes the scalar metric value of the boundary information obtained after feature extraction of the related object. In this paper, an indexing technique is proposed for retrieving 3D geometric models based on the similarity between boundary shapes, in order to measure 3D CAD object pose using object shape feature matching for a Computer Aided Testing (CAT) system in a production line. Experimental results show the effectiveness of the proposed method.
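A minimal sketch of the descriptor comparison described above, assuming the boundary descriptors are fixed-length numeric vectors (the actual feature extraction is not reproduced): the descriptors are subtracted and reduced to a scalar metric, and a smaller value means a closer pose.

    import numpy as np

    def shape_distance(descriptor_a, descriptor_b):
        # Scalar metric from a conceptual subtraction of two boundary descriptors.
        a = np.asarray(descriptor_a, dtype=float)
        b = np.asarray(descriptor_b, dtype=float)
        return float(np.linalg.norm(a - b))

    # A rotated pose yields a slightly different descriptor and hence a larger metric.
    print(shape_distance([0.2, 0.5, 0.1], [0.2, 0.5, 0.1]))   # 0.0, identical pose
    print(shape_distance([0.2, 0.5, 0.1], [0.3, 0.4, 0.2]))   # > 0, rotated pose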
Abstract: In this paper we present an efficient system for speaker-independent speech recognition based on a neural network approach. The proposed architecture comprises two phases: a preprocessing phase, which consists of segmental normalization and feature extraction, and a classification phase, which uses neural networks based on nonparametric density estimation, namely the general regression neural network (GRNN). The performance of the proposed model is compared to similar recognition systems based on the Multilayer Perceptron (MLP), the Recurrent Neural Network (RNN) and the well-known Discrete Hidden Markov Model (HMM-VQ), which we have also implemented. Experimental results obtained with Arabic digits show that the use of nonparametric density estimation with an appropriate smoothing factor (spread) improves the generalization power of the neural network. The word error rate (WER) is reduced significantly over the baseline HMM method. GRNN computation is a successful alternative to the other neural networks and the DHMM.
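A minimal GRNN sketch in its standard Nadaraya-Watson form, which is what the nonparametric density estimation above amounts to at prediction time: the output for a query is a distance-weighted average of training targets, with Gaussian kernels whose width is the smoothing factor (spread). Feature vectors and targets below are hypothetical.

    import numpy as np

    def grnn_predict(X_train, y_train, x, sigma=0.5):
        d2 = np.sum((X_train - x) ** 2, axis=1)        # squared distances to stored patterns
        w = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian kernel weights (spread = sigma)
        return float(np.dot(w, y_train) / (np.sum(w) + 1e-12))

    X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # hypothetical feature vectors
    y = np.array([0.0, 1.0, 2.0])                        # hypothetical targets
    print(grnn_predict(X, y, np.array([0.9, 1.1]), sigma=0.3))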
Abstract: Glaucoma diagnosis involves extracting three features of the fundus image: the optic cup, the optic disc and the vasculature. Present
manual diagnosis is expensive, tedious and time consuming. A number of studies have been conducted to automate this process. However, the variability between the diagnostic capability of an automated system and that of an ophthalmologist has yet to be established. This paper discusses the efficiency of, and variability between, the ophthalmologists' opinion and a digital technique, thresholding. The efficiency and variability measures are based on image quality grading: poor, satisfactory or good. The images are separated into four channels: gray, red, green and blue. A scientific investigation was conducted with three ophthalmologists who graded the images based on image quality. The images are thresholded using multi-thresholding and graded in the same way as by the ophthalmologists. A comparison of the grades from the ophthalmologists and from thresholding is made. The results show that there is a small variability between the results of the ophthalmologists and the digital thresholding.
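A sketch of the channel separation and thresholding step, assuming OpenCV; Otsu's method is used here as a stand-in because the exact multi-thresholding rule is not given in the abstract.

    import cv2

    def threshold_channels(bgr_image):
        # Separate the image into the four channels used above: gray, red, green, blue.
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        blue, green, red = cv2.split(bgr_image)
        masks = {}
        for name, channel in (("gray", gray), ("red", red), ("green", green), ("blue", blue)):
            _, binary = cv2.threshold(channel, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            masks[name] = binary
        return masks

    # Usage with a hypothetical fundus image file:
    # masks = threshold_channels(cv2.imread("fundus.png"))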
Abstract: Integration of system process information obtained
through an image processing system with an evolving knowledge
database to improve the accuracy and predictability of wear debris
analysis is the main focus of the paper. The objective is to intelligently automate the wear particle analysis process using classification via self-organizing maps. This is achieved using relationship measurements among corresponding attributes of various measurements of wear debris. Finally, a visualization technique is proposed that helps the viewer understand and utilize these relationships, enabling accurate diagnostics.
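A minimal numpy-only sketch of the self-organizing map training loop behind the classification step above; the wear-particle attribute vectors are hypothetical stand-ins for the paper's measurements.

    import numpy as np

    def train_som(data, grid=(5, 5), epochs=100, lr=0.5, radius=1.5, seed=0):
        rng = np.random.default_rng(seed)
        rows, cols = grid
        weights = rng.random((rows, cols, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
        for _ in range(epochs):
            for x in data:
                d = np.linalg.norm(weights - x, axis=2)
                bmu = np.unravel_index(np.argmin(d), d.shape)       # best-matching unit
                grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
                h = np.exp(-grid_d2 / (2.0 * radius ** 2))          # neighborhood function
                weights += lr * h[..., None] * (x - weights)
        return weights

    # Example: 50 hypothetical wear-particle vectors with 4 attributes each.
    som_weights = train_som(np.random.default_rng(1).random((50, 4)))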
Abstract: In this paper, a new technique for fast painting with
different colors is presented. The idea of painting relies on applying
masks with different colors to the background. Fast painting is
achieved by applying these masks in the frequency domain instead of
spatial (time) domain. New colors can be generated automatically as a
result from the cross correlation operation. This idea was applied
successfully for faster specific data (face, object, pattern, and code)
detection using neural algorithms. Here, instead of performing cross
correlation between the input data (e.g., an image or a stream of
sequential data) and the weights of neural networks, the cross
correlation is performed between the colored masks and the
background. Furthermore, this approach is developed to reduce the
computation steps required by the painting operation. The principle of
divide and conquer strategy is applied through background
decomposition. Each background is divided into small sub-backgrounds, and then each sub-background is processed separately by
using a single faster painting algorithm. Moreover, the fastest painting is achieved by using parallel processing techniques to paint the resulting sub-backgrounds with the same number of faster painting algorithms. In contrast to using only the faster painting algorithm, the speed-up ratio increases with the size of the background when the faster painting algorithm is combined with background decomposition. Simulation
results show that painting in the frequency domain is faster than that in
the spatial domain.
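A minimal numpy sketch of the core operation above: cross-correlation computed in the frequency domain by multiplying the FFT of the background with the conjugate FFT of the zero-padded mask, together with the divide-and-conquer split into sub-backgrounds. Sizes and data are hypothetical.

    import numpy as np

    def cross_correlate_fft(background, mask):
        padded = np.zeros_like(background, dtype=float)
        padded[:mask.shape[0], :mask.shape[1]] = mask       # zero-pad the mask to background size
        return np.real(np.fft.ifft2(np.fft.fft2(background) * np.conj(np.fft.fft2(padded))))

    background = np.random.rand(256, 256)
    mask = np.ones((16, 16))

    # Divide and conquer: split the background into sub-backgrounds and process each separately.
    subs = [background[:128, :128], background[:128, 128:],
            background[128:, :128], background[128:, 128:]]
    results = [cross_correlate_fft(sub, mask) for sub in subs]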
Abstract: Subdivision is a method to create a smooth surface from a coarse mesh by subdividing the entire mesh. The conventional ways to compute and render surfaces are inconvenient both in terms of memory and computational time, as the number of mesh faces increases exponentially. Adaptive subdivision is a way to reduce computational time and memory by subdividing only certain selected areas. In this paper, a new adaptive subdivision method for triangle meshes is introduced. This method defines new adaptive subdivision rules by considering the properties of each triangle's neighbors and is embedded in the traditional Loop subdivision scheme. It prevents some undesirable side effects that appear in conventional adaptive approaches. Models subdivided by our method are compared with those of other adaptive subdivision methods.
Abstract: In this paper, we propose a novel improvement to the generalized Lloyd Algorithm (GLA). Our algorithm makes use of an M-tree index built on the codebook, which makes it possible to reduce the number of distance computations when searching for the nearest code words. Our method does not impose the use of any specific distance function, but works with any metric distance, making it more general than many other fast GLA variants. Finally, we present the positive results of our performance experiments.
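A baseline sketch of one generalized Lloyd (GLA) iteration with a pluggable metric; the M-tree index described above would accelerate the exhaustive nearest-codeword search in assign(), which is shown here only in its plain form.

    import numpy as np

    def assign(data, codebook, dist=lambda a, b: float(np.linalg.norm(a - b))):
        # Exhaustive nearest-codeword search; an M-tree would prune these distance computations.
        return np.array([np.argmin([dist(x, c) for c in codebook]) for x in data])

    def gla_step(data, codebook):
        labels = assign(data, codebook)
        new_codebook = codebook.copy()
        for k in range(len(codebook)):
            members = data[labels == k]
            if len(members) > 0:
                new_codebook[k] = members.mean(axis=0)      # centroid update
        return new_codebook

    data = np.random.rand(200, 2)
    codebook = data[np.random.choice(len(data), 8, replace=False)].copy()
    for _ in range(10):
        codebook = gla_step(data, codebook)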
Abstract: Deformable active contours are widely used in
computer vision and image processing applications for image
segmentation, especially in biomedical image analysis. The active
contour or “snake” deforms towards a target object by controlling the internal, image and constraint forces. However, if the contour is initialized with a small number of control points, there is a high probability of missing the sharp corners of the object during deformation of the contour. In this paper, a new technique is
proposed to construct the initial contour by incorporating prior
knowledge of significant corners of the object detected using the
Harris operator. This new reconstructed contour begins to deform, by
attracting the snake towards the targeted object, without missing the
corners. Experimental results with several synthetic images show the
ability of the new technique to deal with sharp corners with higher accuracy than traditional methods.
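A sketch of the corner-aware initialization step, assuming OpenCV: Harris corners are detected and ordered by angle around their centroid to form the initial closed contour. The snake evolution itself (internal, image and constraint forces) is not reproduced here.

    import cv2
    import numpy as np

    def initial_contour_with_corners(gray_image, max_corners=20):
        corners = cv2.goodFeaturesToTrack(gray_image, maxCorners=max_corners,
                                          qualityLevel=0.01, minDistance=10,
                                          useHarrisDetector=True, k=0.04)
        if corners is None:
            return np.empty((0, 2))
        pts = corners.reshape(-1, 2)
        # Order the corners by angle around the centroid so they form a closed initial contour.
        center = pts.mean(axis=0)
        angles = np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])
        return pts[np.argsort(angles)]

    # Usage with a hypothetical synthetic test image:
    # contour = initial_contour_with_corners(cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE))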
Abstract: This paper presents a new data-oriented model of images. A representation of it, ADBT, is then introduced. ADBT supports clustering, segmentation, measuring the similarity of images, etc., with the desired precision and corresponding speed.
Abstract: In this paper, we propose a new approach to query-by-humming, focusing on an MP3 song database. Since melody representation is much more difficult for MP3 songs than for symbolic performance data, we choose to extract feature descriptors from the vocal part of the songs. Our approach is based on signal filtering, sub-band spectral processing, MDCT coefficient analysis and peak energy detection, ignoring the background music as much as possible. Finally, we apply a dual dynamic programming algorithm for feature similarity matching. Experiments show its online performance in terms of precision and efficiency.
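The dual dynamic programming algorithm itself is not spelled out above, so the sketch below shows only the standard dynamic-programming (DTW-style) alignment idea such feature matching builds on; the descriptor sequences are hypothetical.

    import numpy as np

    def dtw_distance(query, target):
        # Dynamic-programming alignment cost between two feature sequences.
        n, m = len(query), len(target)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(query[i - 1] - target[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Hypothetical pitch/energy descriptors for a hummed query and a candidate song segment.
    print(dtw_distance([60, 62, 64, 65], [60, 61, 64, 64, 65]))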
Abstract: We propose a fast and robust hierarchical face detection system which finds and localizes face images with a cascade of classifiers. Three modules contribute to the efficiency of our detector. First, heterogeneous feature descriptors are exploited to enrich feature types and feature numbers for face representation. Second, a PSO-Adaboost algorithm is proposed to efficiently select discriminative features from a large pool of available features and reinforce them into the final ensemble classifier. Compared with the standard exhaustive Adaboost for feature selection, the new PSO-Adaboost algorithm reduces the training time by up to 20 times. Finally, a three-stage hierarchical classifier framework is developed for rapid background removal. In particular, candidate face regions are detected more quickly by using a large window size in the first stage. Nonlinear SVM classifiers are used instead of decision stump functions in the last stage to remove the remaining complex non-face patterns that cannot be rejected in the previous two stages. Experimental results show our detector achieves superior performance on the CMU+MIT frontal face dataset.
Abstract: Software testing is an important stage of the software development cycle. The current testing process involves a tester and electronic documents with test case scenarios. In this paper we focus on a new approach to the testing process, using automated test case generation and tester guidance through the system based on a model of the system. Test case generation and model-based testing are not possible without a proper system model. We aim to provide better feedback from the testing process, thus eliminating unnecessary paperwork.
Abstract: Extensive use of the Internet coupled with the
marvelous growth in e-commerce and m-commerce has created a
huge demand for information security. The Secure Socket Layer
(SSL) protocol is the most widely used security protocol in the
Internet which meets this demand. It provides protection against
eavesdropping, tampering and forgery. The cryptographic
algorithms RC4 and HMAC have been in use for achieving security
services like confidentiality and authentication in the SSL. But recent
attacks against RC4 and HMAC have raised questions about the confidence in these algorithms. Hence, two novel cryptographic
algorithms MAJE4 and MACJER-320 have been proposed as
substitutes for them. The focus of this work is to demonstrate the
performance of these new algorithms and suggest them as dependable
alternatives to satisfy the need for security services in SSL. The performance evaluation has been carried out using a practical implementation.
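For illustration only: MAJE4 and MACJER-320 have no standard library implementation, so the snippet below shows the baseline HMAC computation (one of the algorithms the paper compares against) to make the authentication service concrete; key and message are hypothetical.

    import hmac
    import hashlib

    key = b"shared-secret-key"          # hypothetical shared secret
    message = b"client hello payload"   # hypothetical record payload

    # Sender computes the authentication tag over the message.
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # Receiver recomputes the tag and compares in constant time.
    ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
    print(tag, ok)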