Abstract: The residue number system (RNS), owing to its properties, is used in applications that demand high-performance computation. Its carry-free nature, which keeps arithmetic carry-bounded within each channel, together with its inherent parallelism, is the reason for its high-speed capability. Since no carry is propagated between the moduli in this system, performance is restricted only by the speed of the operations in each modulus. In this paper a novel method of number representation using redundancy is suggested, in which {r^n − 2, r^n − 1, r^n} is the reference moduli set, where r = 2^k + 1 and k = 1, 2, 3, …. This method achieves fast computations and conversions and makes the corresponding circuits much simpler.
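The forward and reverse conversions that such a moduli set supports can be sketched in a few lines of Python. This is an illustrative software model only, not the paper's hardware design, and the redundant-representation aspect is omitted; the choice k = 1, n = 2 (giving the pairwise-coprime moduli {7, 8, 9}) is an assumption made here for demonstration:

```python
from math import gcd, prod

def moduli_set(k: int, n: int):
    """Reference moduli set {r^n - 2, r^n - 1, r^n} with r = 2^k + 1."""
    r = 2**k + 1
    return (r**n - 2, r**n - 1, r**n)

def to_rns(x: int, moduli):
    """Forward conversion: residue of x in each independent (carry-free) channel."""
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli):
    """Reverse conversion via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        x += r_i * M_i * pow(M_i, -1, m_i)  # modular inverse of M_i mod m_i
    return x % M

mods = moduli_set(1, 2)            # r = 3, n = 2 -> (7, 8, 9)
assert all(gcd(a, b) == 1 for a, b in [(7, 8), (8, 9), (7, 9)])
res = to_rns(100, mods)            # (2, 4, 1)
assert from_rns(res, mods) == 100
```

Additions within each channel never carry into another channel, which is the property the abstract attributes the speed-up to.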
Abstract: Our work falls within the field of heterogeneous data integration, through the definition of a structural and semantic mediation model. Our aim is to propose an architecture for mediating the metadata of heterogeneous sources, represented by XML, RDF and RuleML models, while providing the user with metadata transparency. This is achieved by accommodating data structures of fundamentally different natures, by decomposing a query involving multiple sources into queries specific to each source, and then by recomposing the result.
Abstract: This paper discusses an intelligent system to be installed in ambulances, providing professional support to the paramedics on board. A video-conferencing device over mobile 4G services enables specialists to virtually attend the patient being transferred to the hospital. The data centre holds detailed databases on patients' past medical histories and on hospitals and their specialists. It also hosts software modules that, given the patient's symptoms, compute in real time the shortest traffic-free path to the closest hospital with the required facilities.
Abstract: Neural processors have shown good results for detecting a given character in an input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross-correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these fast neural networks during the searching process. A divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and each one is tested separately by a single fast neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images simultaneously with the same number of fast neural networks. In contrast to using fast neural processors alone, the speed-up ratio increases with the size of the input image when fast neural processors are combined with image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain, and the overall speed-up ratio of the detection process is increased further because the normalization of the weights is done off-line.
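The core operation the abstract builds on — cross-correlation computed in the frequency domain — can be sketched with NumPy. This is a minimal numerical illustration, not the paper's neural networks; the array sizes and the direct-comparison loop are chosen here purely for demonstration:

```python
import numpy as np

def xcorr_freq(image: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Circular cross-correlation via the frequency domain:
    corr(I, W) = IFFT( FFT(I) * conj(FFT(W)) ), with W zero-padded to I's shape."""
    pad = np.zeros_like(image)
    h, w = weights.shape
    pad[:h, :w] = weights
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(pad))))

def xcorr_spatial(image: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Direct circular cross-correlation, for checking the FFT version."""
    H, W = image.shape
    h, w = weights.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(H):
        for x in range(W):
            for i in range(h):
                for j in range(w):
                    out[y, x] += image[(y + i) % H, (x + j) % W] * weights[i, j]
    return out

rng = np.random.default_rng(0)
img = rng.random((16, 16))   # stand-in for one decomposed sub-image
wts = rng.random((4, 4))     # stand-in for a neuron's weight window
assert np.allclose(xcorr_freq(img, wts), xcorr_spatial(img, wts))
```

The FFT route computes all window positions at once, which is where the claimed reduction in computation steps comes from; decomposition then lets several such sub-image correlations run in parallel.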
Abstract: Biologically, the human brain processes information in both unimodal and multimodal fashion. In fact, information is progressively abstracted and seamlessly fused, and the fusion of multimodal inputs allows a holistic understanding of a problem. The proliferation of technology has exponentially produced diverse sources of data, which can be likened to the state of multimodality in the human brain. This inspired us to develop a methodology for exploring multimodal data and identifying multi-view patterns. Specifically, we propose a brain-inspired conceptual model that allows exploration and identification of patterns at different levels of granularity, different types of hierarchies and different types of modalities. A structurally adaptive neural network is deployed to implement the proposed model. Furthermore, the acquisition of multi-view patterns with the proposed model is demonstrated and discussed with experimental results.
Abstract: We consider repeated-root cyclic codes, whose block length is divisible by the characteristic of the underlying field. Cyclic self-dual codes are also repeated-root cyclic codes. A one-level squaring construction for binary repeated-root cyclic codes is known. In this correspondence, we introduce a two-level squaring construction for binary repeated-root cyclic codes of length 2^a·b, a > 0, b odd.
Abstract: In this paper we present an in-depth study of bio-medical images and tag them with basic extracted features (e.g. colour, pixel value). Classification is performed using a nearest-neighbour classifier with various distance measures, as well as an automatic combination of classifier results. This process selects a subset of relevant features from a larger group of image features; it also helps to acquire a better understanding of the image by describing which features are important. The accuracy can be improved by increasing the number of selected features. Various types of classifiers have evolved for medical images, such as the Support Vector Machine (SVM), used for classifying bacterial types; Ant Colony Optimization, used for obtaining optimal results thanks to its high approximation capability and much faster convergence; and texture feature extraction based on Gabor wavelets.
Abstract: Segmenting the lungs in medical images is a
challenging and important task for many applications. In particular,
automatic segmentation of lung cavities from multiple magnetic
resonance (MR) images is very useful for oncological applications
such as radiotherapy treatment planning. However, distinguishing the lung areas is not trivial due to widely varying lung shapes, low contrast and poorly defined boundaries. In this paper, we address the problem of lung segmentation from pulmonary magnetic resonance images and propose an automated method based on a robust region-aided geometric snake that incorporates a modified diffused region force into the standard geometric model definition. This extra region force gives the snake a complementary global view of the lung boundary information within the image which, along with the local gradient flow, helps detect fuzzy boundaries. The proposed method successfully segmented the lungs in every slice of 30 magnetic resonance images, each containing 80 consecutive slices. We compare our automatic method with manually segmented lung cavities provided by an expert radiologist and with previous works, showing encouraging accuracy and high robustness of our approach.
Abstract: This paper presents an ESN-based Arabic phoneme recognition system trained with supervised, forced, and combined supervised/forced learning algorithms. Mel-Frequency Cepstrum Coefficients (MFCCs) and Linear Predictive Coding (LPC) are used and compared as input feature extraction techniques. The system is evaluated using 6 speakers from the King Abdulaziz Arabic Phonetics Database (KAPD) for the Saudi dialect and 34 speakers from the Center for Spoken Language Understanding (CSLU2002) database, covering speakers of different dialects from 12 Arab countries. Results for the KAPD and CSLU2002 Arabic databases show phoneme recognition performances of 72.31% and 38.20%, respectively.
Abstract: The increasing demand for IT resources is steering enterprises toward the cloud as a cheap and scalable solution. Cloud computing delivers on its promises by using the virtual machine as the basic unit of computation. However, the predefined settings of a virtual machine might not be enough to meet the QoS requirements of jobs. This paper addresses the problem of mapping jobs with critical start deadlines to virtual machines with predefined specifications. These virtual machines are hosted by physical machines and share a fixed amount of bandwidth. The paper proposes an algorithm that uses the bandwidth of idle virtual machines to increase the quota of virtual machines nominated as executors of urgent jobs. The algorithm, together with an empirical study, is used to evaluate the impact of the proposed model on impatient jobs. The results show the importance of dynamic bandwidth allocation in a virtualized environment and its effect on throughput.
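The kind of reallocation the abstract describes can be sketched as a simple policy function. The abstract does not specify the paper's actual algorithm, so the even-split rule, the VM states, and all names below are assumptions made for illustration only:

```python
def reallocate_bandwidth(vms, total_bw):
    """Illustrative policy (not the paper's algorithm): every VM starts with an
    equal base share; the unused shares of idle VMs are split evenly among the
    VMs nominated as executors of urgent jobs."""
    base = total_bw / len(vms)
    idle = [v for v in vms if v["state"] == "idle"]
    urgent = [v for v in vms if v["state"] == "urgent"]
    bonus = base * len(idle) / len(urgent) if urgent else 0.0
    alloc = {}
    for v in vms:
        if v["state"] == "idle":
            alloc[v["name"]] = 0.0            # idle quota is lent out
        elif v["state"] == "urgent":
            alloc[v["name"]] = base + bonus   # boosted to meet the start deadline
        else:
            alloc[v["name"]] = base           # ordinary busy VM keeps its share
    return alloc

vms = [{"name": "vm1", "state": "urgent"},
       {"name": "vm2", "state": "idle"},
       {"name": "vm3", "state": "busy"},
       {"name": "vm4", "state": "idle"}]
alloc = reallocate_bandwidth(vms, 400.0)
assert alloc == {"vm1": 300.0, "vm2": 0.0, "vm3": 100.0, "vm4": 0.0}
```

The total allocated bandwidth stays fixed; only its distribution changes, which is the sense in which the allocation is "dynamic".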
Abstract: A new method, based on NormalShrink and a modified version of the Katsaggelos and Lay algorithm, is proposed for multiscale blind image restoration. The method deals with both noise and blur in the images. It is shown that NormalShrink gives the highest signal-to-noise ratio (SNR) in the image denoising step. The multiscale blind image restoration is divided into two parts: the first part of this paper proposes NormalShrink for image denoising, and the second part proposes a modified version of the Katsaggelos and Lay algorithm for blur estimation, combining both methods to achieve multiscale blind image restoration.
Abstract: The lack of security obstructs large-scale deployment of the multicast communication model. Therefore, a host of research works have been carried out to deal with the issues of securing multicast, such as confidentiality, authentication, non-repudiation, integrity and access control. Many applications, such as broadcasting stock quotes and videoconferencing, require authenticating the source of the received traffic; hence source authentication is a required component of any complete multicast security architecture. In this paper, we propose a new and efficient source authentication protocol which guarantees non-repudiation for multicast flows and tolerates packet loss. We have simulated our protocol using NS-2, and the simulation results show that the protocol achieves improvements over protocols in the same category.
Abstract: Graph partitioning is an NP-hard problem with multiple conflicting objectives: the partitioning should minimize the inter-partition relationships while maximizing the intra-partition relationships, and the load should be evenly distributed over the partitions. It is therefore a multi-objective optimization (MOO) problem. One approach to MOO is Pareto optimization, which is used in this paper. The methods proposed here to improve performance are injecting the best solutions of previous runs into the first generation of subsequent runs, and storing the non-dominated set of previous generations to combine with later generations' non-dominated sets. These improvements prevent the GA from getting stuck in local optima and increase the probability of finding better solutions. Finally, a simulation study is carried out to investigate the effectiveness of the proposed algorithm, and the results confirm the effectiveness of the proposed method.
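The two building blocks named above — Pareto dominance over the conflicting objectives and injection of archived best solutions into a new run's first generation — can be sketched briefly. The toy graph, the two-objective vector (edge cut, load imbalance), and the population sizes are illustrative assumptions, not the paper's experimental setup:

```python
import random

def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every objective
    and strictly better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Pareto front of a set of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def evaluate(partition, edges, n_parts):
    """Objective vector (edge cut, load imbalance) for a node -> part assignment."""
    cut = sum(1 for u, v in edges if partition[u] != partition[v])
    loads = [sum(1 for p in partition if p == k) for k in range(n_parts)]
    return (cut, max(loads) - min(loads))

# Toy graph: two triangles joined by a single edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
random.seed(1)
population = [[random.randrange(2) for _ in range(6)] for _ in range(30)]
scores = [evaluate(ind, edges, 2) for ind in population]
archive = non_dominated(scores)  # non-dominated set kept across generations

# Injection: seed the next run's first generation with the archived best solutions.
best = [ind for ind, s in zip(population, scores) if s in archive]
next_gen = best + [[random.randrange(2) for _ in range(6)] for _ in range(30 - len(best))]
assert len(next_gen) == 30
```

Keeping the archive and re-injecting it means good trade-off points survive between runs instead of being rediscovered, which is the stated mechanism for escaping local optima.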
Abstract: This paper presents an alternative approach that uses an artificial neural network to simulate flood-level dynamics in a river basin. The algorithm was developed in a decision support system environment in order to enable users to process the data. The decision support system is found to be useful due to its interactive nature, flexibility of approach and evolving graphical features, and can be adopted for any similar flood-level prediction task. The
main data processing includes the gauging station selection, input
generation, lead-time selection/generation, and length of prediction.
This program enables users to process the flood level data, to
train/test the model using various inputs and to visualize results. The
program code consists of a set of files, which can as well be modified
to match other purposes. This program may also serve as a tool for
real-time flood monitoring and process control. The running results indicate that the decision support system applied to flood-level prediction achieves encouraging results for the river basin under examination. The comparison of the model predictions with the observed data was satisfactory; the model is able to forecast the flood level up to 5 hours in advance with reasonable accuracy.
Abstract: This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. A naive way to reorder the words is to try all permutations; the problem is that, for a sentence of N words, the number of permutations is N!. The novelty of this method lies in the use of an efficient confusion-matrix technique for reordering the words, designed to reduce the search space of permuted sentences. The reduction of the search space is achieved using the statistical inference of N-grams. The results of this technique are very promising and show that the number of permuted sentences can be reduced by 98.16%. For experimental purposes, a test set of TOEFL sentences was used, and the results show that more than 95% of the sentences can be repaired using the proposed method.
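The scoring criterion — count trigram hits against a language model and keep the best-scoring reordering — can be sketched with the naive all-permutations baseline that the paper's confusion-matrix technique is designed to prune. The toy trigram set and example sentence below are illustrative, not the paper's TOEFL data:

```python
from itertools import permutations

def trigram_hits(words, model):
    """Number of trigrams of the sentence found in the language model."""
    return sum(1 for i in range(len(words) - 2) if tuple(words[i:i + 3]) in model)

def best_reordering(words, model):
    """Brute-force baseline: score all N! permutations and keep the best.
    (The paper's confusion-matrix technique prunes this space instead.)"""
    return max(permutations(words), key=lambda p: trigram_hits(p, model))

# Toy trigram model, standing in for counts learned from a corpus.
model = {("the", "cat", "sat"), ("cat", "sat", "down")}
garbled = ["sat", "the", "down", "cat"]
assert best_reordering(garbled, model) == ("the", "cat", "sat", "down")
```

The factorial blow-up is visible even here (4! = 24 candidates for four words), which motivates the reported 98.16% reduction of the permutation space.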
Abstract: This paper proposes a novel methodology for enabling debugging and tracing of production web applications without affecting their normal flow and functionality. The method enables developers and maintenance engineers to replace a set of existing resources, such as images, server-side scripts and cascading style sheets, with another set of resources on a per-session basis. The new resources are active only in the debug session; other sessions are not affected. This methodology helps developers trace defects, especially those that appear only in production environments, and explore the behaviour of the system. A realization of the proposed methodology has been implemented in Java.
Abstract: As the latest technological improvements spread, digital systems have become more popular than in the past. Alongside this growing demand for digital systems, content copying and attacks against digital cinema content have become a serious problem. To address this security problem, we propose "traceable watermarking using hash functions" for digital cinema systems. Digital cinema is a natural application for traceable watermarking, since it uses watermarking technology during content playback as well as content transmission. The watermark is embedded into randomly selected movie frames using the CRC-32 technique, a hash function. Because the embedding positions are distributed by the hash function, no party can remove or alter the watermark. Finally, our experimental results show that the proposed DWT watermarking method using CRC-32 outperforms conventional watermarking techniques in terms of robustness, image quality, and its simple but hard-to-break algorithm.
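The position-selection idea — deriving watermark frame indices from CRC-32 hashes so they look scattered to anyone who lacks the key material — can be sketched as follows. This is an illustrative scheme under assumed parameters (content ID, frame count, hash chaining), not the paper's exact construction, and note that CRC-32 is a checksum rather than a cryptographic hash; the abstract's usage is followed here:

```python
import zlib

def embed_positions(content_id: bytes, n_frames: int, n_marks: int):
    """Derive distinct watermark frame positions from chained CRC-32 hashes
    of a content ID (illustrative only, not the paper's exact scheme)."""
    positions = []
    seed = content_id
    while len(positions) < n_marks:
        h = zlib.crc32(seed)                        # CRC-32 as the hash function
        pos = h % n_frames                          # map hash to a frame index
        if pos not in positions:
            positions.append(pos)
        seed = h.to_bytes(4, "big") + content_id    # chain hashes for the next index
    return positions

# e.g. a 100-minute film at 24 fps has 144000 frames
pos = embed_positions(b"movie-0001", n_frames=144000, n_marks=5)
assert len(set(pos)) == 5 and all(0 <= p < 144000 for p in pos)
```

The same ID always yields the same positions, so a verifier can relocate the marks, while the positions appear randomly spread across the movie.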
Abstract: In this paper, the optimality of the solution to an existing real-world assignment problem, known as the seat assignment problem, using the Seat Assignment Method (SAM) is discussed. SAM is a method newly derived from three existing methods, the Hungarian Method, the Northwest Corner Method and the Least Cost Method, in a special way that makes it the simplest and fairest among the methods that solve the seat assignment problem.
Abstract: The development of Artificial Neural Networks (ANNs) is usually a slow process in which a human expert has to test several architectures until finding the one that achieves the best results for a given problem. This work presents a new technique that uses Genetic Programming (GP) to generate ANNs automatically. To do this, the GP algorithm had to be modified to work with graph structures, so that ANNs can be developed. The technique also allows simplified networks to be obtained that solve the problem with a small number of neurons. In order to measure the performance of the system and to compare the results with other ANN development methods based on Evolutionary Computation (EC) techniques, several tests were performed on problems drawn from some of the most widely used test databases. The comparisons show that the system achieves good results, comparable with existing techniques and, in most cases, better than them.
Abstract: This paper investigates the use of mobile phones and tablets for learning purposes among university students in Saudi Arabia. For this purpose, an extended Technology Acceptance Model (TAM) is proposed to analyze the adoption of mobile devices and smartphones by Saudi university students for accessing course materials, searching the web for information related to their discipline, sharing knowledge, completing assignments, etc.