Abstract: In April 2009, a new variant of Influenza A virus
subtype H1N1 emerged in Mexico and spread all over the world. Influenza A has three subtypes circulating in humans (H1N1, H1N2 and H3N2), while types B and C influenza tend to be associated with local or regional
epidemics. Preliminary genetic characterization of the influenza
viruses has identified them as swine influenza A (H1N1) viruses.
Nucleotide sequence analysis shows that the Haemagglutinin (HA) and Neuraminidase (NA) genes of the isolates are similar to each other, as are the majority of their genes to those of swine influenza viruses; the two genes coding for the neuraminidase (NA) and matrix (M) proteins are likewise similar to the corresponding genes of swine influenza. Sequence similarity between
the 2009 A (H1N1) virus and its nearest relatives indicates that its
gene segments have been circulating undetected for an extended
period. Maximum Composite Likelihood (MCL) analysis of the nucleic acid sequences and DNA empirical base frequencies reveal the phylogenetic relationship among HA genes of H1N1 viruses in GenBank that share high nucleotide sequence homology.
In this paper we used 16 HA nucleotide sequences from NCBI to compute sequence-similarity relationships of swine influenza A virus. With the MCL method the result is 28%; 36.64% for the optimal tree by sum of branch length; 35.62% for the interior-branch phylogeny of the Neighbor-Joining tree; 1.85% for the overall transition/transversion bias; and 8.28% for the overall mean distance.
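The transition/transversion statistic reported above can be computed by pairwise comparison of aligned sequences. A minimal sketch in Python, on toy sequences rather than the paper's 16 GenBank HA records:

```python
# Toy illustration of a transition/transversion count on aligned
# nucleotide sequences (the paper's HA sequences are not reproduced here).
PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def ti_tv_counts(seq1, seq2):
    """Count transitions (purine<->purine or pyrimidine<->pyrimidine
    substitutions) and transversions (purine<->pyrimidine) between
    two aligned sequences of equal length."""
    ti = tv = 0
    for a, b in zip(seq1, seq2):
        if a == b or "-" in (a, b):
            continue                      # identical site or gap: skip
        same_class = ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES)
        ti += same_class
        tv += not same_class
    return ti, tv

ti, tv = ti_tv_counts("AGCTAGCA", "GGCTAACA")
print(ti, tv)  # the two A<->G substitutions are transitions: 2 0
```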
Abstract: Mobile devices, which are progressively embedded in our everyday life, have created a new paradigm where they
interconnect, interact and collaborate with each other. This network
can be used for flexible and secure coordinated sharing. Grid computing, on the other hand, provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities. In this
paper, efforts are made to map the concepts of Grid on Ad-Hoc
networks because both exhibit similar kind of characteristics like
Scalability, Dynamism and Heterogeneity. In this context we
propose the "Mobile Ad-Hoc Services Grid" (MASGRID).
Abstract: From a set of shifted, blurred, and decimated images, super-resolution image reconstruction can recover a high-resolution image, so it has become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem. Prior knowledge about the image can be incorporated to make the problem well-posed, which gives rise to various regularization methods. In current regularization methods, however, the regularization parameter is in some cases selected by experience, while other techniques incur too heavy a computational cost in computing the parameter. In this paper, we construct a new super-resolution algorithm by transforming the solution of the original system into the solution of the nonlinear matrix equation X + A*X^{-1}A = I, and propose an inverse iterative method.
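The inverse iteration for the matrix equation X + A*X^{-1}A = I can be sketched as a simple fixed-point scheme. The sketch below (NumPy) uses an illustrative matrix A that is not from the paper, and assumes the standard starting point X_0 = I together with a contraction condition (||A|| < 1/2) under which the iteration is known to converge:

```python
import numpy as np

# Fixed-point ("inverse iterative") scheme X_{k+1} = I - A* X_k^{-1} A
# for the nonlinear matrix equation X + A* X^{-1} A = I.
def solve_nme(A, iters=100):
    n = A.shape[0]
    I = np.eye(n)
    X = I.copy()                          # standard starting point X_0 = I
    for _ in range(iters):
        X = I - A.conj().T @ np.linalg.inv(X) @ A
    return X

A = np.array([[0.2, 0.1], [0.0, 0.3]])    # toy matrix with small norm
X = solve_nme(A)
residual = np.linalg.norm(X + A.conj().T @ np.linalg.inv(X) @ A - np.eye(2))
print(residual < 1e-10)  # True: the equation is satisfied to machine precision
```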
Abstract: In digital signal processing it is important to
approximate multi-dimensional data by the method called rank
reduction, in which we reduce the rank of multi-dimensional data from
higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. Additionally, an outer product expansion extending SVD was proposed and implemented for multi-dimensional data, and it has
been widely applied to image processing and pattern recognition.
However, the multi-dimensional outer product expansion has high computational complexity and lacks orthogonality between the expansion terms. Therefore we have proposed an alternative method, the Third-Order Orthogonal Tensor Product Expansion (3-OTPE). 3-OTPE uses the power method instead of a nonlinear optimization method to decrease computing time. At the same time, the group
of L. De Lathauwer proposed the Higher-Order SVD (HOSVD), which is also developed as an extension of SVD to multi-dimensional data. 3-OTPE and HOSVD behave similarly in the rank reduction of multi-dimensional data: the computational results obtained with the two methods are sometimes identical and sometimes slightly different. In this paper, we compare 3-OTPE to HOSVD in accuracy of calculation and in computing time, and clarify the difference between these two methods.
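As a point of reference for the rank-reduction discussion, the 2-dimensional SVD baseline can be sketched in a few lines (NumPy; the data is a random toy matrix, not from the paper's experiments):

```python
import numpy as np

# Rank reduction of a matrix via truncated SVD: keep the r largest
# singular triplets, which gives the best rank-r approximation.
def svd_rank_reduce(M, r):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 8))  # rank-2 matrix
M2 = svd_rank_reduce(M, 2)
print(np.allclose(M, M2))  # True: a rank-2 matrix is reproduced exactly at r=2
```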
Abstract: Recent enhancements in the field of computing have increased the massive use of web-based electronic documents. Current
Copyright protection laws are inadequate to prove the ownership for
electronic documents and do not provide strong features against
copying and manipulating information from the web. This has
opened many channels for securing information and significant
evolutions have been made in the area of information security.
Digital Watermarking has developed into a very dynamic area of
research and has addressed challenging issues for digital content.
Watermarking can be visible (e.g., logos or signatures) or invisible (embedded through encoding and decoding). Many visible watermarking techniques
have been studied for text documents but there are very few for web
based text. XML files are used to trade information on the internet
and contain important information. In this paper, two invisible
watermarking techniques using Synonyms and Acronyms are
proposed for XML files to prove intellectual ownership and to achieve security. The techniques are analyzed against different attacks, and the capacity that can be embedded in the XML file is also measured. A comparative analysis of capacity is made for both methods. The system has been implemented in C#, and all tests were carried out in practice to obtain the results.
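The synonym-substitution idea can be illustrated with a toy sketch (Python rather than the paper's C# implementation; the synonym dictionary and the bit-assignment rule below are illustrative assumptions):

```python
# Toy sketch of invisible text watermarking by synonym substitution.
# The dictionary is an assumption; a real system would use a large,
# collision-free synonym table for the XML text nodes.
SYNONYMS = {"big": "large", "quick": "fast", "buy": "purchase"}
REVERSE = {v: k for k, v in SYNONYMS.items()}

def embed(text, bits):
    """Encode one bit per watermarkable word: bit '1' uses the synonym,
    bit '0' keeps the original word."""
    out, i = [], 0
    for w in text.split():
        if w in SYNONYMS and i < len(bits):
            out.append(SYNONYMS[w] if bits[i] == "1" else w)
            i += 1
        else:
            out.append(w)
    return " ".join(out)

def extract(text):
    """Read the bits back: synonym present -> '1', original word -> '0'."""
    return "".join("1" if w in REVERSE else "0"
                   for w in text.split() if w in SYNONYMS or w in REVERSE)

marked = embed("a big dog made a quick move to buy food", "101")
print(extract(marked))  # 101
```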
Abstract: The analysis of heartbeat signals to detect arrhythmias and life-threatening conditions is highly essential in today's world, and this analysis can be accomplished by advanced non-linear processing methods for accurate analysis of the complex signals of heartbeat dynamics.
In this perspective, recent developments in the field of multiscale
information content have led to the Microcanonical Multiscale
Formalism (MMF). We show that such framework provides several
signal analysis techniques that are especially adapted to the
study of heartbeat dynamics. In this paper, we present first results on whether the considered heartbeat dynamics signals have multiscale properties, by computing local predictability exponents (LPEs) and the Unpredictable Points Manifold (UPM), and thereby the singularity spectrum.
Abstract: Finding synchronizing sequences for finite automata is a very important problem in many practical applications (part orienters in industry, the reset problem in biocomputing theory, network issues, etc.). The problem of finding the shortest synchronizing sequence is NP-hard, so polynomial algorithms can probably work only as heuristics. In this paper we propose two versions of polynomial algorithms which work better than the well-known Eppstein's Greedy and Cycle algorithms.
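For very small automata, a shortest synchronizing word can be found exactly by breadth-first search over subsets of states. This exponential baseline (not the paper's polynomial heuristics) is sketched below on the classical Černý automaton C_4, whose shortest synchronizing word is known to have length (4-1)^2 = 9:

```python
from collections import deque

# Exponential BFS baseline for a shortest synchronizing word of a small DFA.
# States are 0..n-1; delta[s][a] gives the successor of state s on letter a.
def shortest_sync_word(delta, alphabet):
    start = frozenset(delta)              # the set of all states
    seen, queue = {start}, deque([(start, "")])
    while queue:
        S, w = queue.popleft()
        if len(S) == 1:
            return w                      # all states have been merged
        for a in alphabet:
            T = frozenset(delta[s][a] for s in S)
            if T not in seen:
                seen.add(T)
                queue.append((T, w + a))
    return None                           # the automaton is not synchronizing

# Cerny automaton C_4: 'a' cycles the states, 'b' merges states 3 and 0.
delta = {0: {"a": 1, "b": 0}, 1: {"a": 2, "b": 1},
         2: {"a": 3, "b": 2}, 3: {"a": 0, "b": 0}}
word = shortest_sync_word(delta, "ab")
print(len(word))  # 9
```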
Abstract: The rapid improvement of microprocessors and networks has made it possible for PC clusters to compete with conventional supercomputers. Many high-throughput applications can be served by current desktop PCs, especially those in PC classrooms, leaving the supercomputers for large-scale high-performance parallel computations. This paper presents our development of an automated deployment mechanism for cluster computing that utilizes the computing power of PCs such as those residing in PC classrooms. After deployment, these PCs can be transformed into a pre-configured cluster computing resource immediately, without touching the existing education/training environment installed on them. Thus, training activities are not affected by this additional activity of harvesting idle computing cycles. The time and manpower required to build and manage a computing platform in geographically distributed PC classrooms can also be reduced by this development.
Abstract: Market-based models are frequently used for resource allocation on the computational grid. However, as the size of
the grid grows, it becomes difficult for the customer to negotiate
directly with all the providers. Middle agents are introduced to
mediate between the providers and customers and facilitate the
resource allocation process. The most frequently deployed middle
agents are the matchmakers and the brokers. The matchmaking agent
finds possible candidate providers who can satisfy the requirements
of the consumers, after which the customer directly negotiates with
the candidates. The broker agents mediate the negotiation with the providers in real time.
In this paper we present a new type of middle agent, the marketmaker. Its operation is based on two parallel processes: through the investment process the marketmaker acquires resources and resource reservations in large quantities, while through the resale process
it sells them to the customers. The operation of the marketmaker
is based on the fact that through its global view of the grid it can
perform a more efficient resource allocation than the one possible in
one-to-one negotiations between the customers and providers.
We present the operation and algorithms governing the operation
of the marketmaker agent, contrasting it with the matchmaker and
broker agents. Through a series of simulations in the task oriented
domain we compare the operation of the three agent types. We find that the use of the marketmaker agent leads to better performance in the allocation of large tasks and a significant reduction of the messaging overhead.
Abstract: The work reported in this paper proposes
Swarm-Array computing, a novel technique inspired by swarm
robotics, and built on the foundations of autonomic and parallel
computing. The approach aims to apply autonomic computing
constructs to parallel computing systems and in effect achieve the
self-ware objectives that describe self-managing systems. The
constitution of swarm-array computing comprising four constituents,
namely the computing system, the problem/task, the swarm and the
landscape is considered. Approaches that bind these constituents
together are proposed. Space applications employing FPGAs are
identified as a potential area for applying swarm-array computing for
building reliable systems. The feasibility of the proposed approach is
validated on the SeSAm multi-agent simulator and landscapes are
generated using the MATLAB toolkit.
Abstract: Autoregressive Moving Average (ARMA) is a parametric method of signal representation. It is suitable for problems in which the signal can be modeled by explicit known source functions with a few adjustable parameters. Various methods have been suggested for determining the coefficients, among them Prony, Pade, Autocorrelation, Covariance and, most recently, the Artificial Neural Network technique. In this paper, the Artificial Neural Network (ANN) technique is compared with some known and widely accepted techniques. The comparison is based entirely on the values of the coefficients obtained. The results show that ANN also gives accurate results in computing the coefficients of an ARMA system.
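The autocorrelation route named above can be sketched via the Yule-Walker equations for the AR part (a simplified NumPy illustration on synthetic data, not the paper's actual comparison):

```python
import numpy as np

# Autocorrelation (Yule-Walker) estimation of AR coefficients: build the
# Toeplitz system R a = r from sample autocorrelations and solve for a.
def yule_walker(x, order):
    x = np.asarray(x, float) - np.mean(x)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)]) / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])      # AR coefficients a_1..a_p

# Synthesize an AR(2) process x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + e_t.
rng = np.random.default_rng(1)
x = np.zeros(20000)
e = rng.standard_normal(20000)
for t in range(2, 20000):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + e[t]
a = yule_walker(x, 2)
print(np.round(a, 1))  # close to the true coefficients [0.6, -0.2]
```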
Abstract: This paper is concerned with an improved algorithm
based on the piecewise-smooth Mumford and Shah (MS) functional
for efficient and reliable segmentation. In order to speed up convergence, an additional force is introduced at each time step to further drive the evolution of the curves, instead of the evolution being driven only by the extensions of the complementary functions u+ and u-. In our
scheme, furthermore, the piecewise-constant MS functional is
integrated to generate the extra force based on a temporary image that
is dynamically created by computing the union of u+ and u- during segmentation. Therefore, some drawbacks of the original algorithm, such as small spurious objects generated by noise and the local-minimum problem, are eliminated or alleviated. The resulting algorithm has been implemented in Matlab and Visual C++, and its efficiency has been demonstrated on several cases.
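The piecewise-constant MS region force that supplies the extra driving term can be sketched as follows (a Chan-Vese-style illustration on a toy image; the masks and the sign convention are assumptions, not the paper's exact scheme):

```python
import numpy as np

# Piecewise-constant MS (Chan-Vese) region force:
# F = (I - c_out)^2 - (I - c_in)^2, where c_in and c_out are the mean
# intensities inside and outside the current curve.
def region_force(image, inside_mask):
    c_in = image[inside_mask].mean()
    c_out = image[~inside_mask].mean()
    return (image - c_out) ** 2 - (image - c_in) ** 2

# Toy image: bright square on a dark background; initial mask lies inside it.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
mask = np.zeros_like(img, dtype=bool)
mask[12:20, 12:20] = True
F = region_force(img, mask)
print(F[16, 16] > 0 and F[0, 0] < 0)  # True: the force is positive on the
                                      # object and negative on the background
```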
Abstract: This paper is concerned with the design and implementation of MICOSim, an event-driven simulator written in Java for evaluating the performance of Grid entities (users, brokers and resources) under different scenarios such as varying the numbers of users, resources and brokers and varying their specifications and employed strategies.
Abstract: The world is moving rapidly toward the deployment
of information and communication systems. Nowadays, computing
systems, with their fast growth, are found everywhere, and one of the main challenges for these systems is the increasing number of attacks and security threats against them. Thus, capturing, analyzing and verifying security requirements becomes a very important activity in the development process of computing systems, especially in developing
systems such as banking, military and e-business systems. For
developing every system, a process model which includes a process,
methods and tools is chosen. The Rational Unified Process (RUP) is
one of the most popular and complete process models which is used
by developers in recent years. This process model should be extended to be used in developing secure software systems. In this
paper, the Requirements Discipline of RUP is extended to improve RUP for developing secure software systems. The proposed extensions add and integrate a number of Activities, Roles,
and Artifacts to RUP in order to capture, document and model threats
and security requirements of system. These extensions introduce a
group of clear, stepwise activities to developers. By following these activities, developers ensure that security requirements are captured and modeled. These models are used in design, implementation and test activities.
Abstract: In this paper we introduce a new data-oriented model of the uniform random variable that is well matched to computing systems. Owing to this conformity with the structure of current computers, the model can be used efficiently in statistical inference.
Abstract: Generation system reliability assessment is an
important task which can be performed using deterministic or
probabilistic techniques. The probabilistic approaches have
significant advantages over the deterministic methods. However,
more complicated modeling is required by the probabilistic
approaches. Power generation model is a basic requirement for this
assessment. One form of the generation models is the well known
capacity outage probability table (COPT). Different analytical
techniques have been used to construct the COPT. These approaches
require considerable mathematical modeling of the generating units.
The units' models are combined to build the COPT, which adds further burden to the process of creating it. The Decimal to Binary Conversion (DBC) technique is widely and commonly applied in electronic systems and computing. This paper proposes a novel
utilization of the DBC to create the COPT without engaging in
analytical modeling or time-consuming simulations. The simple binary representation, "0" and "1", is used to model the states of generating units. The proposed technique is proven to be an effective
approach to build the generation model.
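The DBC idea can be sketched directly: each integer from 0 to 2^n - 1, read as an n-bit pattern, encodes one combination of unit outages, and the COPT is accumulated over all patterns (the unit data below is illustrative, not from the paper):

```python
# Build a capacity outage probability table (COPT) by decimal-to-binary
# conversion: bit i of each state integer says whether unit i is on outage.
def build_copt(capacities, fors):
    """capacities[i]: MW of unit i; fors[i]: its forced outage rate."""
    n = len(capacities)
    copt = {}
    for state in range(2 ** n):
        outage, prob = 0, 1.0
        for i in range(n):
            if (state >> i) & 1:          # bit i set -> unit i is on outage
                outage += capacities[i]
                prob *= fors[i]
            else:
                prob *= 1.0 - fors[i]
        copt[outage] = copt.get(outage, 0.0) + prob
    return dict(sorted(copt.items()))

# Three illustrative units: 50 MW (FOR 0.02), 50 MW (0.02), 100 MW (0.04).
table = build_copt([50, 50, 100], [0.02, 0.02, 0.04])
print(round(table[0], 6), round(sum(table.values()), 6))  # 0.921984 1.0
```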
Abstract: Message Passing Interface is widely used for Parallel
and Distributed Computing. MPICH and LAM are popular open-source MPI implementations available to the parallel computing community; there are also commercial MPI implementations that perform better than MPICH. In this paper, we discuss a commercial Message Passing Interface, C-MPI (C-DAC Message Passing Interface). C-MPI is an MPI optimized for CLUMPS. It is found to be faster and more robust than MPICH. We have compared the performance of C-MPI and MPICH on a Gigabit Ethernet network.
Abstract: One of the major problems in the genomic field is performing sequence comparison on DNA and protein sequences. Executing sequence comparison on DNA and protein data is a computationally intensive task. Sequence comparison is the basic step for all algorithms assessing protein sequence similarity. Parallel computing is an attractive solution to provide the computational power needed to speed up the lengthy process of sequence comparison. Our main research is to enhance the protein sequence comparison algorithm using the dynamic programming method. In our approach, we parallelize the dynamic programming algorithm using a multithreaded program to perform the sequence comparison, and we also developed a protein database distributed among many PCs using Remote Method Invocation (RMI). As a result, we show how different sizes of protein sequence data and the computation of the scoring matrix for these protein sequences on different numbers of processors affect the processing time and speedup, as opposed to sequential processing.
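The dynamic-programming core of such a comparison can be sketched as a global-alignment scoring matrix (a sequential toy version; the multithreading and RMI distribution are omitted, and the scoring parameters are illustrative):

```python
# Global-alignment (Needleman-Wunsch style) scoring matrix: D[i][j] is the
# best score aligning the first i characters of s with the first j of t.
def align_score(s, t, match=2, mismatch=-1, gap=-2):
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * gap                 # aligning a prefix against gaps
    for j in range(1, n + 1):
        D[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = D[i - 1][j - 1] + (match if s[i - 1] == t[j - 1] else mismatch)
            D[i][j] = max(diag, D[i - 1][j] + gap, D[i][j - 1] + gap)
    return D[m][n]

print(align_score("GATTACA", "GATTACA"))  # 14: seven matches at +2 each
```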
Abstract: The massive proliferation of affordable computers, Internet broadband connectivity and rich education content has created a global phenomenon in which information and communication technology (ICT) is being used to transform education. Therefore, there is a need to redesign the educational system to meet these needs better. The advent of computers with sophisticated software has made it possible to solve many complex problems very fast and at a lower cost. This paper introduces the characteristics of current E-Learning, then analyzes the concept of cloud computing and describes the architecture of a cloud computing platform by combining it with the features of E-Learning. The authors introduce cloud computing to e-learning, build an e-learning cloud, and actively research and explore it from the following aspects: architecture, construction method and external interface of the model.
Abstract: This paper presents a context-aware sensor grid framework for agriculture and its design challenges. The use of sensor
networks in the domain of agriculture is not new. However, due to
the unavailability of any common framework, solutions that are
developed in this domain are location, environment and problem
dependent. To address the need for a common framework for agriculture, a Context-Aware Sensor Grid Framework is proposed. Owing to its capability of adjusting to location and environment, it will be helpful in developing solutions for the majority of problems related to irrigation, pesticide spraying, fertilizer use, and regular monitoring of plots and yield. The proposed framework is composed of a three-layer architecture comprising a context-aware application layer, a grid middleware layer and a sensor network layer.