Abstract: An edge is an abrupt variation of brightness in an image. Edge detection is useful in many application areas, such as identifying forests and rivers in satellite images or detecting broken bones in medical images. This paper discusses finding the edges of multiple aerial images in parallel. The proposed work was tested on 38 images: 37 color and one monochrome. The time taken to process N images in parallel is equivalent to the time taken to process one image sequentially. The Message Passing Interface (MPI) and the Open Computing Language (OpenCL) are used to achieve task-level and pixel-level parallelism, respectively.
Abstract: There are two major variants of the Simplex Algorithm: the revised method and the standard, or tableau, method. Today, all serious implementations are based on the revised method because it is more efficient for sparse linear programming problems. However, a number of applications lead to dense linear programs, so our aim in this paper is to present computational results on a parallel implementation of the dense Simplex Method. Our implementation runs on an SMP cluster and uses the C programming language and the Message Passing Interface (MPI). Preliminary computational results on randomly generated dense linear programs support our approach.
Abstract: Gröbner basis calculation forms a key part of computational commutative algebra and many other areas. One important ramification of the theory of Gröbner bases is that it provides a means to solve systems of non-linear equations. It has therefore become very important in areas where the solution of non-linear equations is needed, for instance in algebraic cryptanalysis and coding theory. This paper explores a parallel-distributed implementation of Gröbner basis calculation over GF(2), using the Buchberger algorithm. OpenMP and MPI constructs in C have been used to implement the scheme. Relevant results are furnished to compare the performance of the standalone and hybrid (parallel-distributed) implementations.
Abstract: Use of the Internet and the World-Wide-Web (WWW) has become widespread in recent years, and mobile agent technology has proliferated at an equally rapid rate. In this scenario, load balancing becomes important for P2P systems. Besides, P2P systems can be highly heterogeneous: they may consist of peers ranging from old desktops to powerful servers connected to the Internet through high-bandwidth lines. Various load-balancing policies have been proposed. A primitive one uses the Message Passing Interface (MPI). Its wide availability and portability make it an attractive choice; however, communication is sometimes inefficient when implemented with the primitives provided by MPI. In this scenario we use the concept of mobile agents, because the mobile agent (MA) based approach has the merits of high flexibility, efficiency, low network traffic, low communication latency, and highly asynchronous operation. In this study we present a decentralized load-balancing scheme using mobile agent technology in which, when a node is overloaded, tasks migrate to less utilized nodes so as to share the workload. The decision of which nodes receive a migrating task is made in real time by defining certain load-balancing policies. These policies are executed on PMADE (a Platform for Mobile Agent Distribution and Execution) in a decentralized manner using JuxtaNet, and various load-balancing metrics are discussed.
Abstract: Sorting has received the most attention among all computational tasks over the past years because sorted data is at the heart of many computations. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. Many parallel sorting algorithms have been investigated for a variety of parallel computer architectures. In this paper, three parallel sorting algorithms have been implemented and compared in terms of their overall execution time. The algorithms implemented are odd-even transposition sort, parallel merge sort, and parallel rank sort. A Cluster of Workstations (Windows Compute Cluster) has been used to compare the implemented algorithms. The C# programming language is used to develop the sorting algorithms, and the MPI (Message Passing Interface) library has been selected to establish communication and synchronization between processors. The time complexity of each parallel sorting algorithm is also stated and analyzed.