Abstract: Distributed computing systems are usually considered the most suitable model for practical solutions of many parallel algorithms. In this paper an enhanced distributed system is presented to improve the time complexity of Binary Indexed Trees (BIT). The proposed system uses multiple uniform processors with identical architectures and a specially designed distributed memory system. Analysis shows that the system reduces the time complexity of the read query to O(log(log(N))) and the update query to constant time, whereas the naive solution has a time complexity of O(log(N)) for both queries. The system was implemented and simulated using the VHDL and Verilog hardware description languages, with Xilinx ISE 10.1 as the development environment and ModelSim 6.1c as the simulation tool. The simulation showed that the overhead resulting from the wiring and communication between the system fragments is negligible, which makes it practical to reach the maximum speedup offered by the proposed model.
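The classical, sequential BIT that serves as the abstract's O(log N) baseline can be sketched as follows. This is a minimal illustration of the standard data structure only; the paper's distributed variant is not reproduced here.

```python
class BIT:
    """Classical Binary Indexed (Fenwick) tree: O(log N) update
    and prefix-sum query. This is the sequential baseline the
    paper improves upon."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internal array

    def update(self, i, delta):
        # Add delta at position i; climb to parents by adding the lowest set bit.
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)

    def query(self, i):
        # Prefix sum of positions 1..i; descend by clearing the lowest set bit.
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s

bit = BIT(8)
for pos, val in [(1, 3), (4, 5), (5, 2)]:
    bit.update(pos, val)
print(bit.query(4))  # 3 + 5 = 8
print(bit.query(8))  # 3 + 5 + 2 = 10
```

Both loops touch at most one node per set bit of the index, which is where the O(log N) bound for each query comes from.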
Abstract: This paper deals with the combination of OSGi and
cloud computing. Both technologies belong primarily to the field of
distributed computing. The paper therefore discusses how different
approaches from different institutions work, and compares
these approaches with one another.
Abstract: Scheduling of diversified service requests in
distributed computing is a critical design issue. A cloud is a type of
parallel and distributed system consisting of a collection of
interconnected virtual computers; it comprises not only clusters and
grids but also next-generation data centers. The paper
proposes an initial heuristic algorithm that applies a modified ant colony
optimization approach to the diversified service allocation and
scheduling mechanism in the cloud paradigm. The proposed optimization
method aims to minimize the total scheduling time required to service all
the diversified requests across the different resource allocators
available in a cloud computing environment.
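As a rough illustration of the kind of ant-colony assignment the abstract describes, the following toy sketch places tasks on machines by sampling pheromone-weighted probabilities and reinforcing the best schedule found so far. The parameter values, the load-based heuristic, and the simple makespan objective are all assumptions for illustration, not the paper's actual formulation.

```python
import random

def aco_schedule(task_times, n_machines, n_ants=20, n_iters=30,
                 evaporation=0.5, seed=1):
    """Toy ant-colony assignment of tasks to machines, minimising makespan.
    A simplified illustration only -- a real modified ACO for cloud
    service allocation involves more state (resource types, request classes)."""
    random.seed(seed)
    n = len(task_times)
    # pheromone[t][m]: desirability of placing task t on machine m
    pheromone = [[1.0] * n_machines for _ in range(n)]
    best_assign, best_makespan = None, float('inf')
    for _ in range(n_iters):
        for _ant in range(n_ants):
            loads = [0.0] * n_machines
            assign = []
            for t in range(n):
                # Probability proportional to pheromone times a heuristic
                # that prefers lightly loaded machines.
                weights = [pheromone[t][m] / (1.0 + loads[m])
                           for m in range(n_machines)]
                m = random.choices(range(n_machines), weights=weights)[0]
                assign.append(m)
                loads[m] += task_times[t]
            makespan = max(loads)
            if makespan < best_makespan:
                best_assign, best_makespan = assign, makespan
        # Evaporate all trails, then reinforce the best-so-far assignment.
        for t in range(n):
            for m in range(n_machines):
                pheromone[t][m] *= evaporation
            pheromone[t][best_assign[t]] += 1.0 / best_makespan
    return best_assign, best_makespan

assign, makespan = aco_schedule([4, 2, 7, 3, 5, 1], 2)
```

For these six tasks on two machines the total work is 22, so any schedule's makespan lies between 11 (perfect balance) and 22 (everything on one machine).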
Abstract: Decrease in hardware costs and advances in computer
networking technologies have led to increased interest in the use of
large-scale parallel and distributed computing systems. One of the
biggest issues in such systems is the development of effective
techniques/algorithms for the distribution of the processes/load of a
parallel program on multiple hosts to achieve goal(s) such as
minimizing execution time, minimizing communication delays,
maximizing resource utilization and maximizing throughput.
Substantive research using queueing analysis, and assuming job
arrivals that follow a Poisson pattern, has shown that in a multi-host
system the probability of one host being idle while another host
has multiple jobs queued up can be very high. Such imbalances in
system load suggest that performance can be improved either by
transferring jobs from the currently heavily loaded hosts to the lightly
loaded ones or by distributing the load evenly/fairly among the hosts. The
algorithms that achieve these goals are known as load balancing
algorithms. They fall into two basic categories -
static and dynamic. Whereas static load balancing (SLB)
algorithms make task-to-processor assignment decisions based on
the average estimated values of process execution times and
communication delays at compile time, dynamic load balancing
(DLB) algorithms adapt to changing conditions and make
decisions at run time.
The objective of this paper is to identify qualitative
parameters for comparing the algorithms described above. In the future this
work can be extended to develop an experimental environment in which to
study these load balancing algorithms quantitatively, based on the
comparative parameters.
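The static/dynamic distinction drawn above can be illustrated with a minimal sketch. The job costs are hypothetical; real SLB algorithms would use estimated execution times and communication delays rather than round-robin, and real DLB algorithms use richer run-time state than a single load counter.

```python
def static_balance(jobs, n_hosts):
    """Static (compile-time) assignment: a fixed round-robin mapping
    decided without looking at actual run-time load."""
    queues = [[] for _ in range(n_hosts)]
    for i, job in enumerate(jobs):
        queues[i % n_hosts].append(job)
    return queues

def dynamic_balance(jobs, n_hosts):
    """Dynamic (run-time) assignment: each arriving job joins the
    currently least-loaded host, adapting to the observed load."""
    loads = [0.0] * n_hosts
    queues = [[] for _ in range(n_hosts)]
    for job in jobs:
        h = loads.index(min(loads))
        queues[h].append(job)
        loads[h] += job
    return queues

jobs = [9, 1, 1, 1, 8, 1]  # hypothetical job execution times
sq = static_balance(jobs, 2)
dq = dynamic_balance(jobs, 2)
# Makespan = load of the busiest host; lower is better.
static_makespan = max(sum(q) for q in sq)
dynamic_makespan = max(sum(q) for q in dq)
```

With this skewed workload the round-robin static mapping piles both large jobs onto one host (makespan 18), while the dynamic policy spreads them (makespan 11), mirroring the idle-host imbalance the abstract describes.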
Abstract: The main mission of Ezilla is to provide a friendly
interface for accessing virtual machines and quickly deploying a high
performance computing environment. Ezilla has been developed by
the Pervasive Computing Team at the National Center for High-performance
Computing (NCHC). Ezilla integrates Cloud middleware,
virtualization technology, and a Web-based Operating System (WebOS)
to form a virtual computer in a distributed computing environment. In
order to scale the dataset and improve speed, we propose a sensor
observation system to deal with a huge amount of data in a
Cassandra database. The sensor observation system builds on
Ezilla to store raw sensor data in a distributed database. We adopt the
Ezilla Cloud service to create virtual machines and log into each virtual
machine to deploy the sensor observation system. Integrating the
sensor observation system with Ezilla makes it possible to quickly deploy the
experiment environment and access a huge amount of data in a distributed
database whose replication mechanism protects the data
against loss.
Abstract: The problem of mapping tasks onto a computational grid with the aim to minimize the power consumption and the makespan subject to the constraints of deadlines and architectural requirements is considered in this paper. To solve this problem, we propose a solution from cooperative game theory based on the concept of Nash Bargaining Solution. The proposed game theoretical technique is compared against several traditional techniques. The experimental results show that when the deadline constraints are tight, the proposed technique achieves superior performance and reports competitive performance relative to the optimal solution.
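The Nash Bargaining Solution the abstract builds on picks, among the feasible outcomes that improve on a disagreement point, the one maximizing the product of the players' utility gains. The sketch below illustrates that selection rule on invented candidate points; the paper's actual formulation maps tasks to grid machines under deadline and architectural constraints, which is not modeled here.

```python
def nash_bargaining(points, disagreement):
    """Select the feasible outcome maximising the Nash product
    prod_i (u_i - d_i) over points that strictly dominate the
    disagreement point d. A toy illustration of the NBS concept."""
    d = disagreement
    best, best_prod = None, -1.0
    for u in points:
        if all(ui > di for ui, di in zip(u, d)):
            prod = 1.0
            for ui, di in zip(u, d):
                prod *= ui - di
            if prod > best_prod:
                best, best_prod = u, prod
    return best

# Hypothetical utilities (energy saving, makespan reduction) of three
# candidate task mappings, relative to a disagreement point of (0, 0).
candidates = [(1.0, 5.0), (3.0, 3.0), (5.0, 1.0)]
choice = nash_bargaining(candidates, (0.0, 0.0))
```

Note how the product criterion favors the balanced outcome (3, 3) with Nash product 9 over the lopsided ones with product 5, which is the fairness property that makes the NBS attractive for trading power against makespan.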
Abstract: This paper proposes a novel system for monitoring the
health of underground pipelines. Some of these pipelines transport
dangerous contents and any damage incurred might have catastrophic
consequences. However, most of this damage is unintentional and
usually a result of surrounding construction activities. In order to
prevent these potential damages, monitoring systems are
indispensable. This paper focuses on acoustically recognizing road
cutters, since they precede most construction activities in modern
cities. Acoustic recognition can be easily achieved by installing a
distributed computing sensor network along the pipelines and using
smart sensors to “listen” for potential threats; if there is a real threat,
they raise some form of alarm. For efficient pipeline monitoring, a novel
monitoring approach is proposed. Principal Component Analysis
(PCA) was studied and applied. Eigenvalues were regarded as a
special signature that characterizes a sound sample, and were
thus used as the feature vector for sound recognition. The denoising
ability of PCA makes it robust to noise interference. A one-class
SVM was used as the classifier. On-site experiment results show that the
proposed PCA and SVM based acoustic recognition system will be
very effective with a low tendency for raising false alarms.
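As a toy stand-in for the eigenvalue signature described above, the sketch below computes the eigenvalues of a 2x2 sample covariance matrix in closed form. The real system would operate on much higher-dimensional acoustic frames and feed the resulting eigenvalue vector to the one-class SVM; the sample points here are invented.

```python
import math

def eigenvalue_signature(samples):
    """Eigenvalues of the 2x2 sample covariance matrix of (x, y) points,
    largest first -- a tiny stand-in for a PCA-based feature vector."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    # Sample covariance matrix entries (unbiased, divide by n - 1).
    cxx = sum((x - mx) ** 2 for x, _ in samples) / (n - 1)
    cyy = sum((y - my) ** 2 for _, y in samples) / (n - 1)
    cxy = sum((x - mx) * (y - my) for x, y in samples) / (n - 1)
    # Closed-form eigenvalues of [[cxx, cxy], [cxy, cyy]]:
    # lambda = tr/2 +- sqrt(tr^2/4 - det)
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    root = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return (tr / 2 + root, tr / 2 - root)

# Points spread mostly along x: the dominant eigenvalue captures that
# direction, the second one only the small residual noise.
lam1, lam2 = eigenvalue_signature([(0, 0), (2, 0.1), (4, -0.1), (6, 0.2)])
```

The gap between the two eigenvalues is what gives PCA its denoising character: discarding the small-eigenvalue components drops mostly noise while the signature retains the sound's dominant structure.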
Abstract: A real-time distributed computing system uses
heterogeneously networked computers to solve a single problem, so
coordinating activities among the computers is a complex task,
and deadlines make it more complex still. Performance depends on many
factors such as traffic workload, database system architecture,
underlying processors, disk speeds, etc. A simulation study has been
performed to analyze performance under different transaction
scheduling settings: different workloads, arrival rates, priority policies,
altered slack factors, and a preemptive policy. The performance metric
of the experiments is the missed percent, i.e., the percentage of
transactions that the system is unable to complete. The throughput of
the system depends on the transaction arrival rate, and
performance can be enhanced by altering the slack factor value.
Tuning the slack value of a transaction helps prevent some
transactions from being killed or aborted. Under the preemptive policy,
many extra executions of new transactions can be carried out.
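A minimal single-server model of the missed-percent metric and the effect of the slack factor might look like the sketch below. The deadline formula deadline = arrival + slack_factor * exec_time is a common convention in real-time transaction studies, assumed here rather than taken from the paper, and the workload is invented.

```python
def missed_percent(transactions, slack_factor):
    """Run transactions FCFS on a single server. Each transaction is
    (arrival, exec_time) with deadline = arrival + slack_factor * exec_time.
    Returns the percentage of transactions finishing past their deadline --
    a toy version of the 'missed percent' performance metric."""
    clock = 0.0
    missed = 0
    for arrival, exec_time in sorted(transactions):
        start = max(clock, arrival)       # wait for the server to free up
        finish = start + exec_time
        deadline = arrival + slack_factor * exec_time
        if finish > deadline:
            missed += 1
        clock = finish
    return 100.0 * missed / len(transactions)

# Hypothetical workload: a burst of three transactions, then a late one.
txns = [(0.0, 2.0), (0.5, 2.0), (1.0, 2.0), (6.5, 1.0)]
tight = missed_percent(txns, slack_factor=1.5)
loose = missed_percent(txns, slack_factor=3.0)
```

With the tight slack factor the two queued burst transactions miss their deadlines (missed percent 50), while the looser slack absorbs the queueing delay (missed percent 0), which is the slack-tuning effect the abstract reports.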
Abstract: The Message Passing Interface (MPI) is widely used for parallel
and distributed computing. MPICH and LAM are popular open
source MPI implementations available to the parallel computing community;
there are also commercial MPIs that perform better than MPICH.
In this paper, we discuss a commercial Message Passing Interface, C-MPI
(C-DAC Message Passing Interface). C-MPI is an optimized
MPI for CLUMPS. It is found to be faster and more robust
than MPICH. We have compared the performance of C-MPI and MPICH
on a Gigabit Ethernet network.
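MPI implementations are commonly compared with a ping-pong microbenchmark that bounces a message between two ranks and reports the mean round-trip time. The sketch below mimics that measurement idea with two threads and in-memory queues; it is not MPI code and says nothing about the actual C-MPI or MPICH numbers, serving only to show the structure of the benchmark.

```python
import threading
import queue
import time

def pingpong_latency(n_rounds=200):
    """Mean round-trip time of a 'message' bounced between two threads
    via queues -- a shared-memory stand-in for the MPI ping-pong
    microbenchmark used to compare message-passing implementations."""
    to_peer, to_main = queue.Queue(), queue.Queue()

    def peer():
        for _ in range(n_rounds):
            msg = to_peer.get()   # receive ping
            to_main.put(msg)      # send pong back

    t = threading.Thread(target=peer)
    t.start()
    start = time.perf_counter()
    for i in range(n_rounds):
        to_peer.put(i)            # ping
        to_main.get()             # wait for pong
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / n_rounds     # mean round-trip time in seconds

rtt = pingpong_latency()
```

In a real MPI benchmark the two endpoints would be separate processes using MPI_Send/MPI_Recv over the network, and the message size would be varied to separate latency from bandwidth effects.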
Abstract: Designing, implementing, and debugging concurrency
control algorithms in a real system is a complex, tedious, and error-prone
process. Further, understanding concurrency control
algorithms and distributed computations is itself a difficult task.
Visualization can help with both of these problems. Thus, we have
developed an exploratory environment in which people can prototype
and test various versions of concurrency control algorithms, study
and debug distributed computations, and view performance statistics
of distributed systems. In this paper, we describe the exploratory
environment and show how it can be used to explore concurrency
control algorithms for the interactive steering of distributed
computations.