Abstract: Resilience is an important system property that relies
on the ability of a system to automatically recover from a degraded
state so as to continue providing its services. Resilient systems have
the means of detecting faults and failures with the added capability of
automatically restoring their normal operations. Mastering resilience
in the domain of Cyber-Physical Systems is challenging due to the
interdependence of hybrid hardware and software components, along
with physical limitations, laws, regulations and standards, among
others. In order to overcome these challenges, this paper presents a
modeling approach, based on the concept of Dynamic Cells, tailored
to the management of Smart Grids. Additionally, a heuristic algorithm
that works on top of the proposed modeling approach, to find resilient
configurations, has been defined and implemented. More specifically,
the model supports a flexible representation of Smart Grids and
the algorithm is able to manage, at different abstraction levels, the
resource consumption of individual grid elements in the presence of
faults and failures. Finally, the proposal is evaluated in a test
scenario that demonstrates the effectiveness of the approach in complex
settings where adequate solutions are difficult to find.
Abstract: The grid of computing nodes has emerged as a
representative means of connecting distributed computers and
resources scattered all over the world for the purposes of computing
and distributed storage. Since fault tolerance is complicated by the
fluctuating availability of resources in a decentralized grid
environment, it can be combined with replication in data grids. The
objective of our work is to present fault tolerance in data grids
with data replication-driven model based on clustering. The
performance of the protocol is evaluated with Omnet++ simulator.
The computational results show the efficiency of our protocol in
terms of recovery time and the number of processes involved in rollbacks.
Abstract: Data replication in data grid systems is one of the important solutions for improving availability, scalability, and fault tolerance. However, this technique also raises issues of its own, such as maintaining replica consistency. Moreover, as grid environments are highly dynamic, some nodes can become more heavily loaded than others and eventually turn into bottlenecks. The main idea of our work is to propose a complementary solution that combines replica consistency maintenance with a dynamic load-balancing strategy to improve access performance in a simulated grid environment.
Abstract: Grid environments consist of the volatile integration
of discrete heterogeneous resources. The notion of the Grid is to
unite different users and organisations and pool their resources into
one large computing platform where they can harness, inter-operate,
collaborate and interact. If the Grid Community is to achieve this
objective, then participants (Users and Organisations) need to be
willing to donate or share their resources and permit other
participants to use their resources. Resources do not have to be
shared at all times, since it may result in users not having access to
their own resources. The idea of reward-based computing was
developed to address the sharing problem in a pragmatic manner.
Participants are offered a reward to donate their resources to the
Grid. A reward may include monetary recompense or a pro rata share
of available resources when constrained. This latter point may imply
a quality of service, which in turn may require some globally agreed
reservation mechanism. This paper presents a platform for
economy-based computing using the WebCom Grid middleware. Using this
middleware, participants can configure their resources at times and
priority levels to suit their local usage policy. The WebCom system
accounts for processing done on individual participants' resources
and rewards them accordingly.
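The pro-rata reward mentioned above can be illustrated with a minimal sketch (the function name, the contribution metric, and the proportional-split policy are assumptions for illustration, not WebCom's actual accounting scheme):

```python
def pro_rata_shares(contributions, capacity):
    """Split a constrained resource pool among participants in
    proportion to what each one contributed (hypothetical policy).

    contributions: mapping participant -> units of resource donated
    capacity: total units available when the system is constrained
    """
    total = sum(contributions.values())
    if total == 0:
        # No one contributed; nothing to share out.
        return {p: 0.0 for p in contributions}
    return {p: capacity * c / total for p, c in contributions.items()}


# A participant who donated 3x as much receives 3x the share.
shares = pro_rata_shares({"a": 30, "b": 10}, capacity=20)
```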
Abstract: This paper describes an authorization system
architecture for the pervasive Grid environment. It discusses the
characteristics of classical authorization systems as well as the
requirements of an authorization system in a pervasive grid environment.
Based on our analysis of current systems and taking into account the
main requirements of such a pervasive environment, we propose a new
authorization system architecture as an extension of the existing grid
authorization mechanisms. This architecture supports not only user
attributes but also context attributes, which serve as a key concept
for context awareness. The architecture allows users to be authorized
dynamically as the pervasive grid environment changes. For this, we
opt for a hybrid authorization method that integrates push and pull
mechanisms to combine the existing grid authorization attributes with
dynamic context assertions. We will
investigate the proposed architecture using a real testing environment
that includes heterogeneous pervasive grid infrastructures mapped
over multiple virtual organizations. Various scenarios are described
in the last section of the article to strengthen the proposed mechanism
with different facilities for the authorization procedure.
Abstract: Simulation is a very powerful method for high-performance
and high-quality design in distributed systems, and currently perhaps
the only one, considering their heterogeneity, complexity and
cost. In Grid environments, for example, it is
hard and even impossible to perform scheduler performance
evaluation in a repeatable and controllable manner as resources and
users are distributed across multiple organizations with their own
policies. In addition, Grid test-beds are limited and creating an
adequately-sized test-bed is expensive and time consuming.
Scalability, reliability and fault-tolerance become important
requirements for distributed systems in order to support distributed
computation. A distributed system with such characteristics is called
dependable. Large environments, like the Cloud, offer unique
advantages, such as low cost, dependability, and QoS satisfaction for
all users. Resource management in large environments calls for
performant scheduling algorithms guided by QoS constraints. This
paper presents the performance evaluation of scheduling heuristics
guided by different optimization criteria. The algorithms for
distributed scheduling are analyzed in order to satisfy user
constraints while at the same time taking into account the independent
capabilities of resources. This analysis acts as a profiling step for algorithm
calibration. The performance evaluation is based on simulation. The
simulator is MONARC, a powerful tool for large scale distributed
systems simulation. The novelty of this paper consists in synthetic
analysis results that offer guidelines for scheduler service
configuration and sustain the empirical-based decision. The results
could be used in decisions regarding optimizations to existing Grid
DAG Scheduling and for selecting the proper algorithm for DAG
scheduling in various actual situations.
Abstract: Over the past few years, a number of efforts have
been exerted to build parallel processing systems that utilize the idle
power of the LANs and PCs available in many homes and corporations.
The main advantage of these approaches is that they provide cheap
parallel processing environments for those who cannot afford the
expenses of supercomputers and parallel processing hardware.
However, most of the solutions provided are not very flexible in the
use of available resources and are very difficult to install and set up.
In this paper, a multi-level web-based parallel processing system
(MWPS) is designed (appendix). MWPS is based on the idea of
volunteer computing, very flexible, easy to setup and easy to use.
MWPS allows three types of subscribers: simple volunteers (single
computers), super volunteers (full networks) and end users. All of
these entities are coordinated transparently through a secure web site.
Volunteer nodes provide the required processing power needed by
the system end users. There is no limit on the number of volunteer
nodes, and accordingly the system can grow indefinitely. Both
volunteer and system users must register and subscribe. Once they
subscribe, each entity is provided with the appropriate MWPS
components. These components are very easy to install.
Super volunteer nodes are provided with special components that
make it possible to delegate some of the load to their inner nodes.
These inner nodes may also delegate some of the load to some other
lower-level inner nodes, and so on. It is the responsibility of the
parent super nodes to coordinate the delegation process and deliver
the results back to the user.
MWPS uses a simple behavior-based scheduler that takes into
consideration the current load and previous behavior of processing
nodes. Nodes that fulfill their contracts within the expected time get a
high degree of trust. Nodes that fail to satisfy their contract get a
lower degree of trust.
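The behavior-based trust idea can be sketched as follows (a hypothetical update rule; MWPS's actual formula and node attributes are not given in the abstract):

```python
def update_trust(trust, met_deadline, alpha=0.2):
    """Exponential moving average of contract fulfilment
    (illustrative; not MWPS's actual formula).

    trust stays in [0, 1]: fulfilling a contract within the
    expected time raises it, failing lowers it.
    """
    outcome = 1.0 if met_deadline else 0.0
    return (1 - alpha) * trust + alpha * outcome


def pick_node(nodes):
    """Among candidate nodes, prefer the most trusted and,
    on ties, the least loaded one."""
    return max(nodes, key=lambda n: (n["trust"], -n["load"]))
```

The `alpha` weight controls how quickly recent behavior dominates a node's history; the `trust`/`load` attribute names are assumptions for the sketch.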
MWPS is based on the .NET framework and provides the minimal
level of security expected in distributed processing environments.
Users and processing nodes are fully authenticated. Communications
and messages between nodes are very secure. The system has been
implemented using C#.
MWPS may be used by any group of people or companies to
establish a parallel processing or grid environment.
Abstract: In the current Grid environment, efficient workload
management presents a significant challenge, for which an exorbitant
number of de facto standards exist, encompassing resource discovery,
brokerage, and data transfer, among others. In addition, the real-time
resource status, essential for an optimal resource allocation strategy,
is often not readily accessible. To address these issues and provide a
cleaner abstraction of the Grid with the potential of generalizing into
arbitrary resource-sharing environments, this paper proposes a new
Condor-based pilot mechanism applied in the PanDA architecture,
PanDA-PF WMS, with the goal of providing a more generic yet
efficient resource allocation strategy. In this architecture, the PanDA
server primarily acts as a repository of user jobs, responding to pilot
requests from distributed, remote resources. Scheduling decisions are
subsequently made according to the real-time resource information
reported by pilots. Pilot Factory is a Condor-inspired solution for
scalable pilot dissemination and effectively functions as a resource
provisioning mechanism through which the user-job server, PanDA,
reaches out to the candidate resources only on demand.
Abstract: Grid computing is a group of clusters connected over
high-speed networks that involves coordinating and sharing
computational power, data storage and network resources operating
across dynamic and geographically dispersed locations. Resource
management and job scheduling are critical tasks in grid computing.
Resource selection becomes challenging due to heterogeneity and
dynamic availability of resources. Job scheduling is an NP-complete
problem and different heuristics may be used to reach an optimal or
near optimal solution. This paper proposes a model for resource and
job scheduling in dynamic grid environment. The main focus is to
maximize the resource utilization and minimize processing time of
jobs. The grid resource selection strategy is based on a Max Heap Tree
(MHT), which is well suited to large-scale applications; the root node
of the MHT is selected for job submission. A job grouping concept is used to
maximize resource utilization for scheduling of jobs in grid
computing. The proposed resource selection model and job grouping
concept enhance the scalability, robustness, efficiency and
load-balancing ability of the grid.
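The MHT-based selection and job grouping described above might look roughly like this (a sketch under assumptions: a single `mips` capability metric and a size-based grouping threshold, neither of which is specified in the abstract):

```python
import heapq


def best_resource(resources):
    """Select the most capable resource, as the MHT root would.
    Python's heapq is a min-heap, so capabilities are negated to
    simulate a max heap."""
    heap = [(-r["mips"], r["name"]) for r in resources]
    heapq.heapify(heap)
    neg_mips, name = heap[0]  # root of the (simulated) max heap
    return name, -neg_mips


def group_jobs(jobs, capacity):
    """Greedily group small jobs (given as lengths) until a group
    reaches the selected resource's capacity, so fewer, larger
    units are dispatched (hypothetical grouping rule)."""
    groups, current, size = [], [], 0
    for length in jobs:
        if size + length > capacity and current:
            groups.append(current)
            current, size = [], 0
        current.append(length)
        size += length
    if current:
        groups.append(current)
    return groups
```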
Abstract: We discuss the application of matching in the area of resource discovery and resource allocation in grid computing. We present a formal definition of matchmaking, overview algorithms to evaluate different matchmaking expressions, and develop a matchmaking service for an intelligent grid environment.
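A minimal illustration of the symmetric matchmaking just described (the dictionary layout and predicate style are assumptions for the sketch, not the paper's formal matchmaking expressions):

```python
def matches(request, offer):
    """A request matches an offer when each side's requirements,
    expressed as predicates over the other side's attributes,
    are all satisfied (in the spirit of Condor ClassAds)."""
    return (all(pred(offer) for pred in request["requirements"])
            and all(pred(request) for pred in offer["requirements"]))


# Hypothetical attributes: the user needs memory, the provider
# demands a minimum budget.
request = {"budget": 10, "requirements": [lambda o: o["mem_gb"] >= 4]}
offer = {"mem_gb": 8, "requirements": [lambda r: r["budget"] >= 5]}
```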
Abstract: Grid computing is a form of distributed computing
that involves coordinating and sharing computational power, data
storage and network resources across dynamic and geographically
dispersed organizations. Scheduling onto the Grid is NP-complete,
so there is no best scheduling algorithm for all grid computing
systems. An alternative is to select an appropriate scheduling
algorithm to use in a given grid environment because of the
characteristics of the tasks, machines and network connectivity. Job
and resource scheduling is one of the key research areas in grid
computing. The goal of scheduling is to achieve highest possible
system throughput and to match the application need with the
available computing resources. The motivation of this survey is to
encourage novice researchers in the field of grid computing, so
that they can easily understand the concept of scheduling and
contribute to developing more efficient scheduling algorithms. This
will benefit interested researchers to carry out further work in this
thrust area of research.
Abstract: Large-scale systems such as Grids offer
infrastructures for both data distribution and parallel processing. The
use of Grid infrastructures is a more recent issue that is already
impacting the Distributed Database Management System industry. In
DBMS, distributed query processing has emerged as a fundamental
technique for ensuring high performance in distributed databases.
Database placement is particularly important in large-scale systems
because it reduces communication costs and improves resource
usage. In this paper, we propose a dynamic database placement
policy that depends on query patterns and Grid sites capabilities. We
evaluate the performance of the proposed database placement policy
using simulations. The obtained results show that dynamic database
placement can significantly improve the performance of distributed
query processing.
Abstract: Grid computing is growing rapidly in the distributed
heterogeneous systems for utilizing and sharing large-scale resources
to solve complex scientific problems. Scheduling is a central topic
for achieving high performance in grid environments. It aims
to find a suitable allocation of resources for each job. A typical
problem arising in this task is the scheduling decision itself:
utilizing the processor effectively so as to minimize the tardiness
of each job being scheduled. This paper, therefore,
addresses the problem by developing a general framework of grid
scheduling using dynamic information and an ant colony
optimization algorithm to improve the decision of scheduling. The
performance of various dispatching rules such as First Come First
Served (FCFS), Earliest Due Date (EDD), Earliest Release Date
(ERD), and an Ant Colony Optimization (ACO) are compared.
Moreover, the benefit of using an Ant Colony Optimization for
performance improvement of the grid Scheduling is also discussed. It
is found that the scheduling system using an Ant Colony
Optimization algorithm can efficiently and effectively allocate jobs
to proper resources.
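The dispatching rules compared above can be sketched for a single processor (an illustrative model; the paper's grid setting, release dates, and ACO component are not reproduced here):

```python
def fcfs(jobs):
    """First Come First Served: process jobs in arrival order."""
    return list(jobs)


def edd(jobs):
    """Earliest Due Date: order by deadline; on a single machine
    this minimizes maximum lateness (Jackson's rule)."""
    return sorted(jobs, key=lambda j: j["due"])


def total_tardiness(order):
    """Total tardiness when the given order runs back-to-back on
    one processor (all jobs assumed available at time 0)."""
    t, tardy = 0, 0
    for j in order:
        t += j["length"]
        tardy += max(0, t - j["due"])
    return tardy


# Jobs listed in arrival order; EDD reorders by deadline.
jobs = [{"due": 9, "length": 4}, {"due": 5, "length": 3}]
```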
Abstract: Grid environments aggregate geographically distributed
resources. Grids come in three types: computational, data and
storage. This paper presents research on data grids. A data grid
provides and secures access to data drawn from many heterogeneous
sources; users need not be concerned with where the data is located,
provided they can access it. Metadata is used to access data in a data
grid. Presently, application metadata catalogues and the SRB
middleware package are used in data grids for metadata management. In
this paper, simultaneous and rapid updating, streamlining and
searching are made possible by keeping metadata in a classified table
and splitting each table into numerous smaller tables. The most
appropriate division is determined with regard to the specific
application. As a result of this technique, some requests can be
executed concurrently and in a pipelined fashion.