Abstract: Cloud computing is an innovative and leading
information technology model for enabling convenient, on-demand
network access to a shared pool of configurable computing resources
that can be rapidly provisioned and released with minimal
management effort. In this paper, we develop a workflow
management system for cloud computing platforms, building on
our previous research on the dynamic allocation of cloud
computing resources and their workflow processes. We took advantage of
HTML5 technology to develop a web-based workflow interface.
In order to enable the combination of many tasks running on the cloud
platform in sequence, we designed a mechanism and developed an
execution engine for workflow management on clouds. We also
established a prediction model, integrated with the job queuing
system, to estimate the waiting time and cost of individual tasks on
different computing nodes, helping users achieve maximum
performance at the lowest cost. The proposed effort has the potential
to provide an efficient, resilient, and elastic environment for
cloud computing platforms. This development also helps boost user
productivity by promoting a flexible workflow interface that lets users
design and control their tasks' flow from anywhere.
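The abstract does not give the prediction model itself; the sketch below only illustrates how a waiting-time and cost estimate per node could drive node selection. The linear backlog/capacity queue model and all names (`Node`, `estimate`, `pick_node`) are assumptions for illustration, not the authors' actual model.

```python
# Hypothetical sketch: estimate waiting time and cost per node, then pick the
# cheapest node whose predicted wait is acceptable. The backlog/capacity queue
# model and all names are illustrative assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    queued_core_hours: float   # work already waiting in this node's queue
    cores: int                 # concurrent capacity of the node
    price_per_core_hour: float

def estimate(node: Node, task_core_hours: float) -> tuple[float, float]:
    """Return (predicted wait in hours, monetary cost) for a task on a node."""
    wait = node.queued_core_hours / node.cores        # backlog / capacity
    cost = task_core_hours * node.price_per_core_hour
    return wait, cost

def pick_node(nodes: list[Node], task_core_hours: float, max_wait: float) -> Node:
    """Cheapest node whose predicted wait does not exceed max_wait."""
    eligible = [n for n in nodes if estimate(n, task_core_hours)[0] <= max_wait]
    return min(eligible, key=lambda n: estimate(n, task_core_hours)[1])

nodes = [Node("fast", queued_core_hours=10, cores=10, price_per_core_hour=0.5),
         Node("cheap", queued_core_hours=40, cores=4, price_per_core_hour=0.1)]
print(pick_node(nodes, task_core_hours=2, max_wait=2.0).name)   # → fast
print(pick_node(nodes, task_core_hours=2, max_wait=12.0).name)  # → cheap
```

The two calls show the trade-off the abstract describes: an impatient user gets the faster, pricier node, while a relaxed deadline lets the scheduler choose the cheaper one.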
Abstract: In a highly competitive environment, it becomes more
important to shorten the whole business process while delivering or
even enhancing the business value to the customers and suppliers.
Although workflow management systems have received much attention
for their capacity to practically support business process enactment,
effective workflow modeling remains challenging, and
a high degree of process complexity makes it harder to achieve
short lead times. This paper presents a workflow structuring method
that reduces process complexity in a holistic way using
activity-needs and formal concept analysis, eventually enhancing
key performance measures such as quality, delivery, and cost in the
business process.
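To make the formal concept analysis named in the abstract concrete, the sketch below enumerates the formal concepts of a tiny activity-needs context. The toy data and function names are invented for illustration and are not taken from the paper; each concept pairs a set of activities with the full set of needs they all share.

```python
# Toy formal concept analysis over an activity-needs context (invented data).
# A formal concept is a pair (extent, intent): a set of activities and the
# set of needs they share, each maximal with respect to the other.
from itertools import combinations

def common_needs(context, activities):
    """Needs shared by all given activities (all needs for the empty set)."""
    needs = set().union(*context.values())
    for a in activities:
        needs &= context[a]
    return needs

def activities_with(context, needs):
    """Activities that have every need in the given set."""
    return {a for a, ns in context.items() if needs <= ns}

def concepts(context):
    """Enumerate all formal concepts by closing every subset of activities."""
    found = set()
    for r in range(len(context) + 1):
        for subset in combinations(context, r):
            intent = frozenset(common_needs(context, subset))
            extent = frozenset(activities_with(context, intent))
            found.add((extent, intent))
    return found

ctx = {"design": {"cad"}, "review": {"cad", "doc"}, "ship": {"doc"}}
for extent, intent in sorted(concepts(ctx), key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(extent), sorted(intent))
```

Grouping activities by shared needs in this way is the kind of structure the method exploits to restructure a workflow and reduce its complexity.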
Abstract: Workflow Management Systems (WfMS) allow organizations to streamline and automate business processes and reengineer their structure. One important requirement for this type of system is the management and computation of the Quality of Service (QoS) of processes and workflows. Currently, a range of Web process and workflow languages exist. Each language can be characterized by the set of patterns it supports. Developing and implementing a suitable, generic algorithm to compute the QoS of processes that have been designed using different languages is a difficult task. This is because some patterns are specific to particular process languages, and new patterns may be introduced in future versions of a language. In this paper, we describe an adaptive algorithm implemented to cope with these two problems. The algorithm is called adaptive because it can be dynamically changed as the patterns of a process language change.
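The paper's algorithm is not reproduced here; the sketch below only illustrates the general idea of pattern-driven QoS reduction that can be extended at runtime. The two-metric QoS tuple, the registry design, and all names are assumptions made for the example.

```python
# Illustrative pattern-registry sketch (not the paper's algorithm): the QoS of
# a workflow is reduced bottom-up, and support for a new pattern is added by
# registering a rule, without changing the core traversal.
PATTERNS = {
    # QoS tuple = (execution time, cost)
    "sequence": lambda qs: (sum(t for t, _ in qs), sum(c for _, c in qs)),
    "parallel": lambda qs: (max(t for t, _ in qs), sum(c for _, c in qs)),
}

def register_pattern(name, rule):
    """Adapt the algorithm to a new language pattern at runtime."""
    PATTERNS[name] = rule

def qos(node):
    """node is a (time, cost) leaf or a (pattern_name, [children]) pair."""
    if isinstance(node[0], str):
        pattern, children = node
        return PATTERNS[pattern]([qos(c) for c in children])
    return node

wf = ("sequence", [(2, 1), ("parallel", [(3, 2), (5, 1)])])
print(qos(wf))  # → (7, 4): time 2 + max(3, 5), cost 1 + 2 + 1
```

Registering a rule for a pattern that a new language version introduces extends the computation without touching `qos` itself, which is the adaptivity the abstract refers to.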
Abstract: There is a wide range of scientific workflow systems
today, each designed to solve problems at a specific level. In
large collaborative projects, it is often necessary to accommodate the
heterogeneous workflow systems already in use by various partners,
and any potential collaboration between these systems requires
workflow interoperability. The Publish/Subscribe Scientific Workflow
Interoperability Framework (PS-SWIF) approach was proposed to
achieve workflow interoperability among workflow systems. This
paper evaluates the PS-SWIF approach and its system for achieving
workflow interoperability using Web Services with asynchronous
notification messages represented by the WS-Eventing standard. The
experiments cover the different types of communication models defined
by the Workflow Management Coalition (WfMC): Chained processes,
Nested synchronous sub-processes, Event synchronous sub-processes,
and Nested sub-processes (Polling/Deferred Synchronous). The
experiments also show the flexibility and simplicity of the PS-SWIF
approach when applied to a variety of workflow systems (Triana,
Taverna, Kepler) in local and remote environments.
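PS-SWIF itself uses Web Services with WS-Eventing notifications; the in-process broker below is only a minimal analogy of the publish/subscribe idea behind the chained-processes model, with engine B subscribing to engine A's completion events. All names are invented for the sketch.

```python
# Minimal in-process publish/subscribe analogy of event-based workflow
# chaining (invented names; PS-SWIF uses Web Services and the WS-Eventing
# standard, not this toy broker).
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback for all future events on a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        """Deliver an event to every subscriber of the topic."""
        for cb in self._subscribers[topic]:
            cb(event)

# Chained processes: workflow B starts when workflow A reports completion.
broker = Broker()
log = []
broker.subscribe("A.completed", lambda ev: log.append(f"B started with {ev}"))
broker.publish("A.completed", "resultA")
print(log)  # → ['B started with resultA']
```

Because the engines only share topics and event payloads, neither needs to know the other's internals, which is what makes the approach applicable across heterogeneous systems such as Triana, Taverna, and Kepler.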
Abstract: Nowadays, HPC, Grid and Cloud systems are evolving
very rapidly. However, the development of infrastructure solutions
related to HPC is lagging behind. While the existing infrastructure is
sufficient for simple cases, many computational problems have more
complex requirements. Such computational experiments use different
resources simultaneously to start a large number of computational
jobs. These resources are heterogeneous: they have different
purposes, architectures, performance characteristics, and software.
Users need a convenient tool that allows them to describe and run
complex computational experiments in an HPC environment.
This paper introduces a modular workflow system called SEGL,
which makes it possible to run complex computational experiments
in a real HPC organization. The system can be used
in a great number of organizations that provide HPC power.
Key requirements for this system are high efficiency and
interoperability with the organization's existing HPC infrastructure,
without requiring any changes to that infrastructure.