Distributed Cost-Based Scheduling in Cloud Computing Environment

Cloud computing is one of the prominent technologies that lets a user configure, change and access services online. It can be seen as a computing paradigm that saves users cost and time; in practice, cloud computing is used in fields such as education, health and banking. Because cloud computing is an internet-dependent technology, it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data users store at data centers. Scheduling plays a vital role in the cloud computing environment: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. This work analyzes job scheduling for cloud computing; CloudSim 3.0.3 is used to simulate the task computations and the distributed scheduling methods. The work examines job scheduling for a distributed computing environment and shows that the proposed approach operates with minimum time and lower cost. Two load balancing techniques are employed, the ‘Throttled stack adjustment policy’ and the ‘Active VM load balancing policy’, together with two brokerage services, ‘Advanced Response Time’ and ‘Reconfigure Dynamically’, to evaluate VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
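
As a rough illustration of the difference between these allocation policies, the Python sketch below contrasts a Round Robin allocator with a throttled allocator that caps concurrent requests per VM; the class names and the threshold value are illustrative assumptions, not CloudSim 3.0.3 components.

```python
from collections import deque

class RoundRobinBalancer:
    """Assign each incoming request to the next VM in circular order."""
    def __init__(self, vm_ids):
        self.vms = deque(vm_ids)

    def allocate(self, request_id):
        vm = self.vms[0]
        self.vms.rotate(-1)          # move on to the next VM for the next request
        return vm

class ThrottledBalancer:
    """Assign a request only to a VM whose active-request count is below a threshold."""
    def __init__(self, vm_ids, threshold=5):
        self.active = {vm: 0 for vm in vm_ids}
        self.threshold = threshold

    def allocate(self, request_id):
        for vm, count in self.active.items():
            if count < self.threshold:   # first VM with spare capacity
                self.active[vm] += 1
                return vm
        return None                      # all VMs saturated: request must wait

    def release(self, vm):
        self.active[vm] -= 1             # called when the VM finishes a request
```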

Optimal Grid Scheduling Using Improved Artificial Bee Colony Algorithm

Job scheduling plays an important role in the efficient utilization of grid resources available across different domains and geographical zones. Scheduling of jobs is challenging and NP-complete. Evolutionary and swarm intelligence algorithms have been extensively used to address this NP-complete problem in grid scheduling. Artificial Bee Colony (ABC) has been proposed for optimization problems and is based on the foraging behaviour of bees. This work proposes a modified ABC algorithm, Cluster Heterogeneous Earliest First Min-Min Artificial Bee Colony (CHMM-ABC), to optimally schedule jobs on the available resources. The proposed model utilizes a novel Heterogeneous Earliest Finish Time (HEFT) heuristic algorithm along with the Min-Min algorithm to identify the initial food source. Simulation results show the performance improvement of the proposed algorithm over other swarm intelligence techniques.
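
The sketch below illustrates, under simplifying assumptions, the general idea of seeding a swarm search with the Min-Min heuristic: Min-Min builds the initial job-to-resource mapping, and an ABC-style local search then perturbs it. It omits the HEFT and clustering components of CHMM-ABC, and all identifiers are illustrative.

```python
import random

def min_min_seed(etc):
    """Min-Min heuristic: repeatedly pick the job whose minimum completion time
    is smallest and assign it to that resource.  etc[j][r] is the estimated
    time to compute job j on resource r."""
    n_jobs, n_res = len(etc), len(etc[0])
    ready = [0.0] * n_res                     # time at which each resource becomes free
    unassigned = set(range(n_jobs))
    schedule = [None] * n_jobs
    while unassigned:
        ct, j, r = min((ready[r] + etc[j][r], j, r)
                       for j in unassigned for r in range(n_res))
        schedule[j], ready[r] = r, ct
        unassigned.remove(j)
    return schedule

def makespan(schedule, etc):
    ready = [0.0] * len(etc[0])
    for j, r in enumerate(schedule):
        ready[r] += etc[j][r]
    return max(ready)

def abc_improve(schedule, etc, iterations=200):
    """Very small ABC-style greedy neighbourhood search around the Min-Min seed."""
    best, best_ms = schedule[:], makespan(schedule, etc)
    for _ in range(iterations):
        neighbour = best[:]
        j = random.randrange(len(neighbour))
        neighbour[j] = random.randrange(len(etc[0]))   # move one job to another resource
        ms = makespan(neighbour, etc)
        if ms < best_ms:                               # keep only improvements
            best, best_ms = neighbour, ms
    return best, best_ms
```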

Impact of Fair Share and its Configurations on Parallel Job Scheduling Algorithms

To provide a better understanding of fair share policies supported by current production schedulers and their impact on scheduling performance, this study evaluates a relative fair share policy as supported in four well-known production job schedulers. The experimental results show that fair share indeed prevents heavy-demand users from dominating the system resources. However, the detailed per-user performance analysis shows that some types of users may suffer unfairness under fair share, possibly due to the priority mechanisms used by current production schedulers. These users are typically not heavy-demand users, but they have a mixture of jobs that do not spread out.
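
As a hedged illustration of how a relative fair share policy can rank waiting jobs, the sketch below boosts users who have consumed less than their allocated share and pushes back heavy users; the formula, weight and identifiers are assumptions for exposition, not the priority formula of any of the four evaluated schedulers.

```python
def fair_share_priority(user_usage, user_share, user, wait_time, weight=1000.0):
    """Larger value = scheduled earlier.
    user_usage : CPU-hours each user consumed in the accounting window
    user_share : each user's allocated fraction of total usage
    wait_time  : seconds the job has been queued (prevents starvation)"""
    total = sum(user_usage.values()) or 1.0
    consumed = user_usage.get(user, 0.0) / total       # fraction actually used
    deficit = user_share.get(user, 0.0) - consumed     # > 0 means under-served
    return weight * deficit + wait_time

usage = {"alice": 120.0, "bob": 10.0}   # CPU-hours in the accounting window
share = {"alice": 0.5, "bob": 0.5}      # equal target shares
print(fair_share_priority(usage, share, "alice", 300))  # over-share -> pushed back
print(fair_share_priority(usage, share, "bob", 300))    # under-served -> boosted
```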

Achieving Fair Share Objectives via Goal-Oriented Parallel Computer Job Scheduling Policies

Fair share is one of the scheduling objectives supported on many production systems. However, fair share has been shown to cause performance problems for some users, especially users with difficult jobs. This work focuses on extending goal-oriented parallel computer job scheduling policies to cover the fair share objective. Goal-oriented parallel computer job scheduling policies have been shown to achieve good scheduling performance when conflicting objectives are required. Goal-oriented policies achieve this by using anytime combinatorial search techniques to find a good compromise schedule within a time limit. The experimental results show that the proposed goal-oriented parallel computer job scheduling policy, namely Tradeoff-fs(Tw:avgX), achieves good scheduling performance and also provides good fair share performance.
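
A minimal sketch of the "anytime" idea is given below: candidate orderings of the waiting queue are explored until a time budget expires, and the best compromise found so far is kept. The combined objective (average wait plus a fairness penalty) and the random-swap search are illustrative assumptions, not the actual Tradeoff-fs(Tw:avgX) formulation.

```python
import random, time

def evaluate(order, runtimes, owners, share_deficit, alpha=1.0):
    """Score = average wait + alpha * fairness penalty; the penalty grows when
    jobs of under-served users (positive deficit) are made to wait."""
    clock, wait_sum, penalty = 0.0, 0.0, 0.0
    for job in order:
        wait_sum += clock
        penalty += clock * max(0.0, share_deficit[owners[job]])
        clock += runtimes[job]
    return wait_sum / len(order) + alpha * penalty

def anytime_schedule(jobs, runtimes, owners, share_deficit, budget_s=0.05):
    best = list(jobs)
    if len(best) < 2:
        return best
    best_score = evaluate(best, runtimes, owners, share_deficit)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:            # anytime: usable whenever stopped
        cand = best[:]
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]       # swap two queued jobs
        score = evaluate(cand, runtimes, owners, share_deficit)
        if score < best_score:
            best, best_score = cand, score
    return best
```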

An Agent Based Dynamic Resource Scheduling Model with FCFS-Job Grouping Strategy in Grid Computing

Grid computing is a group of clusters connected over high-speed networks that involves coordinating and sharing computational power, data storage and network resources operating across dynamic and geographically dispersed locations. Resource management and job scheduling are critical tasks in grid computing. Resource selection is challenging due to the heterogeneity and dynamic availability of resources. Job scheduling is an NP-complete problem, and different heuristics may be used to reach an optimal or near-optimal solution. This paper proposes a model for resource and job scheduling in a dynamic grid environment. The main focus is to maximize resource utilization and minimize the processing time of jobs. The grid resource selection strategy is based on a Max Heap Tree (MHT), which is well suited to large-scale applications; the root node of the MHT is selected for job submission. A job grouping concept is used to maximize resource utilization when scheduling jobs in grid computing. The proposed resource selection model and job grouping concept enhance the scalability, robustness, efficiency and load balancing ability of the grid.
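
The sketch below illustrates the two ideas under simplifying assumptions: resources are kept in a max-heap keyed on processing speed so the root (most capable resource) receives the next submission, and small jobs are grouped in FCFS order until the group matches that resource's granularity. Identifiers and the granularity parameter are assumptions, not taken from the paper.

```python
import heapq
from collections import deque

def group_and_dispatch(jobs_mi, resources_mips, granularity_s=10.0):
    """jobs_mi: FCFS-ordered job lengths in million instructions (MI).
    resources_mips: {resource_id: processing speed in MIPS}."""
    # Python's heapq is a min-heap, so negate the key to simulate a max-heap.
    heap = [(-mips, rid) for rid, mips in resources_mips.items()]
    heapq.heapify(heap)
    queue, dispatch = deque(jobs_mi), []
    while queue:
        neg_mips, rid = heapq.heappop(heap)        # root = most capable resource
        capacity_mi = -neg_mips * granularity_s    # work one group should contain
        group = []
        while queue and sum(group) + queue[0] <= capacity_mi:
            group.append(queue.popleft())          # FCFS: take jobs in arrival order
        if not group:                              # a single job larger than capacity
            group.append(queue.popleft())
        dispatch.append((rid, group))
        # a real scheduler would update the resource's remaining availability here
        heapq.heappush(heap, (neg_mips, rid))
    return dispatch

print(group_and_dispatch([200, 300, 150, 800, 50], {"r1": 100, "r2": 50}))
```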

A Survey of Job Scheduling and Resource Management in Grid Computing

Grid computing is a form of distributed computing that involves coordinating and sharing computational power, data storage and network resources across dynamic and geographically dispersed organizations. Scheduling onto the grid is NP-complete, so there is no single best scheduling algorithm for all grid computing systems. An alternative is to select an appropriate scheduling algorithm for a given grid environment based on the characteristics of the tasks, machines and network connectivity. Job and resource scheduling is one of the key research areas in grid computing. The goal of scheduling is to achieve the highest possible system throughput and to match application needs with the available computing resources. The motivation of this survey is to help new researchers in the field of grid computing understand the concept of scheduling easily and contribute to developing more efficient scheduling algorithms. This will benefit interested researchers in carrying out further work in this thrust area of research.

Grouping-Based Job Scheduling Model In Grid Computing

Grid computing is a high performance computing environment for solving large-scale computational applications. Grid computing involves resource management, job scheduling, security, information management and so on. Job scheduling is a fundamental and important issue in achieving high performance in grid computing systems. However, designing and implementing an efficient scheduler remains a big challenge. In grid computing, job scheduling algorithms need further improvement so that light-weight (small) jobs are grouped into coarse-grained job groups, which reduces communication time and processing time and enhances resource utilization. The grouping strategy considers the processing power, memory size and bandwidth requirements of each job in order to reflect a real grid system. The experimental results demonstrate that the proposed scheduling algorithm efficiently reduces the processing time of jobs in comparison to others.
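
As a hedged sketch of such a grouping rule, the code below closes a job group as soon as adding the next job would exceed the selected resource's processing capacity, memory size, or transfer budget; the field names and capacity formulas are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Job:
    length_mi: float      # million instructions
    memory_mb: float
    input_mb: float       # data that must be transferred to the resource

@dataclass
class Resource:
    mips: float
    memory_mb: float
    bandwidth_mbps: float
    granularity_s: float  # desired processing time per group

def make_groups(jobs, res):
    cap_mi = res.mips * res.granularity_s                  # MI the resource can process per group
    cap_tx = res.bandwidth_mbps / 8 * res.granularity_s    # MB transferable per group
    groups, current = [], []
    used_mi = used_mem = used_tx = 0.0
    for job in jobs:                                        # FCFS order
        fits = (used_mi + job.length_mi <= cap_mi and
                used_mem + job.memory_mb <= res.memory_mb and
                used_tx + job.input_mb <= cap_tx)
        if current and not fits:                            # close the current group
            groups.append(current)
            current, used_mi, used_mem, used_tx = [], 0.0, 0.0, 0.0
        current.append(job)
        used_mi += job.length_mi
        used_mem += job.memory_mb
        used_tx += job.input_mb
    if current:
        groups.append(current)
    return groups
```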

Evaluating per-user Fairness of Goal-Oriented Parallel Computer Job Scheduling Policies

The fair share objective has recently been included in goal-oriented parallel computer job scheduling policies. However, previous work only presented the overall scheduling performance, so a per-user performance analysis of the policy is still lacking. In this work, the per-user fair share performance under the Tradeoff-fs(Tx:avgX) policy is evaluated in detail. A basic fair share priority backfill policy, namely RelShare(1d), is also studied. The performance of all policies is collected using an event-driven simulator with three real job traces as input. The experimental results show that high-demand users usually benefit under most policies because their jobs are large or they have many jobs. In the large-job case, a single executing job may result in over-share during that period. In the other case, the jobs may be backfilled to improve performance. However, users with a mixture of jobs may suffer because, while their smaller jobs are executing, the priority of the remaining jobs from the same user is lowered. Further analysis does not show any significant impact from users with many jobs or users with large runtime estimation errors.
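
The sketch below shows the kind of per-user aggregation such an evaluation relies on, computing average wait and bounded slowdown per user from a simulated trace; the metric choice and the trace format are assumptions, not the exact measures reported in the study.

```python
from collections import defaultdict

def per_user_metrics(trace, min_runtime=10.0):
    """trace: iterable of dicts with keys user, submit, start, runtime (seconds)."""
    waits, slowdowns = defaultdict(list), defaultdict(list)
    for job in trace:
        wait = job["start"] - job["submit"]
        run = max(job["runtime"], min_runtime)          # bound very short runtimes
        waits[job["user"]].append(wait)
        slowdowns[job["user"]].append((wait + run) / run)
    return {u: (sum(waits[u]) / len(waits[u]),
                sum(slowdowns[u]) / len(slowdowns[u]))
            for u in waits}

trace = [
    {"user": "heavy", "submit": 0, "start": 5,    "runtime": 3600},
    {"user": "mixed", "submit": 0, "start": 900,  "runtime": 60},
    {"user": "mixed", "submit": 0, "start": 1800, "runtime": 7200},
]
for user, (avg_wait, avg_sd) in per_user_metrics(trace).items():
    print(f"{user}: avg wait {avg_wait:.0f}s, avg bounded slowdown {avg_sd:.2f}")
```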