Abstract: In this study, we present a conceptual framework for developing a scheduling system that can generate self-explanatory and easy-to-understand schedules. To this end, a user interface is conceived to help planners record factors that are considered crucial in scheduling, as well as internal and external sources relating to such factors. A hybrid approach combining machine learning and constraint programming is developed to generate schedules and the corresponding factors, and to display them on the user interface. The effects of the proposed system on scheduling are discussed, and it is expected that scheduling efficiency and system understandability will be improved compared with previous scheduling systems.
Abstract: This paper presents a technique to solve one of the transportation problems faced in real life: the Bus Scheduling Problem. Many countries use buses in schools, companies, and travel offices, for example, to transfer multiple passengers from many places to a specific place and vice versa. This transfer process costs time and money, so we built a decision support system to solve the problem. In this paper, a genetic algorithm combined with a shortest-path technique is used to generate a solution competitive with other well-known techniques. The paper also presents a comparison between our solution and other solutions to this problem.
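The abstract does not detail the algorithm, but the combination it names can be sketched: a genetic algorithm evolves the order in which a bus visits its stops, while a precomputed shortest-path distance matrix supplies the fitness. The instance below (stop set, distance matrix, and parameters) is entirely hypothetical, not the paper's data.

```python
import random

# Hypothetical shortest-path distances between 5 stops (0 = depot), as would
# be precomputed by a shortest-path algorithm on the road network.
DIST = [
    [0, 4, 7, 3, 8],
    [4, 0, 2, 5, 6],
    [7, 2, 0, 4, 3],
    [3, 5, 4, 0, 9],
    [8, 6, 3, 9, 0],
]

def route_cost(route):
    """Total distance of depot -> stops in order -> depot."""
    path = [0] + list(route) + [0]
    return sum(DIST[a][b] for a, b in zip(path, path[1:]))

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [s for s in p2 if s not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def genetic_bus_route(stops, pop_size=30, generations=200):
    pop = [random.sample(stops, len(stops)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=route_cost)
        survivors = pop[:pop_size // 2]          # selection (keeps the best)
        children = []
        while len(survivors) + len(children) < pop_size:
            c = crossover(*random.sample(survivors, 2))
            if random.random() < 0.2:            # swap mutation
                a, b = random.sample(range(len(c)), 2)
                c[a], c[b] = c[b], c[a]
            children.append(c)
        pop = survivors + children
    return min(pop, key=route_cost)

random.seed(1)
best = genetic_bus_route([1, 2, 3, 4])
print(best, route_cost(best))
```

Because the best individuals always survive, the cost of the best route never increases across generations.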
Abstract: Liquid level control of a conical tank system is known to be a great challenge in many industries, such as food processing, hydrometallurgical industries, and wastewater treatment plants, due to its highly nonlinear characteristics. In this research, an adaptive fuzzy PID control scheme is applied to the problem of liquid level control in a nonlinear tank process. A conical tank process is first modeled and simulated. A PID controller is then applied to the plant model as a suitable benchmark for comparison, and the dynamic responses of the control system to different step inputs are investigated. It is found that the conventional PID controller is not able to fulfill the controller design criteria, such as the desired time constant, due to the highly nonlinear characteristics of the plant model. Consequently, a nonlinear control strategy based on gain-scheduling adaptive control, incorporating a fuzzy logic observer, is proposed to accurately control the nonlinear tank system. The simulation results clearly demonstrate the superiority of the proposed adaptive fuzzy control method over the conventional PID controller.
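As a point of reference for the PID benchmark stage, a minimal discrete PID loop on a simplified first-order stand-in for the tank (not the conical model, and with hypothetical gains) can be sketched as:

```python
def pid_step(state, error, dt, kp, ki, kd):
    """One update of a discrete PID controller; state = (integral, prev_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Hypothetical first-order approximation of the tank around an operating point:
# level' = u - outflow_coeff * level   (a stand-in, not the conical model)
def simulate(setpoint=1.0, steps=500, dt=0.05, kp=4.0, ki=2.0, kd=0.1):
    level, state = 0.0, (0.0, 0.0)
    for _ in range(steps):
        u, state = pid_step(state, setpoint - level, dt, kp, ki, kd)
        level += dt * (u - 0.8 * level)  # plant update
    return level

print(round(simulate(), 3))
```

The integral term removes the steady-state error, which is why the level settles at the setpoint; it is the level-dependent dynamics of the real conical tank that defeat fixed gains and motivate the adaptive fuzzy scheme.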
Abstract: Cloud computing is an outcome of the rapid growth of the internet. Due to the elastic nature of cloud computing and the unpredictable behavior of users, load balancing is a major issue in the cloud computing paradigm. An efficient load balancing technique can improve performance in terms of efficient resource utilization and higher customer satisfaction. Load balancing can be implemented through task scheduling, resource allocation, and task migration. The parameters used to analyze the performance of a load balancing approach are response time, cost, data processing time, and throughput. This paper demonstrates a two-level load balancer approach that combines the join-idle-queue and join-shortest-queue approaches. The authors used the Cloud Analyst simulator to test the proposed two-level load balancer. The results are analyzed and compared with existing algorithms, and the proposed approach is observed to outperform the existing techniques.
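A minimal sketch of the two-level idea, assuming the first level routes to any server that has reported idle (join-idle-queue) and the second level falls back to the shortest queue (join-shortest-queue); the class and counters are illustrative, not the authors' implementation:

```python
from collections import deque

class TwoLevelBalancer:
    """First level: Join-Idle-Queue (route to a known-idle server if any).
    Second level: fall back to Join-Shortest-Queue when no server is idle."""
    def __init__(self, n_servers):
        self.queues = [0] * n_servers            # outstanding tasks per server
        self.idle = deque(range(n_servers))      # servers that reported idle

    def assign(self, task_id):
        if self.idle:                            # JIQ: an idle server exists
            server = self.idle.popleft()
        else:                                    # JSQ: pick the shortest queue
            server = min(range(len(self.queues)), key=lambda s: self.queues[s])
        self.queues[server] += 1
        return server

    def complete(self, server):
        self.queues[server] -= 1
        if self.queues[server] == 0:             # server reports itself idle
            self.idle.append(server)

lb = TwoLevelBalancer(3)
print([lb.assign(t) for t in range(5)])  # first three go to idle servers
```

The cheap idle-queue check handles the common case without scanning all queues; the full scan only runs when every server is busy.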
Abstract: During aircraft maintenance scheduling, the operator calculates the budget of the maintenance. Usually, this calculation includes only the costs that are directly related to the maintenance process, such as the cost of labor, material, and equipment. In some cases, overhead cost is also included. However, downtime cost is often neglected on the grounds that grounding is a natural fact of maintenance and therefore not part of the analytical decision-making process. Based on normalized data, we introduce downtime cost with its monetary value and add its seasonal character. We envision that the rest of the model, which works together with the downtime cost, could be checked against real-life cases through a review of MRO costs and airline spending on particular scheduled maintenance events.
Abstract: The emergence of Cloud data centers has revolutionized the IT industry. Private Clouds in particular provide Cloud services for a certain group of customers or businesses. In a real-time private Cloud, each task given to the system has a deadline that desirably should not be violated. Scheduling tasks in a real-time private Cloud determines how the available resources in the system are shared among incoming tasks. The aim of the scheduling policy is to optimize the system outcome, which for a real-time private Cloud can include energy consumption, deadline violations, execution time, and the number of host switches. Different scheduling policies can be used, each leading to a sub-optimal outcome in certain system settings. A Bayesian scheduling strategy is proposed to further improve the system outcome. The Bayesian strategy is shown to outperform all selected policies; it is also flexible in dealing with complex patterns of incoming tasks and has the ability to adapt.
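The abstract does not specify the internals of the Bayesian strategy; one common way to realize Bayesian selection among scheduling policies is Thompson sampling, sketched below. The policy names and hit rates are hypothetical stand-ins used only to simulate feedback.

```python
import random

# Hypothetical per-policy probability that a task meets its deadline;
# unknown to the scheduler, used here only to simulate feedback.
TRUE_HIT_RATE = {"EDF": 0.9, "FIFO": 0.6, "RoundRobin": 0.7}

def bayesian_policy_selection(rounds=2000, seed=0):
    rng = random.Random(seed)
    # Beta(1, 1) prior over each policy's deadline-hit rate: [hits+1, misses+1].
    posterior = {p: [1, 1] for p in TRUE_HIT_RATE}
    for _ in range(rounds):
        # Thompson sampling: draw a hit rate from each posterior, pick the max.
        policy = max(posterior, key=lambda p: rng.betavariate(*posterior[p]))
        hit = rng.random() < TRUE_HIT_RATE[policy]   # observe the outcome
        posterior[policy][0 if hit else 1] += 1      # Bayesian update
    return posterior

post = bayesian_policy_selection()
most_used = max(post, key=lambda p: post[p][0] + post[p][1])
print(most_used)
```

The posterior concentrates on the policy with the best observed outcomes, while the random draws preserve the ability to adapt if the incoming task pattern changes.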
Abstract: In this paper, we present a binary cat swarm optimization for solving the set covering problem. The set covering problem is a well-known NP-hard problem with many practical applications, including scheduling, production planning, and location problems. Binary cat swarm optimization is a recent swarm metaheuristic technique based on the behavior of cats: domestic cats show the ability to hunt and are curious about moving objects. The cats have two modes of behavior: seeking mode and tracing mode. We illustrate this approach with 65 instances of the problem from the OR-Library. Moreover, we solve the problem with 40 new binarization techniques and select the technique with the best results. Finally, we compare the results obtained in previous studies with those of the new binarization technique, namely roulette wheel as the transfer function and V3 as the discretization technique.
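A sketch of one such binarization step for set covering, assuming a V-shaped transfer function of the V3 family maps each continuous velocity component to a bit-flip probability, followed by a greedy repair to restore feasibility; the instance and costs are toy values, not OR-Library data:

```python
import math
import random

# Toy set covering instance: each row must be covered by a chosen column.
COVERS = {0: {0, 1, 2}, 1: {1, 3}, 2: {0, 3, 4}, 3: {2, 4}, 4: {1, 4}}
COSTS = [3, 2, 4, 3, 2]

def v3(x):
    """V3-style transfer function: velocity -> flip probability in [0, 1)."""
    return abs(x / math.sqrt(1 + x * x))

def binarize(velocity, solution, rng):
    """Flip each bit with the probability given by the transfer function."""
    return [1 - b if rng.random() < v3(v) else b
            for v, b in zip(velocity, solution)]

def is_cover(solution):
    chosen = {c for c, bit in enumerate(solution) if bit}
    return all(COVERS[r] & chosen for r in COVERS)

def repair(solution):
    """Greedy repair: add the cheapest column covering an uncovered row."""
    sol = list(solution)
    while not is_cover(sol):
        uncovered = next(r for r in COVERS
                         if not COVERS[r] & {c for c, b in enumerate(sol) if b})
        sol[min(COVERS[uncovered], key=lambda c: COSTS[c])] = 1
    return sol

rng = random.Random(3)
sol = repair(binarize([0.5, -1.2, 0.1, 2.0, -0.3], [0, 0, 0, 0, 0], rng))
print(sol, sum(COSTS[c] for c, b in enumerate(sol) if b))
```

The repair step is what keeps the metaheuristic inside the feasible region after each stochastic move.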
Abstract: Large scale computing infrastructures have been widely
developed with the core objective of providing a suitable platform
for high-performance and high-throughput computing. These systems
are designed to support resource-intensive and complex applications,
which can be found in many scientific and industrial areas. Currently,
large scale data-intensive applications are hindered by the high
latencies that result from the access to vastly distributed data.
Recent works have suggested that improving data locality is key to
move towards exascale infrastructures efficiently, as solutions to this
problem aim to reduce the bandwidth consumed in data transfers, and
the overheads that arise from them. There are several techniques that
attempt to move computations closer to the data. In this survey we
analyse the different mechanisms that have been proposed to provide
data locality for large scale high-performance and high-throughput
systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, grouped into four main categories: application development, task scheduling, in-memory computing, and storage platforms. Finally, the authors include a discussion of future research lines and synergies among these techniques.
Abstract: Due to the increasing growth of internet users, the emerging applications of multicast are growing day by day, and there is a requisite for the design of high-speed switches/routers. A great deal of effort has gone into research on multicast switch fabric design and algorithms. Different traffic scenarios are the influencing factors that affect the throughput and delay of the switch. Pointer-based multicast scheduling algorithms do not perform well under non-uniform traffic conditions. In this work, the performance of the switch is analyzed by applying the advanced multicast scheduling algorithms OQSMS (Optimal Queue Selection Based Multicast Scheduling Algorithm), MDDR (Multicast Due Date Round-Robin Scheduling Algorithm), and MDRR (Multicast Dual Round-Robin Scheduling Algorithm). The results show that OQSMS achieves better switching performance than the other algorithms under uniform, non-uniform, and bursty traffic conditions, as it estimates the optimal queue in each time slot and thereby achieves the maximum possible throughput.
Abstract: Project managers in the construction industry usually face a difficult organizational environment, especially if the project is unique. The organization may lack the processes to practice construction management correctly, and the executive technical managers may lack experience in carrying out their roles and responsibilities. Project managers need to adopt best practices that allow them to work effectively and ensure that the project is delivered without delay, while the executive technical managers should follow a defined process to avoid any factor that might cause delay during the project life cycle. The purpose of this paper is to examine the awareness level of project managers regarding construction management processes, tools, techniques, and their implications for completing projects on time. The outcomes and results of the study are based on designed questionnaires and interviews conducted with many project managers. The method used in this paper is a quantitative study. A survey with a sample of 100 respondents, comprising nine questions to examine their level of awareness, was prepared and distributed in a construction company in Dubai. This research also identifies the key benefits of construction management processes that project managers should adopt to mitigate the potential problems that might delay the project life cycle.
Abstract: This paper discusses the simulation and experimental work of a small Smart Grid containing ten consumers. A Smart Grid is characterized by a two-way flow of real-time information and energy. An RTP (Real Time Pricing) based tariff is implemented in this work to reduce peak demand, PAR (peak-to-average ratio), and the cost of energy consumed. In the experimental work described here, the working of the Smart Plug, HEC (Home Energy Controller), HAN (Home Area Network), and the communication link between consumers and the utility server are explained. Algorithms for the Smart Plug, HEC, and utility server are presented and explained. After receiving the real-time price for the different time slots of the day, the HEC reacts automatically by running an algorithm based on the Linear Programming Problem (LPP) method to find the optimal energy consumption schedule. The algorithm developed for the utility server can handle more than one off-peak time period during the day. Simulation and experimental work are carried out for different cases. Finally, a comparison between simulation and experimental results is presented to show the effectiveness of the minimization method adopted.
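For the special case of a single flexible load with a total energy requirement and a per-slot limit, the price-minimizing LP reduces to filling the cheapest slots first, which a greedy pass solves exactly; a sketch with hypothetical prices (not the paper's full LPP formulation):

```python
def optimal_schedule(prices, total_energy, slot_cap):
    """Minimize sum(prices[t] * x[t]) subject to sum(x) == total_energy and
    0 <= x[t] <= slot_cap. For this LP, greedily filling the cheapest
    slots is optimal."""
    x = [0.0] * len(prices)
    remaining = total_energy
    for t in sorted(range(len(prices)), key=lambda t: prices[t]):
        x[t] = min(slot_cap, remaining)
        remaining -= x[t]
        if remaining <= 0:
            break
    return x

# Hypothetical real-time prices for six time slots (currency units per kWh).
prices = [8.0, 3.0, 5.0, 2.0, 9.0, 4.0]
plan = optimal_schedule(prices, total_energy=7.0, slot_cap=3.0)
print(plan)                                  # energy lands in the cheapest slots
print(sum(p * e for p, e in zip(prices, plan)))
```

With multiple appliances and comfort constraints the structure is no longer greedy-solvable, which is where a general LP solver becomes necessary.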
Abstract: Fault diagnosis of a Linear Parameter-Varying (LPV) system using an adaptive Kalman filter is proposed. The LPV model comprises scheduling parameters and emulator parameters. The scheduling parameters are chosen such that they are capable of tracking variations in the system model resulting from changes in the operating regimes. The emulator parameters, on the other hand, simulate variations in the subsystems during the identification phase and have a negligible effect during the operational phase. The nominal model and the influence vectors, which are the gradients of the feature vector with respect to the emulator parameters, are identified off-line from a number of emulator-parameter-perturbed experiments. A Kalman filter is designed using the identified nominal model. As the system varies, the Kalman filter model is adapted using the scheduling variables, and the residual is employed for fault diagnosis. The proposed scheme is successfully evaluated on a simulated system as well as on a physical process control system.
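As background for the residual-based diagnosis, a minimal scalar Kalman filter (a constant-state toy, not the identified LPV model) shows how the residual z - x drives the state correction:

```python
import random

def kalman_update(x, p, z, q, r):
    """One predict + update step of a scalar Kalman filter.
    x, p: state estimate and its variance; z: measurement;
    q, r: process and measurement noise variances."""
    p = p + q                    # predict (constant-state model)
    k = p / (p + r)              # Kalman gain
    x = x + k * (z - x)          # the residual z - x drives the correction
    p = (1 - k) * p
    return x, p

rng = random.Random(0)
true_level, x, p = 5.0, 0.0, 1.0
for _ in range(200):
    z = true_level + rng.gauss(0, 0.5)       # noisy sensor reading
    x, p = kalman_update(x, p, z, q=1e-4, r=0.25)
print(round(x, 2))
```

In a fault-free run the residual stays small and zero-mean; a persistent bias in it is the signature the diagnosis scheme looks for.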
Abstract: Mumbai has traditionally been the epicenter of India's trade and commerce, and its existing major ports, Mumbai Port and Jawaharlal Nehru Port (JN), situated in the Thane estuary, are developing their waterfront facilities. Various developments in this region over the past decades have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water owing to the advancement of the shoreline, while the jetty near Ulwe faces ship scheduling problems due to the shallower depths between JN Port and Ulwe Bunder. Solving these problems requires information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair; therefore, artificial intelligence was applied to predict water levels by training a network on measured tide data for one lunar tidal cycle. A two-layer feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi over one lunar tidal cycle (2013) were used to train, validate, and test the neural networks. The trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give reasonably accurate estimates of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using the neural network trained with the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveal the following. The measured tidal data at Pir-Pau, Vashi, and Ulwe indicate a maximum amplification of the tide by about 10-20 cm, with a phase lag of 10-20 minutes, with reference to the tide at Apollo Bunder (Mumbai). The LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer. The tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels, which can be used to plan pumping operations at Pir-Pau and improve the ship schedule at Ulwe.
Abstract: Concurrent planning of project scheduling and material ordering has been increasingly addressed over the last decades as an approach to improving project execution costs. We therefore consider this problem in this paper, aiming to maximize the quality robustness of schedules while minimizing the relevant costs. A bi-objective mathematical model is developed to formulate the problem; moreover, the all-unit discount can be utilized for material purchasing. The problem is then solved by the E-constraint method, and the Pareto front is obtained for a variety of robustness values. Finally, the applicability and efficiency of the proposed model are tested on different numerical instances.
Abstract: The multiprocessor task scheduling problem for dependent and independent tasks is a computationally complex problem. Many methods have been proposed to achieve an optimal running time. As multiprocessor task scheduling is NP-hard in nature, many heuristics have been proposed that improve the makespan of the problem. However, due to its problem-specific nature, a heuristic method that provides the best results for one problem might not provide good results for another. Therefore, Simulated Annealing, a metaheuristic approach, is considered, as it can be applied to all types of problems. However, because of its many runs, the metaheuristic approach takes a large computation time. Hence, a hybrid approach is proposed by combining the Duplication Scheduling Heuristic with Simulated Annealing (SA), and the makespan results of simple Simulated Annealing and the hybrid approach are analyzed.
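A bare-bones sketch of Simulated Annealing on the independent-task case, assigning tasks to processors to reduce the makespan; the task times, temperature, and cooling rate are illustrative choices, not the paper's hybrid scheme:

```python
import math
import random

def makespan(assign, times, n_proc):
    """Finish time of the most loaded processor under the assignment."""
    loads = [0] * n_proc
    for task, proc in enumerate(assign):
        loads[proc] += times[task]
    return max(loads)

def anneal(times, n_proc, t0=10.0, cooling=0.995, steps=5000, seed=0):
    """Simulated Annealing over task-to-processor assignments."""
    rng = random.Random(seed)
    current = [rng.randrange(n_proc) for _ in times]
    best, t = list(current), t0
    for _ in range(steps):
        # Neighbor: move one random task to a random processor.
        cand = list(current)
        cand[rng.randrange(len(times))] = rng.randrange(n_proc)
        delta = makespan(cand, times, n_proc) - makespan(current, times, n_proc)
        # Accept improvements always; accept worsening moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if makespan(current, times, n_proc) < makespan(best, times, n_proc):
                best = list(current)
        t *= cooling
    return best

times = [4, 7, 2, 9, 3, 5, 6]          # hypothetical task processing times
best = anneal(times, n_proc=3)
print(makespan(best, times, 3))
```

The many evaluations in the loop are exactly the computation-time cost the abstract mentions, which the hybrid with a constructive heuristic is meant to reduce.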
Abstract: The agenda showing the scheduled times for performing certain tasks is known as a timetable. Timetabling is widely used in many areas, such as transportation, education, and production. Difficulties arise in ensuring that all tasks happen at the allocated time and place. Therefore, many researchers have devised various programming models to solve scheduling problems from several fields. However, studies on developing a general integer programming model for many timetabling problems remain limited. This thesis therefore describes the creation of a general model that solves different types of timetabling problems by considering their basic constraints. Initially, the common basic constraints from five different fields are selected and analyzed. A general basic integer programming model is created and then verified using a medium-sized, randomly generated data set that closely resembles realistic data. The mathematical software AIMMS, with CPLEX as the solver, is used to solve the model. The resulting model is significant in solving many timetabling problems easily, since it can be adapted to all types of scheduling problems that share the same basic constraints.
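The kind of basic constraints such a model captures can be illustrated on a toy instance: events that share a resource must not share a timeslot, and each slot has a capacity. The brute-force search below merely checks those constraints (an AIMMS/CPLEX model explores the space far more cleverly on realistic instances); all names and numbers are hypothetical:

```python
from itertools import product

# Hypothetical instance: 4 events, 3 timeslots; events sharing a resource
# (teacher/room/vehicle) must not share a slot -- the common "clash" constraint.
EVENTS = ["E1", "E2", "E3", "E4"]
SLOTS = [0, 1, 2]
SHARED_RESOURCE = [("E1", "E2"), ("E2", "E3")]   # clashing pairs
SLOT_CAPACITY = 2                                 # rooms available per slot

def feasible(assign):
    """Basic timetabling constraints: no clash, slot capacity respected."""
    slot_of = dict(zip(EVENTS, assign))
    if any(slot_of[a] == slot_of[b] for a, b in SHARED_RESOURCE):
        return False
    return all(assign.count(s) <= SLOT_CAPACITY for s in SLOTS)

# Enumerate every assignment of events to slots and keep the feasible ones.
solutions = [a for a in product(SLOTS, repeat=len(EVENTS)) if feasible(a)]
print(len(solutions), solutions[0])
```

In the integer programming formulation the same constraints become linear inequalities over binary assignment variables, which is what makes the model transferable across fields.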
Abstract: Concurrent planning of project scheduling and material ordering can provide more flexibility to the project scheduling problem, as the project execution costs can be improved. Hence, this issue is considered in this paper. To this end, a mixed-integer mathematical model is developed that captures the aforementioned flexibility, in addition to material quantity discounts and space availability restrictions. Moreover, the activity durations are treated as decision variables. Finally, the efficiency of the proposed model is tested on different instances, and the influence of the aforementioned parameters on the model's performance is investigated.
Abstract: Round addition differential fault analysis using operation skipping for lightweight block ciphers with on-the-fly key scheduling is presented. For 64-bit KLEIN, it is shown that only a single pair of correct and faulty ciphertexts is needed to derive the secret master key. For PRESENT, one correct ciphertext and two faulty ciphertexts are required to reconstruct the secret key. Furthermore, secret key extraction is demonstrated for the LBlock Feistel-type lightweight block cipher.
Abstract: In this paper, we propose two algorithms to optimally solve makespan and total completion time scheduling problems with learning effects and job-dependent delivery times in a single-machine environment. The delivery time is the extra time needed to eliminate adverse effects between the main processing and delivery to the customer. We introduce job-dependent delivery times for single-machine scheduling problems with a position-dependent learning effect, for the makespan and total completion time criteria. The results of the two algorithms proposed for solving each problem are compared with LINGO solutions for 50-job, 100-job, and 150-job problems. The proposed algorithms find the same results in a shorter time.
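The quantities involved can be made concrete: under a position-based learning effect, the job in position r takes p_j * r^a time (with a < 0), and the schedule ends when the last delivery completes. The sketch below, with hypothetical data and exponent, evaluates an SPT order against the given order; it is an illustration of the objective, not the paper's algorithms:

```python
def makespan(sequence, p, q, a=-0.1):
    """Completion time of the last delivery when jobs run in `sequence`.
    The actual time of the job in position r is p[j] * r**a (position-based
    learning effect); q[j] is that job's delivery time."""
    t, finish = 0.0, 0.0
    for r, j in enumerate(sequence, start=1):
        t += p[j] * r ** a                  # machine completion time
        finish = max(finish, t + q[j])      # its delivery may finish later
    return finish

p = [5.0, 3.0, 8.0, 2.0]    # hypothetical processing times
q = [1.0, 4.0, 2.0, 6.0]    # hypothetical job-dependent delivery times
spt = sorted(range(4), key=lambda j: p[j])   # shortest-processing-time order
print(round(makespan(spt, p, q), 3))
print(round(makespan([0, 1, 2, 3], p, q), 3))
```

Because delivery times are job-dependent, a simple priority order is not automatically optimal, which is what makes dedicated algorithms (and the LINGO comparison) worthwhile.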
Abstract: This paper evaluates accrual-based scheduling for the cloud in single- and multi-resource systems. Numerous organizations benefit from cloud computing by hosting their applications. The cloud model provides on-demand access to computing with potentially unlimited resources. Scheduling is the mapping of tasks to resources according to a certain optimality principle; it assigns tasks to virtual machines in accordance with adaptable times, in sequence, under transaction logic constraints. A good scheduling algorithm improves CPU utilization, turnaround time, and throughput. In this paper, three real-time cloud service scheduling algorithms for single and multiple resources are investigated. Experimental results show that the resource matching algorithm performs better for both single- and multi-resource scheduling than the benefit-first scheduling, Migration, and Checkpoint algorithms.