Abstract: The adoption of modern lightweight virtualization often comes with new threats and network vulnerabilities. This paper assesses these risks with a different approach: we study the behavior of a testbed built with tools such as Kernel-based Virtual Machine (KVM), Linux Containers (LXC), and Docker by performing stress tests on a platform where students experiment simultaneously with cyber-attacks, thus observing the impact on the campus network and identifying the most suitable solution for cyber-security learning. Interesting outcomes comparing these technologies can be found in the literature; it is, however, difficult to find results on the effects on the wider network where the experiments are carried out. Our work shows that other physical hosts and the faculty network were affected while these trials were performed. The problems found are discussed, along with security solutions and the adoption of new network policies.
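As a minimal illustration of the kind of stress trial described above, the sketch below drives a short CPU load inside a throwaway container from Python. The image, workload, and timing are assumptions for the sketch, not the authors' actual testbed.

```python
import subprocess

def cpu_stress_in_container(duration_s=60, workers=4):
    """Run a simple CPU stress loop inside a disposable Docker container."""
    # Busy-loop `workers` shell processes for `duration_s` seconds; --rm
    # discards the container when its main process (the sleep) exits.
    shell = (f"for i in $(seq {workers}); do (yes > /dev/null &); done; "
             f"sleep {duration_s}")
    subprocess.run(["docker", "run", "--rm", "alpine", "sh", "-c", shell],
                   check=True)

if __name__ == "__main__":
    cpu_stress_in_container(duration_s=30, workers=2)
```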
Abstract: Current server systems are responsible for critical applications that run on different infrastructures, such as the cloud, physical machines, and virtual machines. A common challenge these systems face is the variety of hardware faults that may occur, due to high load among other reasons, which translate into errors causing malfunctions or even server downtime. The hardware components responsible for most of these errors are the CPU, the RAM, and the hard drive (HDD). In this work, we investigate selected CPU, RAM, and HDD errors, observed or simulated in kernel ring buffer log files from GNU/Linux servers, and give a severity characterization for each error type. Understanding these errors is crucial for the efficient analysis of the kernel logs that are usually used for monitoring servers and diagnosing faults. In addition, to support this analysis, we present possible ways of simulating hardware errors in RAM and HDD, aiming to facilitate the testing of methods for detecting and tackling such issues on a server running GNU/Linux.
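A minimal sketch of the kind of log analysis this enables: scan the kernel ring buffer (via dmesg) and tag lines matching CPU, RAM, or HDD error signatures. The patterns and severity labels below are illustrative assumptions, not the paper's characterization.

```python
import re
import subprocess

# Hypothetical mapping from kernel-log patterns to error class and severity.
ERROR_PATTERNS = {
    "CPU": (re.compile(r"Machine Check Exception|mce:"), "critical"),
    "RAM": (re.compile(r"EDAC .*(CE|UE)|memory error"), "high"),
    "HDD": (re.compile(r"ata\d+.*(error|failed)|I/O error"), "high"),
}

def classify_kernel_log():
    """Scan dmesg output and collect lines that match hardware-error patterns."""
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    findings = []
    for line in log.splitlines():
        for part, (pattern, severity) in ERROR_PATTERNS.items():
            if pattern.search(line):
                findings.append((part, severity, line.strip()))
    return findings

if __name__ == "__main__":
    for part, severity, line in classify_kernel_log():
        print(f"[{part}/{severity}] {line}")
```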
Abstract: Cloud computing is one of the prominent technologies that lets a user change, configure, and access services online. It is a computing paradigm that saves the user cost and time, and its practical uses can be found in fields such as education, health, and banking. Because cloud computing is an internet-dependent technology, it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data users store at data centers. Scheduling plays a vital role in the cloud computing environment: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. This work analyzes job scheduling for cloud computing, using CloudSim 3.0.3 to model and simulate the task computations and distributed scheduling methods. We discuss job scheduling for the distributed processing environment and show that the proposed approach completes work in less time and at lower cost. Two load-balancing techniques are employed, the 'Throttled stack adjustment policy' and the 'Active VM load balancing policy', together with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
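To make the throttled policy's core idea concrete, here is a simplified sketch: each VM handles at most one job at a time, and excess jobs wait in a queue. This is an illustrative model only, not CloudSim 3.0.3 code or the paper's exact policy.

```python
from collections import deque

class ThrottledBalancer:
    """Simplified throttled load balancer: one job per VM, overflow queued."""

    def __init__(self, vm_ids):
        self.available = set(vm_ids)   # VMs with no job assigned
        self.busy = {}                 # vm_id -> job_id
        self.waiting = deque()         # jobs throttled for lack of a free VM

    def submit(self, job_id):
        if self.available:
            vm = self.available.pop()
            self.busy[vm] = job_id
            return vm
        self.waiting.append(job_id)
        return None                    # throttled: job must wait

    def complete(self, vm):
        self.busy.pop(vm, None)
        if self.waiting:               # hand the freed VM to the next job
            self.busy[vm] = self.waiting.popleft()
        else:
            self.available.add(vm)

balancer = ThrottledBalancer(["vm0", "vm1"])
print(balancer.submit("job1"), balancer.submit("job2"), balancer.submit("job3"))
balancer.complete("vm0")  # the queued job3 is dispatched to the freed VM
```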
Abstract: Distributed applications deployed on LEO satellites and ground stations require substantial communication between the different members of a constellation to overcome the earth coverage barriers imposed by GEOs. Applications running on LEO constellations suffer from the earth's line-of-sight blockage effect and need adequate lab testing before launching to space. We propose a scalable cloud-based network simulation framework to simulate the problems created by earth line-of-sight blockage. The framework utilizes cloud IaaS virtual machines to simulate the distributed software of LEO satellites and ground stations. A factorial ANOVA statistical analysis is conducted to measure the simulator's overhead on overall communication performance. The results show a very low simulator communication overhead. Consequently, the simulation framework is proposed as a candidate for testing LEO constellations with distributed software in the lab before space launch.
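A minimal geometric sketch of the blockage effect being simulated: a ground-station-to-satellite link exists only while the satellite sits above the station's local horizon. The spherical-earth model and elevation test below are simplifying assumptions, not the framework's actual propagation model.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def line_of_sight_blocked(sat_ecef_km, gs_ecef_km):
    """Return True if the earth blocks the ground-station-to-satellite link.

    Inputs are ECEF position vectors in km; a spherical earth is assumed.
    """
    to_sat = np.asarray(sat_ecef_km) - np.asarray(gs_ecef_km)
    zenith = np.asarray(gs_ecef_km) / np.linalg.norm(gs_ecef_km)
    # Elevation is positive when the satellite direction has a positive
    # component along the station's local zenith.
    elevation_sin = np.dot(to_sat, zenith) / np.linalg.norm(to_sat)
    return elevation_sin <= 0.0

# A LEO satellite 550 km overhead is visible; one on the far side is not.
gs = [EARTH_RADIUS_KM, 0.0, 0.0]
print(line_of_sight_blocked([EARTH_RADIUS_KM + 550.0, 0.0, 0.0], gs))   # False
print(line_of_sight_blocked([-(EARTH_RADIUS_KM + 550.0), 0.0, 0.0], gs))  # True
```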
Abstract: In this paper, we propose an automatic verification technique for software patches in user virtual environments on IaaS Cloud, to decrease the verification cost of patches. IaaS services have spread widely, and many users can now customize virtual machines on IaaS Cloud like their own private servers. Regarding software patches for the OS or middleware installed on virtual machines, users need to apply and verify these patches by themselves, which increases their operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for those environments from a test case DB, distributes patches to the virtual machines in the replicated environments, and conducts the test cases automatically on the replicated environments. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed that our proposed idea of a two-tier abstraction of software functions and test cases reduces the effort of test case creation. We also evaluated the automatic verification performance of environment replication, test case extraction, and test case execution.
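The outline below sketches that replicate-extract-patch-test flow in self-contained Python. The data model, the in-memory "test case DB", and the stubbed test runner are all illustrative assumptions, not the authors' OpenStack/Jenkins implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualEnv:
    name: str
    software: list                     # installed OS/middleware names
    patched: set = field(default_factory=set)

TEST_CASE_DB = {                       # two-tier idea: test cases keyed by
    "nginx": ["http_get_returns_200"],  # the software function they verify
    "mysql": ["select_1_succeeds"],
}

def replicate(env):
    """Stand-in for cloning the user's virtual environment."""
    return VirtualEnv(env.name + "-replica", list(env.software))

def run_case(replica, case):
    """Stand-in for a Jenkins test job; always passes in this sketch."""
    return True

def verify_patches(env, patches):
    replica = replicate(env)                         # 1. replicate environment
    cases = [c for s in replica.software             # 2. extract test cases
             for c in TEST_CASE_DB.get(s, [])]
    replica.patched.update(patches)                  # 3. distribute patches
    return {c: run_case(replica, c) for c in cases}  # 4. conduct test cases

print(verify_patches(VirtualEnv("user1", ["nginx"]), {"nginx-security-patch"}))
```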
Abstract: Virtualization and high performance computing have been discussed from a performance perspective in recent publications. We present and discuss a flexible and efficient approach to the management of virtual clusters. A virtual machine management tool is extended to function as a fabric for cluster deployment and management. We show how features such as saving the state of a running cluster can be used to avoid disruption. We also compare our approach to the traditional methods of cluster deployment and present benchmarks which illustrate the efficiency of our approach.
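A minimal sketch of the save-state feature discussed above, using the libvirt-python bindings; the connection URI, node names, and state directory are assumptions, and the paper's actual management tool may differ.

```python
import libvirt

def save_cluster(node_names, state_dir="/var/lib/cluster-states"):
    """Suspend a running virtual cluster to disk, node by node."""
    conn = libvirt.open("qemu:///system")
    try:
        for name in node_names:
            dom = conn.lookupByName(name)
            # dom.save() writes guest RAM/device state and stops the VM,
            # so the whole cluster can later resume without a reboot.
            dom.save(f"{state_dir}/{name}.state")
    finally:
        conn.close()

def restore_cluster(node_names, state_dir="/var/lib/cluster-states"):
    """Resume a previously saved virtual cluster."""
    conn = libvirt.open("qemu:///system")
    try:
        for name in node_names:
            conn.restore(f"{state_dir}/{name}.state")
    finally:
        conn.close()
```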
Abstract: The increasing demand for IT resources drives enterprises to the cloud as a cheap and scalable solution. Cloud computing achieves its promises by using the virtual machine as the basic unit of computation. However, a virtual machine's predefined settings might not be enough to handle a job's QoS requirements. This paper addresses the problem of mapping jobs with critical start deadlines to virtual machines that have predefined specifications; these virtual machines are hosted by physical machines and share a fixed amount of bandwidth. We propose an algorithm that uses the bandwidth of idle virtual machines to increase the quota of other virtual machines nominated as executors of urgent jobs. An empirical study is given to evaluate the impact of the proposed model on impatient jobs. The results show the importance of dynamic bandwidth allocation in virtualized environments and its effect on throughput.
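A sketch of the reallocation idea: start from each VM's predefined quota, collect what idle VMs leave unused, and split it among the executors of urgent jobs. The data model and the equal-split rule are assumptions, not the paper's exact algorithm.

```python
def reallocate_bandwidth(vms, total_bw_mbps):
    """Shift unused bandwidth from idle VMs to urgent-job executor VMs."""
    base = total_bw_mbps / len(vms)                  # predefined equal quota
    idle = [v for v in vms if v["state"] == "idle"]
    executors = [v for v in vms if v["state"] == "executor"]
    spare = sum(base - v["used_mbps"] for v in idle)
    bonus = spare / len(executors) if executors else 0.0
    return {v["name"]: base + (bonus if v in executors else 0.0) for v in vms}

vms = [{"name": "vm1", "state": "idle", "used_mbps": 10.0},
       {"name": "vm2", "state": "executor", "used_mbps": 100.0},
       {"name": "vm3", "state": "executor", "used_mbps": 100.0}]
# vm1 uses 10 of its 100 Mbps quota; vm2 and vm3 each gain 45 Mbps.
print(reallocate_bandwidth(vms, 300.0))
```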
Abstract: A virtualized and virtual approach is presented for academically preparing students to engage successfully, at a strategic level, with both the structured and unstructured concerns and measures in the area of cyber security and information assurance. The Master of Science in Cyber Security and Information Assurance (MSCSIA) is a professional degree for those who endeavor, through technical and managerial measures, to ensure the security, confidentiality, integrity, authenticity, control, availability, and utility of the world's computing and information systems infrastructure. The National University Cyber Security and Information Assurance program is offered as a Master's degree. The emphasis of the MSCSIA program uniquely includes hands-on academic instruction using virtual computers. In 2011, the NU facility became fully operational, using a system architecture that provides a Virtual Education Laboratory (VEL) accessible to both onsite and online students. The first student cohort completed their MSCSIA training on March 2, 2012, after fulfilling 12 courses for a total of 54 units of college credit. The rapid-pace scheduling of one course per month is immensely challenging, perpetually changing, and virtually multifaceted. This paper analyses these descriptive terms in consideration of the globalized penetration breaches present in today's world of cyber security. In addition, we present current NU practices to mitigate risks.
Abstract: Determining how many virtual machines a Linux host can run is a challenge; one of the toughest tasks is to find the balance among performance, density, and usability. The KVM hypervisor has become the most popular open-source full virtualization solution, and it supports several ways of running guests with more memory than the host really has. Given the large differences between minimum and maximum guest memory requirements, this paper presents initial results on the same-page merging, ballooning, and live migration techniques, aiming at optimum memory usage on a KVM-based cloud platform. Given the design of the initial experiments, the resulting data is a worthwhile reference for system administrators. The results from these experiments show that each method offers a different reliability trade-off.
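Two of those memory-overcommit knobs can be sketched briefly: same-page merging is enabled through the standard KSM sysfs files, and a running guest is shrunk or grown through its balloon driver via libvirt-python. The tuning values and domain name below are illustrative, not the paper's configuration.

```python
import libvirt

def enable_ksm():
    """Turn on kernel same-page merging via sysfs (root privileges needed)."""
    with open("/sys/kernel/mm/ksm/run", "w") as f:
        f.write("1")                      # start the KSM scanner thread
    with open("/sys/kernel/mm/ksm/pages_to_scan", "w") as f:
        f.write("100")                    # pages scanned per wake-up (example)

def balloon_guest(name, target_mib):
    """Adjust a running guest's memory through its balloon device."""
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(name)
        dom.setMemory(target_mib * 1024)  # libvirt expects KiB
    finally:
        conn.close()
```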
Abstract: The main mission of Ezilla is to provide a friendly interface for accessing virtual machines and quickly deploying a high-performance computing environment. Ezilla has been developed by the Pervasive Computing Team at the National Center for High-performance Computing (NCHC). Ezilla integrates Cloud middleware, virtualization technology, and a Web-based Operating System (WebOS) to form a virtual computer in a distributed computing environment. To scale up the dataset and improve speed, we propose a sensor observation system that handles a huge amount of data in a Cassandra database. The sensor observation system is based on Ezilla and stores raw sensor data in the distributed database. We adopt the Ezilla Cloud service to create virtual machines and log into them to deploy the sensor observation system. Integrating the sensor observation system with Ezilla makes it possible to quickly deploy an experiment environment and access a huge amount of data in a distributed database whose replication mechanism protects the data.
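A minimal sketch of that storage path with the DataStax Python driver: create a replicated keyspace and write one raw sensor reading. The keyspace, table schema, and replication factor are illustrative assumptions, not Ezilla's actual layout.

```python
from cassandra.cluster import Cluster  # DataStax Cassandra driver

def store_reading(host, sensor_id, ts, value):
    """Write one raw sensor reading into a replicated Cassandra table."""
    cluster = Cluster([host])
    session = cluster.connect()
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS sensors
        WITH replication = {'class': 'SimpleStrategy',
                            'replication_factor': 3}
    """)  # replication protects the data, as noted above
    session.execute("""
        CREATE TABLE IF NOT EXISTS sensors.raw (
            sensor_id text, ts timestamp, value double,
            PRIMARY KEY (sensor_id, ts))
    """)
    session.execute(
        "INSERT INTO sensors.raw (sensor_id, ts, value) VALUES (%s, %s, %s)",
        (sensor_id, ts, value))
    cluster.shutdown()
```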
Abstract: We present the development of a system of programs designed for the compilation and execution of applications for handheld computers. In the introduction, we describe the purpose of the project and its components. The next two sections present the first two components of the project (the scanner and parser generators). We then describe the Object Pascal compiler and the virtual machines for Windows and Palm OS. In conclusion, we emphasize the ways in which the project can be extended.
Abstract: This paper proposes a new approach to offering a private cloud service on HPC clusters. In particular, our approach relies on automatically scheduling users' customized environment requests as normal jobs in the batch system. After a virtualization request job finishes, its guest operating system is dismissed so that the compute node is released again for computing. We present initial work on the innovative integration of an HPC batch system and virtualization tools, aiming at a coexistence that satisfies the minimal-interference requirement of a traditional HPC cluster. Given the design of the initial infrastructure, the proposed effort has the potential to positively impact this synergy model. The results from the experiment show that the goal of provisioning customized cluster environments can indeed be fulfilled by using virtual machines, and that efficiency can be improved with proper setup and arrangement.
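The coexistence idea can be sketched as a job submission: the batch job boots a guest, holds the node for the requested walltime, then shuts the guest down so the node returns to ordinary computing. The PBS-style directives, virsh commands, and paths below are assumptions, not the paper's actual setup.

```python
import subprocess
import textwrap

def submit_vm_job(image, hours=2):
    """Schedule a customized virtual environment as a normal batch job."""
    script = textwrap.dedent(f"""\
        #!/bin/sh
        #PBS -l nodes=1,walltime={hours}:00:00
        virsh create {image}.xml        # boot the user's guest OS
        sleep {hours * 3600}            # hold the node for the request
        virsh shutdown {image}          # dismiss the guest, free the node
    """)
    # qsub accepts the job script on stdin in PBS/Torque-style systems.
    subprocess.run(["qsub"], input=script, text=True, check=True)
```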