Abstract: The 5th generation of mobile networks is a term used in
various research papers and projects to identify the next major phase
of mobile telecommunications standards. 5G wireless networks will
support higher peak data rates, lower latency, and better
connections with QoS guarantees.
In this article, we discuss various promising technologies for 5G
wireless communication systems, such as IPv6 support, the World Wide
Wireless Web (WWWW), Dynamic Adhoc Wireless Networks
(DAWN), Beam Division Multiple Access (BDMA), cloud
computing, cognitive radio technology and FBMC/OQAM.
This paper is organized as follows: first, we give an introduction
to 5G systems and present some goals and requirements of 5G. Next,
the basic differences between 4G and 5G are given, after which we
discuss the key technology innovations of 5G systems; finally, we
conclude in the last section.
Abstract: The rapid growth of Information Technologies (IT) has
had a huge influence on enterprises and has contributed to their
promotion and increasingly extensive use. Information
Technologies have to a large extent determined the processes taking
place in an enterprise; what is more, IT development has brought the
need to adopt a brand new approach to human resources management
in an enterprise. The use of IT in human resource management
(HRM) is of high importance due to the growing role of information
and information technologies. The aim of this paper is to evaluate the
use of information technologies in human resources management in
enterprises. These practices will be presented in the following areas:
recruitment and selection, development and training, employee
assessment, motivation, talent management, and personnel service.
Results of the conducted survey show the diversity of solutions applied in
particular areas of human resource management. Further development
in this area should be expected in the future, as well as the integration of
individual HRM areas, growing mobile-enabled HR processes and
their transfer into the cloud. The IT solutions in HRM presented here
are highly innovative, which is of great significance due to their
possible implementation in other enterprises.
Abstract: Cloud computing is the innovative and leading
information technology model for enabling convenient, on-demand
network access to a shared pool of configurable computing resources
that can be rapidly provisioned and released with minimal
management effort. In this paper, we aim at the development of a
workflow management system for cloud computing platforms, based
on our previous research on the dynamic allocation of cloud
computing resources and its workflow process. We took advantage of
HTML5 technology and developed a web-based workflow interface.
In order to enable the combination of many tasks running on the cloud
platform in sequence, we designed a mechanism and developed an
execution engine for workflow management on clouds. We also
established a prediction model, integrated with the job queuing
system, to estimate the waiting time and cost of individual tasks on
different computing nodes, thereby helping users achieve maximum
performance at the lowest cost. The proposed effort has the potential
to provide an efficient, resilient and elastic environment
for cloud computing platforms. This development also helps boost user
productivity by providing a flexible workflow interface that lets users
design and control their tasks' flow from anywhere.
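The waiting-time and cost estimation described above can be sketched as a simple per-node model; the node parameters, the linear queue-drain formula, and the deadline-constrained selection below are illustrative assumptions, not the paper's actual prediction model.

```python
# Illustrative sketch: estimate waiting time and cost of a task on each
# node, then pick the cheapest node that meets a deadline. The node
# specs and the simple queue/cost formulas are demonstration assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    speed: float           # work units processed per hour
    queued_work: float     # work units already waiting in this node's queue
    price_per_hour: float  # cost of one hour of compute on this node

def estimate(node: Node, task_work: float):
    """Return (waiting_time_h, run_time_h, cost) for task_work units."""
    wait = node.queued_work / node.speed   # time to drain the existing queue
    run = task_work / node.speed
    cost = run * node.price_per_hour       # user pays only for own run time
    return wait, run, cost

def best_node(nodes, task_work, deadline_h):
    """Cheapest node whose wait + run time fits inside the deadline."""
    feasible = []
    for n in nodes:
        wait, run, cost = estimate(n, task_work)
        if wait + run <= deadline_h:
            feasible.append((cost, n.name))
    return min(feasible)[1] if feasible else None

nodes = [Node("fast", speed=10, queued_work=50, price_per_hour=6.0),
         Node("slow", speed=2, queued_work=0, price_per_hour=1.0)]
print(best_node(nodes, task_work=20, deadline_h=12))  # slow
```

With a loose deadline the cheap slow node wins; tightening the deadline to 8 hours makes the fast node the only feasible choice, which is the performance/cost trade-off the prediction model surfaces to users.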
Abstract: Moving into a new era of healthcare, new tools and
devices are developed to extend and improve health services, such as
remote patient monitoring and risk prevention. In this context, the
Internet of Things (IoT) and Cloud Computing present great
advantages by providing remote and efficient services, as well as
cooperation between patients, clinicians, researchers and other health
professionals. This paper focuses on patients suffering from bipolar
disorder, a brain disorder that belongs to a group of conditions
called affective disorders, which is characterized by great mood
swings. We exploit the advantages of Semantic Web and Cloud
Technologies to develop a patient monitoring system to support
clinicians. Based on intelligent filtering of evidence-based knowledge and
individual-specific information, we aim to provide treatment
notifications and recommended function tests at appropriate times, or
to generate alerts for serious mood changes and patient non-response
to treatment. We propose an architecture as the back-end
part of a cloud platform for IoT, intertwining intelligent devices
with patients’ daily routine and clinicians’ support.
Abstract: In order to protect data privacy, images with sensitive or
private information need to be encrypted before being outsourced to
the cloud. However, this causes difficulties in image retrieval and data
management. A secure image retrieval method based on orthogonal
decomposition is proposed in this paper. The image is divided into two
different components, on which encryption and feature extraction are
performed separately. As a result, the cloud server can extract features from
an encrypted image directly and compare them with the features of the
queried images, so that the user can obtain the desired image. Unlike
other methods, the proposed method imposes no special requirements
on the encryption algorithm. Experimental results show that the proposed
method achieves both better security and better retrieval precision.
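The idea of decomposing an image into two orthogonal components, encrypting one and extracting features from the other, can be sketched with a toy example; the Walsh-Hadamard basis, the 4-pixel "image", and the additive masking below are illustrative stand-ins for the paper's actual decomposition and cipher.

```python
# Illustrative orthogonal decomposition: split a signal into two
# orthogonal components, encrypt one, extract features from the other.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project(x, basis):
    """Project x onto the subspace spanned by an orthonormal basis."""
    out = [0.0] * len(x)
    for u in basis:
        c = dot(x, u)
        out = [o + c * ui for o, ui in zip(out, u)]
    return out

# Orthonormal (Walsh-Hadamard) basis of R^4, split into two subspaces:
# one component is encrypted, the other is kept for feature extraction.
h = 0.5
U = [[h, h, h, h], [h, h, -h, -h]]
V = [[h, -h, h, -h], [h, -h, -h, h]]

image = [12.0, 7.0, 3.0, 9.0]     # toy 4-pixel "image"
enc_part = project(image, U)      # this component gets encrypted
feat_part = project(image, V)     # features are computed from this one

# The two orthogonal components reconstruct the original exactly, and
# encrypting enc_part (here, crude additive masking) never disturbs
# feat_part, so the server can match features without the content.
recon = [a + b for a, b in zip(enc_part, feat_part)]
masked = [v + 1000.0 for v in enc_part]   # toy stand-in for encryption
print(recon)   # [12.0, 7.0, 3.0, 9.0]
```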
Abstract: These days, customer satisfaction plays a vital role in
any business. When a customer searches for a product, a significant
amount of irrelevant information is often returned, leading to customer
dissatisfaction. To provide exactly the relevant information on the
searched product, we propose a model of KaaS (Knowledge as
a Service), which pre-processes the information using a decision-making
paradigm based on multi-agents.
Information obtained from various sources is used to derive
knowledge, which is linked to the cloud to capture new ideas. The
main focus of this work is to acquire relevant information
(knowledge) related to a product, convert this knowledge into a
service for customer satisfaction, and deploy it on the cloud.
To achieve these objectives, we have opted to use multi-agents.
They communicate and interact with each other,
manipulate information, provide knowledge, and make decisions. The
paper discusses KaaS as an intelligent approach to knowledge
acquisition.
Abstract: Parabolic solar trough systems have seen limited
deployments in cold northern climates as they are more suitable for
electricity production in southern latitudes. A numerical dynamic
model is developed to simulate troughs installed in cold climates and
validated using a parabolic solar trough facility in Winnipeg. The
model is developed in Simulink and will be utilized to simulate a trigeneration
system for heating, cooling and electricity generation in
remote northern communities. The main objective of this simulation
is to obtain operational data of solar troughs in cold climates and use
the model to determine ways to improve the economics and address
cold weather issues.
In this paper the validated Simulink model is applied to simulate a
solar-assisted absorption cooling system along with electricity
generation using an Organic Rankine Cycle (ORC) and thermal storage.
A control strategy is employed to distribute the heated oil from solar
collectors among the above three systems considering the
temperature requirements. This modelling provides dynamic
performance results using measured meteorological data recorded
every minute at the solar facility location. The purpose of this
modeling approach is to accurately predict system performance at
each time step considering the solar radiation fluctuations due to
passing clouds. Optimization of the controller in cold temperatures is
another goal of the simulation, for example to minimize heat losses in
winter when energy demand is high and solar resources are low.
The solar absorption cooling is modeled to use the generated heat
from the solar trough system and provide cooling in summer for a
greenhouse which is located next to the solar field.
The results of the simulation are presented for a summer day in
Winnipeg which includes comparison of performance parameters of
the absorption cooling and ORC systems at different heat transfer
fluid (HTF) temperatures.
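The temperature-based distribution of heated oil among the three subsystems can be sketched as a simple dispatch rule; the threshold temperatures and the priority order below are assumptions for illustration, not the paper's actual control strategy.

```python
# Illustrative dispatch of heated HTF among the ORC, absorption
# chiller, and thermal storage by temperature requirement. The
# threshold values and priority order are demonstration assumptions.

ORC_MIN_T = 250.0      # assumed minimum HTF temperature for ORC (deg C)
CHILLER_MIN_T = 90.0   # assumed minimum for the absorption chiller (deg C)

def dispatch(htf_temp_c: float, cooling_demand: bool) -> str:
    """Route HTF flow based on its temperature and current demand."""
    if htf_temp_c >= ORC_MIN_T:
        return "ORC"                 # hot enough for power generation
    if cooling_demand and htf_temp_c >= CHILLER_MIN_T:
        return "absorption_chiller"  # medium-grade heat drives cooling
    return "storage"                 # otherwise bank the heat

print(dispatch(300.0, cooling_demand=True))   # ORC
```

Run at each simulation time step, such a rule reproduces the qualitative behaviour described above: power generation when the collectors run hot, greenhouse cooling from medium-grade heat in summer, and storage charging otherwise.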
Abstract: Cloud computing is a new technology in industry and
academia. The technology has grown and matured in the last half decade
and has proven its significant role in the changing environment of IT
infrastructure, where cloud services and resources are offered over the
network. Cloud technology enables users to use services and
resources without being concerned about the technical implications of
the technology. Substantial research work has been performed
on the usage of cloud computing in educational institutes, and the
majority of it provides cloud services over high-end blade servers
or other high-end CPUs. In contrast, this paper proposes a new stack
called “CiCKAStack” which provides cloud services over unutilized
computing resources, namely commodity computers.
“CiCKAStack” provides IaaS and PaaS using the underlying commodity
computers. This will not only increase the utilization of existing
computing resources but also provide an organized file system, on-demand
computing resources, and a design and development
environment.
Abstract: Meeting the growth in demand for digital services
such as social media, telecommunications, and business and cloud
services requires large scale data centres, which has led to an increase
in their end-use energy demand. Generally, over 30% of data centre
power is consumed by the necessary cooling overhead, so energy
consumption can be reduced by improving cooling efficiency. Air and liquid
can both be used as cooling media for the data centre. Traditional
data centre cooling systems use air; however, liquid cooling is recognised as a
promising method that can handle more densely packed data
centres. Liquid cooling can be classified into three methods: rack heat
exchanger, on-chip heat exchanger and full immersion of the
microelectronics. This study quantifies the improvement in heat
transfer specifically for the case of immersed microelectronics by
varying the CPU and heat sink location. Immersion of the server is
achieved by filling the gap between the microelectronics and a water
jacket with a dielectric liquid which convects the heat from the CPU
to the water jacket on the opposite side. Heat transfer is governed by
two physical mechanisms: natural convection in the fixed
enclosure filled with dielectric liquid, and forced convection in the
water that is pumped through the water jacket. The model in this
study is validated with published numerical and experimental work
and shows good agreement with previous work. The results show that
the heat transfer performance and Nusselt number (Nu) are improved
by 89% by placing the CPU and heat sink at the bottom of the
microelectronics enclosure.
Abstract: This paper describes the problem of building secure
computational services for encrypted information in cloud
computing without decrypting the encrypted data; it thereby addresses
the aspiration for a computational encryption model that can enhance
the security of big data with respect to users' privacy,
confidentiality, and availability. The cryptographic model
applied for the computational processing of the encrypted data is the
fully homomorphic encryption scheme. We contribute a theoretical
presentation of high-level computational processes based
on number theory and algebra that can easily be integrated and
leveraged in cloud computing, with detailed theoretical mathematical
concepts underlying fully homomorphic encryption models. This
contribution supports the full implementation of a big-data-analytics-based
cryptographic security algorithm.
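The core idea of computing on ciphertexts can be illustrated with a toy Paillier scheme; note this is only *additively* homomorphic, a much weaker relative of the fully homomorphic schemes the paper concerns, and the tiny key sizes below are completely insecure demonstration values.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Illustrative stand-in only -- fully homomorphic schemes also support
# multiplication on ciphertexts; these key sizes are insecure.
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p=293, q=433):          # demo primes; real keys are ~1024-bit
    n = p * q
    g = n + 1                      # standard simple choice of generator
    lam = lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)  # modular inverse (Python 3.8+)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)     # random blinding factor, coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 17), encrypt(pk, 25)
# Homomorphic step: multiplying ciphertexts adds the plaintexts,
# so the server computes on data it cannot read.
c_sum = (c1 * c2) % (pk[0] ** 2)
print(decrypt(pk, sk, c_sum))  # 42
```

The server holding `c1` and `c2` never sees 17 or 25, yet produces a ciphertext of their sum; fully homomorphic schemes extend this to arbitrary arithmetic circuits.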
Abstract: In this paper, we propose an automatic verification
technology of software patches for user virtual environments on IaaS
Cloud to decrease the verification costs of patches. In recent years, IaaS
services have spread, and many users can customize virtual
machines on IaaS Cloud like their own private servers. Regarding
software patches for the OS or middleware installed on virtual machines,
users need to apply and verify these patches by themselves, which
increases their operation costs. Our proposed method replicates
user virtual environments, extracts verification test cases for user
virtual environments from test case DB, distributes patches to virtual
machines on replicated environments and conducts those test cases
automatically on the replicated environments. We have implemented the
proposed method on OpenStack using Jenkins and confirmed its
feasibility. Using the implementation, we confirmed the reduction in
test case creation effort achieved by our proposed idea of a 2-tier abstraction
of software functions and test cases. We also evaluated the automatic
verification performance of environment replication, test case
extraction and test case execution.
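The replicate-extract-verify flow can be sketched in miniature; the package names, the in-memory test case DB, and the 2-tier mapping below are placeholders, while the real system drives OpenStack environments and Jenkins jobs.

```python
# Illustrative sketch of the verification flow: pick test cases that
# match the software installed in the replicated environment, apply
# the patch there, then run the tests. All names are placeholders.

# Tier 2: abstract software functions mapped to reusable test cases.
TEST_CASE_DB = {
    "web_server": ["GET / returns 200", "TLS handshake succeeds"],
    "database": ["INSERT/SELECT round-trip", "replication lag < 1s"],
}

# Tier 1: concrete software mapped to abstract functions, so test
# cases are written once per function, not once per package.
SOFTWARE_TO_FUNCTION = {"apache2": "web_server", "mysql": "database"}

def extract_test_cases(installed_software):
    """Map installed packages to functions, then to test cases."""
    cases = []
    for pkg in installed_software:
        func = SOFTWARE_TO_FUNCTION.get(pkg)
        if func:
            cases.extend(TEST_CASE_DB[func])
    return cases

def verify_patch(installed_software, apply_patch, run_test):
    """Patch the replicated environment, then run the matched tests."""
    apply_patch()   # patch only the replica, not the user's live VM
    return [(tc, run_test(tc)) for tc in extract_test_cases(installed_software)]

results = verify_patch(["apache2", "mysql"],
                       apply_patch=lambda: None,   # stub for the replica
                       run_test=lambda tc: True)   # stub test runner
print(len(results))   # 4
```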
Abstract: Neurons in the nervous system communicate with
each other by producing electrical signals called spikes. To
investigate the physiological function of the nervous system, it is essential
to study the activity of neurons by detecting and sorting spikes in the
recorded signal. In this paper, a method is proposed for
the spike sorting problem based on nonlinear modeling
of spikes using an exponential autoregressive model. A genetic
algorithm is utilized for model parameter estimation. In this regard,
selected model coefficients are used as features for sorting
purposes. For the optimal selection of model coefficients, a self-organizing
feature map is used. The results show that modeling of spikes with
nonlinear autoregressive model outperforms its linear counterpart.
Moreover, the features extracted from the coefficients of the exponential
autoregressive model are better than wavelet-based features
and yield more compact and well-separated clusters. In the case of
spikes differing in small-scale structures, where principal component
analysis fails to produce separated clusters in the feature space, the
proposed method obtains well-separated clusters, removing
the necessity of applying complex classifiers.
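The exponential autoregressive (ExpAR) model underlying the method can be sketched as follows; the model order, the coefficient values, and the simple comparison below are illustrative (the paper estimates the coefficients with a genetic algorithm, which is omitted here).

```python
# Illustrative exponential autoregressive (ExpAR) model of order 2:
#   x_t = sum_i (a_i + b_i * exp(-g * x_{t-1}**2)) * x_{t-i} + noise.
# The fitted coefficients (a_i, b_i, g) serve as sorting features.
import math
import random

def simulate(params, n=200, seed=1):
    a1, a2, b1, b2, g = params
    rng = random.Random(seed)
    x = [0.1, -0.1]
    for _ in range(n):
        w = math.exp(-g * x[-1] ** 2)   # state-dependent gain term
        nxt = (a1 + b1 * w) * x[-1] + (a2 + b2 * w) * x[-2]
        x.append(nxt + rng.gauss(0, 0.01))
    return x

def one_step_mse(params, series):
    """One-step prediction error of an ExpAR model on a series."""
    a1, a2, b1, b2, g = params
    err = 0.0
    for t in range(2, len(series)):
        w = math.exp(-g * series[t - 1] ** 2)
        pred = (a1 + b1 * w) * series[t - 1] + (a2 + b2 * w) * series[t - 2]
        err += (series[t] - pred) ** 2
    return err / (len(series) - 2)

true_params = (0.5, -0.3, 0.8, 0.1, 2.0)   # illustrative values
series = simulate(true_params)
# Coefficients of the generating model predict the waveform far better
# than mismatched ones -- this gap is what makes the coefficients
# usable as discriminative features for sorting different spike shapes.
print(one_step_mse(true_params, series) <
      one_step_mse((0.5, 0.3, 0.8, 0.1, 2.0), series))
```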
Abstract: Cloud computing (CC) has already gained overall
appreciation in research and practice. While the willingness to
integrate cloud services into various IT environments remains unbroken,
CC procurement processes have so far mostly run in an unorganized
and non-standardized way. In practice, a sufficiently specific yet
applicable business process for the important acquisition phase is
often lacking, and research does not yet appropriately remedy this
deficiency. Therefore, this paper introduces a field-tested
approach for CC procurement. Based on an extensive literature
review and augmented by expert interviews, we designed a model
that is validated and further refined through an in-depth real-life case
study. For the detailed process description, we apply the event-driven
process chain (EPC) notation. The valuable insights gained from the
case study may help CC research shift to a more socio-technical
focus. For practice, in addition to giving useful organizational instructions,
we provide extended checklists and lessons learned.
Abstract: In-memory database systems are becoming popular
due to the availability and affordability of sufficiently large RAM and
processors in modern high-end servers with the capacity to manage
large in-memory database transactions. While fast and reliable inmemory
systems are still being developed to overcome cache misses,
CPU/IO bottlenecks and distributed transaction costs, disk-based data
stores still serve as the primary persistence. In addition, with the
recent growth in multi-tenancy cloud applications and associated
security concerns, many organisations consider the trade-offs and
continue to require the fast and reliable transaction processing of disk-based
database systems as an available choice. For these
organisations, the only way of increasing throughput is by improving
the performance of disk-based concurrency control. This warrants a
hybrid database system with the ability to selectively apply
enhanced disk-based data management within the context of in-memory
systems, which would help improve overall throughput.
The general view is that in-memory systems substantially
outperform disk-based systems. We question this assumption and
examine how a modified variation of access invariance, which we call
enhanced memory access (EMA), can be used to allow very high
levels of concurrency in the pre-fetching of data in disk-based
systems. We demonstrate how this prefetching can yield close to
in-memory performance, paving the way for
improved hybrid database systems. This paper proposes a novel EMA
technique and presents a comparative study between disk-based EMA
systems and in-memory systems running on hardware configurations
of equivalent power in terms of the number of processors and their
speeds. The results of the experiments conducted clearly substantiate
that, when used in conjunction with all concurrency control
mechanisms, EMA can increase the throughput of disk-based systems
to levels quite close to those achieved by in-memory systems. These
promising results show that enhanced disk-based
systems help improve hybrid data management within the
broader context of in-memory systems.
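The intuition behind access-invariance-based prefetching can be sketched with a toy cost model; the latency numbers and cache abstraction below are arbitrary illustrations of the idea, not the paper's proposed EMA algorithm.

```python
# Conceptual sketch of prefetching via access invariance: if a
# transaction's read set is predictable (invariant between a pre-run
# and the real run), its pages can be pulled into memory before
# execution, so the execution itself never blocks on disk.

DISK_COST, RAM_COST = 100, 1   # illustrative access latencies

def run_transaction(read_set, cache):
    """Execute reads, paying disk cost only on cache misses."""
    cost = 0
    for page in read_set:
        cost += RAM_COST if page in cache else DISK_COST
        cache.add(page)        # page is memory-resident afterwards
    return cost

def run_with_prefetch(read_set, cache):
    """EMA-style: warm the cache with the predicted read set first."""
    cache.update(read_set)     # prefetch overlaps with other work
    return run_transaction(read_set, cache)

cold = run_transaction({"p1", "p2", "p3"}, set())
warm = run_with_prefetch({"p1", "p2", "p3"}, set())
print(cold, warm)   # 300 3
```

Because the prefetch pass can be overlapped with other transactions, the disk-based execution path sees near in-memory access costs, which is the effect the comparative study measures at scale.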
Abstract: Cloud computing has provided the impetus for change
in the demand, sourcing, and consumption of IT-enabled services.
The technology has developed from an emerging trend into a ‘must-have’.
Many organizations harnessed the quick wins of cloud
computing within the last five years but are now reaching a plateau
when it comes to sustainable savings and performance. This study
aims to investigate what is needed from an organizational perspective
to make cloud computing a sustainable success. The study was
carried out in Germany among senior IT professionals, both in
management and delivery positions. Our research shows that IT
executives must be prepared to realign their IT workforce to sustain
the advantage of cloud computing for today and the near future.
While new roles will undoubtedly emerge, roles alone cannot ensure
the success of cloud deployments. What is needed is a change in the
IT workforce’s business behaviour or, put more simply, in the ways in
which IT personnel work. The study gives clear guidance on which
dimensions of employees’ working behaviour need to be adapted.
The practical implications are drawn from a series of semi-structured
interviews, resulting in a high-level workforce enablement plan.
Lastly, the paper elaborates on tools and gives clear guidance on which
pitfalls might arise along the proposed workforce enablement
process.
Abstract: In the cloud computing hierarchy, IaaS is the lowest
layer; all other layers are built over it. It is thus the most important
layer of the cloud and requires particular attention. Along with its advantages,
IaaS faces some serious security-related issues. Security mainly
focuses on integrity, confidentiality and availability. Cloud
computing facilitates sharing resources inside as well as outside of
the cloud. On the other hand, the cloud is still not in a state to
guarantee 100% data security. The cloud provider must ensure that end
users/clients receive an adequate quality of service. In this report we describe
possible aspects of cloud-related security.
Abstract: Botnets are one of the most serious and widespread
cyber threats. Today botnets facilitate many
cybercrimes, especially financial crime and theft of top-secret
information. Botnets are available for lease on the market and are utilized by
cybercriminals to launch massive attacks such as DDoS, click fraud
and phishing. Several large institutions, hospitals, banks,
government organizations and many social networks, such as Twitter
and Facebook, have become targets of botmasters. Recently,
noteworthy research has been carried out to detect bots, C&C
channels, botnets and botmasters. Using many sophisticated
technologies, botmasters have made the botnet a titan of the cyber world.
Botmasters have posed innumerable challenges to researchers
in the detection of botnets. In this paper we present a
survey of different types of botnet C&C channels and also provide a
comparison of various botnet categories. Finally, we hope that our
survey will create awareness for forthcoming botnet research
endeavors.
Abstract: Cloud service brokering is a new service paradigm that
provides interoperability and portability of applications across multiple
Cloud providers. In this paper, we design a Cloud service brokerage
system, anyBroker, supporting integrated service provisioning and
SLA-based service lifecycle management. For the system design, we
introduce the system concept and overall architecture, the details of the main
components, and use cases of the primary operations in the system. These
features ease Cloud service providers’ and customers’ concerns and
support a new open Cloud service market, increasing Cloud service
profit and promoting the Cloud service ecosystem in Cloud-computing-related
areas.
Abstract: This paper presents a real-time technique for the visualization
and filtering of classified LiDAR point clouds. The visualization is
capable of displaying filtered information organized into layers by the
classification attribute saved within LiDAR datasets. We explain the
data structure and data management used, which enable real-time
presentation of layered LiDAR data. Real-time visualization is
achieved with LOD optimization based on the distance from the
observer, without loss of quality. The filtering process is done in two
steps, is executed entirely on the GPU, and is implemented using
programmable shaders.
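The combination of distance-based LOD decimation with layer filtering can be sketched on the CPU; the stride table and layer names below are placeholders, whereas the paper performs this on the GPU with programmable shaders.

```python
# Illustrative distance-based LOD selection for a layered point cloud:
# far points are decimated more aggressively, and points are drawn
# only if their classification layer is enabled. The stride table and
# layer codes are placeholders for demonstration.

LOD_STRIDE = [(50.0, 1), (200.0, 4), (float("inf"), 16)]  # (max_dist, keep 1-in-N)

def stride_for(distance):
    for max_dist, stride in LOD_STRIDE:
        if distance <= max_dist:
            return stride
    return LOD_STRIDE[-1][1]

def filter_points(points, enabled_layers):
    """points: list of (index, distance, layer); returns indices to draw."""
    visible = []
    for idx, dist, layer in points:
        if layer not in enabled_layers:
            continue                # layer filtered out entirely
        if idx % stride_for(dist) == 0:
            visible.append(idx)     # decimate by distance-based stride
    return visible

pts = [(i, 10.0 if i < 8 else 300.0, "ground" if i % 2 == 0 else "vegetation")
       for i in range(32)]
print(filter_points(pts, {"ground"}))   # [0, 2, 4, 6, 16]
```

Near "ground" points survive at full density while the distant block is thinned sixteen-fold; on the GPU the same per-point test runs in a shader, so the decision costs no CPU round trip.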
Abstract: This research proposes a novel reconstruction protocol
for restoring missing surfaces and low-quality edges and shapes in
photos of artifacts at historical sites. The protocol starts with the
extraction of a point cloud. This extraction process is based on
four subordinate algorithms, which differ in their robustness and in the
amount of resultant points. Moreover, they apply different, but
complementary, degrees of accuracy to related features and to the way
they build a quality mesh. The performance of our proposed protocol
is compared with other state-of-the-art algorithms and toolkits. The
statistical analysis shows that our algorithm significantly outperforms
its rivals in the quality of the resulting object files used to reconstruct
the desired model.