Abstract: Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nanoparticles under the influence of an electric field in an Electrical Mobility Spectrometer (EMS) reveal the size distribution of these particles.
The accuracy of this measurement is influenced by the flow conditions, geometry, electric field, and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed, and the combined field-diffusion charging process of injected polydisperse aerosol particles was numerically simulated as a prerequisite for the study of a multichannel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced into the EMS. The flow pattern and
electric field in the EMS were simulated using Computational Fluid
Dynamics (CFD) to obtain particle trajectories in the device and
therefore to calculate the signal reported by each electrometer.
According to the output signals (resulting from the impact of particles and the transfer of their charges as currents), we proposed a
modification to the size of detecting rings (which are connected to
electrometers) in order to evaluate particle size distributions more
accurately. Based on the capability of the system to transfer
information content about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Shannon entropy is the "average amount of information contained in an event, sample, or character extracted from a data stream".
Evaluating the responses (signals) obtained with various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of the injected particles was the modified one. It was also the configuration with the maximum entropy. A reasonable consistency was also
observed between the accuracy of the predictions and the entropy
content of each configuration. In this method, entropy is extracted
from the transfer matrix of the instrument for each configuration.
Ultimately, various clouds of particles were introduced into the simulations, and the predicted size distributions were compared with the exact size distributions.
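For reference, the Shannon entropy invoked above has the standard form below; how the probabilities p_i are extracted from the normalized transfer matrix of a given ring configuration is the paper's own construction and is only summarized here, with larger H indicating that the instrument response carries more information about the injected size distribution.

H = -\sum_{i=1}^{n} p_i \log_2 p_i , \qquad p_i \ge 0 , \quad \sum_{i=1}^{n} p_i = 1 .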
Abstract: This study proposes a method for estimating the stress distribution of beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices from the raw TLS data, to satisfy suitable conditions, and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimation of the stress distribution of a
structural member or the whole structure is one of the important
factors in the safety evaluation of a structure. Existing sensors, including the ESG (electric strain gauge) and LVDT (linear variable differential transformer), are contact-type sensors that must be installed on the structural members, and they have various limitations such as the need for separate space where network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS system of LiDAR (light detection and ranging), which can measure the displacement of a target at long range without being influenced by the surrounding environment and can also capture the whole shape of the structure, has been applied to the field of
structural health monitoring. The important characteristic of TLS measurement is the formation of point clouds containing many points with local coordinates. Point clouds do not follow a linear distribution but a dispersed shape; thus, interpolation is essential for analyzing them. Through the formation of averaged lattices and the application of CSSI to the raw data, a method that can estimate the displacement of a simple beam was developed. The developed method can also be extended to calculate the strain and, finally, applied to estimate the stress distribution of a structural member. To verify the validity of the
method, a loading test on a simple beam was conducted and measured with TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
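As an illustration of the CSSI step described above, the following minimal Python sketch (not the authors' code) fits a cubic smoothing spline to lattice-averaged deflections and converts its second derivative into a bending stress estimate; the Young's modulus E, extreme-fibre distance c, and smoothing factor are assumed placeholder values.

# Minimal sketch: estimate bending stress of a simply supported beam from
# scanned deflection points using a cubic smoothing spline.
import numpy as np
from scipy.interpolate import UnivariateSpline

def stress_from_deflection(x, w, E=200e9, c=0.05, smoothing=1e-6):
    """Fit w(x) with a cubic smoothing spline and return bending stress
    sigma(x) = E * c * w''(x) along the beam axis."""
    order = np.argsort(x)
    spline = UnivariateSpline(x[order], w[order], k=3, s=smoothing)
    curvature = spline.derivative(n=2)(x[order])   # approximate w''(x)
    return x[order], E * c * curvature

# Example with synthetic lattice-averaged deflections (metres).
x = np.linspace(0.0, 3.0, 30)
w = -0.002 * np.sin(np.pi * x / 3.0)               # stand-in for TLS lattice data
xs, sigma = stress_from_deflection(x, w)
print(sigma[:5])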
Abstract: Cloud computing has emerged as a promising direction for cost-efficient and reliable service delivery across data
communication networks. The dynamic location of service facilities
and the virtualization of hardware and software elements are stressing
the communication networks and protocols, especially when data
centres are interconnected through the internet. Although the computing aspects of cloud technologies have been largely investigated, less attention has been devoted to the networking services. Cloud computing has enabled elastic and transparent access to infrastructure services without involving IT operating overhead. Virtualization has been a
key enabler for cloud computing. While resource virtualization and
service abstraction have been widely investigated, networking in
the cloud remains a difficult puzzle. Even though the network plays a significant role in facilitating hybrid cloud scenarios, it has not received much attention in the research community until recently. We propose Network
as a Service (NaaS), which forms the basis of unifying public and
private clouds. In this paper, we identify various challenges in the adoption of hybrid clouds and discuss the design and implementation of a cloud platform.
Abstract: Nowadays, cloud environments are becoming a necessity for companies; this new technology gives them the opportunity to access their data anywhere and anytime. It also provides optimized and secured access to resources and more security for the data stored on the platform. However, some companies do not trust cloud providers, because they believe providers can access and modify confidential data such as bank accounts. Much work has been done in this context, concluding that encryption methods applied by providers ensure confidentiality, but overlooking the fact that cloud providers can decrypt the confidential resources. The best solution here is to apply some operations to the data before sending them to the cloud provider, with the aim of making them unreadable to the provider. The principal idea is to allow users to protect their data with their own methods. In this paper, we present our approach and show that it is more efficient, in terms of execution time, than some existing methods. This work aims at enhancing the quality of service of providers and ensuring the trust of customers.
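A minimal sketch of the general idea (making data unreadable before it reaches the provider) is given below; it uses the third-party Python cryptography package rather than the authors' own method, and upload_to_cloud is a hypothetical placeholder.

# Minimal sketch: the provider only ever stores ciphertext; the key stays with the user.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # kept by the user, never sent to the provider
cipher = Fernet(key)

plaintext = b"IBAN: ..."             # confidential data, e.g. bank account details
ciphertext = cipher.encrypt(plaintext)

# upload_to_cloud("bank.dat", ciphertext)   # hypothetical call; provider sees only unreadable bytes
restored = cipher.decrypt(ciphertext)       # only the key holder can recover the data
assert restored == plaintext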
Abstract: Cloud computing is the innovative and leading
information technology model for enabling convenient, on-demand
network access to a shared pool of configurable computing resources
that can be rapidly provisioned and released with minimal
management effort. In this paper, we aim at the development of a workflow management system for cloud computing platforms, based on our previous research on the dynamic allocation of cloud computing resources and their workflow processes. We took advantage of HTML5 technology and developed a web-based workflow interface.
In order to enable the combination of many tasks running on the cloud
platform in sequence, we designed a mechanism and developed an
execution engine for workflow management on clouds. We also established a prediction model, integrated with the job queuing system, to estimate the waiting time and cost of individual tasks on different computing nodes, thereby helping users achieve maximum performance at the lowest cost. This proposed effort has the potential to provide an efficient, resilient, and elastic environment for cloud computing platforms. This development also helps boost user
productivity by promoting a flexible workflow interface that lets users
design and control their tasks' flow from anywhere.
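The following minimal Python sketch illustrates the two mechanisms described above, sequential task execution and node selection by predicted waiting time and cost; the node table, the simple queueing estimate, and the weighting factors are illustrative assumptions rather than the paper's actual prediction model.

# Minimal sketch: chain tasks in sequence and pick a node by predicted wait and cost.
def predict_wait(node, task_runtime):
    # naive queueing estimate: jobs ahead of us times their mean runtime
    return node["queued_jobs"] * node["mean_runtime"] + task_runtime

def pick_node(nodes, task_runtime, alpha=1.0, beta=1.0):
    # weighted trade-off between predicted waiting time and monetary cost
    return min(nodes, key=lambda n: alpha * predict_wait(n, task_runtime)
                                    + beta * n["price_per_hour"] * task_runtime / 3600)

def run_workflow(tasks, nodes):
    results = []
    for task in tasks:                       # tasks executed in sequence
        node = pick_node(nodes, task["runtime"])
        results.append((task["name"], node["name"]))
    return results

nodes = [{"name": "n1", "queued_jobs": 4, "mean_runtime": 300, "price_per_hour": 0.5},
         {"name": "n2", "queued_jobs": 0, "mean_runtime": 300, "price_per_hour": 1.2}]
tasks = [{"name": "preprocess", "runtime": 600}, {"name": "solve", "runtime": 1800}]
print(run_workflow(tasks, nodes))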
Abstract: Parabolic solar trough systems have seen limited
deployments in cold northern climates as they are more suitable for
electricity production in southern latitudes. A numerical dynamic
model is developed to simulate troughs installed in cold climates and
validated using a parabolic solar trough facility in Winnipeg. The
model is developed in Simulink and will be utilized to simulate a tri-generation system for heating, cooling, and electricity generation in
remote northern communities. The main objective of this simulation
is to obtain operational data of solar troughs in cold climates and use
the model to determine ways to improve the economics and address
cold weather issues.
In this paper, the validated Simulink model is applied to simulate a solar-assisted absorption cooling system along with electricity generation using an Organic Rankine Cycle (ORC) and thermal storage. A control strategy is employed to distribute the heated oil from the solar collectors among the above three systems, considering their temperature requirements. This modeling provides dynamic
performance results using measured meteorological data recorded
every minute at the solar facility location. The purpose of this
modeling approach is to accurately predict system performance at
each time step considering the solar radiation fluctuations due to
passing clouds. Another goal of the simulation is to optimize the controller for cold temperatures, for example to minimize heat losses in winter, when energy demand is high and solar resources are low.
The solar absorption cooling system is modeled to use the heat generated by the solar trough system and provide summer cooling for a greenhouse located next to the solar field.
The results of the simulation are presented for a summer day in Winnipeg, including a comparison of the performance parameters of the absorption cooling and ORC systems at different heat transfer fluid (HTF) temperatures.
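A minimal sketch of a temperature-based dispatch rule of the kind described above is shown below; the threshold temperatures and the priority order are illustrative assumptions, not the values implemented in the Simulink controller.

# Minimal sketch: route heated oil from the collectors to one of three sinks.
def dispatch_htf(t_htf_c, cooling_demand, orc_min_c=250.0, chiller_min_c=90.0):
    """Select a destination for the HTF based on its temperature and the cooling demand."""
    if t_htf_c >= orc_min_c:
        return "ORC"                 # hot enough for electricity generation
    if t_htf_c >= chiller_min_c and cooling_demand:
        return "absorption_chiller"  # summer greenhouse cooling
    return "thermal_storage"         # otherwise store the heat

print(dispatch_htf(t_htf_c=265.0, cooling_demand=True))   # -> ORC
print(dispatch_htf(t_htf_c=120.0, cooling_demand=True))   # -> absorption_chiller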
Abstract: Cloud computing is a new technology in industry and academia. The technology has grown and matured over the last half decade and has proven its significant role in the changing environment of IT infrastructure, where cloud services and resources are offered over the network. Cloud technology enables users to use services and resources without being concerned about the technical implications of the technology. Substantial research has been performed on the usage of cloud computing in educational institutes, and the majority of it provides cloud services over high-end blade servers or other high-end CPUs. However, this paper proposes a new stack called “CiCKAStack” which provides cloud services over underutilized computing resources, namely commodity computers. “CiCKAStack” provides IaaS and PaaS using the underlying commodity computers. This will not only increase the utilization of existing computing resources but also provide an organized file system, on-demand computing resources, and a design and development environment.
Abstract: Neurons in the nervous system communicate with
each other by producing electrical signals called spikes. To
investigate the physiological function of the nervous system, it is essential to study the activity of neurons by detecting and sorting the spikes in the recorded signal. In this paper, a method is proposed for the spike sorting problem based on nonlinear modeling of spikes using an exponential autoregressive model. A genetic algorithm is utilized for model parameter estimation. In this regard,
some selected model coefficients are used as features for sorting
purposes. For optimal selection of the model coefficients, a self-organizing feature map is used. The results show that modeling spikes with a nonlinear autoregressive model outperforms its linear counterpart.
Also, the features extracted from the coefficients of the exponential autoregressive model are better than wavelet-based features and yield more compact and well-separated clusters. In the case of spikes that differ only in small-scale structure, where principal component analysis fails to produce separated clouds in the feature space, the proposed method can obtain well-separated clusters, which removes the need to apply complex classifiers.
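For reference, the exponential autoregressive (ExpAR) model referred to above is commonly written in the following form; the model order p and the choice of which coefficients a_i, b_i serve as sorting features are specific to the paper.

x_t = \sum_{i=1}^{p} \left( a_i + b_i \, e^{-\gamma x_{t-1}^{2}} \right) x_{t-i} + \varepsilon_t ,

where \varepsilon_t is white noise and the genetic algorithm estimates a_i, b_i, and \gamma.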
Abstract: Cloud service brokering is a new service paradigm that provides interoperability and portability of applications across multiple Cloud providers. In this paper, we designed a Cloud service brokerage system, anyBroker, supporting integrated service provisioning and SLA-based service lifecycle management. For the system design, we introduce the system concept and overall architecture, details of the main components, and use cases of the primary operations in the system. These features ease the concerns of Cloud service providers and customers and support a new Cloud service open market, increasing Cloud service profit and promoting the Cloud service ecosystem in Cloud computing related areas.
Abstract: This paper presents a technique for real-time visualization and filtering of classified LiDAR point clouds. The visualization is capable of displaying filtered information organized in layers by the classification attribute saved within LiDAR datasets. We explain the data structure and data management used, which enable real-time presentation of layered LiDAR data. Real-time visualization is
achieved with LOD optimization based on the distance from the
observer, without loss of quality. The filtering process is done in two steps, is entirely executed on the GPU, and is implemented using programmable shaders.
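A minimal CPU-side Python sketch of distance-driven LOD selection with per-class filtering is given below for illustration; the chunk structure, distance thresholds, and subsampling rule are assumptions and do not reflect the paper's GPU shader implementation.

# Minimal sketch: closer chunks keep more points, and only enabled classes are drawn.
import numpy as np

def select_points(chunks, eye, visible_classes, lod_thresholds=(50.0, 150.0, 400.0)):
    """chunks: iterable of (centre, points, classes). Returns the points to draw."""
    drawn = []
    for centre, points, classes in chunks:                   # one spatial chunk at a time
        dist = np.linalg.norm(centre - eye)
        level = int(np.searchsorted(lod_thresholds, dist))   # 0 = full detail
        step = 2 ** level                                    # keep every 2^level-th point
        mask = np.isin(classes[::step], list(visible_classes))
        drawn.append(points[::step][mask])
    return np.vstack(drawn) if drawn else np.empty((0, 3))

# Synthetic example: one chunk, show only ground (2) and building (6) classes.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(1000, 3)).astype(np.float32)
cls = rng.integers(1, 7, size=1000)
print(select_points([(pts.mean(axis=0), pts, cls)], eye=np.zeros(3), visible_classes={2, 6}).shape)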
Abstract: This research proposes a novel reconstruction protocol
for restoring missing surfaces and low-quality edges and shapes in
photos of artifacts at historical sites. The protocol starts with the
extraction of a cloud of points. This extraction process is based on four subordinate algorithms, which differ in robustness and in the amount of resulting data. Moreover, they apply different, but complementary, levels of accuracy to related features and to the way they build a quality mesh. The performance of our proposed protocol
is compared with other state-of-the-art algorithms and toolkits. The
statistical analysis shows that our algorithm significantly outperforms its rivals in the quality of the resulting object files used to reconstruct the desired model.
Abstract: The spread of Web 2.0 has caused an explosion of user-generated content. Users can tag resources to describe and organize them. Tag clouds provide a rough impression of the relative importance of each tag within the overall cloud in order to facilitate browsing among numerous tags and resources. The goal of our paper is to enrich the visualization of tag clouds. A font distribution algorithm is proposed to calculate a novel metric based on frequency and content, and to classify tags into classes from this metric based on a power-law distribution and percentages. The suggested algorithm has been
validated and verified on the tag cloud of a real-world thesis portal.
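The sketch below illustrates, in Python, one way a frequency-and-content metric can be mapped to a small set of font-size classes using cumulative percentage cut-offs suited to a heavy-tailed (power-law-like) tag distribution; the metric values, font sizes, and cut-offs are illustrative assumptions, not the validated algorithm.

# Minimal sketch: rank tags by a blended metric and assign font-size classes.
def font_classes(tag_metric, sizes=(12, 16, 20, 26, 34), cuts=(0.50, 0.75, 0.90, 0.97)):
    """tag_metric: dict tag -> metric (e.g. blend of frequency and content score)."""
    ranked = sorted(tag_metric.items(), key=lambda kv: kv[1])
    n = len(ranked)
    out = {}
    for rank, (tag, _) in enumerate(ranked):
        quantile = (rank + 1) / n                 # cumulative percentage of ranked tags
        cls = sum(quantile > c for c in cuts)     # 0 .. len(cuts)
        out[tag] = sizes[cls]
    return out

print(font_classes({"cloud": 120, "java": 40, "gis": 12, "lidar": 5, "misc": 1}))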
Abstract: Due to climate warming and the consequent melting of ice and snow in the Arctic Ocean, the highly biologically active ocean surface area has been expanding quickly, making longer marine biota growth seasons possible during polar summers. This increases the probability of a secondary contribution from the remote marine environment, especially a secondary organic contribution, to particle production, particle growth events, and particle properties, consequently affecting the radiation budget of the open ocean, pack ice, and ground-based regions, and thus the feedbacks between Arctic biota, particles, clouds, and climate.
Abstract: The construction of geo-spatial information has recently tended to develop toward multi-dimensional geo-spatial information. The construction of spatial information is also expanding from a small number of experts to the general public. In addition, studies are in progress using a variety of devices, with the aim of near real-time updates. In this paper, stereo images are obtained with a GoPro device, which is widely used by the general public as well as experts. After correcting the distortion of the images, point clouds are acquired using SIFT and DLT. Based on this experiment, the results show the possibility of creating a real-time digital map using a video device that is readily available in everyday life.
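A minimal OpenCV-based Python sketch of the pipeline named above (distortion correction, SIFT matching, triangulation) follows; the calibration inputs K, dist, P1, and P2 are assumed to come from a prior calibration/DLT step and are placeholders here.

# Minimal sketch: from a distortion-corrected stereo pair to a triangulated point cloud.
import cv2
import numpy as np

def stereo_point_cloud(img_left, img_right, K, dist, P1, P2):
    left = cv2.undistort(img_left, K, dist)          # correct GoPro lens distortion
    right = cv2.undistort(img_right, K, dist)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(left, None)
    k2, d2 = sift.detectAndCompute(right, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T   # 2 x N
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)            # homogeneous 4 x N
    return (pts4d[:3] / pts4d[3]).T                              # N x 3 point cloud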
Abstract: Transient storage zones along the flow paths of rivers have a great influence on the dispersion of pollutants that are led into them, accidentally or otherwise. The speed with which these pollution clouds are transported and dispersed downstream is, to a large extent, explained by the longitudinal dispersion coefficient in the free-flowing zones of rivers (Kf). In the present work, a new empirical expression for Kf has been derived by employing genetic programming (GP) on published dispersion data. The proposed expression uses a few hydraulic and geometric characteristics of a river that are readily available to field engineers. Based on various performance indices, the proposed expression is found to be superior to other existing expressions for Kf.
Abstract: Cloud computing technology is very useful in present-day life; it uses the internet and central remote servers to provide and maintain data as well as applications. Such applications can in turn be used by end users via cloud communications without any installation. Moreover, end users’ data files can be accessed and manipulated from any other computer using internet services. Despite the flexibility of data and application access and usage that cloud computing environments provide, many questions still arise about how to obtain a trusted environment that protects data and applications in clouds from hackers and intruders. This paper surveys the key generation and management mechanisms and the encryption/decryption algorithms used in cloud computing environments, and we propose a new security architecture for cloud computing environments that addresses the various security gaps as much as possible. A new cryptographic environment that applies quantum mechanics in order to obtain more trusted cloud communications with less computation is also given.
Abstract: Oilsands bitumen is an extremely important source of
energy for North America. However, due to the presence of large
molecules such as asphaltenes, the density and viscosity of the
bitumen recovered from these sands are much higher than those of
conventional crude oil. As a result, the extracted bitumen has to be
diluted with expensive solvents, or thermochemically upgraded in
large, capital-intensive conventional upgrading facilities prior to
pipeline transport. This study demonstrates that globally abundant
natural zeolites such as clinoptilolite from Saint Clouds, New Mexico
and Ca-chabazite from Bowie, Arizona can be used as very effective
reagents for cracking and visbreaking of oilsands bitumen. Natural
zeolite cracked oilsands bitumen products are highly recoverable (up
to ~ 83%) using light hydrocarbons such as pentane, which indicates
substantial conversion of heavier fractions to lighter components.
The resultant liquid products are much less viscous and have a lighter product distribution compared to those produced by pure thermal treatment. These natural minerals impart a similar effect on industrially extracted Athabasca bitumen.
Abstract: With the dramatic growth of internet services, easy and prompt service deployment has become important for internet service providers to successfully maintain time-to-market. Before global service deployment, they have to pay a large cost for service evaluation in order to decide on the proper system location, system scale, service delay, and so on. However, intra-lab evaluation tends to show large gaps between the measured data and the realistic situation, because it is very difficult to accurately anticipate the local service environment, network congestion, service delay, network bandwidth, and other factors. Therefore, to resolve or ease these problems, we propose a multiple-cloud-based GPES Broker system and a use case that helps internet service providers alleviate the above problems in the beta release phase and make prompt decisions about launching their services. By providing more realistic and reliable evaluation information, the proposed GPES Broker system saves service release cost and enables internet service providers to make prompt decisions about launching their services in various remote regions.
Abstract: Skyline extraction in mountainous images can be used for the navigation of vehicles or UAVs (unmanned aerial vehicles), but it is very hard to extract the skyline shape because of clutter such as clouds, sea lines, and field borders in images. We developed an edge-based skyline extraction algorithm using a proposed multistage edge filtering (MEF) technique. In this method, the characteristics of clutter in the image are first defined, and then the lines classified as clutter are eliminated in stages using the proposed MEF technique. After this processing, we select the final skyline using skyline measures among the remaining lines. The proposed algorithm is robust in severe environments with clutter and performs well even for low-resolution infrared sensor images. We tested the proposed algorithm on images obtained in the field by an infrared camera and confirmed that it produced better performance and faster processing time than conventional algorithms.
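The Python sketch below illustrates only the staged structure of clutter elimination followed by scoring of the remaining lines; the specific tests, the line attributes, and the skyline score are illustrative stand-ins and not the paper's MEF technique.

# Minimal sketch: eliminate candidate lines in stages, then pick the best-scoring one.
def filter_skyline(lines, min_length=80, max_curvature=0.05):
    # stage 1: drop short fragments typical of cloud texture
    stage1 = [l for l in lines if l["length"] >= min_length]
    # stage 2: drop nearly horizontal lines low in the image (sea lines, field borders)
    stage2 = [l for l in stage1 if not (l["horizontal"] and l["low_in_image"])]
    # stage 3: drop strongly curved lines unlikely to be a mountain ridge
    stage3 = [l for l in stage2 if l["curvature"] <= max_curvature]
    # final: pick the remaining line with the best skyline score
    return max(stage3, key=lambda l: l["skyline_score"], default=None)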
Abstract: CloudSim is a useful tool for simulating the cloud environment. It shows the service availability, the power consumption, and the network traffic of services in the cloud environment. Moreover, it can easily calculate network communication delay from network topology data. CloudSim allows a topology data file to be input, but it does not provide any generation process; thus, it needs a topology data file generated by other tools. BRITE is a typical network topology generator, and it supports various types of topology generation algorithms. If CloudSim can include BRITE, network simulation for clouds becomes easier than with the existing version. This paper shows the potential of a connection between BRITE and CloudSim and proposes a direction for linking them.