Abstract: Recent progress in next-generation automobile technology is geared toward incorporating information technology into cars. Collectively called smart cars, these vehicles bring intelligence that provides comfort, convenience, and safety. One branch of smart cars is the connected-car system. The key concept in connected cars is the sharing of driving information among cars in a decentralized manner, enabling collective intelligence. This paper proposes a foundation for the information model necessary to define driving information for smart cars. Road conditions are modeled through a unique data structure that unambiguously represents time-variant traffic in the streets. Additionally, the modeled data structure and its usage are exemplified in a navigational scenario using UML. Optimal driving-route search under dynamically changing road conditions is also discussed using the proposed data structure.
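As a purely illustrative sketch of what a time-variant road-condition record in such an information model might hold, the following dataclass uses hypothetical field names (they are not the paper's UML model):

```python
from dataclasses import dataclass, field

@dataclass
class RoadSegmentCondition:
    """One time-stamped observation of a road segment, of the kind a
    connected car could share in a decentralized manner. All field
    names are illustrative assumptions, not the paper's model."""
    segment_id: str
    timestamp: float            # seconds since epoch
    mean_speed_kmh: float       # observed mean traffic speed
    vehicle_density: float      # vehicles per km on the segment
    reports: list = field(default_factory=list)  # contributing car IDs
```

A navigation component could then aggregate many such records per segment to estimate current travel times.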
Abstract: Cloud computing is a business model that simplifies the management of computing resources. Cloud users can request a virtual machine and then install and configure additional software as needed. Alternatively, a user can request a virtual appliance, which enables much faster application deployment because it is a ready-built operating-system image with the necessary software installed and configured. Large numbers of virtual appliances are available in different image formats, and a user can download an appliance from a public marketplace and start using it. However, the information published about virtual appliances differs from provider to provider, making it difficult to choose the required appliance, since each is composed of a specific OS with standard software versions. Moreover, even after choosing an appliance from a provider, the user has no flexibility to select their own set of software with the required OS and application. In this paper, we propose a reference architecture for dynamically customizing virtual appliances and provisioning them in an easier manner. We also report our experience integrating the proposed architecture with a public marketplace and with Mi-Cloud, a cloud management software package.
Abstract: Green chemistry for the extraction of active principles from plants is a main interest of many researchers concerned with climate change. While classical organic solvents are detrimental to the environment, greener alternatives such as ionic liquids are very promising for sustainable organic chemistry. This study focused on determining the functional groups observed in the main constituents of ionic liquid extracts of Coleus aromaticus Benth leaves using FT-IR spectroscopy. Moreover, this research aimed to determine the best ionic liquid for separating functionalized plant constituents from the leaves of Coleus aromaticus Benth using Fourier Transform Infrared Spectroscopy. Coleus aromaticus Benth leaf extracts in different ionic liquids revealed pharmacologically important functional groups present in the major constituents of the plant, namely rosmarinic acid, caffeic acid, and chlorogenic acid. Based on the distinctive appearance of functional groups in the spectrum and the highest percent transmittance, potassium chloride-glycerol is the best ionic liquid for green extraction.
Abstract: In this paper, we seek to increase the autonomous performance of a small manufactured unmanned helicopter. For this purpose, a small unmanned helicopter, called ZANKA-Heli-I, was manufactured at Erciyes University, Faculty of Aeronautics and Astronautics. For performance maximization, the autopilot parameters are determined by minimizing a cost function consisting of flight performance parameters such as settling time, rise time, and overshoot during trajectory tracking. To this end, a stochastic optimization method, simultaneous perturbation stochastic approximation (SPSA), is employed. Using this approach, a considerable increase in autonomous performance (around 23%) is obtained.
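A minimal sketch of the optimization technique named above (SPSA) is given below. The quadratic cost is a hypothetical stand-in for the paper's flight-performance cost (which combines settling time, rise time, and overshoot); the gains and decay exponents are common textbook defaults, not the study's tuned values.

```python
import numpy as np

def spsa_minimize(cost, theta0, iterations=500, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Simultaneous perturbation stochastic approximation (SPSA).

    At each step k, all parameters are perturbed at once along a random
    Bernoulli +/-1 direction, and the gradient is estimated from only
    two cost evaluations, regardless of the problem dimension.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iterations + 1):
        ak = a / k ** alpha                  # decaying step size
        ck = c / k ** gamma                  # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # two-sided simultaneous-perturbation gradient estimate
        g_hat = (cost(theta + ck * delta)
                 - cost(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

# Hypothetical stand-in cost: a quadratic bowl whose minimum plays the
# role of the best autopilot parameter vector.
best = spsa_minimize(lambda t: float(np.sum((t - 3.0) ** 2)), np.zeros(4))
```

The appeal of SPSA here is that each iteration needs only two flight-cost evaluations, no matter how many autopilot parameters are tuned simultaneously.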
Abstract: The well-known stacked sets of numbers referred to as Pascal's triangle present the coefficients of the binomial expansion of the form (x+y)^n. This paper presents an approach (the Staircase Horizontal Vertical, SHV, method) to generalizing the planar Pascal's triangle to polynomial expansions of the form (x+y+z+w+r+⋯)^n. The presented generalization of Pascal's triangle differs from other generalizations of Pascal's triangle given in the literature. The coefficients of the generalized Pascal's triangles presented in this work are generated by inspection, using embedded Pascal's triangles. The coefficients of an I-variable expansion are generated by horizontally laying out the Pascal's elements of the (I-1)-variable expansion in a staircase manner and multiplying them with the relevant columns of vertically laid out classical Pascal's elements, hence avoiding factorial calculations when generating the coefficients of the polynomial expansion. Furthermore, the classical Pascal's triangle has a pattern built into it regarding its odd and even numbers, known as the Sierpinski triangle. In this study, a presentation of Sierpinski-like patterns of the generalized Pascal's triangles is given. Applications of the coefficients of the binomial expansion (Pascal's triangle) or the polynomial expansion (generalized Pascal's triangles) can be found in areas such as combinatorics and probability.
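The factorial-free idea can be sketched as follows: build Pascal's triangle by its additive recurrence, then assemble an I-variable (multinomial) coefficient as a product of binomial coefficients read off the triangle. This mirrors the spirit of building I-variable coefficients from embedded Pascal's triangles, though it is not the paper's SHV layout itself.

```python
def pascal_triangle(n):
    """Rows 0..n of Pascal's triangle via the additive recurrence
    C(r, k) = C(r-1, k-1) + C(r-1, k) -- no factorials involved."""
    rows = [[1]]
    for r in range(1, n + 1):
        prev = rows[-1]
        rows.append([1] + [prev[k - 1] + prev[k] for k in range(1, r)] + [1])
    return rows

def multinomial(n, parts):
    """Coefficient of x1^k1 * ... * xm^km in (x1 + ... + xm)^n,
    assembled as C(n, k1) * C(n-k1, k2) * ... using only values
    looked up from Pascal's triangle."""
    assert sum(parts) == n
    rows = pascal_triangle(n)
    coeff, remaining = 1, n
    for k in parts:
        coeff *= rows[remaining][k]
        remaining -= k
    return coeff
```

For example, multinomial(3, [1, 1, 1]) gives the coefficient of xyz in (x+y+z)^3 without evaluating any factorial.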
Abstract: This paper describes two methods for the reduction of
the peak input current during the boosting of Dickson charge pumps.
Both methods are implemented in the fully integrated Dickson charge
pumps of a high-voltage display driver chip for smart-card
applications. Experimental results reveal good correspondence with SPICE simulations and show a reduction of the peak input current by a factor of 6 during boosting.
Abstract: The UK has had its fair share of the shale gas revolutionary waves blowing across the global oil and gas industry at present. Although its exploitation is widely agreed to have been delayed, shale gas was looked upon favorably by the UK Parliament, which recognized it as a genuine energy source and granted licenses to industry to search for and extract the resource. Although this is significant progress by industry, there remains another test the UK fracking resource must pass to render shale gas extraction feasible: it must be economically, and sustainably, extractable. Developing unconventional resources is much more expensive and risky, and for shale gas wells, producing in commercial volumes is conditional upon drilling horizontal wells and hydraulic fracturing, techniques that increase CAPEX. Meanwhile, investment in shale gas development projects is sensitive to gas price and to technical and geological risks. Using a two-factor model, we analyzed the economics of the Bowland shale wells and characterized the operational conditions under which fracking is profitable in the UK. We find a great degree of flexibility in OPEX spending; hence OPEX does not pose much threat to the UK fracking industry. However, we find that Bowland shale gas wells fail to add value at a gas price of $8/MMBtu. A minimum gas price of $12/MMBtu, an OPEX of no more than $2/Mcf, and a CAPEX of no more than $14.95M are required to create value within the present petroleum tax regime in the UK fracking industry.
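The gas-price sensitivity of well value can be illustrated with a deliberately simple discounted-cash-flow sketch. Everything here is a hypothetical placeholder (production profile, decline rate, discount rate; taxes are ignored); the paper itself uses a two-factor stochastic price model, not this deterministic toy.

```python
def well_npv(gas_price_per_mmbtu, capex_usd, opex_per_mcf,
             initial_rate_mcf_per_year=500_000, decline=0.3,
             years=20, discount=0.10):
    """Net present value of a single well under a simple exponential
    production decline. Roughly 1 Mcf of gas ~ 1 MMBtu of energy, so
    the price-minus-OPEX margin is applied per Mcf produced."""
    npv = -capex_usd                       # CAPEX paid up front
    rate = initial_rate_mcf_per_year
    for t in range(1, years + 1):
        cash = rate * (gas_price_per_mmbtu - opex_per_mcf)
        npv += cash / (1 + discount) ** t  # discount year-t cash flow
        rate *= (1 - decline)              # exponential decline
    return npv
```

Even in this toy model, NPV rises steeply with gas price for a fixed CAPEX/OPEX, which is the qualitative behavior behind the $8 vs. $12/MMBtu contrast reported above.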
Abstract: Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design or computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth is one of the main sources. Extracting the shapes of body organs from medical imaging does not produce any information regarding the residual stresses present in those organs. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses can cause erroneous results in numerical simulations, while accounting for the residual stresses due to tissue growth can improve the accuracy of mechanical analyses. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images. The distribution of residual stress in facial tissues is computed; this stress can overcome the effect of gravity and maintain tissue firmness. Our assumption is that the tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery, as it would help surgeons estimate tissue changes after surgery.
Abstract: In this paper, a method is developed to construct the membership surfaces of the row and column vectors and the arithmetic operations of an imprecise matrix. A matrix with imprecise elements is called an imprecise matrix. The membership surface of an imprecise vector has already been derived based on the Randomness-Impreciseness Consistency Principle, which leads to defining a normal law of impreciseness using two different laws of randomness. In this paper, the author presents the row and column membership surfaces and the arithmetic operations of the imprecise matrix and demonstrates them with the help of a numerical example.
Abstract: Flash floods occur within short rainfall intervals, from 1 hour to 12 hours, in small and medium basins. Flash floods typically have two characteristics: large water flow and high flow velocity. A flash flood occurs at a hill-valley site (a strip of lowland in the terrain) in a catchment with a sufficiently large drainage area, steep basin slope, and heavy rainfall. The risk of flash floods is determined through the Gridded Basin Flash Flood Potential Index (GBFFPI). The Flash Flood Potential Index (FFPI) is determined from the terrain-slope, soil-erosion, land-cover, land-use, and rainfall flash flood indices. To determine the GBFFPI, each cell of a map is considered the outlet of a water accumulation basin, and the GBFFPI of the cell is the basin-average FFPI of the corresponding water accumulation basin. Based on GIS, a tool was developed to compute the GBFFPI using the ArcObjects SDK for .NET. The GBFFPI maps are built in two types: GBFFPI including the rainfall flash flood index (real-time flash flood warning) or GBFFPI excluding it. The GBFFPI tool can be used to identify high flash flood potential sites in a large region as quickly as possible. The GBFFPI improves on the conventional FFPI: its advantage is that it takes the basin response (the interaction of cells) into account and locates flash flood sites (strips of lowland in the terrain) more accurately, whereas the conventional FFPI considers each cell in isolation and does not consider the interaction between cells. The GBFFPI map of Quang Nam, Quang Ngai, Da Nang, and Hue was built and exported to Google Earth. The obtained map supports the scientific basis of the GBFFPI.
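The cell-as-outlet idea can be sketched in a few lines. As a deliberately simplified stand-in for a real D8 flow-direction grid, the toy below assumes every cell drains straight to the cell beneath it (a hypothetical southward-draining terrain); the paper's tool derives the actual accumulation basins with the ArcObjects SDK.

```python
import numpy as np

def gbffpi(ffpi):
    """Basin-average FFPI per cell, treating each cell as the outlet
    of its water accumulation basin. Under the toy southward-drainage
    assumption, the basin of a cell is simply the column of cells
    above it, so one downstream sweep accumulates sums and counts."""
    total = ffpi.astype(float).copy()   # running sum of upstream FFPI
    count = np.ones_like(total)         # number of contributing cells
    for r in range(1, ffpi.shape[0]):   # sweep downstream, row by row
        total[r] += total[r - 1]
        count[r] += count[r - 1]
    return total / count                # basin-average FFPI per cell
```

The key contrast with the conventional FFPI is visible even here: a cell's index depends on all upstream cells, not on the cell alone.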
Abstract: In this paper, we investigate the low-lying energy levels of two-dimensional parabolic graphene quantum dots (GQDs) in the presence of topological defects with a long-range Coulomb impurity, subjected to an external uniform magnetic field. The low-lying energy levels of the system are obtained within the framework of perturbation theory. We theoretically demonstrate that the valley splitting can be controlled by the geometrical parameters of the graphene quantum dots and/or by tuning the uniform magnetic field, as well as by the topological defects. It is found that, for parabolic graphene dots, the valley splitting occurs due to the introduction of spatial confinement. The splitting is enhanced by the introduction of a uniform magnetic field and increases with the cone angle in the subcritical regime.
Abstract: Discussing the nexus between global health policy and local practices, this article addresses the recent Ebola outbreak as a model case for the narrative co-construction of epidemic risk. We demonstrate how far a theory-driven and methodologically rooted analysis of narrativity can help improve mechanisms of prevention and intervention whenever epidemic risk needs to be addressed locally in order to contribute to global health. Analyzing the narrative transformation of Ebola, we also address issues of transcultural problem-solving and the normative questions at stake. In this regard, we seek to contribute to a better understanding of a key question of global health and justice, as well as the underlying ethical questions. By highlighting and analyzing the functions of narratives, this paper provides a translational approach to refining the practices by which we address epidemic risk, be it on the national, the transnational, or the global scale.
Abstract: Landfill leachates contain a number of persistent pollutants, including heavy metals. These have the ability to spread through ecosystems and accumulate in fish, many of which are classified as top consumers in trophic chains. Fish are free-swimming organisms, but, due to their species-specific ecological and behavioral properties, they often prefer the most suitable biotopes and therefore do not avoid harmful substances or environments. That is why it is necessary to evaluate persistent-pollutant dispersion in a hydroecosystem using fish tissue metal concentrations. In hydroecosystems of hybrid type (e.g., river-pond-river), the distance from the pollution source could be a perfect indicator of this kind of metal distribution. The studies were carried out in the hybrid-type ecosystem neighboring the Kairiai landfill, located 5 km east of the Šiauliai City. Fish tissue (gills, liver, and muscle) metal concentration measurements were performed on two ecologically different types of fish according to their feeding characteristics: benthophagous (Gibel carp, roach) and predatory (Northern pike, perch). A number of mathematical models (linear, non-linear, using log and other transformations) were applied in order to identify the most satisfactory description of the interdependence between fish tissue metal concentration and the distance from the pollution source. However, only the log-multiple regression model revealed the pattern that the distance from the pollution source is closely and positively correlated with the metal concentration in all the predatory fish tissues studied (gills, liver, and muscle).
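The model-fitting step described above can be illustrated with ordinary least squares on a log-transformed response. The data below are synthetic stand-ins (not the Kairiai measurements), with an assumed positive slope of log-concentration against distance, matching the reported pattern.

```python
import numpy as np

# Synthetic illustration of a log regression: log(metal concentration)
# as a linear function of distance from the pollution source.
rng = np.random.default_rng(1)
distance_km = np.linspace(0.5, 5.0, 40)
log_conc = 0.4 * distance_km + 0.2 + rng.normal(0.0, 0.05, 40)

# Ordinary least squares fit via numpy's linear least-squares solver.
X = np.column_stack([np.ones_like(distance_km), distance_km])
beta, *_ = np.linalg.lstsq(X, log_conc, rcond=None)
intercept, slope = beta
```

A positive fitted slope corresponds to metal concentration increasing with distance from the source, the relationship the study reports for predatory fish tissues.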
Abstract: Magnetic Resonance Imaging contrast agents (MRI-CM) are significant in clinical and biological imaging, as they have the ability to alter the normal tissue contrast, thereby affecting the signal intensity to enhance the visibility and detectability of images. Superparamagnetic Iron Oxide (SPIO) nanoparticles coated with dextran or carboxydextran are currently available for clinical MR imaging of the liver. Most SPIO contrast agents are T2-shortening agents, and Resovist (Ferucarbotran) is a clinically tested, organ-specific SPIO agent with a low-molecular-weight carboxydextran coating. The enhancement effect of Resovist depends on its relaxivity, which in turn depends on factors such as magnetic field strength, concentration, nanoparticle properties, pH, and temperature. Therefore, this study investigated the impact of field strength and contrast concentration on the enhancement effects of Resovist. The study explored, by mathematical simulation, the MRI signal intensity of Resovist in the physiological range of plasma for a T2-weighted spin-echo sequence at three magnetic field strengths, 0.47 T (r1=15, r2=101), 1.5 T (r1=7.4, r2=95), and 3 T (r1=3.3, r2=160), over a range of contrast concentrations. The relaxivities r1 and r2 (L mmol-1 s-1) were obtained from a previous study, and the selected concentrations were 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 2.0, and 3.0 mmol/L. T2-weighted images were simulated using a TR/TE of 2000 ms/100 ms. According to the reference literature, with increasing magnetic field strength, the r1 relaxivity tends to decrease, while r2 did not show any systematic relationship with the selected field strengths. In parallel, the results of this study revealed that the signal intensity of Resovist tends to be higher at lower concentrations than at higher concentrations. The highest signal intensity was observed at the low field strength of 0.47 T. The maximum signal intensities for 0.47 T, 1.5 T, and 3 T were found at concentrations of 0.05, 0.06, and 0.05 mmol/L, respectively. Furthermore, at concentrations higher than these, the signal intensity decreased exponentially. An inverse relationship was found between field strength and T2 relaxation time: as the field strength increased, the T2 relaxation time decreased accordingly. However, the resulting T2 relaxation times were not significantly different between 0.47 T and 1.5 T in this study. Moreover, a linear correlation of the transverse relaxation rate (1/T2, s-1) with the concentration of Resovist was observed. According to these results, it can be concluded that the concentration of SPIO nanoparticle contrast agents and the field strength of MRI are two important parameters that affect the signal intensity of a T2-weighted SE sequence. Therefore, these two parameters should be considered carefully in MR imaging.
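The kind of simulation described above can be sketched with the standard spin-echo signal model S = M0 (1 - exp(-TR/T1)) exp(-TE/T2), where the relaxation rates depend linearly on concentration: 1/T1 = 1/T1_0 + r1·C and 1/T2 = 1/T2_0 + r2·C. The r1/r2 values below are the 1.5 T relaxivities quoted in the abstract, but the baseline plasma T1_0/T2_0 are hypothetical placeholders, so this is an illustration of the model, not a reproduction of the study's numbers.

```python
import numpy as np

def se_signal(conc_mmol_l, r1=7.4, r2=95.0, t1_0=1.4, t2_0=0.25,
              tr=2.0, te=0.1, m0=1.0):
    """Spin-echo signal with concentration-dependent relaxation.

    conc_mmol_l : contrast concentration in mmol/L
    r1, r2      : relaxivities in L mmol^-1 s^-1 (1.5 T values above)
    t1_0, t2_0  : assumed baseline plasma relaxation times, in seconds
    tr, te      : TR = 2000 ms, TE = 100 ms, as in the abstract
    """
    t1 = 1.0 / (1.0 / t1_0 + r1 * conc_mmol_l)
    t2 = 1.0 / (1.0 / t2_0 + r2 * conc_mmol_l)
    return m0 * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

signal = se_signal(np.array([0.05, 0.5, 3.0]))
```

Because r2 is so large, the exp(-TE/T2) factor dominates at TE = 100 ms, and the signal falls off steeply as concentration rises, matching the exponential decrease reported above.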
Abstract: The increasing availability of information about earth surface elevation (Digital Elevation Models, DEM) generated from different sources (remote sensing, aerial images, LiDAR) raises the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the earth. Hence the need to merge large-scale height maps, which are typically made available for free at a worldwide level, with very specific high-resolution datasets. On the other hand, the third dimension improves the user experience and the quality of data representation, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environmental monitoring, etc. Open-source 3D virtual globes, which are trending topics in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, however, 3D virtual globes do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called "Terrain Builder". This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-ins.
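The core merging decision can be sketched as follows: for any point (or tile), sample elevation from the finest-resolution dataset that covers it, falling back to the worldwide low-resolution layer elsewhere. The dataset records and bounding-box scheme below are illustrative assumptions, not the actual CesiumJS terrain tiling format.

```python
def best_dataset(datasets, lon, lat):
    """Pick the finest dataset covering a point.

    datasets: list of dicts with 'bbox' = (west, south, east, north)
    in degrees and 'resolution_m' (smaller = finer). The worldwide
    layer covers everything, so at least one dataset always matches.
    """
    covering = [d for d in datasets
                if d["bbox"][0] <= lon <= d["bbox"][2]
                and d["bbox"][1] <= lat <= d["bbox"][3]]
    return min(covering, key=lambda d: d["resolution_m"])

# Hypothetical layers: a free worldwide DEM plus one local LiDAR survey.
world = {"name": "global", "bbox": (-180, -90, 180, 90), "resolution_m": 90}
lidar = {"name": "lidar", "bbox": (7.0, 45.0, 8.0, 46.0), "resolution_m": 1}
```

Applied per tile of a worldwide quadtree, this rule yields exactly the behavior described: high-resolution data where a survey exists, the global layer everywhere else.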
Abstract: Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for the application of sophisticated power management approaches that can greatly increase energy efficiency and even make buildings active energy market participants. Centralized control of building heating and cooling managed by economically optimal model predictive control shows promising results, with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with the corresponding wireless sensor data acquisition, remote heating/cooling units, and a central climate controller. The building walls are mathematically modeled with the corresponding material types, surface shapes, and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior, and comfort demands are all taken into account for deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries, and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control: the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by the microgrid energy flow management. This approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
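The zone-prediction models underlying such a controller are often first-order (RC-style) discrete-time models. The sketch below is a toy of that kind; the parameters a and c are illustrative, not identified from the faculty building, and a real MPC would optimize the heating sequence against this model rather than just simulate it.

```python
def simulate_zone(t_init, t_out, q_heat, a=0.95, c=0.001):
    """Toy first-order zone thermal model:
    T[k+1] = a*T[k] + (1-a)*T_out[k] + c*q[k]

    t_init : initial zone temperature (deg C)
    t_out  : outdoor temperature per step (deg C)
    q_heat : heating power per step (W); c converts power to deg C/step
    """
    temps = [float(t_init)]
    for t_o, q in zip(t_out, q_heat):
        temps.append(a * temps[-1] + (1 - a) * t_o + c * q)
    return temps
```

With a price signal, an MPC layer would choose q_heat over a horizon to keep the predicted temperatures inside comfort bounds at minimum energy cost.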
Abstract: Recently, traffic monitoring has attracted the attention of computer vision researchers, and many algorithms have been developed to detect and track moving vehicles. In fact, vehicle tracking in daytime and in nighttime cannot be approached with the same techniques, due to the extremely different illumination conditions. Consequently, traffic-monitoring systems need a component to differentiate between daytime and nighttime scenes. In this paper, an HSV-based day/night detector is proposed for traffic monitoring scenes. The detector employs the hue histogram and the value histogram of the top half of the image frame. Experimental results show that extracting the brightness features along with the color features within the top region of the image is effective for classifying traffic scenes. In addition, the detector achieves high precision and recall rates and is feasible for real-time applications.
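A minimal sketch of this kind of decision is shown below, using only the value (brightness) histogram of the top half of the frame; the threshold and the exact decision rule are illustrative assumptions, as the paper combines hue and value histograms.

```python
import numpy as np

def is_daytime(hsv_frame, value_threshold=100.0):
    """Classify a frame as day or night from its top half.

    hsv_frame: H x W x 3 uint8 array in OpenCV-style HSV (V in 0..255).
    The top half is used because that is where the sky usually appears
    in traffic-monitoring scenes.
    """
    top = hsv_frame[: hsv_frame.shape[0] // 2]
    # value (brightness) histogram of the top region
    value_hist, _ = np.histogram(top[..., 2], bins=256, range=(0, 256))
    mean_value = (np.arange(256) * value_hist).sum() / value_hist.sum()
    return mean_value > value_threshold

# Synthetic bright-sky and dark-sky frames for a quick sanity check.
day = np.zeros((100, 100, 3), dtype=np.uint8); day[..., 2] = 200
night = np.zeros((100, 100, 3), dtype=np.uint8); night[..., 2] = 20
```

Adding the hue histogram, as the paper does, helps separate cases a brightness threshold alone would confuse, such as brightly lit night scenes.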
Abstract: Nowadays, education cannot be imagined without digital technologies, which broaden the horizons of teaching and learning processes. Several universities offer online courses, and for evaluation purposes, e-examination systems are being widely adopted in academic environments. Multiple-choice tests are extremely popular. In the move away from traditional examinations toward e-examination, Moodle is used as the Learning Management System (LMS). Moodle logs every click that students make for attempting and navigating an e-examination. Data mining has been applied in various domains, including retail sales and bioinformatics. In recent years, there has been increasing interest in the use of data mining in e-learning environments, where it has been applied to discover, extract, and evaluate parameters related to students' learning performance. The combination of data mining and e-learning is still in its infancy. The log data generated by students during an online examination can be used to discover knowledge with the help of data mining techniques. In web-based applications, the number of right and wrong answers in a test result is not sufficient to assess and evaluate a student's performance, so assessment techniques must be intelligent. If a student cannot answer the question asked by the instructor, an easier question can be asked; otherwise, a more difficult question on a similar topic can be posed. To do so, it is necessary to identify the difficulty level of the questions, and the proposed work concentrates on exactly this issue. A specific data mining technique, clustering, is used in this work. The method decides the difficulty levels of the questions, categorizes them as tough, easy, or moderate, and later serves them to students based on their performance. The proposed experiment categorizes the question set and also groups the students based on their performance in the examination. This will help the instructor guide the students more specifically.
In short, the mined knowledge helps to support, guide, facilitate, and enhance learning as a whole.
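The clustering step can be sketched with plain k-means over per-question features. The features below (fraction of students answering correctly, normalized mean answer time) are hypothetical stand-ins derived from the kind of log data described, not real Moodle data, and the abstract does not specify which clustering algorithm is used.

```python
import numpy as np

def kmeans(points, k=3, iters=20):
    """Plain k-means with a simple deterministic initialization
    (every len(points)//k-th point serves as a starting center)."""
    centers = points[:: max(1, len(points) // k)][:k].astype(float).copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical per-question features: [fraction correct, mean time].
questions = np.array([
    [0.95, 0.10], [0.90, 0.20],   # likely "easy"
    [0.60, 0.50], [0.55, 0.55],   # likely "moderate"
    [0.20, 0.90], [0.15, 0.95],   # likely "tough"
])
labels, centers = kmeans(questions)
```

With k=3, the resulting clusters map naturally onto the easy/moderate/tough categories, and a serving policy can then pick the next question's cluster from the student's running performance.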
Abstract: In the aviation industry, many faults may occur during the maintenance processes and assembly operations of complex structured aircraft because of the high interdependence of their components. These faults adversely affect the quality of aircraft parts and developed modules. Technical employees require a long time and considerable labor to check the correctness of each component. In addition, personnel must be trained regularly because of ever-growing and changing technology, and the cost of this training is generally very high. Augmented Reality (AR) technology radically reduces the cost of training and improves its effectiveness. In this study, the usage of AR technology in the aviation industry has been investigated, and the effectiveness of AR with heads-up display glasses has been examined. An application has been developed to compare the AR-assisted production process with the manual one.
Abstract: Ant algorithms are well-known metaheuristics which have been widely used for two decades. In most of the literature, an ant is a constructive heuristic able to build a solution from scratch. However, other types of ant algorithms have recently emerged, so the discussion is not limited to the common framework of constructive ant algorithms. Generally, at each generation of an ant algorithm, each ant builds a solution step by step by adding an element to it. Each choice is based on the greedy force (also called the visibility, the short-term profit, or the heuristic information) and the trail system (a central memory which collects historical information about the search process). Usually, all the ants of the population have the same characteristics and behaviors. In contrast, in this paper, a new type of ant metaheuristic is proposed, namely SMART (Solution Methods with Ants Running by Types). It relies on the use of different populations of ants, where each population has its own personality.
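The constructive step described above can be sketched as a single weighted random choice. The trail**alpha * visibility**beta weighting is the common ant-system convention, used here for illustration; it is not claimed to be SMART's exact rule, and a "personality" per population could simply mean different alpha/beta values.

```python
import random

def choose_element(candidates, trail, visibility, alpha=1.0, beta=2.0,
                   rng=random):
    """One constructive ant step: pick the next solution element with
    probability proportional to trail[j]**alpha * visibility[j]**beta,
    i.e. the trail system weighted against the greedy force."""
    weights = [trail[j] ** alpha * visibility[j] ** beta for j in candidates]
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return j
    return candidates[-1]  # guard against floating-point round-off
```

In a SMART-style setting, each ant population would call such a step with its own parameterization, giving the populations distinct search behaviors over the same trail memory.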