Abstract: Growth and remodeling of biological structures have attracted considerable attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design or computer-assisted surgical interventions. It is a well-known fact that biological
structures are never stress-free, even when externally unloaded. The
exact origin of these residual stresses is not clear, but theoretically,
growth is one of the main sources. Extracting organ shapes from medical imaging provides no information about the residual stresses existing in that organ. The simplest cause of such
stresses is gravity since an organ grows under its influence from
birth. Ignoring such residual stresses might cause erroneous results in
numerical simulations. Accounting for residual stresses due to tissue
growth can improve the accuracy of mechanical analysis results. This
paper presents an original computational framework based on gradual
growth to determine the residual stresses due to growth. To illustrate
the method, we apply it to a finite element model of a healthy human
face reconstructed from medical images. The distribution of residual
stress in facial tissues is computed; it can overcome the effect of gravity and maintain tissue firmness. Our assumption is that the tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery, as it would help surgeons estimate tissue changes after surgery.
Abstract: As the feature sizes of recent Complementary Metal-Oxide-Semiconductor (CMOS) devices decrease, static power increasingly dominates their energy consumption. Thus, the power savings obtained from Dynamic Voltage and Frequency Scaling (DVFS) are diminishing, and the temporary shutdown of cores or other microchip components becomes more worthwhile. A consequence of powering off unused parts of a chip is that the
relative difference between idle and fully loaded power consumption
is increased. This means that future chips and whole server systems gain more power-saving potential through power-aware load balancing, whereas previously this approach had only a limited effect and was therefore not widely adopted. While powering off complete servers has been used to save energy, this will become superfluous in many cases once individual cores can be powered down. An important advantage of this is a greatly reduced time to respond
to increased computational demand. We include the above developments in a server power model
and quantify the advantage. Our conclusion is that the strategies data centers use to decide when to power off server systems might in the future be applied at the core level, while the load balancing mechanisms previously used at the core level might be applied at the server level.
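The widened idle-to-peak gap described above can be illustrated with a minimal linear server power model with per-core power gating; all parameters below (base power, per-core idle and peak power) are hypothetical values for illustration, not measurements from the paper.

```python
# Illustrative server power model (all parameters hypothetical):
# total power = base + sum over powered-on cores of
#               idle power + (peak - idle) * utilisation.
# Powered-off cores contribute nothing, which widens the gap between
# idle and fully loaded consumption.

def server_power(core_loads, powered_on, p_base=50.0,
                 p_core_idle=4.0, p_core_max=10.0):
    """Total power (W) given per-core utilisation in [0, 1] and on/off flags."""
    power = p_base
    for load, on in zip(core_loads, powered_on):
        if on:
            power += p_core_idle + (p_core_max - p_core_idle) * load
    return power

# 8 cores, 2 fully busy: gating the 6 idle cores saves their idle power.
loads = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
all_on = [True] * 8
gated = [True, True] + [False] * 6
print(server_power(loads, all_on))  # 94.0 W
print(server_power(loads, gated))   # 70.0 W
```

With these toy numbers, power-aware consolidation plus core gating removes the 24 W the idle cores would otherwise draw.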
Abstract: Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for sophisticated power management approaches that greatly increase energy efficiency and even make them active energy market participants. A centralized building heating and cooling control system managed by economically optimal model predictive control shows promising results, with an estimated 30% increase in energy efficiency. The research
is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with corresponding wireless sensor data acquisition, remote heating/cooling units, and a central climate controller. Building walls are mathematically modeled
with corresponding material types, surface shapes and sizes. Models
are then exploited to predict thermal characteristics and changes in
different building zones. Exterior influences such as environmental
conditions and weather forecasts, occupant behavior, and comfort
demands are all taken into account for deriving price-optimal climate
control. Finally, a DC microgrid with photovoltaics, wind turbine,
supercapacitor, batteries and fuel cell stacks is added to make the
building a unit capable of active participation in a price-varying
energy market. Computational burden of applying model predictive
control on such a complex system is relaxed through a hierarchical
decomposition of the microgrid and climate control, where the
former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management. Such an approach is expected to enable the inclusion
of more complex building subsystems into consideration in order to
further increase the energy efficiency.
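The price-optimal predictive control idea can be sketched with a toy horizon search over on/off heater schedules for a first-order zone model; the model coefficients, prices, and comfort band below are invented for illustration and are not identified from the faculty building.

```python
from itertools import product

# Toy first-order zone model: T[k+1] = a*T[k] + b*u[k] + (1-a)*T_out.
# Exhaustive search over binary heater schedules (fine for short horizons)
# picks the cheapest schedule that keeps the zone inside the comfort band.

def optimal_schedule(T0, T_out, prices, horizon=6,
                     a=0.9, b=1.5, comfort=(19.0, 24.0)):
    best_cost, best_u = float("inf"), None
    for u in product([0, 1], repeat=horizon):
        T, cost, feasible = T0, 0.0, True
        for k in range(horizon):
            T = a * T + b * u[k] + (1 - a) * T_out
            cost += prices[k] * u[k]
            if not (comfort[0] <= T <= comfort[1]):
                feasible = False
                break
        if feasible and cost < best_cost:
            best_cost, best_u = cost, u
    return best_u, best_cost

prices = [0.3, 0.1, 0.1, 0.3, 0.3, 0.1]   # price-varying energy market
u, cost = optimal_schedule(T0=21.0, T_out=5.0, prices=prices)
```

Real MPC replaces the exhaustive search with a proper optimizer over continuous inputs and a much richer building model; the sketch only shows how price variation shifts heating away from expensive slots while comfort constraints stay satisfied.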
Abstract: Orthogonal Frequency Division Multiplexing
(OFDM) has been used in many advanced wireless communication
systems due to its high spectral efficiency and robustness to
frequency-selective fading channels. However, the major concern with OFDM systems is the high peak-to-average power ratio (PAPR)
of the transmitted signal. Some of the popular techniques used for
PAPR reduction in OFDM system are conventional partial transmit
sequences (CPTS) and clipping. In this paper, a parallel
combination/hybrid scheme of PAPR reduction using clipping and
CPTS algorithms is proposed. The proposed method intelligently applies both algorithms to reduce both the PAPR and the computational complexity. The proposed scheme slightly degrades bit error rate (BER) performance due to the clipping operation; this degradation can be reduced by selecting an appropriate value of the clipping ratio (CR). The simulation results show that the proposed algorithm
achieves significant PAPR reduction with much reduced
computational complexity.
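The PAPR metric and the clipping step can be sketched in a few lines; this is only the plain clipping half, not the paper's CPTS/clipping hybrid, and the subcarrier count and clipping ratio are arbitrary choices.

```python
import numpy as np

# Generate one OFDM symbol via IFFT of random QPSK data, measure its
# peak-to-average power ratio, then clip magnitudes and re-measure.

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(x, cr_db=4.0):
    """Clip magnitudes above the threshold set by the clipping ratio (CR)."""
    a = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (cr_db / 20)
    mag = np.abs(x)
    return np.where(mag > a, a * x / mag, x)

rng = np.random.default_rng(1)
bits = rng.integers(0, 4, 256)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-power QPSK
symbol = np.fft.ifft(qpsk) * np.sqrt(256)            # one 256-carrier symbol
print(papr_db(symbol), papr_db(clip(symbol)))
```

Clipping caps the peak near the CR threshold at the cost of in-band distortion, which is exactly the BER trade-off the abstract describes.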
Abstract: Visibility problems are central to many computational geometry applications. One of the typical visibility problems is computing the view from a given point. In this paper, a linear time procedure is proposed to compute the visibility subsets from a corner of a rectangular prism in an orthogonal polyhedron. The proposed algorithm could be useful to solve classic 3D problems.
Abstract: Statistical study has become indispensable in many fields of knowledge. Geotechnics is no different: probabilistic and statistical methods have gained ground for characterizing the uncertainties inherent in soil properties. One situation engineers constantly face is the definition of a probability distribution that adequately represents the sampled data. To be able to discard poorly fitting distributions, goodness-of-fit tests are necessary. In this paper, three non-parametric goodness-of-fit tests are applied to a computationally generated data set to test their fit to a series of known distributions. It is shown that the normal distribution does not always provide satisfactory results regarding the physical and behavioral representation of the modeled parameters.
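The workflow above can be sketched with one of the standard non-parametric tests, the Kolmogorov-Smirnov test; the skewed lognormal sample below is a synthetic stand-in for a soil property, not the paper's data set.

```python
import numpy as np
from scipy import stats

# Generate a skewed sample, fit candidate distributions by maximum
# likelihood, then apply the Kolmogorov-Smirnov goodness-of-fit test.

rng = np.random.default_rng(42)
sample = rng.lognormal(mean=0.0, sigma=0.8, size=500)

p_values = {}
for dist in (stats.norm, stats.lognorm):
    params = dist.fit(sample)                       # fitted parameters
    _, p = stats.kstest(sample, dist.name, args=params)
    p_values[dist.name] = p
print(p_values)   # the normal fit should be rejected, the lognormal not
```

Strictly, estimating parameters from the same sample biases the K-S p-value upward (a Lilliefors-type correction addresses this); the sketch keeps the uncorrected test for brevity.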
Abstract: Average temperatures worldwide are expected to
continue to rise. At the same time, major cities in developing
countries are becoming increasingly populated and polluted.
Governments are tasked with addressing overheating and poor air quality in residential buildings. This paper presents the development
of a model, which is able to estimate the occupant exposure
to extreme temperatures and high air pollution within domestic
buildings. Building physics simulations were performed using the
EnergyPlus building physics software. An accurate metamodel is
then formed by randomly sampling building input parameters and
training on the outputs of EnergyPlus simulations. Metamodels are
used to vastly reduce the amount of computation time required when
performing optimisation and sensitivity analyses. Neural Networks
(NNs) have been compared to a Radial Basis Function (RBF)
algorithm when forming a metamodel. These techniques were
implemented using the PyBrain and scikit-learn python libraries,
respectively. NNs are shown to perform around 15% better than RBFs
when estimating overheating and air pollution metrics modelled by
EnergyPlus.
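The metamodel idea can be illustrated with a from-scratch Gaussian RBF surrogate; the "overheating metric" below is a synthetic function standing in for EnergyPlus outputs, and the kernel width is an arbitrary choice, so this is a simplified sketch rather than the paper's PyBrain/scikit-learn setup.

```python
import numpy as np

# Interpolating Gaussian RBF metamodel: centres are the training points,
# weights come from solving the kernel system.

def rbf_fit(X, y, gamma=10.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)
    return np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)

def rbf_predict(X_train, w, X_new, gamma=10.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ w

def metric(X):                    # synthetic "building response"
    return np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, (200, 3))   # randomly sampled input parameters
w = rbf_fit(X_train, metric(X_train))

X_test = rng.uniform(0, 1, (50, 3))
err = np.abs(rbf_predict(X_train, w, X_test) - metric(X_test))
```

Once fitted, evaluating the surrogate is a single matrix product, which is what makes metamodels attractive for optimisation and sensitivity loops over expensive simulations.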
Abstract: This paper focuses on the mathematical modeling for
solidification of Al alloy in a cube mold cavity to study the
solidification behavior of casting process. The parametric
investigation of solidification process inside the cavity was
performed by using computational solidification/melting model
coupled with Volume of fluid (VOF) model. The implicit filling
algorithm is used in this study to understand the overall process from
the filling stage to solidification in a model metal casting process.
The model is validated against past studies under the same conditions. The
solidification process is analyzed by including the effect of pouring
velocity as well as natural convection from the wall and geometry of
the cavity. These studies show the possibility of various defects occurring during the solidification process.
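The solidification step alone can be sketched with a 1-D enthalpy-method scheme; the scaled material parameters below are arbitrary (not an Al alloy), and the filling/VOF stage of the paper is not modelled, so this only shows how a freezing front advances from a chilled wall.

```python
import numpy as np

n = 50                               # grid cells across the cavity
dx, dt, alpha = 1.0 / n, 2e-5, 1.0   # scaled grid, time step, diffusivity
Lc, T_melt = 2.0, 0.0                # latent heat / specific heat; melting temp

T = np.full(n, 1.0)                  # superheated liquid everywhere
H = T + Lc                           # enthalpy (per unit heat capacity)
H[0] = -1.0                          # chilled wall held solid at T = -1

def recover(H):
    """Temperature and liquid fraction from enthalpy."""
    T = np.where(H <= 0.0, H, np.where(H <= Lc, T_melt, H - Lc))
    f = np.clip(H / Lc, 0.0, 1.0)
    return T, f

for _ in range(2000):                # explicit stepping (dt satisfies CFL)
    T, f = recover(H)
    lap = np.zeros(n)
    lap[1:-1] = T[:-2] - 2.0 * T[1:-1] + T[2:]
    lap[-1] = T[-2] - T[-1]          # insulated far wall
    H += dt * alpha / dx**2 * lap
    H[0] = -1.0                      # re-impose the chilled wall

T, f = recover(H)
solid_cells = int((f < 0.5).sum())
```

Tracking enthalpy instead of temperature absorbs the latent heat release at the melting point without explicitly tracking the interface, which is the same idea the solidification/melting model in CFD codes builds on.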
Abstract: This paper presents the effect of the orbit inclination
on the pointing error of the satellite antenna and consequently on its
footprint on earth for a typical Ku- band payload system. The performance assessment is examined using both analytical
simulations and practical measurements, taking into account all the
additional sources of the pointing errors, such as East-West station
keeping, orbit eccentricity, and actual attitude control performance. The implementation and computation of the sinusoidal biases in satellite roll and pitch used to compensate for the pointing error of the satellite antenna coverage are studied and evaluated before and after the pointing corrections are performed. A method for evaluating the performance of the implemented
biases has been introduced through measuring satellite received level
from a mono-pulse tracking 11.1m transmitting antenna before and
after the implementation of the pointing corrections.
Abstract: This paper deals with using the ubiquitous MS Office environment (SmartArt, etc.) for mathematical modelling with the DYVELOP (Dynamic Vector Logistics of Processes) method, which serves to investigate and model crisis situations within critical infrastructure organizations. The first part of the paper introduces the entities, operators, and actors of the DYVELOP method. It
uses just three operators of Boolean algebra and four types of the
entities: the Environments, the Process Systems, the Cases, and the
Controlling. The Process Systems (PrS) have five “brothers”:
Management PrS, Transformation PrS, Logistic PrS, Event PrS and
Operation PrS. The Cases have three “sisters”: Process Cell Case,
Use Case, and Activity Case. All of them need special Ctrl actors to control their functions, except the Environment (ENV), which can do without Ctrl. The model's maps are called Blazons; they express mathematically and graphically the relationships among entities,
actors, and processes. In the second part of the paper, the rich Blazons of the DYVELOP method are used to discover and model cyclic cases and their phases. The Blazons are best comprehended with a live PowerPoint presentation. The crisis management of an energy critical infrastructure organization must use such cycles to cope successfully with crisis situations. Cycling through these cases several times is a necessary condition both for encompassing emergency events and for mitigating the organization's damages. An uninterrupted, continuous cycling process makes crisis management fruitful and is a good indicator and controlling actor of organizational continuity and its potential for sustainable development. Reliable rules are derived for the safe and reliable continuity of an energy critical infrastructure organization in crisis situations.
Abstract: In order to utilize results from global climate models,
dynamical and statistical downscaling techniques have been
developed. For dynamical downscaling, usually a limited area
numerical model is used, with associated high computational cost.
This research proposes a dynamic equation for specific space-time regional climate downscaling from the Educational Global Climate Model (EdGCM) for Southeast Asia. The equation is for surface air temperature and provides downscaled values at any specific location and time without running a regional climate model. In the proposed equation, surface air
temperature is approximated from ground temperature, sensible heat
flux, and 2 m wind speed. Results from applying the equation show that its errors are smaller than the errors of direct interpolation from EdGCM.
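A diagnostic relation of the kind described, T_air approximated from ground temperature, sensible heat flux, and 2 m wind, can be sketched as a least-squares fit; the data and coefficients below are synthetic stand-ins, not EdGCM output or the paper's actual equation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
T_g = rng.uniform(295.0, 310.0, n)     # ground temperature (K)
H_s = rng.uniform(-50.0, 150.0, n)     # sensible heat flux (W/m^2)
U2 = rng.uniform(0.0, 10.0, n)         # 2 m wind speed (m/s)

# Synthetic "truth": a linear relation plus noise, standing in for the
# model fields the downscaling equation would be calibrated against.
T_air = 5.0 + 0.95 * T_g - 0.01 * H_s + 0.2 * U2 + rng.normal(0.0, 0.3, n)

# Least-squares fit of T_air ~ c0 + c1*T_g + c2*H_s + c3*U2.
A = np.column_stack([np.ones(n), T_g, H_s, U2])
coef, *_ = np.linalg.lstsq(A, T_air, rcond=None)
rmse = float(np.sqrt(np.mean((A @ coef - T_air) ** 2)))
```

Once calibrated, such a relation is evaluated pointwise, which is what lets it deliver station-scale values without running a regional climate model.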
Abstract: We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. We consider an efficient computational method that can be used to obtain the approximate posteriors for the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigate the process of inducing a
posterior distribution for the various parameters and the latent function by using variational Bayesian approximations and an importance sampling method, and we then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is
applied to classify a synthetic multivariate dataset in order to verify the performance of our model. Experimental results show that our model is more accurate than the other approximation methods.
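The multinomial-Dirichlet backbone of such a model can be shown in its simplest conjugate special case, without the Gaussian process priors or the variational/importance-sampling machinery of the paper; the prior and counts below are illustrative.

```python
import numpy as np

# Conjugate multinomial-Dirichlet update: with a Dirichlet(alpha) prior on
# class probabilities and multinomial counts, the posterior is again
# Dirichlet, and the predictive class probabilities are its mean.

alpha_prior = np.array([1.0, 1.0, 1.0])      # symmetric Dirichlet prior
counts = np.array([12, 5, 3])                # observed class counts

alpha_post = alpha_prior + counts            # Dirichlet posterior
predictive = alpha_post / alpha_post.sum()   # predictive class probabilities
print(predictive)
```

The GP-prior model replaces these fixed concentration parameters with latent functions of the inputs, which is what makes the approximate-inference schemes in the abstract necessary.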
Abstract: The modelling of physical phenomena, such as the
earth’s free oscillations, the vibration of strings, the interaction of
atomic particles, or the steady state flow in a bar give rise to Sturm-
Liouville (SL) eigenvalue problems. The boundary conditions of some systems, such as the convection-diffusion equation and electromagnetic and heat transfer problems, require a combination of Dirichlet and Neumann conditions; hence the incorporation of the Robin boundary condition in the analysis of the Sturm-Liouville problem. This
paper deals with the computation of the eigenvalues and
eigenfunction of generalized Sturm-Liouville problems with Robin
boundary conditions using the finite element method. A numerical solution of the classical Sturm-Liouville problem is presented; the results agree with the exact solution, and higher precision is achieved with a larger number of elements.
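The finite element computation can be sketched for the classical problem -u'' = λu on (0,1) with u(0)=u(1)=0, whose exact eigenvalues are (kπ)²; the Dirichlet case is shown for brevity (Robin conditions would add boundary terms to the matrices), and the element count is arbitrary.

```python
import numpy as np
from scipy.linalg import eigh

n = 100                        # linear elements on (0, 1)
h = 1.0 / n
m = n - 1                      # interior nodes (Dirichlet case)

# Assemble tridiagonal stiffness K and consistent mass M for linear elements.
K = (np.diag(np.full(m, 2.0)) + np.diag(np.full(m - 1, -1.0), 1)
     + np.diag(np.full(m - 1, -1.0), -1)) / h
M = (np.diag(np.full(m, 4.0)) + np.diag(np.full(m - 1, 1.0), 1)
     + np.diag(np.full(m - 1, 1.0), -1)) * h / 6.0

lam = eigh(K, M, eigvals_only=True)     # generalized eigenvalues, ascending
exact = (np.arange(1, 4) * np.pi) ** 2  # pi^2, 4*pi^2, 9*pi^2
print(lam[:3], exact)
```

Conforming Galerkin eigenvalues approach the exact ones from above, and the error shrinks like h² here, which is the "higher precision with more elements" behaviour the abstract reports.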
Abstract: This research work presents the surface
thermodynamics approach to M-TB/HIV-Human sputum
interactions. This involved the use of the Hamaker coefficient
concept as a surface energetics tool in determining the interaction
processes, with the surface interfacial energies explained using van
der Waals concept of particle interactions. The Lifshitz derivation for
van der Waals forces was applied as an alternative to the contact
angle approach which has been widely used in other biological
systems. The methodology involved taking sputum samples from
twenty infected persons and from twenty uninfected persons for
absorbance measurement using a digital Ultraviolet visible
Spectrophotometer. The variables required for the computations with
the Lifshitz formula were derived from the absorbance data. The
Matlab software tools were used in the mathematical analysis of the
data produced from the experiments (absorbance values). The
Hamaker constants and the combined Hamaker coefficients were
obtained using the values of the dielectric constant together with the
Lifshitz Equation. The absolute combined Hamaker coefficients
A132abs and A131abs for both infected and uninfected sputum samples gave A132abs = 0.21631x10^-21 J for M-TB-infected sputum and Ã132abs = 0.18825x10^-21 J for M-TB/HIV-infected sputum. The significance of this result is the positive value of the
absolute combined Hamaker coefficient which suggests the existence
of net positive van der Waals forces, demonstrating an attraction between the bacteria and the macrophage. This implies that infection can occur. It was also shown that in the presence of HIV, the interaction energy is reduced by 13%, confirming the adverse effects observed in HIV patients suffering from tuberculosis.
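The sign logic of a combined Hamaker coefficient can be sketched with the zero-frequency (entropic) term of the Lifshitz expression for media 1 and 2 interacting across medium 3; the full Lifshitz integral over imaginary frequencies is omitted, and the dielectric constants below are illustrative round numbers, not the paper's sputum data.

```python
# Zero-frequency term of the Lifshitz expression for the combined
# Hamaker coefficient A132 (static dielectric constants only).

K_B = 1.380649e-23   # Boltzmann constant, J/K

def hamaker_zero_freq(eps1, eps2, eps3, T=310.0):
    """A132 ~ (3/4) kT * d13 * d23 with dij = (eps_i - eps_j)/(eps_i + eps_j)."""
    d13 = (eps1 - eps3) / (eps1 + eps3)
    d23 = (eps2 - eps3) / (eps2 + eps3)
    return 0.75 * K_B * T * d13 * d23

# Both particles less polarisable than the aqueous medium (eps3 ~ 80):
# both factors are negative, so A132 comes out positive -> net attraction.
A132 = hamaker_zero_freq(eps1=5.0, eps2=50.0, eps3=80.0)
print(A132)
```

A positive A132 corresponds to net attractive van der Waals interaction across the medium, which is the interpretation the abstract attaches to its measured coefficients.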
Abstract: The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), handling equi-, over-, and under-dispersed data. In the longitudinal context,
an integer-valued autoregressive (INAR(1)) process that incorporates
covariate specification has been developed to model longitudinal
CMP counts. However, the joint likelihood CMP function is
difficult to specify and thus restricts the likelihood-based estimating
methodology. The joint generalized quasi-likelihood approach
(GQL-I) was instead considered but is rather computationally
intensive and may not even estimate the regression effects due
to a complex and frequently ill-conditioned covariance structure.
This paper proposes a new GQL approach for estimating the
regression parameters (GQL-III) that is based on a single score vector
representation. The performance of GQL-III is compared with GQL-I
and separate marginal GQLs (GQL-II) through simulation experiments, and it is shown to yield estimates as efficient as those of GQL-I while being far more computationally stable.
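The GQL iteration beta ← beta + (D'S⁻¹D)⁻¹ D'S⁻¹(y − μ) can be sketched in the simplest cross-sectional Poisson case with an independence working covariance, which is far simpler than the paper's longitudinal CMP/INAR(1) setting; data and true coefficients below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta_true = 500, np.array([0.5, 0.8])
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    # For Poisson with independence working covariance, D = diag(mu) X and
    # S = diag(mu), so the GQL step reduces to solving
    # (X' diag(mu) X) step = X'(y - mu).
    step = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break
print(beta)   # close to beta_true up to sampling error
```

The longitudinal GQL variants differ mainly in how the working covariance S couples repeated counts; an ill-conditioned S is exactly what makes GQL-I fragile in the abstract's account.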
Abstract: For the last decade, researchers have started to focus
their interest on Multicast Group Key Management Framework. The
central research challenge is secure and efficient group key
distribution. The present paper is based on a bit-model-based secure multicast group key distribution scheme using the most popular absolute encoder output code, the Gray code. The
focus is twofold. The first part deals with the reduction of
computation complexity which is achieved in our scheme by
performing fewer multiplication operations during the key updating
process. To optimize the number of multiplication operations, an
O(1) time algorithm to multiply two N-bit binary numbers which
could be used in an N x N bit-model of reconfigurable mesh is used
in this proposed work. The second part aims at reducing the amount
of information stored in the Group Center and group members while
performing the update operation in the key content. Comparative
analysis to illustrate the performance of various key distribution
schemes is presented in this paper, and it is observed that the proposed algorithm reduces the computation and storage complexity significantly. The proposed algorithm is suitable for high-performance computing environments.
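The Gray code itself, the building block named above, is simple to state; the sketch below shows only encode/decode and the single-bit-change property, not the scheme's reconfigurable-mesh multiplication or key-update protocol.

```python
# Gray code: adjacent codewords differ in exactly one bit, which is what
# makes it attractive for compact, incremental key-identifier updates.

def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [to_gray(i) for i in range(8)]
print(codes)   # [0, 1, 3, 2, 6, 7, 5, 4]
```

Because successive codes differ in one bit, an update touching one group member changes a minimal part of the identifier, which is the intuition behind using it in the bit-model key distribution.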
Abstract: The multiprocessor task scheduling problem for dependent and independent tasks is computationally complex. Many methods have been proposed to achieve an optimal running time. As multiprocessor task scheduling is NP-hard in nature, many heuristics have been proposed that improve the makespan of the problem. But because of their problem-specific nature, a heuristic that provides the best results for one problem might not provide good results for another. Simulated Annealing, a metaheuristic approach, is therefore considered, since it can be applied to all types of problems. However, due to its many runs, a metaheuristic approach takes a large computation time. Hence, a hybrid approach is proposed by combining the Duplication Scheduling Heuristic with Simulated Annealing (SA), and the makespan results of simple Simulated Annealing and the hybrid approach are analyzed.
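Plain simulated annealing for the independent-task case (makespan = maximum processor load) can be sketched as below; the task times, temperature schedule, and random initial assignment are illustrative, and the paper's hybrid would instead start from a duplication-scheduling solution.

```python
import math
import random

def makespan(assign, times, m):
    loads = [0.0] * m
    for task, proc in enumerate(assign):
        loads[proc] += times[task]
    return max(loads)

def anneal(times, m, temp=10.0, cooling=0.995, steps=5000, seed=1):
    rng = random.Random(seed)
    assign = [rng.randrange(m) for _ in times]
    cur = best = makespan(assign, times, m)
    best_assign = assign[:]
    for _ in range(steps):
        t = rng.randrange(len(times))
        old = assign[t]
        assign[t] = rng.randrange(m)        # move one task to a random processor
        new = makespan(assign, times, m)
        # Accept improvements always, worse moves with Boltzmann probability.
        if new <= cur or rng.random() < math.exp(-(new - cur) / temp):
            cur = new
            if new < best:
                best, best_assign = new, assign[:]
        else:
            assign[t] = old                 # undo the rejected move
        temp *= cooling
    return best_assign, best

times = [3, 7, 2, 8, 5, 4, 6, 1, 9, 2]      # independent task run times
schedule, ms = anneal(times, m=3)
```

With total work 47 on 3 processors the makespan cannot beat 16; seeding SA with a good constructive heuristic, as the hybrid does, mainly cuts how many such annealing moves are needed.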
Abstract: Lateral Geniculate Nucleus (LGN) is the relay center
in the visual pathway as it receives most of the input information
from retinal ganglion cells (RGC) and sends it to the visual cortex. Low-threshold calcium currents (IT) at the membrane are the unique indicator characterizing the firing functionality of LGN neurons driven by the RGC input. According to the LGN functional
requirements such as functional mapping of RGC to LGN, the
morphologies of the LGN neurons were developed. During the
neurological disorders like glaucoma, the mapping between RGC and
LGN is disconnected and hence stimulating LGN electrically using
deep brain electrodes can restore the functionalities of LGN. A
computational model was developed for simulating the LGN neurons
with three predominant morphologies each representing different
functional mapping of RGC to LGN. The firing of action potentials at the LGN neuron due to IT was characterized by varying the stimulation parameters, morphological parameters, and orientation. A
wide range of stimulation parameters (stimulus amplitude, duration
and frequency) represents the various strengths of the electrical
stimulation with different morphological parameters (soma size,
dendrites size and structure). The orientation (0-180°) of the LGN
neuron with respect to the stimulating electrode represents the angle
at which the extracellular deep brain stimulation towards LGN
neuron is performed. A reduced dendrite structure was used in the model, based on the Bush-Sejnowski algorithm, to decrease the
computational time while conserving its input resistance and total
surface area. The major finding is that an input potential of 0.4 V is
required to produce the action potential in the LGN neuron which is
placed at 100 μm distance from the electrode. From this study, it can
be concluded that the neuroprostheses under design would need to
consider the capability of inducing at least 0.4V to produce action
potentials in LGN.
Abstract: We consider fast and accurate solutions of scattering problems involving large perfectly conducting (PEC) objects, formulated by an optimization of the Method of Auxiliary Sources (MAS). We
present various techniques used to reduce the total computational cost
of the scattering problem. The first technique is based on replacing the object with an array of a finite number of small PEC objects of the same shape. The second reduces the problem by considering only half of the object.
Abstract: Wireless Sensor Network (WSN) routing is complex
due to its dynamic nature, computational overhead, limited battery
life, non-conventional addressing schemes, self-organization, and the limited transmission range of sensor nodes. An energy-efficient routing protocol is a major concern in WSNs. LEACH is a hierarchical WSN routing protocol that increases network lifetime; it performs self-organizing and re-clustering functions in each round. This study proposes a better cluster head selection for sensor networks for efficient data aggregation. The algorithm is based on Tabu search.
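A Tabu search for cluster head selection can be sketched as below, using total squared node-to-nearest-head distance as a simplified stand-in for a LEACH-style energy objective; the field size, number of heads, tenure, and iteration counts are all arbitrary illustrative choices.

```python
import random

def cost(heads, nodes):
    """Total squared distance from each node to its nearest cluster head."""
    return sum(min((x - hx) ** 2 + (y - hy) ** 2 for hx, hy in heads)
               for x, y in nodes)

def tabu_search(nodes, k=3, iters=60, tenure=8, seed=0):
    rng = random.Random(seed)
    current = rng.sample(nodes, k)
    best, best_cost = current[:], cost(current, nodes)
    tabu = {}                              # node -> iteration until which it is tabu
    for it in range(iters):
        moves = []
        for out in current:                # try swapping one head out...
            for cand in nodes:             # ...for one non-head node
                if cand in current or tabu.get(cand, -1) >= it:
                    continue
                trial = [h for h in current if h != out] + [cand]
                moves.append((cost(trial, nodes), out, cand, trial))
        if not moves:
            break
        c, out, cand, trial = min(moves)   # best admissible neighbour
        current = trial
        tabu[out] = it + tenure            # forbid re-inserting 'out' for a while
        if c < best_cost:
            best, best_cost = trial, c
    return best, best_cost

rng = random.Random(42)
nodes = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(30)]
heads, c = tabu_search(nodes)
```

Unlike LEACH's randomized rotation, the tabu list lets the search accept the best available swap each round while blocking immediate reversals, which is what helps it escape local minima in head placement.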