Abstract: Human motion capture has become one of the major
areas of interest in the field of computer vision. Major
application areas that have been rapidly evolving include
advanced human interfaces, virtual reality and security/surveillance
systems. This study provides a brief overview of the techniques and
applications used for markerless human motion capture, which
analyzes human motion in the form of mathematical
formulations. The major contribution of this research is that it
classifies the computer-vision-based techniques of human motion
capture into a taxonomy, and then breaks them down into four
systematically different categories: tracking, initialization, pose
estimation and recognition. Detailed descriptions, and the
relationships among them, are given for the techniques of tracking and
pose estimation, and the subcategories of each process are further
described. The various hypotheses used by researchers in
this domain are surveyed and the evolution of these techniques is
explained. The survey concludes that most
researchers have focused on using mathematical body models for
markerless motion capture.
Abstract: Wear measurement and wear modelling are
fundamental issues in the industrial field, closely tied to
economy and safety; hence the need to study wear
measurement and wear estimation. The pin-on-disc test is the most
common test used to study wear behaviour. In this paper,
a pin-on-disc rig (AEROTECH UNIDEX 11) is used to
investigate the effects of normal load and material hardness on
wear under dry sliding conditions. In the pin-on-disc rig, two
specimens were used: a tipped steel pin, positioned
perpendicular to an aluminium disc. The
pin wear and disc wear were measured using the following
instruments: a Talysurf profilometer, a digital microscope, and an
Alicona instrument. The Talysurf profilometer was used to measure
the pin/disc wear-scar depth, the digital microscope to measure
the diameter and width of the wear scar, and the Alicona to
measure the pin wear and disc wear. The Archard model, the
American Society for Testing and Materials (ASTM) model, and a
neural network model were then used for pin/disc wear modelling.
Simulations were implemented in Matlab.
This paper focuses on how the Alicona can be used for wear
measurement and how a neural network can be used for wear
estimation.
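The Archard relation mentioned above is commonly written V = K·F·s/H, where V is the wear volume, F the normal load, s the sliding distance and H the hardness of the softer material. A minimal sketch in Python; the wear coefficient, load and hardness values below are illustrative assumptions, not the paper's measurements:

```python
def archard_wear_volume(k, load_N, sliding_distance_m, hardness_Pa):
    """Archard wear law: V = K * F * s / H (volume in m^3)."""
    return k * load_N * sliding_distance_m / hardness_Pa

# Illustrative values only (not from the paper): a steel pin sliding
# on an aluminium disc, hardness ~0.35 GPa.
V = archard_wear_volume(k=1e-4, load_N=10.0,
                        sliding_distance_m=100.0, hardness_Pa=0.35e9)
```

The linearity in load and distance is what makes the model easy to fit from pin-on-disc data, and also what a neural network model can relax.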
Abstract: Estimation of model parameters is necessary to predict
the behavior of a system. Model parameters are estimated using
optimization criteria, and most algorithms use historical data to do so:
the known (actual) target values and the output
produced by the model are compared, and the differences between the
two form the basis for estimating the parameters. To compare
different models developed from the same data, different criteria are
used. Data obtained from short-scale projects are used here. We
consider the software effort estimation problem using a radial basis
function network. Accuracy is compared using various
existing criteria for one and two predictors. We then propose a new
criterion based on linear least squares for evaluation and compare
the results for one and two predictors. We have also considered another
data set and evaluated prediction accuracy using the new criterion.
The new criterion is easier to comprehend than a single statistic.
Although software effort estimation is considered here, the method is
applicable to any modeling and prediction task.
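A radial basis function network of the kind used above passes the predictors through Gaussian basis functions and solves the output weights by linear least squares. A self-contained sketch; the effort data below are hypothetical, not the paper's project data sets:

```python
import numpy as np

def rbf_fit_predict(X_train, y_train, X_test, centers, width):
    """RBF network: Gaussian hidden units at fixed centers, output
    weights solved in closed form by linear least squares."""
    def design(X):
        # Squared distances from each input to each basis center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        Phi = np.exp(-d2 / (2 * width ** 2))
        return np.hstack([Phi, np.ones((X.shape[0], 1))])  # bias column
    W, *_ = np.linalg.lstsq(design(X_train), y_train, rcond=None)
    return design(X_test) @ W

# Hypothetical effort data: one predictor (project size) -> effort
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([5.0, 9.0, 15.0, 22.0])
pred = rbf_fit_predict(X, y, X, centers=X.copy(), width=1.0)
```

Because the hidden layer is fixed, only a linear system is solved, which is what makes comparing criteria across one and two predictors cheap.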
Abstract: Neurons in the nervous system communicate with
each other by producing electrical signals called spikes. To
investigate the physiological function of the nervous system, it is essential
to study the activity of neurons by detecting and sorting spikes in the
recorded signal. In this paper a method is proposed for the
spike sorting problem, based on nonlinear modeling
of spikes using an exponential autoregressive model. A genetic
algorithm is utilized for model parameter estimation, and
selected model coefficients are used as features for sorting.
For optimal selection of model coefficients, a self-organizing
feature map is used. The results show that modeling spikes with the
nonlinear autoregressive model outperforms its linear counterpart.
The features extracted from the coefficients of the exponential
autoregressive model are also better than wavelet-based features,
yielding more compact and well-separated clusters. For
spikes that differ only in small-scale structure, where principal component
analysis fails to produce separated clouds in the feature space, the
proposed method obtains well-separated clusters, removing
the need for complex classifiers.
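An exponential autoregressive model replaces the constant AR coefficients with state-dependent ones of the form a_i + b_i·exp(-γ·x²). A minimal one-step-prediction sketch under that common ExpAR form (the waveform and coefficient values are illustrative; the paper estimates them with a genetic algorithm):

```python
import numpy as np

def expar_predict(x, a, b, gamma):
    """One-step prediction with an ExpAR model of order p:
    x[t] ~ sum_i (a_i + b_i * exp(-gamma * x[t-1]**2)) * x[t-i]."""
    p = len(a)
    preds = np.zeros(len(x))
    for t in range(p, len(x)):
        phi = np.exp(-gamma * x[t - 1] ** 2)  # state-dependent gain
        preds[t] = sum((a[i] + b[i] * phi) * x[t - 1 - i]
                       for i in range(p))
    return preds

# Hypothetical spike-like waveform and hand-picked coefficients
x = np.sin(np.linspace(0, 3 * np.pi, 64))
preds = expar_predict(x, a=[1.6, -0.9], b=[0.1, -0.05], gamma=2.0)
residual = np.mean((x[2:] - preds[2:]) ** 2)
```

The fitted (a, b, γ) then serve as the per-spike feature vector from which a subset is chosen for clustering.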
Abstract: Construction cost estimation is one of the most
important aspects of construction project design. For generations, the
process of cost estimating has been manual, time-consuming and
error-prone. This has contributed to most cost estimates being unclear
and riddled with inaccuracies that at times lead to over- or under-estimation
of construction cost. The development of standard sets of
measurement rules understandable by all those involved in a
construction project has not totally solved these challenges. Emerging
Building Information Modelling (BIM) technologies can exploit
standard measurement methods to automate the cost estimation process
and improve accuracy. This requires the standard measurement
methods to be structured in an ontological, machine-readable format
so that BIM software packages can easily read them. Most standard
measurement methods are still text-based in textbooks and require
manual editing into tables or spreadsheets during cost estimation. The
aim of this study is to explore the development of an ontology based
on the New Rules of Measurement (NRM) commonly used in the UK for
cost estimation. The methodology adopted is Methontology, one of
the most widely used ontology engineering methodologies. The
challenges encountered in this exploratory study are also reported and
recommendations for future studies proposed.
Abstract: Recent research in neural network science and
neuroscience on modeling complex time-series data and statistical
learning has focused mostly on learning from high-dimensional input
spaces and signals. Local linear models are a strong choice for modeling local
nonlinearity in data series. Locally weighted projection regression is
a flexible and powerful algorithm for nonlinear approximation in
high-dimensional signal spaces. In this paper, different learning
scenarios for one- and two-dimensional data series with different
distributions are investigated by simulation; noise is further
added to the data to create differently disordered
distributions in the time-series data and to evaluate the algorithm's
prediction of local nonlinearity. The performance of the
algorithm is then simulated, and its sensitivity to the data
distribution when the data are widely dispersed or scarce, together with
the influence of the important local-validity parameter
under different data distributions, is explained.
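Locally weighted projection regression builds local linear models weighted around each query point. As a simplified stand-in, the sketch below does plain locally weighted linear regression (without LWPR's incremental low-dimensional projections); the 1-D data series is synthetic:

```python
import numpy as np

def locally_weighted_predict(X, y, x_query, bandwidth):
    """Fit a linear model around x_query with Gaussian locality
    weights and evaluate it at x_query (weighted least squares)."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    A = np.hstack([X, np.ones((len(X), 1))])   # linear model + bias
    WA = A * w[:, None]
    # Normal equations of the weighted least-squares problem
    beta, *_ = np.linalg.lstsq(WA.T @ A, WA.T @ y, rcond=None)
    return np.append(x_query, 1.0) @ beta

# Synthetic 1-D series with local nonlinearity plus noise
rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)
yhat = locally_weighted_predict(X, y, np.array([0.5]), bandwidth=0.1)
```

The bandwidth plays the role of the locality-validity parameter discussed above: too wide and local nonlinearity is smoothed away, too narrow and sparse regions give unstable fits.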
Abstract: Building loss estimation methodologies, which have
advanced considerably in recent decades, are usually used to
estimate the social and economic impacts resulting from seismic structural
damage. Following these methods, this paper presents the
evaluation of the annual loss probability of a reinforced concrete
moment-resisting frame designed according to the Korean Building Code.
The annual loss probability is defined by (1) a fragility curve obtained
from a capacity spectrum method similar to the one adopted
in HAZUS, and (2) a seismic hazard curve derived from annual
frequencies of exceedance per peak ground acceleration. Seismic
fragilities are computed to calculate the annual loss probability of a
given structure using functions depending on structural capacity,
seismic demand, structural response and the probability of exceeding
damage-state thresholds. This study carried out a nonlinear static
analysis to obtain the capacity of an RC moment-resisting frame
selected as a prototype building. The analysis results show that the
probability of extensive structural damage in the prototype
building is expected to be 0.01% in a year.
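The annual loss probability described above combines a fragility curve with the slope of the hazard curve. A rough numerical sketch, with a lognormal fragility and a power-law hazard whose parameters are invented for illustration (not those of the Korean-code frame in the paper):

```python
import numpy as np
from math import erf

def lognormal_cdf(x, median, beta):
    """Fragility: probability of exceeding a damage state at PGA x."""
    return 0.5 * (1 + erf(np.log(x / median) / (beta * np.sqrt(2))))

# Illustrative fragility (median PGA 0.8 g, dispersion 0.5) and a
# power-law hazard curve lambda(im): annual frequency of exceedance.
im = np.linspace(0.01, 2.0, 500)
fragility = np.array([lognormal_cdf(x, median=0.8, beta=0.5) for x in im])
hazard = 1e-4 * im ** -2.0
# Annual damage probability: integrate fragility against |d lambda / d im|
f = fragility * np.abs(np.gradient(hazard, im))
annual_p = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(im)))
```

The same convolution structure holds whichever capacity-spectrum method supplies the fragility parameters.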
Abstract: The performance of different filtering approaches depends
on the modeling of the dynamical system and on the algorithm structure. For
modeling and smoothing the data, the evaluation of the posterior
distribution in each filtering approach should be chosen carefully.
In this paper, filtering approaches including the Kalman filter,
EKF, UKF and EKS, and the RTS smoother, are simulated on
trajectory-tracking tasks, and the accuracy and limitations of these approaches are
explained. The probability of the model under different filters is then
compared, and finally the effect of the noise variance on the estimation is
described with simulation results.
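Of the filters compared above, the baseline Kalman filter for a scalar random-walk state reduces to a two-line predict/update cycle. A minimal sketch; the process and measurement noise variances below are illustrative choices:

```python
import numpy as np

def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state: predict with
    process noise q, then update with measurement noise r."""
    x, p, est = x0, p0, []
    for z in zs:
        p = p + q                # predict: variance grows
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with innovation
        p = (1 - k) * p
        est.append(x)
    return np.array(est)

# Noisy measurements of a constant-value trajectory (synthetic)
rng = np.random.default_rng(1)
truth = 5.0
zs = truth + rng.standard_normal(200)
est = kalman_1d(zs, q=1e-5, r=1.0)
```

The ratio q/r controls how aggressively the filter tracks versus smooths, which is the noise-variance effect the abstract examines.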
Abstract: Vertical Handover (VHO) among different
communication technologies, ensuring uninterrupted service
continuity, is one of the most important performance parameters in
heterogeneous network environments. In an integrated Universal
Mobile Telecommunication System (UMTS) and Wireless Local
Area Network (WLAN), WLAN is given an inherent priority over
UMTS because of its high data rates at low cost. Therefore,
mobile users want to be associated with WLAN for as much of the
time as possible while roaming, to enjoy the best possible services at
low cost. This motivates reducing the number of VHOs. In this work the
reduction of the number of VHOs with respect to a varying number of
WLAN Access Points (APs) in an integrated UMTS and WLAN
network is investigated through simulation, to provide the best possible
cost-effective service to users. The simulation has been carried
out for an area of (7800 × 9006) m², where the COST-231 Hata model
and the 3GPP (TR 101 112 V 3.1.0) specified models are used as the
WLAN and UMTS path loss models, respectively. The handover
decision is triggered based on the received signal level compared
to the fade margin; the fade margin gives a probabilistic measure of
the reliability of the communication link. A relationship between the
number of WLAN APs and the number of VHOs is also established
in this work.
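The handover trigger described above compares a received level, derived from a path loss model, against a threshold plus a fade margin. A sketch using the COST-231 Hata median path loss; the frequency, antenna heights, transmit power and thresholds below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def cost231_hata_pl(d_km, f_mhz=2000.0, h_b=30.0, h_m=1.5, c_m=0.0):
    """COST-231 Hata median path loss (dB) for a macro cell:
    PL = 46.3 + 33.9 log f - 13.82 log h_b - a(h_m)
         + (44.9 - 6.55 log h_b) log d + C_m."""
    a_hm = ((1.1 * np.log10(f_mhz) - 0.7) * h_m
            - (1.56 * np.log10(f_mhz) - 0.8))
    return (46.3 + 33.9 * np.log10(f_mhz) - 13.82 * np.log10(h_b)
            - a_hm + (44.9 - 6.55 * np.log10(h_b)) * np.log10(d_km) + c_m)

def handover_to_wlan(rx_dbm, threshold_dbm, fade_margin_db):
    """Trigger VHO only if the received level clears the threshold
    plus the fade margin (a link-reliability allowance)."""
    return rx_dbm >= threshold_dbm + fade_margin_db

tx_dbm = 43.0
rx = tx_dbm - cost231_hata_pl(d_km=1.0)
decision = handover_to_wlan(rx, threshold_dbm=-85.0, fade_margin_db=10.0)
```

Raising the fade margin makes the decision more conservative, which is one lever for reducing ping-pong VHOs as the AP count varies.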
Abstract: In this paper, the transient performance
of an n-type Gate Inside Junctionless Transistor (GI-JLT) has been
evaluated. 3-D Bohm Quantum Potential (BQP) transport device
simulation has been used to evaluate the delay and power dissipation
performance. The GI-JLT has a number of desirable device parameters,
such as reduced propagation delay, dynamic power dissipation,
power-delay product, intrinsic gate delay and energy-delay
product, compared to the Gate-All-Around junctionless transistor (GAA-JLT). In
addition, various other device performance parameters, namely the
on/off current ratio, short channel effects (SCE), Transconductance
Generation Factor (TGF), unity-gain cut-off frequency (fT) and
subthreshold slope (SS), of the GI-JLT and GAA-JLT have been
analyzed and compared. The GI-JLT shows better device performance
characteristics than the GAA-JLT for low-power and high-frequency
applications, because of its larger gate electrostatic control over the
device operation.
Abstract: We present a solution to the Maxmin u/E parameter
estimation problem for possibility distributions in the m-dimensional
case. Our method is based on a geometrical approach, in which a
minimal-area enclosing ellipsoid is constructed around the sample. We also
demonstrate that Maxmin u/E parameter estimation can improve the
results of well-known algorithms in the fuzzy model identification task.
Abstract: Locating Radio Controlled (RC) devices using their
unintended emissions is of great interest given security
concerns. The weak nature of these emissions requires a near-field
localization approach, since it is hard to detect these signals in the
far-field region of an array. Beyond angle estimation alone, near-field
localization also requires range estimation of the source, which makes
it more complicated than far-field models. The challenges of
locating such devices in the near-field region and in a real-time environment
are analyzed in this paper. An ESPRIT-like near-field localization
scheme is utilized for both angle and range estimation, and a 1-D search
with symmetric subarrays is provided. Two 7-element uniform linear
antenna arrays (ULAs) are employed for locating the RC source.
Experimental results of location estimation for one unintentionally
emitting walkie-talkie at different positions are given.
Abstract: The factors affecting construction unit cost vary
depending on a country's political, economic, social and
technological inclinations, and have
been studied from various perspectives. Analysis of cost factors
requires an appreciation of a country's practices; identified cost
factors provide an indication of a country's construction economic
strata. The purpose of this paper is to identify the essential factors
that affect unit cost estimation and their breakdown using artificial
neural networks. Twenty-five (25) identified cost factors in road
construction were subjected to a questionnaire survey, and using
SPSS factor analysis the factors were reduced to eight. The 8 factors
were analysed using a neural network (NN) to determine the
proportionate breakdown of the cost factors within a given construction
unit rate. The NN predicted that the political environment accounted for 44% of
the unit rate, followed by contractor capacity at 22% and financial
delays, project feasibility and overhead & profit each at 11%. Project
location, material availability and corruption perception index had
minimal impact on the unit cost given the training data provided.
Quantified cost factors can be incorporated in unit cost estimation
models (UCEM) to produce more accurate estimates. This can
improve the cost estimation of infrastructure projects and
establish a benchmark standard to assist the alignment of
work practices and the training of new staff, permitting the ongoing
development of best practices in cost estimation to become more
effective.
Abstract: The thermal conductivity of a fluid can be
significantly enhanced by dispersing nano-sized particles in it; the
resultant fluid is termed a "nanofluid". A theoretical model for
estimating the thermal conductivity of a nanofluid is proposed
here. It is based on the mechanism whereby evenly dispersed
nanoparticles within a nanofluid undergo Brownian motion, in the course
of which they repeatedly collide with the heat source.
During each collision, rapid heat transfer occurs owing to the solid-solid
contact. Molecular dynamics (MD) simulation of the collision
of nanoparticles with the heat source has shown a pulse-like
pick-up of heat by the nanoparticles within 20-100 ps, the extent
of which depends not only on the thermal conductivity of the
nanoparticles, but also on their elastic and other physical properties.
After the collision the nanoparticles undergo
Brownian motion in the base fluid and release the excess heat to the
surrounding base fluid within 2-10 ms. The Brownian motion and
associated temperature variation of the nanoparticles have been
modeled by stochastic analysis. The repeated occurrence of these events
among the suspended nanoparticles contributes significantly to the
characteristic thermal conductivity of the nanofluid, which has been
estimated by the present model for an ethylene glycol based nanofluid
containing Cu nanoparticles ranging in size from 8 to 20 nm with a
Gaussian size distribution. The predictions of the present model
show reasonable agreement with the experimental data available
in the literature.
Abstract: Systems engineering plays a key role in the industrial
product development of complex technical systems, and the need for
systems engineers in industry is growing. But there is a gap between
the industrial need and academic education. Normally, academic
education is focused on the domain-specific design,
implementation and testing of technical systems, while necessary systems
engineering expertise, such as knowledge of requirements analysis,
product cost estimation, management, and social skills, is poorly
taught. Thus there is a need for new academic concepts for teaching
systems engineering skills. This paper presents a project-oriented
training concept to prepare students from different technical degree
programs for systems engineering activities. The training concept has
been initially implemented and applied in the industrial engineering
master program of the University of Applied Sciences Offenburg.
Abstract: Near infrared (NIR) spectroscopy has long been of
great interest in the food and agriculture industries, and the development
of prediction models has facilitated the estimation process in recent
years. In this study, 110 crude palm oil (CPO) samples were used to
build a free fatty acid (FFA) prediction model. 60% of the collected
data were used for training and the remaining 40% for
testing. The visible peaks in the NIR spectrum were at 1725 nm and
1760 nm, indicating the first overtone of C-H bands.
Principal component regression (PCR) was applied to the data
to build the mathematical prediction model. The optimal
number of principal components was 10. The results showed
R² = 0.7147 for the training set and R² = 0.6404 for the testing set.
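Principal component regression, as used above, projects the spectra onto the leading principal components and regresses the response on the scores. A self-contained sketch on synthetic rank-2 "spectra" (not the CPO/FFA data of the study):

```python
import numpy as np

def pcr_fit_predict(X_train, y_train, X_test, n_components):
    """PCR: center the data, take the leading right singular vectors
    as principal directions, regress y on the scores by least squares."""
    mu = X_train.mean(axis=0)
    Xc = X_train - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                      # loadings
    T = Xc @ V                                   # training scores
    A = np.hstack([T, np.ones((len(T), 1))])     # add intercept
    beta, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    T_test = (X_test - mu) @ V
    return np.hstack([T_test, np.ones((len(T_test), 1))]) @ beta

# Synthetic spectra: 20 samples x 50 wavelengths driven by 2 factors
rng = np.random.default_rng(2)
scores = rng.standard_normal((20, 2))
loadings = rng.standard_normal((2, 50))
X = scores @ loadings
y = scores @ np.array([1.0, -0.5]) + 3.0
yhat = pcr_fit_predict(X, y, X, n_components=2)
```

Choosing the number of components (10 in the study) trades variance explained in the spectra against overfitting of the regression.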
Abstract: A fuzzy-inference-based approach to
building a modular intelligent system for assessing the quality of
communication services is proposed. The basic fuzzy estimation model
developed under this approach takes into account the
recommendations of the International Telecommunication Union
regarding the operation of packet-switching networks based on the IP
protocol. To implement the main features and functions of the fuzzy
quality-control system for telecommunication services, a
multilayer feedforward neural network is used.
Abstract: The objective of meta-analysis is to combine results
from several independent studies in order to create generalizations
and provide an evidence base for decision making. But recent studies
show that the magnitude of effect-size estimates reported in many
areas of research has changed significantly over time, and this can
impair the results and conclusions of meta-analyses. A number of
sequential methods have been proposed for monitoring the effect-size
estimates in meta-analysis. However, they are based on statistical
theory applicable only to the fixed-effect model (FEM) of meta-analysis.
For the random-effects model (REM), the analysis incorporates the
heterogeneity variance τ², whose estimation creates complications.
In this paper we study the use of a truncated CUSUM-type test with
asymptotically valid critical values for sequential monitoring in the REM.
Simulation results show that the test does not control the Type I error
well, and is not recommended. Further work is required to derive an
appropriate test in this important area of application.
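A CUSUM-type monitoring statistic of the kind studied above accumulates standardized deviations of the effect sizes from the pooled fixed-effect estimate. The sketch below shows the basic path on a synthetic stream with a late drift; the truncation and asymptotic critical values of the paper's test, and the REM heterogeneity term, are omitted:

```python
import numpy as np

def cusum_statistics(effects, variances):
    """Standardized CUSUM path of effect sizes about the pooled
    fixed-effect (inverse-variance weighted) estimate."""
    w = 1.0 / np.asarray(variances)
    theta_hat = np.sum(w * effects) / np.sum(w)  # pooled FEM estimate
    z = (np.asarray(effects) - theta_hat) * np.sqrt(w)
    return np.cumsum(z) / np.sqrt(len(z))

# Synthetic stream: 25 stable studies, then 25 with a drifted effect
rng = np.random.default_rng(3)
stable = rng.normal(0.3, 0.1, 25)
drift = rng.normal(0.6, 0.1, 25)
effects = np.concatenate([stable, drift])
path = cusum_statistics(effects, variances=np.full(50, 0.01))
```

A drift shows up as a large excursion of the path; a formal test compares the maximum excursion against a critical value, which is exactly where the REM's τ² estimation complicates matters.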
Abstract: A key issue in seismic risk analysis within the context
of Performance-Based Earthquake Engineering is the evaluation of
the expected seismic damage of structures under a specific
earthquake ground motion. The assessment of the seismic
performance strongly depends on the choice of the seismic Intensity
Measure (IM), which quantifies the characteristics of a ground
motion that are important to the nonlinear structural response. Several
conventional ground-motion IMs have been used to estimate the
damage potential of ground motions to structures. Yet none of them
has proved able to predict seismic damage adequately. Therefore,
alternative, scalar intensity measures, which take into account not
only ground motion characteristics but also structural information
have been proposed. Some of these IMs are based on integration of
spectral values over a range of periods, in an attempt to account for
the information that the shape of the acceleration, velocity or
displacement spectrum provides. The adequacy of a number of these
IMs in predicting the structural damage of 3D R/C buildings is
investigated in the present paper. The investigated IMs, some of
which are structure-specific and some non-structure-specific, are
defined via integration of spectral values. For this purpose,
three plan-symmetric R/C buildings are studied. The buildings are
subjected to 59 bidirectional earthquake ground motions. The two
horizontal accelerograms of each ground motion are applied along
the structural axes. The response is determined by nonlinear time
history analysis. The structural damage is expressed in terms of the
maximum interstory drift as well as the overall structural damage
index. The values of the aforementioned seismic damage measures
are correlated with seven scalar ground motion IMs. The comparative
assessment of the results revealed that the structure-specific IMs
present higher correlation with the seismic damage of the three
buildings. However, the adequacy of the IMs for estimation of the
structural damage depends on the response parameter adopted.
Furthermore, it was confirmed that the widely used spectral
acceleration at the fundamental period of the structure is a good
indicator of the expected earthquake damage level.
Abstract: A tumor is an uncontrolled growth of tissue in any part
of the body. Tumors are of different types and have different
characteristics and treatments. Brain tumors are inherently serious and
life-threatening because of their location in the limited space of the
intracranial cavity (the space formed inside the skull). Locating a tumor
within an MR (magnetic resonance) image of the brain is an integral part of
brain tumor treatment. This segmentation task requires
classification of each voxel as either tumor or non-tumor, based on
the description of the voxel under consideration. Many studies in
the medical field use Markov Random Fields (MRF) for the
segmentation of MR images. Even though the segmentation is
good, computing the probabilities and estimating the parameters is
difficult. To overcome these issues, a Conditional
Random Field (CRF) is used in this paper for segmentation, along
with a modified artificial bee colony optimization and a modified
fuzzy possibilistic c-means (MFPCM) algorithm. This work mainly
focuses on reducing the computational complexity found in
existing methods and on achieving higher accuracy. The
efficiency of this work is evaluated using parameters such as
region non-uniformity, correlation and computation time. The
experimental results are compared with existing methods such as
MRF with an improved Genetic Algorithm (GA) and the MRF-Artificial
Bee Colony (MRF-ABC) algorithm.