Abstract: Reliability allocation is quite important during early
design and development stages for a system to apportion its specified
reliability goal to subsystems. This paper improves the fuzzy reliability allocation method and gives concrete procedures for determining the factor and sub-factor sets, weight sets, judgment set, and multi-stage fuzzy evaluation. To determine the weights of the factor and sub-factor sets, modified trapezoidal fuzzy numbers are proposed to
reduce errors caused by subjective factors. To decrease the fuzziness
in fuzzy division, an approximation method based on linear
programming is employed. To compute crisp values from the fuzzy numbers, the centroid method of defuzzification is applied. An
example is provided to illustrate the application of the proposed
reliability allocation method based on fuzzy arithmetic.
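The centroid defuzzification step mentioned above can be illustrated with a short sketch. This is a minimal numerical example assuming a generic trapezoidal fuzzy number (a, b, c, d); the values shown are hypothetical and are not taken from the paper's allocation example.

```python
# Minimal sketch: centroid defuzzification of a trapezoidal fuzzy number
# (a, b, c, d). Values are illustrative, not the paper's data.
import numpy as np

def trapezoidal_membership(x, a, b, c, d):
    """Membership degree of x for the trapezoidal fuzzy number (a, b, c, d)."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def centroid_defuzzify(a, b, c, d, n=10001):
    """Crisp value: centroid of the membership function, by numerical integration."""
    x = np.linspace(a, d, n)
    mu = trapezoidal_membership(x, a, b, c, d)
    return np.trapz(x * mu, x) / np.trapz(mu, x)

# Example: defuzzify a hypothetical allocated-reliability fuzzy number.
print(centroid_defuzzify(0.90, 0.93, 0.95, 0.98))
```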
Abstract: In general, classical methods such as maximum
likelihood (ML) and least squares (LS) estimation methods are used
to estimate the shape parameters of the Burr XII distribution.
However, these estimators are very sensitive to outliers. To
overcome this problem we propose alternative robust estimators
based on the M-estimation method for the shape parameters of the
Burr XII distribution. We provide a small simulation study and a real
data example to illustrate the performance of the proposed estimators
over the ML and the LS estimators. The simulation results show that
the proposed robust estimators generally outperform the classical
estimators in terms of bias and root mean square error when there are outliers in the data.
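As a point of reference for the classical estimators discussed above, the following is a minimal sketch of maximum likelihood fitting of the Burr XII shape parameters with SciPy; the contaminating outliers are artificial, and the robust M-estimators proposed in the paper are not reproduced here.

```python
# Minimal sketch: ML estimation of the Burr XII shape parameters (SciPy's
# c, d parameterization) on data contaminated with artificial outliers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
clean = stats.burr12.rvs(c=3.0, d=1.5, size=200, random_state=rng)
data = np.concatenate([clean, [15.0, 20.0, 25.0]])   # artificial outliers

# Classical ML estimates, with location and scale held fixed to isolate the shapes.
c_hat, d_hat, loc, scale = stats.burr12.fit(data, floc=0, fscale=1)
print(f"ML estimates with outliers: c = {c_hat:.3f}, d = {d_hat:.3f}")
```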
Abstract: In this paper, we introduce a gradient-based inverse
solver to obtain the missing boundary conditions based on the
readings of internal thermocouples. The results show that the method
is very sensitive to measurement errors, and becomes unstable when
small time steps are used. Artificial neural networks are shown to
be capable of capturing the whole thermal history on the run-out
table, but are not very effective in restoring the detailed behavior of
the boundary conditions. They also perform poorly in nonlinear cases and when the boundary condition profile is different.
Genetic algorithms (GA) and particle swarm optimization (PSO) are more effective in finding a detailed
representation of the time-varying boundary conditions, as well as in
nonlinear cases. However, their convergence takes longer. A
variation of the basic PSO, called CRPSO, showed the best
performance among the three versions. Also, PSO proved to be
effective in handling noisy data, especially when its performance
parameters were tuned. An increase in the self-confidence parameter
was also found to be effective, as it increased the global search
capabilities of the algorithm. RPSO was the most effective variation
in dealing with noise, closely followed by CRPSO. The latter
variation is recommended for inverse heat conduction problems, as it
combines the efficiency and effectiveness required by these
problems.
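To make the role of the PSO performance parameters mentioned above concrete, here is a minimal PSO sketch in which c1 is the self-confidence (cognitive) parameter and c2 the social parameter; the objective is a stand-in, not the paper's inverse heat conduction formulation, and the CRPSO/RPSO variants are not shown.

```python
# Minimal PSO sketch; c1 is the self-confidence (cognitive) coefficient,
# c2 the social coefficient. Objective and bounds are illustrative only.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Hypothetical use: recover a time-varying boundary condition sampled at 10
# instants by minimizing the squared mismatch to a known target profile.
target = np.sin(np.linspace(0, np.pi, 10))
best, err = pso(lambda q: np.sum((q - target) ** 2), dim=10, bounds=(-2, 2))
print(err)
```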
Abstract: A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location one hour into the future. This project was supported
by the Southern Company to determine at what specific times during
a given day of the year solar panels could be relied upon to produce
energy in sufficient quantities. Owing to their ability as universal function approximators, artificial neural networks were used to estimate the nonlinear pattern of solar radiation, using measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and
training strategies were utilized, though a multilayer perceptron with
a variety of hidden nodes trained with the resilient propagation
algorithm consistently yielded the most accurate predictions. In
addition, a modeled direct normal irradiance field and adjacent
weather station data were used to bolster prediction accuracy. In later
trials, the solar radiation field was preprocessed with a discrete
wavelet transform with the aim of removing noise from the
measurements. The current model provides predictions of solar
radiation with a mean square error of 0.0042, though ongoing efforts
are being made to further improve the model’s accuracy.
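The wavelet preprocessing step mentioned above can be sketched as follows; the wavelet family, decomposition level, and threshold rule are assumptions for illustration, not the configuration used in the study.

```python
# Minimal sketch: denoising a solar-radiation-like series with a discrete
# wavelet transform (PyWavelets) and soft thresholding.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Synthetic example standing in for an hourly irradiance measurement series.
t = np.linspace(0, 24, 256)
noisy = np.clip(np.sin(np.pi * t / 24), 0, None) + 0.05 * np.random.randn(t.size)
smoothed = wavelet_denoise(noisy)
```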
Abstract: Wireless Sensor Networks (WSNs) have a wide variety of applications and offer vast future potential. Nodes in
WSNs are prone to failure due to energy depletion, hardware failure,
communication link errors, malicious attacks, and so on. Therefore,
fault tolerance is one of the critical issues in WSNs. We study how
fault tolerance is addressed in different applications of WSNs. Fault
tolerant routing is a critical task for sensor networks operating in
dynamic environments. Many routing, power management, and data
dissemination protocols have been specifically designed for WSNs
where energy awareness is an essential design issue. The focus, however, has been on routing protocols, which may differ depending on the application and network architecture.
Abstract: This paper examines the effect of the volatility of oil
prices on food price in South Africa using monthly data covering the
period 2002:01 to 2014:09. Food price is measured by the South
African consumer price index for food, while oil price is proxied by the Brent crude oil price. The study employs the GARCH-in-mean VAR
model, which allows the investigation of the effect of a negative and
positive shock in oil price volatility on food price. The model also
allows the oil price uncertainty to be measured as the conditional
standard deviation of a one-step-ahead forecast error of the change in
oil price. The results show that oil price uncertainty has a positive
and significant effect on food price in South Africa. The responses of food price to positive and negative oil price shocks are asymmetric.
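As a simplified illustration of the volatility measure described above, the sketch below fits a univariate GARCH(1,1) with an AR(1) mean to simulated oil price changes using the `arch` package; this is only a stand-in, since the paper's bivariate GARCH-in-mean VAR is not estimated by this package, and the data are simulated rather than the Brent and food CPI series.

```python
# Minimal sketch: conditional volatility of oil price changes from a
# univariate AR(1)-GARCH(1,1); a simplified stand-in for the paper's
# GARCH-in-mean VAR. Data are simulated, not the study's series.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
oil_changes = 100 * rng.standard_normal(153)   # 153 months: 2002:01-2014:09

am = arch_model(oil_changes, mean="AR", lags=1, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
# Conditional standard deviation of the one-step-ahead forecast error,
# i.e. the oil price uncertainty measure described in the abstract.
uncertainty = res.conditional_volatility
print(res.summary())
```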
Abstract: In an urban context, urban nodes such as amenities or hazards certainly affect house prices, and classic hedonic analysis employs distance variables measured from each urban node. However, the estimated effects of distances to facilities on house prices generally do not represent the true contribution to the property price. Distance variables measured over the same surface suffer from a problem called multicollinearity, which usually appears in regression as distorted coefficient magnitudes and inflated variances caused by instability. In this paper, we provide a theoretical framework for identifying and gathering data with less bias, together with a specific sampling method for locating the sample region so as to avoid the spatial multicollinearity problem in the three-distance-variable case.
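One common way to diagnose the spatial multicollinearity among distance variables described above is via variance inflation factors (VIFs); the sketch below is illustrative, with placeholder distance columns rather than data from the paper.

```python
# Minimal sketch: VIF diagnostics for three distance variables, two of
# which are deliberately made nearly collinear. Placeholder data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
d1 = rng.uniform(0, 5, 300)                                    # distance to node 1 (km)
X = pd.DataFrame({"dist_node1": d1,
                  "dist_node2": d1 + rng.normal(0, 0.2, 300),  # nearly collinear with node 1
                  "dist_node3": rng.uniform(0, 5, 300)})
Xc = sm.add_constant(X)
vif = {col: variance_inflation_factor(Xc.values, i) for i, col in enumerate(Xc.columns)}
print(vif)   # VIFs well above ~10 flag problematic distance variables
```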
Abstract: Two finite element method (FEM) models are presented in
this paper to address the random nature of the response of glued
timber structures made of wood segments with variable elastic
moduli evaluated from 3600 indentation measurements. This total
database served to create the same number of ensembles as was the
number of segments in the tested beam. Statistics of these ensembles
were then assigned to given segments of beams and the Latin
Hypercube Sampling (LHS) method was employed to perform 100 simulations, resulting in an ensemble of 100 deflections subjected
to statistical evaluation. Here, a detailed geometrical arrangement of
individual segments in the laminated beam was considered in the
construction of a two-dimensional FEM model subjected to four-point bending to comply with the laboratory tests. Since laboratory
measurements of local elastic moduli may in general suffer from a
significant experimental error, it appears advantageous to exploit the
full scale measurements of timber beams, i.e. deflections, to improve
their prior distributions with the help of the Bayesian statistical
method. This, however, requires an efficient computational model
when simulating the laboratory tests numerically. To this end, a
simplified model based on Mindlin’s beam theory was established.
The improved posterior distributions show that the most significant
change of the Young’s modulus distribution takes place in laminae in
the most strained zones, i.e. in the top and bottom layers within the
beam center region. Posterior distributions of moduli of elasticity
were subsequently utilized in the 2D FEM model and compared with
the original simulations.
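The Latin Hypercube Sampling step can be sketched with SciPy's QMC module as below; the lognormal marginal for the segment moduli and its parameters are illustrative assumptions, not the statistics evaluated from the indentation measurements.

```python
# Minimal sketch: LHS design of 100 simulations over per-segment elastic
# moduli, mapped through an assumed lognormal marginal (illustrative only).
import numpy as np
from scipy.stats import qmc, lognorm

n_segments, n_sims = 20, 100
sampler = qmc.LatinHypercube(d=n_segments, seed=0)
u = sampler.random(n=n_sims)                 # uniform LHS design in [0, 1)^d

# Inverse-CDF transform to per-segment Young's modulus realizations (GPa).
E = lognorm(s=0.15, scale=11.0).ppf(u)
print(E.shape)   # (100, 20): one modulus per segment for each simulation
```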
Abstract: Urban Search and Rescue (USAR) is a functional
capability that has been developed to allow the United Kingdom Fire
and Rescue Service to deal with ‘major incidents’ primarily involving
structural collapse. The nature of the work undertaken by USAR
means that staying out of a damaged or collapsed building structure is
not usually an option for search and rescue personnel. As a result
there is always a risk that they themselves could become victims. For
this paper, a systematic and investigative review using desk research
was undertaken to explore the role which structural engineering can
play in assisting search and rescue personnel to conduct structural
assessments when in the field. The focus is on how search and rescue
personnel can assess damaged and collapsed building structures, not
just in terms of structural damage that may be encountered, but also
in relation to structural stability. Natural disasters, accidental
emergencies, acts of terrorism and other extreme events can vary
significantly in nature and ferocity, and can cause a wide variety of
damage to building structures. It is not possible, or even realistic, to
provide search and rescue personnel with definitive guidelines and
procedures to assess damaged and collapsed building structures as
there are too many variables to consider. However, understanding
what implications damage may have upon the structural stability of a
building structure will enable search and rescue personnel to better judge
and quantify risk from a life-safety standpoint. It is intended that this
will allow search and rescue personnel to make informed decisions
and ensure every effort is made to mitigate risk, so that they
themselves do not become victims.
Abstract: Over the years, it has been extensively established that the practice of assuming a structure to be fixed at its base leads to gross errors in the evaluation of its overall response to dynamic loadings and to overestimations in design. The extent of these errors depends on a number of variables, soil type being one of the major factors. This paper studies the effect of Soil Structure Interaction (SSI) on multi-storey buildings with varying underlying soil types after proper validation of the effect of SSI. Analysis for soft, stiff and very stiff base soils has been carried out using the Finite Element Method (FEM) software package ANSYS v14.5. Results lead to
some very important conclusions regarding time period, deflection
and acceleration responses.
Abstract: Pulmonary Function Tests are important non-invasive
diagnostic tests to assess respiratory impairments and provide
quantifiable measures of lung function. Spirometry is the most
frequently used measure of lung function and plays an essential role
in the diagnosis and management of pulmonary diseases. However,
the test requires considerable patient effort and cooperation, which is markedly related to patient age and often results in incomplete data sets. This paper presents nonlinear models, built using multivariate adaptive regression splines (MARS) and random forest (RF) regression, to predict the missing spirometric features. Random forest-based feature
selection is used to enhance both the generalization capability and the
model interpretability. In the present study, flow-volume data are
recorded for N= 198 subjects. The ranked order of feature importance
index calculated by the random forests model shows that the
spirometric features FVC, FEF25, PEF, FEF25-75, FEF50 and the
demographic parameter height are the important descriptors. A
comparison of the performance of both models shows that the prediction ability of MARS with the top two ranked features, namely FVC and FEF25, is higher, yielding a model fit of R2 = 0.96 and R2 = 0.99 for normal and abnormal subjects, respectively. The Root Mean Square
Error analysis of the RF model and the MARS model also shows that
the latter is capable of predicting the missing values of FEV1 with a
notably lower error value of 0.0191 (normal subjects) and 0.0106
(abnormal subjects) with the aforementioned input features. It is
concluded that combining feature selection with a prediction model
provides a minimum subset of predominant features to train the
model, as well as yielding better prediction performance. This
analysis can assist clinicians, as an intelligent support system, in medical diagnosis and the improvement of clinical care.
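The random-forest-based feature ranking described above can be sketched as follows; the column names mirror the features listed in the abstract, but the data are synthetic and the hyperparameters are assumptions.

```python
# Minimal sketch: random forest feature ranking for spirometric predictors
# of FEV1. Data are synthetic placeholders, not the study's recordings.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 198
X = pd.DataFrame({
    "FVC": rng.normal(3.5, 0.8, n), "FEF25": rng.normal(6.0, 1.5, n),
    "PEF": rng.normal(7.5, 1.8, n), "FEF25_75": rng.normal(3.8, 1.2, n),
    "FEF50": rng.normal(4.5, 1.3, n), "height": rng.normal(165, 10, n)})
y = 0.8 * X["FVC"] + 0.1 * X["FEF25"] + rng.normal(0, 0.1, n)   # synthetic stand-in for FEV1

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
ranking = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(ranking)   # the top-ranked features would feed the MARS prediction model
```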
Abstract: The development of allometric models is crucial to
accurate forest biomass/carbon stock assessment. The aim of this
study was to develop a set of biomass prediction models that will
enable the determination of total tree aboveground biomass for
savannah woodland area in Niger State, Nigeria. Based on the data
collected through biometric measurements of 1816 trees and
destructive sampling of 36 trees, five species-specific models and one site-specific model were developed. The sample size was distributed
equally between the five most dominant species in the study site
(Vitellaria paradoxa, Irvingia gabonensis, Parkia biglobosa,
Anogeissus leiocarpus, Pterocarpus erinaceus). First, equations were developed for the five individual species. Second, the data for the five species were pooled to develop a mixed-species allometric equation. Overall, there was a strong positive
relationship between total tree biomass and the stem diameter. The
coefficients of determination (R2 values) ranging from 0.93 to 0.99 (P < 0.001) were realised for the models, with considerably low standard errors of the estimates (SEE), which confirms that the total tree aboveground biomass has a significant relationship with dbh. F-test
values for the biomass prediction models were also significant at p
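The species-specific models described above typically take a power-law form relating biomass to dbh; the sketch below fits such a model on the log-log scale with synthetic values, purely to illustrate the procedure rather than to reproduce the study's equations.

```python
# Minimal sketch: fit biomass = a * dbh^b by least squares on the log-log
# scale. The dbh/biomass values are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
dbh = rng.uniform(5, 60, 36)                                     # stem diameter (cm)
biomass = 0.12 * dbh ** 2.4 * np.exp(rng.normal(0, 0.1, 36))     # kg, synthetic

b, ln_a = np.polyfit(np.log(dbh), np.log(biomass), 1)
pred = np.exp(ln_a) * dbh ** b
resid = np.log(biomass) - np.log(pred)
r2 = 1 - np.sum(resid ** 2) / np.sum((np.log(biomass) - np.log(biomass).mean()) ** 2)
print(f"a = {np.exp(ln_a):.3f}, b = {b:.3f}, R^2 (log scale) = {r2:.3f}")
```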
Abstract: Previous studies on financial distress prediction choose
the conventional failing and non-failing dichotomy; however, the extent of distress differs substantially among different financial
distress events. To address this problem, “non-distressed”, “slightly-distressed” and “reorganization and bankruptcy” are used in our article
to approximate the continuum of corporate financial health. This paper
explains different financial distress events using the two-stage method.
First, this investigation adopts firm-specific financial ratios, corporate
governance and market factors to measure the probability of various
financial distress events based on multinomial logit models.
Specifically, a bootstrapping simulation is performed to examine the difference in estimated misclassification cost (EMC). Second, this work
further applies macroeconomic factors to establish the credit cycle
index and determines the distressed cut-off indicator of the two-stage
models using such index. Two different models, one-stage and
two-stage prediction models are developed to forecast financial
distress, and the results acquired from different models are compared
with each other and with the collected data. The findings show that the one-stage model has a lower misclassification error rate than the two-stage model and is therefore the more accurate of the two.
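The first-stage multinomial logit can be illustrated with statsmodels as below; the three outcome labels follow the abstract, while the two predictors and the data-generating process are synthetic stand-ins for the financial ratio, governance and market factors actually used.

```python
# Minimal sketch: multinomial logit over three distress states with
# synthetic predictors (illustrative stand-ins only).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({"leverage": rng.normal(0.5, 0.2, n),
                  "roa": rng.normal(0.05, 0.08, n)})
# 0 = non-distressed, 1 = slightly distressed, 2 = reorganization/bankruptcy
latent = 3 * X["leverage"] - 10 * X["roa"] + rng.normal(0, 1, n)
y = pd.cut(latent, bins=[-np.inf, 1.0, 2.0, np.inf], labels=[0, 1, 2]).astype(int)

model = sm.MNLogit(y, sm.add_constant(X)).fit(disp=0)
probs = model.predict(sm.add_constant(X))   # per-firm probabilities of each state
print(model.summary())
```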
Abstract: Quantification of cardiac function is performed by
calculating blood volume and ejection fraction in routine clinical
practice. However, this has typically been performed by manual contouring, which is laborious and varies with the observer. In this paper, an automatic left ventricle segmentation
algorithm on cardiac magnetic resonance images (MRI) is presented.
Using knowledge of cardiac MRI, a K-means clustering technique is applied to segment the blood region on a coil-sensitivity-corrected image. Then, a graph searching technique is used to correct segmentation errors from coil distortion and noise. Finally, blood volume and
ejection fraction are calculated. Using cardiac MRI from 15 subjects,
the presented algorithm is tested and compared with manual
contouring by experts to show outstanding performance.
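The K-means step that separates the bright blood pool from surrounding tissue can be sketched as follows; the synthetic image and the choice of three clusters are assumptions, and the coil-sensitivity correction and graph-search refinement described above are not shown.

```python
# Minimal sketch: intensity-based K-means segmentation of a synthetic
# short-axis slice; the brightest cluster is taken as the blood pool.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (128, 128))                 # background tissue
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2] = 0.9    # bright "blood pool"
img += rng.normal(0, 0.03, img.shape)

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(img.reshape(-1, 1)).reshape(img.shape)
blood_label = max(range(3), key=lambda k: img[labels == k].mean())
blood_mask = labels == blood_label
print(blood_mask.sum(), "blood-pool pixels")
```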
Abstract: In this paper, we consider a coded transmission over a frequency-selective channel. We study analytically the
convergence of the turbo-detector using a maximum a posteriori
(MAP) equalizer and a MAP decoder. We demonstrate that the densities of the maximum likelihood (ML) estimates exchanged during the iterations are e-symmetric and output-symmetric. Under the Gaussian approximation, this property allows us to perform a one-dimensional analysis of the turbo-detector. By deriving the analytical expressions of the ML distributions under the Gaussian approximation, we confirm
that the bit error rate (BER) performance of the turbo-detector
converges to the BER performance of the coded additive white
Gaussian noise (AWGN) channel at high signal-to-noise ratio (SNR), for any frequency-selective channel.
Abstract: In urban areas, several landmarks may affect housing prices and rents, and hedonic analysis should employ distance variables corresponding to each landmark. Unfortunately, the estimated effects of distances to landmarks on housing prices are generally not consistent with the true price. These distance variables may cause magnitude errors in regression, pointing to a problem of spatial multicollinearity. In this paper, we provide approaches for obtaining samples with less bias and a method for locating the specific sampling area so as to avoid the multicollinearity problem in the case of two specific landmarks.
Abstract: The characteristic requirement for producing rectangular bottles is a uniform thickness of the plastic bottle wall. Die shaping is an effective technique for controlling the wall thickness of bottles. A finite element method (FEM) simulation of blowing the parison into a rectangular bottle was conducted to reduce the plastic waste produced by the trial-and-error approach of die shaping and parison control. An artificial intelligence (AI) approach comprising an artificial neural network and a genetic algorithm was selected to optimize the die gap shape from the FEM results. The AI technique could determine a suitable die gap shape for parison blow molding that does not depend on the parison control method to produce rectangular bottles with uniform walls. In particular, this application can be used with inexpensive blow molding machines without a parison controller, thereby reducing the cost of production in the bottle blow molding process.
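The optimization loop can be sketched as a simple genetic algorithm acting on a parameterized die gap profile scored by a surrogate objective; the surrogate below merely stands in for the ANN trained on FEM results, and all sizes, names, and values are illustrative assumptions.

```python
# Minimal GA sketch: evolve an 8-point die gap profile toward uniform wall
# thickness, scored by a toy surrogate standing in for the trained ANN.
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.full(8, 1.2)                      # desired wall thickness (mm), hypothetical

def surrogate_thickness(gap):
    # Toy stand-in for the ANN: thickness roughly proportional to die gap.
    return 0.6 * gap + 0.05 * np.sin(np.arange(8))

def fitness(gap):
    return -np.sum((surrogate_thickness(gap) - TARGET) ** 2)   # higher is better

pop = rng.uniform(1.0, 3.0, (40, 8))          # initial population of die gap profiles (mm)
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                    # truncation selection
    cuts = rng.integers(1, 8, 20)
    children = np.array([np.concatenate((parents[rng.integers(20)][:c],
                                         parents[rng.integers(20)][c:]))
                         for c in cuts])                       # one-point crossover
    children += rng.normal(0, 0.02, children.shape)            # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best)   # optimized die gap control points
```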
Abstract: This paper introduces an original method of
parametric optimization of the structure of a multimodal decision-level fusion scheme, which combines the partial solutions of the classification task obtained from an assembly of mono-modal classifiers. As a result, a multimodal fusion classifier with the minimum total error rate has been obtained.
Abstract: The inherent skin patterns formed at the joints of the finger exterior are referred to as the finger knuckle-print. It can be exploited to identify a person uniquely because the finger knuckle-print is rich in texture. In a biometric system, the region of interest is used by the feature extraction algorithm. In this paper,
local and global features are extracted separately. Fast Discrete
Orthonormal Stockwell Transform is exploited to extract the local
features. The global feature is obtained by extending the size of the Fast Discrete Orthonormal Stockwell Transform to infinity. The two features are fused to increase recognition accuracy. A matching distance is calculated for each feature individually, and the two distances are then merged to obtain the final matching distance. The proposed scheme gives better performance in terms of equal error rate and correct recognition rate.
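The distance-level fusion step can be sketched as follows; the min-max normalization and equal weighting are assumptions made for illustration, and the FDOST feature extraction itself is not reproduced.

```python
# Minimal sketch: fuse the local and global matching distances into a single
# score (min-max normalization plus weighted sum; weights are assumptions).
import numpy as np

def fuse_distances(d_local, d_global, w=0.5):
    """Combine two matching-distance arrays (one entry per gallery template)."""
    def minmax(d):
        d = np.asarray(d, dtype=float)
        return (d - d.min()) / (d.max() - d.min() + 1e-12)
    return w * minmax(d_local) + (1 - w) * minmax(d_global)

# Hypothetical distances to five enrolled knuckle-print templates.
fused = fuse_distances([0.31, 0.55, 0.12, 0.47, 0.60], [0.40, 0.58, 0.20, 0.35, 0.70])
print(np.argmin(fused))   # index of the best-matching template
```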
Abstract: The legends about “user-friendly” and “easy-to-use”
birotical tools (computer-related office tools) have been spreading
and misleading end-users. This misconception has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research
proved that there are at least two sources of this underachievement:
(1) The lack of the definition of the correctly edited, formatted
documents. Consequently, end-users do not know whether their
methods and results are correct or not. They are not aware of their ignorance, and that very ignorance does not allow them to realize their lack of knowledge. (2) The end-users’ problem
solving methods. We have found that in non-traditional programming
environments end-users apply, almost exclusively, surface approach
metacognitive methods to carry out their computer-related activities, which have proved less effective than deep approach methods.
Based on these findings, we have developed deep approach methods adapted from traditional programming languages. In this study, we focus on the most popular type of birotical documents, the text-based documents. We have
provided the definition of the correctly edited text, and based on this
definition, adapted the debugging method known in programming.
According to the method, before actual text editing takes place, a thorough debugging of already existing texts and a categorization of errors are carried out. In this way, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text.
The method has proved much more effective than the previously applied surface approach methods. Its advantages are that real text handling requires far fewer human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.