Abstract: Reliability allocation is important during the early
design and development stages of a system, when its specified
reliability goal is apportioned to subsystems. This paper improves the
fuzzy reliability allocation method and provides concrete procedures
for determining the factor and sub-factor sets, the weight sets, the
judgment set, and the multi-stage fuzzy evaluation. To determine the
weights of the factor and sub-factor sets, modified trapezoidal fuzzy
numbers are proposed to reduce errors caused by subjective factors.
To decrease the fuzziness in fuzzy division, an approximation method
based on linear programming is employed. To compute explicit values
from fuzzy numbers, the centroid defuzzification method is used. An
example illustrates the application of the proposed reliability
allocation method based on fuzzy arithmetic.
Abstract: In general, classical methods such as maximum
likelihood (ML) and least squares (LS) estimation are used to
estimate the shape parameters of the Burr XII distribution.
However, these estimators are very sensitive to outliers. To
overcome this problem, we propose alternative robust estimators
based on the M-estimation method for the shape parameters of the
Burr XII distribution. We provide a small simulation study and a real
data example to illustrate the performance of the proposed estimators
relative to the ML and LS estimators. The simulation results show that
the proposed robust estimators generally outperform the classical
estimators in terms of bias and root mean square error when there
are outliers in the data.
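A minimal sketch of a robust estimator in this spirit (illustrative only: it minimises a Huber loss between the empirical CDF and the Burr XII CDF, which is not the paper's exact M-estimator; all names are ours) might look like:

```python
import numpy as np
from scipy.optimize import minimize

def burr12_cdf(x, c, k):
    """Burr XII CDF: F(x) = 1 - (1 + x^c)^(-k) for x > 0, shape parameters c, k."""
    return 1.0 - (1.0 + x ** c) ** (-k)

def huber(r, delta=0.05):
    """Huber loss: quadratic near zero, linear in the tails (downweights outliers)."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def robust_fit(sample):
    """Estimate (c, k) by minimising the Huber loss between the empirical CDF
    and the Burr XII CDF: a robust variant of LS on the distribution function."""
    x = np.sort(np.asarray(sample, float))
    ecdf = (np.arange(1, len(x) + 1) - 0.5) / len(x)

    def objective(p):
        c, k = p
        if c <= 0 or k <= 0:
            return np.inf
        return float(np.sum(huber(ecdf - burr12_cdf(x, c, k))))

    return minimize(objective, x0=[1.0, 1.0], method='Nelder-Mead').x
```

Samples can be drawn by inverting the CDF: x = ((1 - u)^(-1/k) - 1)^(1/c) for uniform u.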
Abstract: In this paper, we introduce a gradient-based inverse
solver to obtain the missing boundary conditions from the
readings of internal thermocouples. The results show that the method
is very sensitive to measurement errors and becomes unstable when
small time steps are used. Artificial neural networks are shown to
be capable of capturing the whole thermal history on the run-out
table, but are not very effective in restoring the detailed behavior of
the boundary conditions. They also behave poorly in nonlinear cases
and when the boundary condition profile differs.
GA and PSO are more effective in finding a detailed
representation of the time-varying boundary conditions, as well as in
nonlinear cases. However, their convergence takes longer. A
variation of the basic PSO, called CRPSO, showed the best
performance among the three versions. Also, PSO proved to be
effective in handling noisy data, especially when its performance
parameters were tuned. An increase in the self-confidence parameter
was also found to be effective, as it increased the global search
capabilities of the algorithm. RPSO was the most effective variation
in dealing with noise, closely followed by CRPSO. The latter
variation is recommended for inverse heat conduction problems, as it
combines the efficiency and effectiveness required by these
problems.
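To make the role of the tuning parameters concrete, here is a minimal basic-PSO sketch (our own illustration, not the paper's CRPSO variant), minimising a test function; `c1` is the self-confidence (cognitive) parameter whose increase is discussed above, `c2` the social parameter, and `w` the inertia weight:

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser.
    w: inertia, c1: self-confidence (cognitive) factor, c2: social factor."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()       # global best
    for _ in range(iters):
        r1 = rng.uniform(size=(n_particles, dim))
        r2 = rng.uniform(size=(n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.array([f(p) for p in x])
        improved = val < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())
```

Raising `c1` biases each particle toward its own best-known position, which broadens the search; raising `c2` pulls the swarm toward the global best, accelerating convergence at the risk of premature stagnation.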
Abstract: Wireless Sensor Networks (WSNs) have a wide variety
of applications and offer limitless future potential. Nodes in
WSNs are prone to failure due to energy depletion, hardware failure,
communication link errors, malicious attacks, and so on. Therefore,
fault tolerance is one of the critical issues in WSNs. We study how
fault tolerance is addressed in different applications of WSNs. Fault
tolerant routing is a critical task for sensor networks operating in
dynamic environments. Many routing, power management, and data
dissemination protocols have been specifically designed for WSNs
where energy awareness is an essential design issue. Our focus,
however, is on routing protocols, which may differ depending on the
application and network architecture.
Abstract: In an urban context, urban nodes such as amenities or
hazards affect house prices, and classic hedonic analysis employs
distance variables measured from each urban node. However, the
effects of distances to facilities on house prices generally do not
represent the true price of the property. Distance variables measured
on the same surface suffer from multicollinearity, which typically
manifests as inflated variance and unstable coefficient estimates in
regression. In this paper, we provide a theoretical framework for
identifying and gathering data with less bias, and a specific sampling
method for locating the sample region so as to avoid the spatial
multicollinearity problem in the three-distance-variable case.
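The multicollinearity among distance variables can be diagnosed with variance inflation factors (a standard diagnostic we assume here, not a method taken from the paper), obtained by regressing each distance variable on the others:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 is the
    R-squared of regressing column j on the remaining columns plus an intercept."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

A VIF well above 10 for a pair of near-collinear distance variables signals exactly the coefficient instability described above.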
Abstract: Over the years, it has been extensively established that
the practice of assuming a structure to be fixed at its base leads to
gross errors in the evaluation of its overall response to dynamic
loading, and to overestimation in design. The extent of these errors
depends on a number of variables, soil type being one of the major
factors. This paper studies the effect of Soil Structure Interaction
(SSI) on multistorey buildings with varying underlying soil types,
after proper validation of the effect of SSI. Analyses for soft, stiff,
and very stiff base soils have been carried out using the Finite
Element Method (FEM) software package ANSYS v14.5. The results
lead to some important conclusions regarding the time period,
deflection, and acceleration responses.
Abstract: In routine clinical practice, cardiac function is
quantified by calculating blood volume and ejection fraction.
However, these tasks have traditionally been performed by manual
contouring, which is costly and varies between observers. In this
paper, an automatic left ventricle segmentation algorithm for cardiac
magnetic resonance images (MRI) is presented. Using knowledge of
cardiac MRI, a K-means clustering technique is applied to segment
the blood region on a coil-sensitivity-corrected image. Then, a graph
searching technique is used to correct segmentation errors arising
from coil distortion and noise. Finally, blood volume and ejection
fraction are calculated. Using cardiac MRI from 15 subjects, the
presented algorithm is tested and compared with manual contouring
by experts, showing outstanding performance.
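The clustering step can be sketched as plain K-means on pixel intensities (an illustrative stand-in: the paper additionally uses coil-sensitivity correction and graph searching, which are not shown, and the function name is ours):

```python
import numpy as np

def kmeans_intensity(values, k=3, iters=50):
    """Plain K-means (Lloyd's algorithm) on a 1-D array of pixel intensities,
    e.g. to separate blood pool, myocardium, and background."""
    values = np.asarray(values, float)
    # deterministic initialisation at intensity quantiles
    centers = np.quantile(values, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        # assign each pixel to the nearest cluster centre
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # move each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers
```
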
Abstract: The legends about “user-friendly” and “easy-to-use”
birotical tools (computer-related office tools) have been spreading
and misleading end-users. This has led to an extremely high number
of incorrect documents, causing serious financial losses in the
creating, modifying, and retrieving processes. Our research shows
that there are at least two sources of this underachievement:
(1) The lack of a definition of correctly edited and formatted
documents. Consequently, end-users do not know whether their
methods and results are correct; they are not even aware of their
own ignorance, which prevents them from realizing their lack of
knowledge. (2) The end-users’ problem-solving methods. We have
found that in non-traditional programming environments end-users
apply, almost exclusively, surface-approach metacognitive methods
to carry out their computer-related activities, which have proved less
effective than deep-approach methods.
Based on these findings, we have developed deep-approach
methods based on and adapted from traditional programming
languages. In this study, we focus on the most popular type of
birotical document, the text-based document. We have provided a
definition of correctly edited text and, based on this definition,
adapted the debugging method known from programming. According
to this method, before actual text editing, a thorough debugging of
already existing texts and a categorization of errors are carried out.
In this way, ahead of real text editing, users learn the requirements
of text-based documents and of correctly formatted text.
The method has proved much more effective than the previously
applied surface-approach methods. Its advantages are that proper
text handling requires far fewer human and computer resources than
clicking aimlessly in the GUI (Graphical User Interface), and that
data retrieval is much more effective than from error-prone
documents.
Abstract: Examining existing experimental results for shallow
rigid foundations subjected to a vertical centric load (N),
accompanied or not by a bending moment (M), two main non-linear
mechanisms governing the cyclic response of the soil-foundation
system can be distinguished: foundation uplift and soil yielding. A
soil-foundation failure limit is defined as a domain of resistance in
the two-dimensional (2D) load space (N, M) inside which lie all
admissible combinations of loads; these correspond to purely elastic,
non-linear elastic, or plastic behavior of the soil-foundation system,
while points lying on the failure limit correspond to combinations of
loads leading to failure of the soil-foundation system. In this study,
the proposed resistance domain is constructed analytically on a
mechanical basis. Original elastic limits, uplift initiation limits, and
iso-uplift limits are constructed inside this domain. These limits
predict the mechanisms activated for each combination of loads
applied to the foundation. A comparison of the proposed failure
limit with experimental tests from the literature shows interesting
results. The developed uplift initiation limit and iso-uplift curves are
also compared with others already proposed in the literature and
widely used in the absence of alternatives; remarkable differences
are noted, showing evident errors in the past proposals and the
accuracy of those given in the present work.
Abstract: In this work, neural network methods of the MLP type
were applied to a database from an array of six sensors for the
detection of three toxic gases. The choice of the number of hidden
layers and of the weight values influences the convergence of the
learning algorithm. In this article, we propose a mathematical
formula to determine the optimal number of hidden layers and good
weight values, based on the backpropagation of errors. The results
of this modeling improved the discrimination of these gases and
reduced the computation time. The model presented here has proven
to be an effective application for the fast identification of toxic
gases.
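As a minimal illustration of training by backpropagation of errors (a generic one-hidden-layer sketch in numpy; the paper's formula for choosing the number of hidden layers and weights is not reproduced here):

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.5, epochs=2000, seed=0):
    """Train a one-hidden-layer sigmoid MLP on (X, y) by backpropagation,
    minimising the mean squared error; returns the per-epoch loss history."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                       # forward pass
        out = sigmoid(h @ W2 + b2)
        losses.append(float(np.mean((out - y) ** 2)))
        # backward pass: propagate the output error through the layers
        d_out = 2.0 * (out - y) * out * (1.0 - out) / len(X)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
    return losses

# example: the XOR problem, which needs a hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
losses = train_mlp(X, y)
```
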
Abstract: This paper presents the application of the Discrete
Component Model for heating and evaporation to multi-component
biodiesel fuel droplets in direct injection internal combustion engines.
This model takes into account the effects of temperature gradient,
recirculation and species diffusion inside droplets. A distinctive
feature of the model used in the analysis is that it is based on the
analytical solutions to the temperature and species diffusion
equations inside the droplets. Nineteen types of biodiesel fuels are
considered. It is shown that a simplistic model, based on the
approximation of biodiesel fuel by a single component or ignoring
the diffusion of its components, leads to noticeable errors in the
predicted droplet evaporation time and in the time evolution of
droplet surface temperature and radius.
Abstract: Children are more susceptible to medication errors
than adults. Medication administration is the last stage of the
medication treatment process, and most errors are detected at this
stage. Little research has been undertaken on medication errors in
children in Middle Eastern countries. This study aimed to evaluate
how well paediatric nurses adhere to the medication administration
policy and to identify medication preparation and administration
errors and any risk factors. An observational, prospective study of
the medication administration process, from the nurses' preparation
of patient medication to the administration stage, was conducted in
Saudi Arabia (May to August 2014). Twelve paediatric nurses
serving 90 paediatric patients were observed, and 456 administered
doses were evaluated. Adherence rates varied across 7 of the 16
steps; checking patient allergy information, dose calculation, and
drug expiry date were the steps with the lowest adherence. Sixty-three
medication preparation and administration errors were identified, an
error rate of 13.8% of medication administrations. No potentially
life-threatening errors were witnessed. A few logistic and
administrative risk factors were reported. The results show that the
medication administration policy and procedure need urgent revision
to be more practical for nurses; nurses' knowledge and skills
regarding the medication administration process should also be
improved.
Abstract: Images are an important source of information used as
evidence during any investigation process, so their clarity and
accuracy are of the utmost importance. Images are vulnerable to
losing blocks and having noise added to them, either through
alteration or when the image was originally taken; therefore, a
high-performance image-processing system and its implementation
are very important from a forensic point of view. This paper focuses
on improving the quality of forensic images.
For various reasons, the packets that store image data can be
affected, harmed, or even lost because of noise; for example, sending
an image through a wireless channel can cause loss of bits. Such
errors generally degrade the visual quality of forensic images.
Two image problems are covered: noise and block loss.
Information transmitted through any communication channel may be
altered from its original state or lose important data due to channel
noise. Therefore, a system is introduced to improve the quality and
clarity of forensic images.
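One standard remedy for the salt-and-pepper noise introduced by noisy channels is median filtering, sketched below (a generic technique offered for illustration, not necessarily the paper's exact system):

```python
import numpy as np

def median_filter(img, radius=1):
    """Sliding-window median filter (radius=1 gives a 3x3 window):
    each pixel is replaced by the median of its neighbourhood, which
    removes isolated salt-and-pepper pixels while preserving edges."""
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 2 * radius + 1,
                                         j:j + 2 * radius + 1])
    return out
```
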
Abstract: In this paper, we introduce a generalized Chebyshev
collocation method (GCCM), based on the generalized Chebyshev
polynomials, for solving stiff systems. To employ the embedding
technique of the embedded Runge-Kutta methods used in explicit
schemes, we exploit a property of the generalized Chebyshev
polynomials: the nodes of the higher-degree polynomial overlap
those of the lower-degree polynomial. The constructed algorithm
controls both the error and the time step size simultaneously;
furthermore, the errors at each integration step are embedded in the
algorithm itself, which reduces the computational cost. To assess its
effectiveness, numerical results obtained by the proposed method
and by Radau IIA are presented and compared.
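The node-overlap property can be illustrated with the classical Chebyshev-Gauss-Lobatto points (a simple stand-in; the paper's generalized Chebyshev polynomials have an analogous nesting): doubling the degree keeps every node of the coarser set.

```python
import numpy as np

def lobatto_nodes(n):
    """Chebyshev-Gauss-Lobatto nodes x_j = cos(j*pi/n), j = 0..n."""
    return np.cos(np.arange(n + 1) * np.pi / n)
```

Since cos(2j·pi/(2n)) = cos(j·pi/n), the degree-2n node set contains the degree-n set, so function values at the coarse nodes can be reused when the degree is raised, which is what makes embedded error estimation cheap.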
Abstract: This study develops an efficient fault detection method
for Global Navigation Satellite System (GNSS) applications based
on adaptive noise covariance estimation. Because of their dependence
on radio frequency signals, GNSS measurements are dominated by
systematic errors in the receiver's operating environment. In the
proposed method, the pseudorange and carrier-phase measurement
noise covariances are estimated during the time-propagation and
measurement-update steps of Carrier-Smoothed Code (CSC)
filtering, respectively. The test statistics for fault detection are
generated from the estimated measurement noise covariances. To
evaluate the fault detection capability, intentional faults were added
to field-collected measurements. The experimental results show that
the proposed method is efficient in detecting unhealthy measurements
and improves GNSS positioning accuracy in the presence of faults.
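For reference, the CSC (Hatch) filtering step that the covariance estimation builds on can be sketched in its standard textbook form (variable names are ours): the noisy code pseudorange is smoothed using the low-noise carrier-phase increments.

```python
import numpy as np

def hatch_filter(pseudorange, carrier_range, window=100):
    """Carrier-Smoothed Code (Hatch) filter:
    smoothed[k] = rho[k]/n + (n-1)/n * (smoothed[k-1] + dphi[k]),
    where dphi is the epoch-to-epoch carrier-range increment and
    n grows up to the smoothing window length."""
    smoothed = np.empty_like(pseudorange)
    smoothed[0] = pseudorange[0]
    for k in range(1, len(pseudorange)):
        n = min(k + 1, window)
        predicted = smoothed[k - 1] + (carrier_range[k] - carrier_range[k - 1])
        smoothed[k] = pseudorange[k] / n + (n - 1) / n * predicted
    return smoothed
```

The residuals between raw and smoothed measurements are one natural input for estimating the measurement noise covariance online.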
Abstract: Different strategies and tools are available in the oil
and gas industry for detecting and analyzing tension and possible
fractures in borehole walls. Most of these techniques are based on
manual observation of the captured borehole images. While this
strategy may be feasible and convenient for small images and little
data, it becomes difficult and error-prone when large image databases
must be treated. Moreover, the patterns may differ across the image
area, depending on many characteristics (drilling strategy, rock
composition, rock strength, etc.). In this work we propose the
inclusion of data-mining classification strategies in order to create a
knowledge database of the segmented curves. These classifiers
ensure that, after a period in which parts of borehole images
corresponding to tension regions and breakout areas are used and
manually labeled, the system will automatically indicate and suggest
new candidate regions with higher accuracy. We suggest the use of
different classification methods in order to obtain different
knowledge-dataset configurations.
Abstract: Load Forecasting plays a key role in making today's
and tomorrow's Smart Energy Grids sustainable and reliable.
Accurate power consumption prediction allows utilities to organize
their resources in advance and to execute Demand Response
strategies more effectively, enabling features such as higher
sustainability, better quality of service, and affordable electricity
tariffs. While Load Forecasting is comparatively easy and effective
at larger geographic scales, in Smart Micro Grids the lower available
grid flexibility makes accurate prediction more critical for Demand
Response applications. This paper analyses the application of
short-term load forecasting in a concrete scenario, proposed within
the EU-funded GreenCom project, which collects load data from
single loads and households belonging to a Smart Micro Grid. Three
short-term load forecasting techniques, i.e. linear regression, artificial
neural networks, and radial basis function networks, are considered,
compared, and evaluated in terms of absolute forecast errors and
training time. The influence of weather conditions on Load
Forecasting is also evaluated. A new definition of Gain is introduced,
which serves as an indicator of short-term prediction capability and
of consistency across time spans. Two models, for 24- and
1-hour-ahead forecasting, are built to comprehensively compare the
three techniques.
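A minimal sketch of a linear-regression baseline for 24-hour-ahead forecasting (our illustrative feature choice of lagged load plus hour-of-day harmonics; the paper's exact model is not specified here):

```python
import numpy as np

def fit_forecast(load, lag=24):
    """Least-squares linear model predicting load `lag` hours ahead from the
    current load and hour-of-day harmonics; returns the coefficients and the
    in-sample mean absolute forecast error (MAE)."""
    hours = np.arange(len(load)) % 24
    X = np.column_stack([
        np.ones(len(load) - lag),
        load[:-lag],                              # lagged load
        np.sin(2 * np.pi * hours[:-lag] / 24),    # daily-cycle harmonics
        np.cos(2 * np.pi * hours[:-lag] / 24),
    ])
    y = load[lag:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    mae = np.mean(np.abs(X @ beta - y))           # absolute forecast error
    return beta, mae
```

On a perfectly daily-periodic load the 24-hour lag alone explains the target exactly, so the MAE collapses to numerical noise; real household data would of course leave a substantial residual.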
Abstract: A method is proposed for the stable detection of
seismoacoustic sources in C-OTDR systems that guarantees given
upper bounds on the probabilities of type I and type II errors.
Properties of the proposed method are rigorously proved, and the
results of practical applications of the method in a real C-OTDR
system are presented.
Abstract: The generalized wave equation models various
problems in science and engineering. In this paper, a new three-time-level
implicit approach based on cubic trigonometric B-splines is
developed for the approximate solution of the wave equation. The
usual finite difference approach is used to discretize the time
derivative, while the cubic trigonometric B-spline is applied as an
interpolating function in the space dimension. Von Neumann stability
analysis is used to analyze the proposed method. Two problems are
discussed to exhibit the feasibility and capability of the method; the
absolute errors and the maximum error are computed to assess its
performance. The results were found to be in good agreement with
known solutions and with existing schemes in the literature.
Abstract: It is known that residual welding deformations
negatively affect the processability and operational quality of welded
structures, complicating their assembly and reducing their strength.
Therefore, the selection of an optimal technology ensuring minimum
welding deformations is one of the main goals in developing a
technology for manufacturing welded structures.
Over the years, JSC SSTC has been developing a theory for the
estimation of welding deformations and practical measures for
reducing and compensating such deformations during the welding
process. For a long time, a methodology based on analytic
dependences was used. This methodology allowed the volumetric
changes of metal due to welding heating and subsequent cooling to
be determined. However, the dependences for determining the
structural deformations arising from these volumetric changes in the
weld area allowed calculations only for simple structures, such as
units, flat sections, and sections with small curvature. For complex
3D structures, estimates based on analytic dependences gave
significant errors.
To eliminate this shortcoming, it was suggested to use the finite
element method to solve the deformation problem. First, the
longitudinal and transverse shortenings of welded joints are
calculated using the method of analytic dependences; then, from the
obtained shortenings, forces are calculated whose action is
equivalent to that of the active welding stresses. Next, a
finite-element model of the structure is developed and the equivalent
forces are applied to it. From the calculation results, an optimal
sequence of assembly and welding is selected, and special measures
to reduce and compensate welding deformations are developed and
taken.