Abstract: Recent trends in building construction in Libya are
increasingly toward tall (high-rise) building projects. As a consequence,
better estimation of the lateral loading in the design process is
becoming the focal point of a safe and cost-effective building industry.
By and large, Libya is not considered a potential earthquake-prone zone,
making wind the dominant design lateral load. Current design
practice in the country estimates wind speeds on an essentially
arbitrary basis, applying a factor of safety to the chosen wind
speed. The need for a more accurate estimation of wind
speeds in Libya was therefore the motivation behind this study. Records of
wind speed data were collected from 22 meteorological stations in
Libya and statistically analysed. The analysis of more than four
decades of wind speed records suggests that the country can be
divided into four zones of distinct wind speeds. The "Survey"
computer program was used to draw a contour map of design wind
speeds for the state of Libya.
The paper presents the statistical analysis of Libya's recorded
wind speed data and proposes design wind speed values for a 50-year
return period that covers the entire country.
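A common way to obtain a design wind speed for a given return period from station records is to fit a Gumbel (Extreme Value Type I) distribution to the annual maxima. This is only an illustrative sketch of that standard technique, not the procedure used in the paper; the station data below are hypothetical:

```python
import math

def gumbel_design_speed(annual_maxima, return_period=50):
    """Design wind speed for a return period from a Gumbel fit
    (method of moments) to annual-maximum wind speeds."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((v - mean) ** 2 for v in annual_maxima) / (n - 1)
    std = math.sqrt(var)
    beta = std * math.sqrt(6) / math.pi        # scale parameter
    mu = mean - 0.5772156649 * beta            # location (Euler-Mascheroni constant)
    p = 1.0 - 1.0 / return_period              # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Hypothetical annual-maximum wind speeds (m/s) from one station
speeds = [28.1, 31.4, 25.9, 33.0, 29.5, 27.2, 30.8, 26.4, 32.1, 28.9]
v50 = gumbel_design_speed(speeds, 50)
```

The 50-year value exceeds every observed annual maximum here, as expected for a return period longer than the record length.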
Abstract: In this paper, we propose a face recognition algorithm
using AAM and Gabor features. Gabor feature vectors, which are well
known to be robust to small variations of shape, scale, rotation,
distortion, illumination and pose in images, are popular feature
vectors for many object detection and recognition algorithms.
EBGM, which is prominent among face recognition algorithms
employing Gabor feature vectors, requires localization of the facial
feature points at which the Gabor feature vectors are extracted.
However, the localization method employed in EBGM is based on
Gabor jet similarity and is sensitive to initial values, and wrong
localization of the facial feature points degrades the face recognition
rate. AAM is known to localize facial feature points successfully.
In this paper, we devise a facial feature point localization method
that first roughly estimates the facial feature points using AAM and
then refines them using Gabor jet similarity-based localization
initialized with the rough AAM estimates, and we propose a face
recognition algorithm that uses the devised localization method
together with Gabor feature vectors. Experiments show that such a
cascaded localization method based on both AAM and Gabor jet
similarity is more robust than localization based on Gabor jet
similarity alone. It is also shown that the proposed face recognition
algorithm, using the devised localization method and Gabor feature
vectors, outperforms conventional face recognition algorithms such
as EBGM that use Gabor jet similarity-based localization with Gabor
feature vectors.
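The Gabor jet similarity that drives both EBGM-style localization and the refinement step above is commonly the normalized dot product (cosine similarity) of the jets' filter-response magnitudes. A minimal sketch; the jets here are hypothetical magnitude vectors, not outputs of an actual Gabor filter bank:

```python
import math

def jet_similarity(jet_a, jet_b):
    """Magnitude-based Gabor jet similarity: the cosine similarity of
    two jets' filter-response magnitudes, in [0, 1] for non-negative
    magnitudes; 1.0 means identical response patterns up to scale."""
    dot = sum(a * b for a, b in zip(jet_a, jet_b))
    norm = math.sqrt(sum(a * a for a in jet_a)) * math.sqrt(sum(b * b for b in jet_b))
    return dot / norm if norm else 0.0

# Hypothetical jets (magnitudes of Gabor filter responses at one point)
jet1 = [0.9, 0.1, 0.4, 0.7]
jet2 = [0.8, 0.2, 0.5, 0.6]
```

Localization then amounts to searching, around an initial point, for the image location whose jet maximizes this similarity against a model jet, which is why a good initial point (here supplied by AAM) matters.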
Abstract: This paper presents a computational methodology
based on matrix operations for a computer-based solution to the
problem of performance analysis of software reliability models
(SRMs). A set of seven comparison criteria has been formulated to
rank the various non-homogeneous Poisson process software reliability
models proposed over the past 30 years to estimate software
reliability measures such as the number of remaining faults, software
failure rate, and software reliability. Selection of optimal SRM for
use in a particular case has been an area of interest for researchers in
the field of software reliability. Tools and techniques for software
reliability model selection found in the literature cannot be used with
a high level of confidence, as they use a limited number of model
selection criteria. A real data set from a medium-sized software
project, taken from published papers, has been used to demonstrate
the matrix method. The result of this study is a ranking of SRMs
based on the permanent of the criteria matrix formed for each model
from the comparison criteria. The software reliability model with the
highest permanent value is ranked first, and so on.
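The ranking step rests on computing the permanent of each model's criteria matrix. A minimal brute-force sketch of the permanent itself (adequate for the small matrices involved here; larger matrices would call for Ryser's formula):

```python
from itertools import permutations

def prod(values):
    result = 1
    for v in values:
        result *= v
    return result

def permanent(matrix):
    """Permanent of a square matrix: like the determinant, but every
    permutation term is added with a plus sign (no alternating signs).
    Brute force over all n! permutations."""
    n = len(matrix)
    return sum(
        prod(matrix[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

# Example: perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10
```

A higher permanent of a (non-negative) criteria matrix aggregates all criterion combinations additively, which is what makes it usable as a scalar ranking score.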
Abstract: Most routing protocols (DSR, AODV, etc.) designed for
wireless ad hoc networks incorporate a broadcasting operation in
their route discovery scheme. Probabilistic broadcasting techniques
have been developed to optimize the broadcast operation, which is
otherwise very expensive in terms of the redundancy and traffic it
generates. In this paper we explore percolation theory to gain a
different perspective on probabilistic broadcasting schemes, which
have been actively researched in recent years. This theory has helped
us estimate the optimal broadcast probability in a wireless ad hoc
network as a function of the network size. We also show that
operating at these optimal broadcast probabilities yields at least a
25-30% reduction in packet regeneration during successful
broadcasting.
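Probabilistic (gossip) broadcasting of the kind analyzed above can be sketched with a small simulation: the source always transmits, and every other receiver rebroadcasts with probability p. The topology below is an illustrative 6-node adjacency list, not the paper's experimental setup:

```python
import random

def gossip_broadcast(adjacency, source, p, rng):
    """Simulate probabilistic broadcast: the source always transmits;
    every other node that receives the packet rebroadcasts with
    probability p. Returns (set of nodes reached, transmissions made)."""
    reached = {source}
    frontier = [source]
    transmissions = 0
    while frontier:
        node = frontier.pop()
        if node != source and rng.random() > p:
            continue            # node received the packet but stays silent
        transmissions += 1
        for nbr in adjacency[node]:
            if nbr not in reached:
                reached.add(nbr)
                frontier.append(nbr)
    return reached, transmissions

# Hypothetical 6-node topology (adjacency lists)
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}
rng = random.Random(42)
reached, tx = gossip_broadcast(adj, 0, 1.0, rng)
```

With p = 1.0 this degenerates to flooding (every reached node transmits once); lowering p below 1 but above the percolation threshold is what trades a small risk of an incomplete broadcast for fewer retransmissions.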
Abstract: The response surface methodology (RSM) is a
collection of mathematical and statistical techniques useful in the
modeling and analysis of problems in which a dependent variable is
influenced by several independent variables, with the aim of
determining the conditions under which these variables should
operate to optimize a production process. RSM estimates a
first-order regression model and sets the search direction using the
method of maximum/minimum slope up/down (MMS U/D).
However, this method selects the step size intuitively, which can
affect the efficiency of the RSM. This paper assesses how the step
size affects the efficiency of the methodology. Numerical examples
are carried out through Monte Carlo experiments evaluating three
response variables: the gain-function efficiency, the distance to the
optimum, and the number of iterations. The simulation experiments
showed that the gain-function efficiency and the distance to the
optimum were not affected by the step size, whereas the number of
iterations was affected by both the step size and the type of test
function used.
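The MMS U/D search step can be sketched as follows: once a first-order model y = b0 + b1*x1 + ... + bk*xk is fitted, the steepest-ascent direction is the normalized coefficient vector, and the (intuitively chosen) step size scales each move along it. The fitted coefficients below are hypothetical:

```python
import math

def steepest_ascent_path(b, start, step, n_points):
    """Follow the steepest-ascent direction of a fitted first-order
    model: direction = (b1..bk) normalized to unit length; each move
    advances `step` units along that direction."""
    coeffs = b[1:]                              # drop intercept b0
    norm = math.sqrt(sum(c * c for c in coeffs))
    direction = [c / norm for c in coeffs]
    path = [list(start)]
    for _ in range(n_points):
        start = [x + step * d for x, d in zip(start, direction)]
        path.append(list(start))
    return path

# Hypothetical fitted model y = 10 + 3*x1 + 4*x2, unit step size
path = steepest_ascent_path([10, 3, 4], [0.0, 0.0], step=1.0, n_points=2)
```

The step size only rescales the spacing of points on this fixed line, which is consistent with the finding that it affects the iteration count rather than the direction of search.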
Abstract: Time-varying network-induced delays in networked
control systems (NCS) are known to degrade the control system's
quality of performance (QoP) and to cause stability problems. In the
literature, a control method that models communication delays as a
probability distribution has proved to be a better method. This
paper therefore focuses on modeling network-induced delays as
probability distributions.
CAN and MIL-STD-1553B are extensively used to carry periodic
control and monitoring data in networked control systems.
In the literature, only methods to estimate the worst-case delays for
these networks are available. In this paper, probabilistic network
delay models for CAN and MIL-STD-1553B networks are given,
along with a systematic method to estimate model parameter values
from network parameters. A method to predict the network delay in
the next cycle based on the present network delay is presented. The
effects of active network redundancy and of redundancy at the node
level on network delay and system response time are also analyzed.
Abstract: Estimation of voltage stability based on an optimal
filtering method is presented. The PV curve is used as a tool for
voltage stability analysis, and dynamic voltage stability estimation is
performed using a particle filter method. The optimum value (nose
point) of the PV curve can be estimated by estimating the parameters
of the PV curve equation; this optimal value represents the critical
voltage and maximum loading condition at a specified point of
measurement. Voltage stability is then estimated dynamically by
analyzing the loading margin condition.
Abstract: In recent years multimedia traffic, and in particular
VoIP services, has been growing dramatically. We present a new
algorithm to control resource utilization and to optimize voice codec
selection during SIP call setup, based on the traffic conditions
estimated on the network path.
The most suitable methodologies and tools for real-time evaluation
of the available bandwidth on a network path have been integrated
with our proposed algorithm: it selects the best codec for a VoIP call
as a function of the instantaneous available bandwidth on the path.
The algorithm does not require any explicit
feedback from the network, and this makes it easily deployable over
the Internet. We have also performed intensive tests on real network
scenarios with a software prototype, verifying the algorithm's
efficiency with different network topologies and traffic patterns
between two SIP PBXs.
The promising results obtained during the experimental validation
of the algorithm are now the basis for the extension towards a larger
set of multimedia services and the integration of our methodology
with existing PBX appliances.
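The codec-selection step described above can be sketched as a simple bandwidth-threshold rule: pick the highest-quality codec whose required rate fits the estimated available bandwidth. The codec list and per-call bandwidth figures below are illustrative assumptions (nominal payload plus packet overhead), not the paper's measured values:

```python
def select_codec(available_kbps, codecs=None):
    """Return the name of the highest-quality codec whose required
    per-call bandwidth fits within the estimated available bandwidth;
    fall back to the most frugal codec when nothing fits."""
    if codecs is None:
        # (name, assumed required kbps) ordered from highest to lowest quality
        codecs = [("G.711", 87), ("G.726", 55), ("G.729", 31), ("G.723.1", 22)]
    for name, required in codecs:
        if available_kbps >= required:
            return name
    return codecs[-1][0]
```

In the described system a rule of this kind would run at SIP call setup, fed by the real-time available-bandwidth estimate, so no explicit feedback from the network is needed.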
Abstract: A heuristic conceptual model for developing Reliability
Centered Maintenance (RCM), especially within a preventive
strategy, is explored in this paper. For the many real cases in which
system complexity demands a high degree of reliability, this model
proposes choosing between two reliability functions: one based on
the lifetime distribution and another based on the relevant Extreme
Value (EV) distribution. A statistical and mathematical approach is
used to estimate and verify these two distribution functions, and the
more reliable of the two is then chosen. A numerical industrial case
study is reviewed to illustrate the concepts of this paper more
clearly.
Abstract: This paper discusses the combination of the EM
algorithm with a Bootstrap approach, applied to improve the satellite
image fusion process. This novel satellite image fusion method,
based on estimation theory (the EM algorithm) and reinforced by a
Bootstrap approach, was successfully implemented and tested. The
sensor images are first split by a Bayesian segmentation method to
determine a joint region map for the fused image. Then we use the
EM algorithm in conjunction with the Bootstrap approach to develop
the bootstrap EM fusion algorithm, hence producing the fused target
image. In this research we propose to estimate the statistical
parameters from the iterative equations of the EM algorithm using a
reference set of representative Bootstrap samples of the images,
whose sizes are determined by a new criterion called the 'hybrid
criterion'. The results of our work show that using the Bootstrap EM
(BEM) in image fusion improves the estimation of the parameters,
which in turn improves the quality of the fused image and reduces
the computing time of the fusion process.
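The Bootstrap side of the BEM idea, re-estimating a statistical parameter from resamples of the data, can be sketched as follows. The statistic (a region mean), the pixel intensities, and the resample count are illustrative; the paper's hybrid criterion for choosing sample sizes is not reproduced here:

```python
import random

def bootstrap_estimate(samples, n_resamples, rng):
    """Bootstrap estimate of a parameter (here the mean intensity of
    one segmented region): resample with replacement, compute the
    statistic on each resample, and average the resampled estimates."""
    n = len(samples)
    estimates = []
    for _ in range(n_resamples):
        resample = [samples[rng.randrange(n)] for _ in range(n)]
        estimates.append(sum(resample) / n)
    return sum(estimates) / n_resamples

# Hypothetical pixel intensities from one segmented region
region = [112, 118, 109, 121, 115, 117, 110, 119]
rng = random.Random(0)
est = bootstrap_estimate(region, 200, rng)
```

In a BEM-style scheme, estimates of this kind would replace the full-data statistics inside the EM update equations, which is the source of the reported savings in computing time.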
Abstract: This paper discusses two observers used for the
estimation of the parameters of a PMSM. The first, a reduced-order
observer, is used to estimate the inaccessible parameters of the
PMSM. The second, a full-order observer, is used to estimate all the
parameters of the PMSM, even those directly available for
measurement, so as to be insensitive to parameter variation.
However, the state-space model contains some nonlinear terms, i.e.
products of different state variables. The asymptotic state observer,
which approximately reconstructs the state vector for linear systems
without uncertainties, was presented by Luenberger. In this work, a
modified form of such an observer is used that includes a nonlinear
term involving the speed. Both observers are therefore designed in
the framework of nonlinear control, and their stability and rate of
convergence are discussed.
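The Luenberger observer referenced above can be illustrated for a purely linear discrete-time system (the paper's added nonlinear speed term is omitted). The system matrices and observer gain below are illustrative, chosen only so that the error dynamics (A - LC) are stable:

```python
def simulate_observer(A, L, C, x0, xhat0, steps):
    """Discrete-time Luenberger observer xhat' = A xhat + L (y - C xhat)
    for the system x' = A x with scalar measurement y = C x. Returns the
    estimation-error norm at each step; with a stabilizing gain L the
    error decays toward zero."""
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

    x, xhat, errors = list(x0), list(xhat0), []
    for _ in range(steps):
        y = sum(C[j] * x[j] for j in range(len(x)))
        yhat = sum(C[j] * xhat[j] for j in range(len(xhat)))
        x = matvec(A, x)
        xhat = [a + l * (y - yhat) for a, l in zip(matvec(A, xhat), L)]
        errors.append(sum((xi - xh) ** 2 for xi, xh in zip(x, xhat)) ** 0.5)
    return errors

# Stable 2-state system with an illustrative observer gain
A = [[0.9, 0.1], [0.0, 0.8]]
C = [1.0, 0.0]
L = [0.5, 0.2]
errs = simulate_observer(A, L, C, x0=[1.0, -1.0], xhat0=[0.0, 0.0], steps=30)
```

Here the error obeys e' = (A - LC) e, whose eigenvalues (about 0.74 and 0.46) lie inside the unit circle, so the estimate converges regardless of the initial mismatch; the gain L sets the rate of convergence.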
Abstract: Urinary tract infections (UTI) account for an estimated 25-40% of nosocomial infections, of which 90% are associated with urinary catheters and termed catheter-associated urinary tract infections (CAUTI). The microbial populations within CAUTI frequently develop as biofilms. In the present study, microbial contamination of indwelling urinary catheters was investigated. The biofilm-forming ability of the isolates was determined by the tissue culture plate method. Prevention of biofilm formation in the urinary catheter by Pseudomonas aeruginosa was also examined by coating the catheter with selected enzymes, gentamicin and EDTA. It was found that 64% of the urinary catheters became contaminated during the course of catheterization. Of the total 6 isolates, biofilm formation was seen in 100% of Pseudomonas aeruginosa and E. coli, 90% of Enterococci, 80% of Klebsiella and 66% of S. aureus. It was noted that biofilm production by Pseudomonas was delayed by 7 days in amylase-, 8 days in protease-, 6 days in lysozyme-, 7 days in gentamicin- and 5 days in EDTA-treated catheters.
Abstract: In this paper, we present video quality measure
estimation via a neural network. The network predicts the MOS
(mean opinion score) from eight parameters extracted from the
original and coded videos. The eight parameters used are: the
average of DFT differences, the standard deviation of DFT
differences, the average of DCT differences, the standard deviation
of DCT differences, the variance of the energy of color, the
luminance Y, the chrominance U and the chrominance V. We chose
the Euclidean distance to compare the calculated and estimated
outputs.
Abstract: Un-doped GaN films of thickness 1.90 mm, grown on
sapphire substrates, were uniformly implanted with 325 keV Mn+
ions at fluences ranging from 1.75 x 10^15 to 2.0 x 10^16 ions cm^-2
at a substrate temperature of 350 °C. The structural, morphological
and magnetic properties of the Mn-ion-implanted gallium nitride
samples were studied using XRD, AFM and SQUID techniques.
XRD of the samples implanted at the various ion fluences showed
the presence of different magnetic phases: Ga3Mn, Ga0.6Mn0.4 and
Mn4N. However, the compositions of these phases were found to
depend on the ion fluence. AFM images of the non-implanted sample
showed an rms surface roughness of 2.17 nm, whereas samples
implanted at the various fluences showed the presence of
nanoclusters on the GaN surface. The shape, size and density of the
clusters were found to vary with ion fluence. Magnetic moment
versus applied field curves of the implanted samples exhibit
hysteresis loops. The Curie temperatures estimated from the
zero-field-cooled and field-cooled curves for the samples implanted
at fluences of 1.75 x 10^15, 1.5 x 10^16 and 2.0 x 10^16 ions cm^-2
were found to be 309 K, 342 K and 350 K, respectively.
Abstract: TiO2/MgO composite films were prepared by coating
a magnesium acetate solution into the pores of mesoporous TiO2
films using a dip-coating method. The concentration of the
magnesium acetate solution was varied in the range 1 x 10^-4 to
1 x 10^-1 M. The TiO2/MgO composite films were characterized by
scanning electron microscopy (SEM), transmission electron
microscopy (TEM), electrochemical impedance spectroscopy (EIS),
transient voltage decay and I-V tests. The TiO2 films and TiO2/MgO
composite films were immersed in a 0.3 mM N719 dye solution.
Dye-sensitized solar cells with the TiO2/MgO/N719 structure
showed an optimal magnesium acetate solution concentration of
1 x 10^-3 M, corresponding to an estimated MgO film thickness of
0.0963 nm and giving a maximum efficiency of 4.85%. The
improved efficiency of the dye-sensitized solar cell is attributed to
the magnesium oxide film: as a wide-band-gap coating it retards
electron back transfer to the triiodide electrolyte and reduces charge
recombination.
Abstract: In this paper we will develop further the sequential
life test approach presented in a previous article by [1] using an
underlying two parameter Weibull sampling distribution. The
minimum life will be considered equal to zero. We will again provide
rules for making one of the three possible decisions as each
observation becomes available; that is: accept the null hypothesis H0;
reject the null hypothesis H0; or obtain additional information by
making another observation. The product being analyzed is a new
type of low-alloy, high-strength steel product. To estimate the shape
and the scale parameters of the underlying Weibull model we will use
a maximum likelihood approach for censored failure data. A new
example will further develop the proposed sequential life testing
approach.
Abstract: Estimation of stormwater pollutants is a pre-requisite
for the protection and improvement of the aquatic environment and
for appropriate management options. The usual practice for the
stormwater quality prediction is performed through water quality
modeling. However, the accuracy of the prediction by the models
depends on the proper estimation of model parameters. This paper
presents the estimation of model parameters for a catchment water
quality model developed for the continuous simulation of stormwater
pollutants from a catchment to the catchment outlet. The model is
capable of simulating the accumulation and transport of the
stormwater pollutants: suspended solids (SS), total nitrogen (TN) and
total phosphorus (TP) from a particular catchment. Rainfall and water
quality data were collected for the Hotham Creek Catchment (HTCC),
Gold Coast, Australia. Runoff calculations from the developed model
were compared with the calculated discharges from the widely used
hydrological models, WBNM and DRAINS. Based on the measured
water quality data, model water quality parameters were calibrated
for the above-mentioned catchment. The calibrated parameters are
expected to be helpful for the best management practices (BMPs)
of the region. Sensitivity analyses of the estimated parameters were
performed to assess the impacts of the model parameters on overall
model estimations of runoff water quality.
Abstract: Radio frequency identification (RFID) applications have grown rapidly in many industries, especially in indoor location identification. The advantage of using received signal strength indicator (RSSI) values as an indoor location measurement method is that it is a cost-effective approach that requires no extra hardware. Because the accuracy of many positioning schemes using RSSI values is limited by interference factors and the environment, it is challenging to design RFID location techniques that integrate positioning algorithms. This study proposes a location estimation approach and analyzes a scheme relying on RSSI values to minimize location errors. In addition, this paper examines the different factors that affect location accuracy by integrating a backpropagation neural network (BPN) with the LANDMARC algorithm in a training phase and an online phase. First, the training phase computes coordinates obtained from the LANDMARC algorithm, which uses the RSSI values and the real coordinates of reference tags, as training data for constructing an appropriate BPN architecture and training length. Second, in the online phase, the LANDMARC algorithm calculates the coordinates of the tracking tags, which are then used as BPN inputs to obtain the location estimates. The results show that the proposed scheme can estimate locations more accurately than LANDMARC without extra devices.
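The LANDMARC step of the scheme, computing a tracking tag's coordinates from its RSSI distance to reference tags, can be sketched as a weighted k-nearest-neighbor estimate. The value of k, the 1/d^2 weighting, and all RSSI values and coordinates below are illustrative assumptions:

```python
def landmarc_estimate(tag_rssi, reference_tags, k=3):
    """LANDMARC-style location estimate: rank reference tags by
    Euclidean distance in RSSI space, then take a weighted average of
    the k nearest reference coordinates, weights proportional to 1/d^2."""
    scored = []
    for rssi, (x, y) in reference_tags:
        d = sum((a - b) ** 2 for a, b in zip(tag_rssi, rssi)) ** 0.5
        scored.append((d, x, y))
    scored.sort(key=lambda t: t[0])
    nearest = scored[:k]
    eps = 1e-9                                  # guard against zero distance
    weights = [1.0 / (d * d + eps) for d, _, _ in nearest]
    total = sum(weights)
    ex = sum(w * x for w, (_, x, _) in zip(weights, nearest)) / total
    ey = sum(w * y for w, (_, _, y) in zip(weights, nearest)) / total
    return ex, ey

# Hypothetical RSSI vectors (one value per reader, dBm) and known coordinates
refs = [
    ([-40, -62, -70], (0.0, 0.0)),
    ([-55, -50, -66], (1.0, 0.0)),
    ([-60, -64, -48], (0.0, 1.0)),
    ([-52, -53, -52], (1.0, 1.0)),
]
x, y = landmarc_estimate([-41, -61, -69], refs, k=3)
```

In the proposed scheme, coordinates produced this way serve as BPN inputs (online phase) or, paired with the true reference coordinates, as BPN training data (training phase).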
Abstract: Soil erosion is among the most serious problems faced
at the global and local levels, so the planning of soil conservation
measures has become a prominent agenda item for water basin
managers. To plan soil conservation measures, information on soil
erosion is essential. Universal Soil Loss Equation (USLE), Revised
Universal Soil Loss Equation 1 (RUSLE1or RUSLE) and Modified
Universal Soil Loss Equation (MUSLE), RUSLE 1.06, RUSLE1.06c,
RUSLE2 are most widely used conventional erosion estimation
methods. The essential drawback of the USLE and RUSLE1
equations is that they are based on average annual values of their
parameters, so their applicability at small temporal scales is
questionable. These equations also do not estimate runoff-generated
soil erosion, so their use for that purpose is questionable. The data
used in forming the USLE and RUSLE1 equations were plot data, so
applying them at larger spatial scales requires scale correction
factors to be introduced. On the other hand,
MUSLE is unsuitable for predicting sediment yield of small and large
events. Although the new revised forms of USLE like RUSLE 1.06,
RUSLE1.06c and RUSLE2 were land use independent and they have
almost cleared all the drawbacks in earlier versions like USLE and
RUSLE1, they are based on the regional data of specific area and
their applicability to other areas having different climate, soil, land
use is questionable. These conventional equations apply to sheet and
rill erosion and cannot predict gully erosion or the spatial pattern of
rills. Research has therefore focused on the development of
non-conventional (other than conventional) methods of soil erosion
estimation. When these non-conventional methods are combined
with GIS and RS, they give the spatial distribution of soil erosion. In
the present paper, a review of the literature on non-conventional
methods of soil erosion estimation supported by GIS and RS is
presented.
Abstract: It is estimated that the total cost of abnormal
conditions to US process industries is around $20 billion in
annual losses. The hydrotreatment (HDT) of diesel fuel in petroleum
refineries is a conversion process that leads to high profitable
economical returns. However, this is a difficult process to control
because it is operated continuously, with high hydrogen pressures
and it is also subject to disturbances in feed properties and catalyst
performance. Automatic fault detection and diagnosis therefore plays
an important role in this context. In this work, a hybrid approach
based on neural networks together with a post-processing
classification algorithm is used to detect faults in a simulated HDT
unit. Nine classes (8 fault classes and normal operation) were
correctly classified using the proposed approach within a maximum
time of 5 minutes, based on on-line process data measurements.