Abstract: Wind is among the potential energy resources which
can be harnessed to generate wind energy for conversion into
electrical power. Because wind speed varies with time and height, it is
difficult to predict the wind energy that can be generated. In this
paper, an attempt is made to establish a
probabilistic model fitting the wind speed data recorded at
Makambako site in Tanzania. Wind speed and direction were measured
using an anemometer (type AN1) and a wind vane (type WD1),
respectively, both supplied by Delta-T-Devices, at a measurement
height of 2 m. Wind speeds were then extrapolated to a height of
10 m using the power-law equation with an exponent of 0.47. Data were
analysed using MINITAB statistical software to show the variability
of wind speeds with time and height, and to determine the underlying
probability model of the extrapolated wind speed data. The results
show that wind speeds at the Makambako site vary cyclically over time
and conform to the Weibull probability distribution. From these
results, the Weibull probability density function can be used to
predict the wind energy.
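The two computational steps named in this abstract can be sketched in Python; the exponent 0.47 and the measurement heights come from the abstract, while the wind speed reading and the Weibull shape/scale values below are hypothetical:

```python
import math

def extrapolate_wind_speed(v_ref, h_ref=2.0, h_target=10.0, alpha=0.47):
    """Power-law vertical extrapolation: v(h) = v_ref * (h / h_ref)**alpha."""
    return v_ref * (h_target / h_ref) ** alpha

def weibull_pdf(v, shape_k, scale_c):
    """Weibull probability density evaluated at wind speed v (m/s)."""
    z = v / scale_c
    return (shape_k / scale_c) * z ** (shape_k - 1) * math.exp(-(z ** shape_k))

# A 3.0 m/s reading at 2 m extrapolated to 10 m with the abstract's exponent
v10 = extrapolate_wind_speed(3.0)
# Density of that speed under a hypothetical Weibull(shape=2, scale=5) fit
density = weibull_pdf(v10, shape_k=2.0, scale_c=5.0)
```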
Abstract: This paper introduces the foundations of Bayesian probability theory and the Bayesian decision method. The main goal of Bayesian decision theory is to minimize the expected loss, or expected risk, of a decision. The purposes of this study are to review the decision process on the issue of flood occurrence and to suggest a possible process for improving that decision. This study examines the problem structure of flood occurrence and theoretically explicates a decision-analytic approach, based on Bayesian decision theory, to flood occurrence in environmental engineering. We apply the Bayesian decision method to flood occurrence based on the annual maximum water level (in cm) from a 43-year record (1965 to 2007) at the Sagaing gauging station on the Ayeyarwady River, whose drainage area is 120193 sq km. As a result, we discuss the loss and risk of whether vast areas of agricultural land will be inundated in the coming year, based on the two standard maximum water levels observed during the 43 years. We also forecast whether these lands will be safe from flood water during the next 10 years.
Abstract: In this work we study the effect of several covariates X on a censored response variable T with unknown probability distribution. In this context, most studies in the literature fall into two general classes of regression models: models that study the effect the covariates have on the hazard function, and models that study the effect the covariates have on the censored response variable. The proposals in this paper belong to the second class and, more specifically, to the least-squares-based modelling approach. Using the bootstrap estimate of the bias, we try to improve the estimation of the regression parameters by reducing their bias for small sample sizes. Simulation results presented in the paper show that, for reasonable sample sizes and censoring levels, the bias is always smaller for the new proposals.
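The bootstrap bias-correction idea behind this abstract can be illustrated on a simpler scalar statistic (the plug-in variance, which is biased downward for small samples); the paper's censored-regression setting is more involved, and all names and data below are hypothetical:

```python
import random
import statistics

def bootstrap_bias_corrected(sample, statistic, n_boot=2000, seed=42):
    """Bias-corrected estimate: theta_hat minus the bootstrap bias estimate,
    where bias is approximated by mean(bootstrap replicates) - theta_hat."""
    rng = random.Random(seed)
    theta_hat = statistic(sample)
    n = len(sample)
    replicates = [
        statistic([rng.choice(sample) for _ in range(n)])
        for _ in range(n_boot)
    ]
    bias_estimate = statistics.fmean(replicates) - theta_hat
    return theta_hat - bias_estimate

def plugin_variance(xs):
    """Plug-in (maximum likelihood) variance: biased downward for small n."""
    m = statistics.fmean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

data = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 2.2, 3.1]
corrected = bootstrap_bias_corrected(data, plugin_variance)
```

Because the plug-in variance underestimates the true variance, the bootstrap bias estimate is negative and the corrected value is pushed upward.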
Abstract: Reliable secure multicast communication in mobile
ad hoc networks is challenging due to the inherent characteristics of
infrastructure-less architecture with lack of central authority, high
packet loss rates and limited resources such as bandwidth, time and
power. Many emerging commercial and military applications require
secure multicast communication in ad hoc environments. Hence, key
management is the fundamental challenge in achieving reliable
secure communication using multicast key distribution for mobile
ad hoc networks. Thus, in designing a reliable multicast key
distribution scheme, reliability and congestion control over
throughput are essential components. This paper proposes and
evaluates the performance of an enhanced optimized multicast cluster
tree algorithm with destination sequenced distance vector routing
protocol to provide reliable multicast key distribution. Simulation
results in NS2 demonstrate the performance of the proposed
scheme in terms of key delivery ratio and packet loss rate under
varying network conditions. The proposed scheme achieves
reliability, exhibiting a lower packet loss rate and a higher key
delivery ratio than the existing scheme.
Abstract: This paper deals with condition monitoring of electric switch machines for railway points. The point machine, a complex electro-mechanical device, switches the track between two alternative routes. There has been increasing interest in railway safety and in the optimal management of railway equipment maintenance, e.g. of point machines, in order to enhance railway service quality and reduce system failures. This paper explores the development of the Kolmogorov-Smirnov (K-S) test to detect point failures external to the machine (slide chairs, fixings, stretchers, etc.) while the point machine itself is in proper condition. Time-domain stator current signatures of normal (healthy) and faulty points are acquired by three Hall-effect sensors and analyzed by the K-S test. The test is evaluated by creating three such failures, namely placing a hard stone and a soft stone between the stock rail and switch blades as obstacles, and introducing slide-chair friction. The results for these three faults show that the K-S test can effectively be extended to detect other point failures whose current signatures deviate parametrically from the healthy current signature. The K-S test, as an analysis technique, assumes that each defect has a specific probability distribution. Empirical cumulative distribution functions (ECDFs) are used to differentiate these probability distributions. The test is based on the null hypothesis that the ECDF of the target distribution is statistically similar to the ECDF of the reference distribution. Therefore, by comparing a given current signature (the target signal) from an unknown switch state to a number of template signatures (the reference signals) from known switch states, it is possible to identify the most likely state of the point machine under analysis.
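The template-matching scheme described in this abstract, comparing the ECDF of a target signature against reference ECDFs, can be sketched in plain Python; the signature values and state labels below are hypothetical, not the paper's data:

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample K-S statistic: the maximum gap between the two ECDFs."""
    xs, ys = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for t in xs + ys:
        ecdf_x = bisect.bisect_right(xs, t) / len(xs)
        ecdf_y = bisect.bisect_right(ys, t) / len(ys)
        d = max(d, abs(ecdf_x - ecdf_y))
    return d

def classify_state(target, templates):
    """Pick the template state whose signature is closest in K-S distance."""
    return min(templates, key=lambda s: ks_statistic(target, templates[s]))

# Hypothetical current-signature samples for two known switch states
templates = {
    "healthy":    [4.9, 5.0, 5.1, 5.0, 4.8],
    "obstructed": [6.0, 6.2, 5.9, 6.1, 6.3],
}
state = classify_state([5.0, 4.9, 5.1, 5.2, 4.8], templates)
```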
Abstract: Independent component analysis can estimate unknown
source signals from their mixtures under the assumption that the
source signals are statistically independent. However, in a real environment,
the separation performance often deteriorates because
the number of source signals differs from the number of sensors.
In this paper, we propose a method for estimating the number of
sources based on the joint distribution of the observed signals
under a two-sensor configuration. Several simulation results show
that the number of sources coincides with the number of peaks in
the histogram of that distribution. The proposed method can
estimate the number of sources even when it is larger than the
number of observed signals. The method has been verified by
several experiments.
Abstract: A two-parameter fatigue model explicitly accounting for the cyclic as well as the mean stress was used to fit static and fatigue data available in the literature concerning carbon-fiber-reinforced composite laminates subjected to tension-tension fatigue. The model confirms the strength–life equal rank assumption and reasonably predicts the probability of failure under cyclic loading. The model parameters were found by best-fitting procedures and require only a minimum of experimental tests.
Abstract: In this manuscript, we discuss the problem of determining the optimum stratification of a study (or main) variable based on an auxiliary variable that follows a uniform distribution. If the stratification of the survey variable is made using the auxiliary variable, it may lead to substantial gains in the precision of the estimates. This problem is formulated as a Nonlinear Programming Problem (NLPP), which turns out to be a multistage decision problem and is solved using the dynamic programming technique.
Abstract: Extreme temperature of several stations in Malaysia is
modelled by fitting the monthly maximum to the Generalized
Extreme Value (GEV) distribution. The Mann-Kendall (MK) test
suggests a non-stationary model. Two models are considered for
stations with trend and the Likelihood Ratio test is used to determine
the best-fitting model. Results show that half of the stations favour a
model that is linear in the location parameter. The return level,
i.e. the level of events (maximum temperature) expected to be
exceeded once, on average, in a given number of years, is also obtained.
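The return level described above follows from the GEV quantile function; a minimal sketch with hypothetical parameter values (for the non-stationary model, the location mu would be replaced by its linear function of time):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) distribution:
    the level exceeded once, on average, every T years."""
    y = -math.log(1.0 - 1.0 / T)          # reduced variate for p = 1/T
    if abs(xi) < 1e-12:                   # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

# Hypothetical monthly-maximum temperature fit: mu=33 C, sigma=1.2, xi=-0.1
z100 = gev_return_level(33.0, 1.2, -0.1, 100)
```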
Abstract: Electric impedance imaging is a method of
reconstructing spatial distribution of electrical conductivity inside a
subject. In this paper, a new method of electrical impedance imaging
using eddy current is proposed. The eddy current distribution in the
body depends on the conductivity distribution and the magnetic field
pattern. By changing the position of magnetic core, a set of voltage
differences is measured with a pair of electrodes. This set of voltage
differences is used in image reconstruction of conductivity
distribution. The least square error minimization method is used as a
reconstruction algorithm. The back projection algorithm is used to
get two-dimensional images. Based on this principle, a measurement
system was developed and model experiments were performed
with a saline-filled phantom. The shape of each model in the
reconstructed image is similar to that of the corresponding model.
These experimental results confirm that the proposed method is
applicable to the realization of electrical impedance imaging.
Abstract: In this paper, a two-dimensional (2D) numerical
model for the simulation of tidal currents in the Persian Gulf is presented.
The model is based on the depth averaged equations of shallow water
which consider hydrostatic pressure distribution. The continuity
equation and two momentum equations including the effects of bed
friction, the Coriolis effects and wind stress have been solved. To
integrate the 2D equations, the Alternating Direction Implicit (ADI)
technique has been used. The equations were discretized by the
finite volume method on a rectangular mesh. To validate the
model, a dam-break case study with an analytical
solution is selected and the comparison is made. After that, the
capability of the model in simulating tidal currents in a real field is
demonstrated by modeling the current behavior in the Persian Gulf.
Tidal fluctuations in the Hormuz Strait cause the tidal currents in
the study area; therefore, the water surface oscillation data at
Hengam Island in the Hormuz Strait are used as the model input.
The model is checked against measured water surface elevations at
Assaluye port. The acceptable agreement between the measured and
computed results demonstrates the model's ability to simulate
marine hydrodynamics.
Abstract: This paper considers inference under progressive type-II censoring with a compound Rayleigh failure-time distribution. The maximum likelihood (ML) and Bayes methods are used to estimate the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. We obtain Bayes estimators using conjugate priors for the shape and scale parameters. When the two parameters are unknown, closed-form expressions for the Bayes estimators cannot be obtained, so we use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator is obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we compare these estimators with the maximum likelihood estimators using a Monte Carlo simulation study.
Abstract: Computer-based geostatistical methods can offer effective data-analysis possibilities for agricultural areas by using
vectorial data and their objective information. These methods help to detect spatial changes at different locations in large
agricultural lands, which leads to effective fertilization for optimal yield with reduced environmental pollution. In this study, topsoil (0-20 cm) and subsoil (20-40 cm) samples were taken from a
sugar beet field on a 20 x 20 m grid. Plant samples were also collected
from the same plots. Some physical and chemical analyses of these
samples were made by routine methods. According to the derived coefficients of variation, the topsoil organic matter (OM) distribution varied more than the subsoil OM distribution; the highest C.V. value,
17.79%, was found for topsoil OM. The data were analyzed
comparatively using kriging methods, which are widely used in
geostatistics. Several interpolation methods (Ordinary, Simple and Universal) and semivariogram models (Spherical,
Exponential and Gaussian) were tested in order to choose the most suitable
combination. The average standard deviations of values estimated by the
simple kriging interpolation method were less than the average standard
deviations of the measured values (topsoil OM ± 0.48, N ± 0.37, subsoil OM ± 0.18). The most suitable combination was simple
kriging with the exponential semivariogram model for topsoil,
whereas it was simple kriging with the spherical
semivariogram model for subsoil. The results
also showed that these computer-based geostatistical methods should
be tested and calibrated for different experimental conditions and semivariogram models.
Abstract: In this paper, we present a maintenance model of a
two-unit series system with economic dependence. Unit#1, which is
considered more expensive and more important, is subject to
condition monitoring (CM) at equidistant, discrete time epochs and
unit#2, which is not subject to CM has a general lifetime distribution.
The multivariate observation vectors obtained through condition
monitoring carry partial information about the hidden state of unit#1,
which can be in a healthy or a warning state while operating. Only the
failure state is assumed to be observable for both units. The objective
is to find an optimal opportunistic maintenance policy minimizing
the long-run expected average cost per unit time. The problem
is formulated and solved in the partially observable semi-Markov
decision process framework. An effective computational algorithm
for finding the optimal policy and the minimum average cost is
developed and illustrated by a numerical example.
Abstract: This paper develops c-Charts based on a Zero-Inflated Poisson (ZIP) process approximated by a geometric distribution with parameter p. The estimate of p fitted to the ZIP distribution is used to calculate the mean, median, and variance of the geometric distribution, from which the c-Chart is constructed by three different methods. For the cg-Chart, the control limits are constructed from the mean and variance of the geometric distribution. For the cmg-Chart, the mean is used to construct the control limits. For the cme-Chart, the control limits are developed from the median and variance of the geometric distribution. The performance of the charts is assessed by the Average Run Length and the Average Coverage Probability. We found that, for an in-control process, the cg-Chart is superior for low levels of the mean at all levels of the proportion of zeros. For an out-of-control process, the cmg-Chart and cme-Chart are the best for mean = 2, 3 and 4 at all levels of the parameter.
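Limits of the cg-Chart kind, built from the mean and variance of a geometric distribution, can be sketched as follows; the parameterization (number of failures before the first success) is an assumption, since the abstract does not specify which convention is used:

```python
import math

def cg_chart_limits(p, k=3.0):
    """Shewhart-style control limits from the mean and variance of a
    geometric(p) count, parameterized here (an assumed convention) as the
    number of failures before the first success:
    mean = (1-p)/p, variance = (1-p)/p**2."""
    mean = (1.0 - p) / p
    var = (1.0 - p) / p ** 2
    half_width = k * math.sqrt(var)
    lcl = max(0.0, mean - half_width)     # counts cannot be negative
    return lcl, mean, mean + half_width

lcl, cl, ucl = cg_chart_limits(0.5)
```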
Abstract: In social network analysis, the mean nodal degree and the
density of a graph can be considered measures of the activity of
all actors in the network; this is an important property of a graph,
useful for making comparisons among networks. Since subjects in a
family or organization are exposed to common environmental factors, it
is of prime interest to study the association between responses.
Therefore, we study the distribution of the mean nodal degree and
the density of the graph under correlated binary units. The cross-product
ratio is used to capture the intra-unit association among subjects.
A computer program and an application are given to show the benefits
of the method.
Abstract: Rainfall data at fine resolution and knowledge of its
characteristics play a major role in the efficient design and operation
of agricultural, telecommunication, runoff and erosion control as well
as water quality control systems. This paper aims to study the
statistical distribution of hourly rainfall depth for 12 representative
stations spread across Peninsular Malaysia. Hourly rainfall data of 10
to 22 years were collected and their statistical characteristics
were estimated. Three probability distributions, namely the Generalized
Pareto, Exponential and Gamma distributions were proposed to
model the hourly rainfall depth, and three goodness-of-fit tests,
namely, the Kolmogorov-Smirnov, Anderson-Darling and Chi-squared
tests, were used to evaluate their fit. Results indicate that the east
coast of the Peninsula receives a higher depth of rainfall than
the west coast. However, the rainfall frequency is found to be
irregular. Results from the goodness-of-fit tests show that all
three models fit the rainfall data at the 1% level of significance.
However, the Generalized Pareto distribution fits better than the Exponential and
Gamma distributions and is therefore recommended as the best fit.
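A goodness-of-fit check of the kind described can be sketched for the Exponential candidate using the one-sample Kolmogorov-Smirnov statistic (the paper also tests Generalized Pareto and Gamma fits); the rainfall depths below are hypothetical:

```python
import math

def ks_exponential_gof(depths):
    """One-sample K-S statistic for an exponential fit to positive data,
    with the rate set by maximum likelihood (rate = 1 / sample mean)."""
    xs = sorted(depths)
    n = len(xs)
    rate = n / sum(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = 1.0 - math.exp(-rate * x)     # fitted exponential CDF
        # compare the fitted CDF with the ECDF just before and after x
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

# Hypothetical hourly rainfall depths (mm)
d_stat = ks_exponential_gof([0.5, 1.2, 0.8, 3.4, 2.1, 0.3, 5.0, 1.7])
```

A small statistic (relative to the critical value for the sample size) indicates the exponential model is not rejected.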
Abstract: In this paper, we first introduce the stable distribution, stable processes and their characteristics. The α-stable distribution family has received great interest in the last decade due to its success in modeling data that are too impulsive to be accommodated by the Gaussian distribution. In the second part, we present major applications of the alpha-stable distribution in telecommunications and computer science, such as network delays and signal processing, and in financial markets. At the end, we focus on using the stable distribution to estimate measures of risk in stock markets and show simulated data with statistical software.
Abstract: This paper presents a supervised clustering algorithm,
namely Grid-Based Supervised Clustering (GBSC), which is able to
identify clusters of any shapes and sizes without presuming any
canonical form for data distribution. The GBSC needs no prespecified
number of clusters, is insensitive to the order of the input
data objects, and is capable of handling outliers. Built on the
combination of grid-based clustering and density-based clustering,
under the assistance of the downward closure property of density
used in bottom-up subspace clustering, the GBSC can notably reduce
its search space to avoid the memory confinement situation during its
execution. On two-dimensional synthetic datasets, the GBSC
identifies clusters with different shapes and sizes correctly. The GBSC
also outperforms five other supervised clustering algorithms when
the experiments are performed on some UCI datasets.
Abstract: We develop a new estimator of the renewal function for heavy-tailed claim amounts. Our approach is based on the peaks-over-threshold method for estimating the tail of the distribution with a generalized Pareto distribution. The asymptotic normality of an appropriately centered and normalized estimator is established, and its performance is illustrated in a simulation study.