Abstract: Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques that plays an increasing role in detection problems across various engineering fields, notably statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to classical binary-signal maximum likelihood detection using a matched filter driven by on-off keying (OOK) modulation. We found that the performance of SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
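As a minimal sketch of the detection setup described above (assuming NumPy and scikit-learn; the sample counts and SNR are illustrative, not the paper's simulation settings), the following trains an SVM on noisy OOK samples and compares its BER to a simple threshold decision, the matched-filter baseline for this single-sample model:

```python
# Minimal sketch: SVM vs. threshold detection of OOK symbols in AWGN.
# All parameters (sample counts, SNR) are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
snr_db = 0.0                        # low-SNR regime discussed in the abstract
amp = 1.0
sigma = amp / (10 ** (snr_db / 20))

def make_ook(n):
    bits = rng.integers(0, 2, n)
    rx = amp * bits + rng.normal(0, sigma, n)   # AWGN channel
    return rx.reshape(-1, 1), bits

X_train, y_train = make_ook(2000)
X_test, y_test = make_ook(10000)

svm = SVC(kernel="rbf").fit(X_train, y_train)
ber_svm = np.mean(svm.predict(X_test) != y_test)

# Matched-filter/threshold baseline: decide 1 if the sample exceeds amp/2.
ber_mf = np.mean((X_test.ravel() > amp / 2).astype(int) != y_test)
print(f"BER (SVM): {ber_svm:.4f}, BER (threshold): {ber_mf:.4f}")
```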
Abstract: Ren et al. presented an efficient carrier frequency offset
(CFO) estimation method for orthogonal frequency division multiplexing
(OFDM), which has an estimation range as large as the
bandwidth of the OFDM signal and achieves high accuracy without
any constraint on the structure of the training sequence. However,
its detection probability for the integer frequency offset (IFO) varies
rapidly with changes in the fractional frequency offset (FFO). In
this paper, we first analyze Ren's method and define two criteria
suitable for IFO detection. Then, we propose a novel method for
IFO estimation based on the maximum-likelihood (ML) principle
and the detection criteria defined in this paper. The simulation results
demonstrate that the proposed method outperforms Ren's method
in terms of the IFO detection probability, irrespective of the value of
the FFO.
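The specifics of Ren's estimator are not reproduced here, but a minimal sketch of the general idea behind correlation-based integer CFO detection for OFDM (all signal parameters are illustrative assumptions; this is not the proposed method itself) is:

```python
# Minimal sketch of integer frequency offset (IFO) detection for OFDM:
# an IFO of d subcarriers cyclically shifts the received spectrum, so we
# pick the shift that maximizes correlation with the known training symbol.
import numpy as np

rng = np.random.default_rng(1)
N = 64                                    # number of subcarriers (assumed)
train = rng.choice([1, -1], N) + 0j       # known BPSK training symbol

true_ifo = 5
noise = rng.normal(0, 0.3, N) + 1j * rng.normal(0, 0.3, N)
rx = np.roll(train, true_ifo) + noise     # received spectrum, shifted by IFO

def detect_ifo(rx, train, max_off=10):
    cands = list(range(-max_off, max_off + 1))
    metric = [abs(np.vdot(np.roll(train, d), rx)) for d in cands]
    return cands[int(np.argmax(metric))]

print("estimated IFO:", detect_ifo(rx, train))   # -> 5
```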
Abstract: In this paper, we propose an algorithm to compute
initial cluster centers for K-means clustering. Data in a cell are
partitioned using a cutting plane that divides the cell into two smaller
cells. The plane is perpendicular to the data axis with the highest
variance and is designed to reduce the sum of squared errors of the two
cells as much as possible, while at the same time keeping the two cells
as far apart as possible. Cells are partitioned one at a time until the
number of cells equals the predefined number of clusters, K. The centers
of the K cells become the initial cluster centers for K-means. The
experimental results suggest that the proposed algorithm is effective,
converging to better clustering results than those of the random
initialization method. The research also indicates that the proposed
algorithm greatly improves the likelihood of every cluster containing
some data.
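A minimal sketch of the initialization described above (the split point is taken at the cell mean along the highest-variance axis, a simplifying assumption; the paper optimizes the cut with respect to SSE and cell separation):

```python
# Repeatedly split the cell with the largest within-cell SSE by a plane
# perpendicular to its highest-variance axis, until K cells remain.
import numpy as np

def initial_centers(X, k):
    cells = [X]
    while len(cells) < k:
        sse = [((c - c.mean(0)) ** 2).sum() for c in cells]
        cell = cells.pop(int(np.argmax(sse)))       # worst cell first
        axis = int(np.argmax(cell.var(0)))          # highest-variance axis
        cut = cell[:, axis].mean()                  # simplified split point
        cells.append(cell[cell[:, axis] <= cut])
        cells.append(cell[cell[:, axis] > cut])
    return np.array([c.mean(0) for c in cells])

X = np.random.default_rng(2).normal(size=(300, 2))
print(initial_centers(X, 4))    # 4 starting centers for K-means
```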
Abstract: In this paper we will develop further the sequential life test approach presented in a previous article by [1], using an underlying two-parameter Inverse Weibull sampling distribution. The location parameter, or minimum life, will be considered equal to zero. Once again we will provide rules for making one of the three possible decisions as each observation becomes available; that is: accept the null hypothesis H0; reject the null hypothesis H0; or obtain additional information by making another observation. The product being analyzed is a new electronic component. There is little information available about the possible values the parameters of the corresponding Inverse Weibull underlying sampling distribution could have. To estimate the shape and the scale parameters of the underlying Inverse Weibull model we will use a maximum likelihood approach for censored failure data. A new example will further develop the proposed sequential life testing approach.
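A minimal sketch of the three-decision sequential rule (Wald's SPRT) that underlies this kind of test; the log-densities and the error rates alpha and beta below are assumptions for illustration, whereas the paper uses Inverse Weibull likelihoods with ML-estimated parameters:

```python
# Accumulate the log-likelihood ratio and compare with two boundaries.
import math
import random

def sprt(observations, logpdf0, logpdf1, alpha=0.05, beta=0.10):
    upper = math.log((1 - beta) / alpha)   # reject H0 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for i, x in enumerate(observations, 1):
        llr += logpdf1(x) - logpdf0(x)
        if llr >= upper:
            return "reject H0", i
        if llr <= lower:
            return "accept H0", i
    return "continue sampling", len(observations)

# Example: exponential lifetimes, H0: rate 1.0 vs. H1: rate 2.0.
f0 = lambda x: math.log(1.0) - 1.0 * x
f1 = lambda x: math.log(2.0) - 2.0 * x
data = [random.expovariate(2.0) for _ in range(50)]
print(sprt(data, f0, f1))
```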
Abstract: Support Vector Machine (SVM) is a statistical
learning tool built on the concept of
structural risk minimization (SRM). In this paper, SVM is
applied to signal detection in communication systems in the
presence of channel noise in various environments in the form
of Rayleigh fading, additive white Gaussian background noise
(AWGN), and interference noise generalized as additive colored
Gaussian noise (ACGN). The structure and performance of
SVM in terms of the bit error rate (BER) metric are derived and
simulated for these advanced stochastic noise models and the
computational complexity of the implementation, in terms of
average computational time per bit, is also presented. The
performance of SVM is then compared to a conventional optimal
model-based detector for binary signaling, driven by binary
phase shift keying (BPSK) modulation. We show that the
SVM performance is superior to that of conventional matched
filter-, innovation filter-, and Wiener filter-driven detectors,
even in the presence of random Doppler carrier deviation,
especially for low SNR (signal-to-noise ratio) ranges. For
large SNR, the performance of the SVM was similar to that of
the classical detectors. However, the convergence between
SVM and maximum likelihood detection occurred at a higher
SNR as the noise environment became more hostile.
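The paper's SVM detector is illustrated after the first abstract above; for comparison, here is a minimal sketch of the model-based baseline it is measured against: coherent BPSK detection over flat Rayleigh fading with AWGN (perfect channel knowledge is assumed, and the parameters are illustrative, not the paper's settings):

```python
# Coherent (maximal-ratio) BPSK detection over flat Rayleigh fading.
import numpy as np

rng = np.random.default_rng(3)
n, snr_db = 100_000, 10.0
es = 1.0
n0 = es / (10 ** (snr_db / 10))

bits = rng.integers(0, 2, n)
s = 2.0 * bits - 1.0                                  # BPSK symbols +/-1
h = (rng.normal(0, np.sqrt(0.5), n)
     + 1j * rng.normal(0, np.sqrt(0.5), n))           # Rayleigh fading
w = (rng.normal(0, np.sqrt(n0 / 2), n)
     + 1j * rng.normal(0, np.sqrt(n0 / 2), n))        # complex AWGN
r = h * s + w

# Project onto the known channel, then take the sign of the real part.
dec = (np.real(np.conj(h) * r) > 0).astype(int)
print("BER:", np.mean(dec != bits))
```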
Abstract: In this paper we consider the problem of change
detection and tracking of non-stationary signals. Using parametric
signal estimation based on least-squares lattice adaptive filters, we
consider statistical parametric methods for change detection based on
likelihood ratios and hypothesis tests. In order to track signal
dynamics, we introduce a compensation procedure in the adaptive
estimation. This improves the adaptive estimation performance and
speeds up its convergence after a change is detected.
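The lattice-filter specifics are not reproduced here, but a minimal sketch of likelihood-ratio change detection (a one-sided CUSUM for a mean shift in Gaussian noise; the filter residuals would play the role of the input sequence, and all parameters are assumptions):

```python
# One-sided CUSUM built from the per-sample log-likelihood ratio.
import numpy as np

def cusum(x, mu0, mu1, sigma, threshold=10.0):
    # Log-likelihood ratio of N(mu1, sigma) vs. N(mu0, sigma) per sample.
    llr = ((mu1 - mu0) / sigma**2) * (x - (mu0 + mu1) / 2)
    g = 0.0
    for i, s in enumerate(llr):
        g = max(0.0, g + s)          # reset below zero
        if g > threshold:
            return i                  # alarm: change detected here
    return None

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
print("change detected at sample:", cusum(x, mu0=0.0, mu1=1.0, sigma=1.0))
```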
Abstract: The zero truncated model is usually used in modeling
count data without zeros. It is the opposite of the zero inflated model.
Zero truncated Poisson and zero truncated negative binomial models
have been discussed and used by some researchers in analyzing the
abundance of rare species and lengths of hospital stay. Zero truncated
models are used as the base in developing hurdle models. In this study,
we developed a new model, the zero truncated strict arcsine model,
which can be used as an alternative model in modeling count data
without zeros and with extra variation. Two simulated and one real-life
data sets are fitted to the developed model. The results show that the
model provides a good fit to the data. The maximum likelihood
estimation method is used in estimating the parameters.
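The strict arcsine pmf is specialized, so as a hedged illustration of the fitting mechanics here is ML estimation for a zero-truncated Poisson stand-in (the strict arcsine log-pmf would replace `poisson.logpmf` below; data are toy counts):

```python
# ML fit of a zero-truncated count model via the conditional likelihood
# P(Y = k | Y > 0) = pois(k; lam) / (1 - pois(0; lam)).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

def fit_zt_poisson(y):
    y = np.asarray(y)
    def nll(lam):
        return -(poisson.logpmf(y, lam) - np.log1p(-np.exp(-lam))).sum()
    res = minimize_scalar(nll, bounds=(1e-6, 50), method="bounded")
    return res.x

y = [1, 1, 2, 3, 1, 4, 2, 2, 5, 1]       # counts without zeros
print("MLE of lambda:", fit_zt_poisson(y))
```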
Abstract: The zero inflated strict arcsine model is a newly
developed model that has been found appropriate for modeling
overdispersed count data. In this study, we extend the zero inflated
strict arcsine model to a zero inflated strict arcsine regression model
by taking into consideration the extra variability caused by extra
zeros and covariates in count data. The maximum likelihood estimation
method is used in estimating the parameters of this zero inflated
strict arcsine regression model.
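A minimal sketch of a zero-inflated count regression fitted by maximum likelihood, with the zero-inflated Poisson standing in for the strict arcsine variant (its pmf would replace the Poisson terms below; the data and link functions are assumptions):

```python
# ML fit of a zero-inflated Poisson regression with covariates.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import poisson

def zip_nll(params, X, y):
    k = X.shape[1]
    beta, gamma = params[:k], params[k:]
    lam = np.exp(X @ beta)          # count mean, log link
    pi = expit(X @ gamma)           # zero-inflation probability, logit link
    p_pois = poisson.pmf(y, lam)
    like = np.where(y == 0, pi + (1 - pi) * p_pois, (1 - pi) * p_pois)
    return -np.log(like + 1e-300).sum()

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
lam = np.exp(0.5 + 0.8 * X[:, 1])
y = np.where(rng.random(n) < 0.3, 0, rng.poisson(lam))   # 30% extra zeros

res = minimize(zip_nll, np.zeros(4), args=(X, y), method="BFGS")
print("count coefs:", res.x[:2], "inflation coefs:", res.x[2:])
```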
Abstract: The aim of this paper is to provide empirical
evidence about the effects that the management of continuous
training has on employability (or employment stability) in the
Spanish labour market. For this purpose, a binary logit model with
an interaction effect is used. The dependent variable distinguishes two
situations of active workers: continuous and discontinuous
employability. To separate them, an Employability Stability Index
(ESI) was calculated taking into account two factors: time
worked and job security. Various aspects of the continuous training
and workers' personal data are used as independent variables. The
data, obtained from a survey of a sample of 918 employed workers,
reveal a relationship between the likelihood of continuous
employability and the continuous training received. The empirical
results support a positive and significant relationship between various
aspects of the training provided by firms and the employability
likelihood of workers, as postulated from a theoretical point of
view.
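A minimal sketch of a binary logit with an interaction effect, as used in the study; the variable names (training_hours, job_security) and the simulated data are hypothetical placeholders, not the survey's actual variables:

```python
# Binary logit with main effects plus an interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 918                                      # sample size from the abstract
df = pd.DataFrame({
    "training_hours": rng.normal(40, 10, n),
    "job_security": rng.integers(0, 2, n),
})
logit_p = -2 + 0.05 * df.training_hours * df.job_security
df["stable_employ"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# In the formula, '*' expands to both main effects and the interaction.
model = smf.logit("stable_employ ~ training_hours * job_security", df).fit()
print(model.summary())
```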
Abstract: This study investigates the possibility of mapping gully
erosion by supervised classification of satellite images
(ETM+) in two land types, mountainous and plain. These land types
were part of the Varamin plain, Tehran province, and the Roodbar
sub-basin, Guilan province, as the plain and mountain land types,
respectively. The positions of 652 and 124 ground control points were
recorded by GPS in the mountain and plain land types, respectively.
Soil gully erosion and land uses or plant covers were investigated at
these points. Based on the ground control points and auxiliary points,
training points for gully erosion and other surface features were
introduced to the software (Ilwis 3.3 Academic). The supervised
classified map of gully erosion was prepared by the maximum likelihood
method, and the overall accuracy of this map was then computed.
Results showed that supervised classification of gully erosion is not
feasible, although more studies are needed to generalize the results to
other mountainous regions. Also, as land uses and other surface
features increase in the plain physiography, classification accuracy
decreases.
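A minimal sketch of the maximum likelihood classifier used in supervised classification of multispectral imagery: fit a Gaussian per class from training pixels, then assign each pixel to the class with the highest log-likelihood. The band values below are toy numbers, not ETM+ samples:

```python
# Gaussian maximum likelihood classification of pixel vectors.
import numpy as np
from scipy.stats import multivariate_normal

def ml_classify(train_pixels, train_labels, pixels):
    classes = np.unique(train_labels)
    models = []
    for c in classes:
        sub = train_pixels[train_labels == c]
        models.append(multivariate_normal(sub.mean(0), np.cov(sub.T)))
    ll = np.column_stack([m.logpdf(pixels) for m in models])
    return classes[np.argmax(ll, axis=1)]

rng = np.random.default_rng(7)
gully = rng.normal([60, 80, 90], 5, (100, 3))      # toy band signatures
other = rng.normal([120, 110, 70], 8, (100, 3))
Xtr = np.vstack([gully, other])
ytr = np.array([0] * 100 + [1] * 100)
print(ml_classify(Xtr, ytr, rng.normal([60, 80, 90], 5, (5, 3))))  # -> 0s
```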
Abstract: In this paper we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete polyphase matched (DPM) filters, a log-maximum a posteriori probability (MAP) algorithm, and/or a soft-output Viterbi algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme; the channel is assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to classical data-aided (DA) models. This new model ensures that both the bandwidth and power of the communication system are conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests of bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms the conventional schemes.
Abstract: In this paper we will develop further the sequential
life test approach presented in a previous article by [1] using an
underlying two-parameter Weibull sampling distribution. The
minimum life will be considered equal to zero. We will again provide
rules for making one of the three possible decisions as each
observation becomes available; that is: accept the null hypothesis H0;
reject the null hypothesis H0; or obtain additional information by
making another observation. The product being analyzed is a new
type of low-alloy, high-strength steel product. To estimate the shape
and the scale parameters of the underlying Weibull model we will use
a maximum likelihood approach for censored failure data. A new
example will further develop the proposed sequential life testing
approach.
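A minimal sketch of maximum likelihood estimation of the Weibull shape and scale from right-censored failure data, as mentioned above: failures contribute the log-pdf and censored units the log-survival function. The data below are toy values, not the paper's:

```python
# Censored Weibull MLE via numerical optimization.
import numpy as np
from scipy.optimize import minimize

def weibull_censored_mle(t, failed):
    t, failed = np.asarray(t, float), np.asarray(failed, bool)
    def nll(p):
        beta, eta = np.exp(p)                 # enforce positivity
        z = (t / eta) ** beta
        logpdf = np.log(beta / eta) + (beta - 1) * np.log(t / eta) - z
        logsurv = -z
        return -(logpdf[failed].sum() + logsurv[~failed].sum())
    res = minimize(nll, x0=[0.0, np.log(t.mean())], method="Nelder-Mead")
    return np.exp(res.x)                      # (shape, scale)

t = [105, 220, 340, 480, 500, 500]            # last two censored at 500
failed = [1, 1, 1, 1, 0, 0]
print("shape, scale:", weibull_censored_mle(t, failed))
```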
Abstract: Web usage mining algorithms have been widely
utilized for modeling user web navigation behavior. In this study we
advance a model for mining users' navigation patterns. The model
builds a user model based on the expectation-maximization (EM)
algorithm. The EM algorithm is used in statistics for finding maximum
likelihood estimates of parameters in probabilistic models, where the
model depends on unobserved latent variables. The experimental
results show that, as the number of clusters decreases, the log
likelihood converges toward lower values, and that the probability of
the largest cluster decreases as the number of clusters increases in
each treatment.
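A minimal sketch of the reported behavior: fit Gaussian mixtures by EM for several cluster counts and inspect the log-likelihood and the weight of the largest cluster. The features below are simulated stand-ins for navigation data:

```python
# EM-fitted Gaussian mixtures for several cluster counts.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(c, 0.5, (200, 2)) for c in (0, 3, 6)])

for k in (2, 3, 5, 8):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(f"k={k}: log-likelihood={gm.score(X) * len(X):.1f}, "
          f"largest cluster weight={gm.weights_.max():.2f}")
```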
Abstract: Zero inflated models are usually used in modeling
count data with excess zeros, where the excess zeros
could be structural zeros or zeros which occur by chance. These types
of data are commonly found in various disciplines such as finance,
insurance, biomedicine, econometrics, ecology, and the health sciences,
including sexual health and dental epidemiology. The most popular
zero inflated models used by many researchers are the zero inflated
Poisson and zero inflated negative binomial models. In addition, zero
inflated generalized Poisson and zero inflated double Poisson models
are also discussed in some of the literature. Recently, the zero
inflated inverse trinomial and zero inflated strict arcsine
models have been advocated and shown to serve as alternative models
for overdispersed count data caused by excessive zeros and
unobserved heterogeneity. The purpose of this paper is to review
some related literature and provide a variety of examples from
different disciplines of the application of zero inflated models.
Different model selection methods used in model comparison are
discussed.
Abstract: From a planning point of view, it is essential to model
mode choice, due to the massive costs incurred in transportation
systems. Intercity travellers in Libya have distinct features compared
with travellers from other countries, including cultural and
socioeconomic factors. Consequently, the goal of this study is to
characterize intercity travel behavior using disaggregate models,
in order to project the demand for nation-level intercity travel in
Libya. A multinomial logit model covering all intercity trips has been
formulated to examine national-level intercity transportation in
Libya. The multinomial logit model was calibrated using nationwide
revealed preference (RP) and stated preference (SP) surveys. The
model was developed for different purposes of intercity trips (work,
social, and recreational). The parameters of the model were
estimated by the maximum likelihood method. The data needed for
model development were obtained from all major intercity corridors
in Libya. The final sample size consisted of 1300 interviews. About
two-thirds of these data were used for model calibration, and the
remaining part was used for model validation. This study, which is
the first of its kind in Libya, investigates intercity travelers'
mode-choice behavior. The intercity travel mode-choice model was
successfully calibrated and validated. The outcomes indicate that the
overall model is effective and yields precise estimates.
The proposed model is beneficial because it is responsive
to many variables and can be employed to determine the impact of
changes in numerous characteristics on the demand for various
travel modes. Estimates from the model might also be valuable to
planners, who can estimate shares for various modes and determine
the impact of specific policy modifications on the demand for
intercity travel.
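A minimal sketch of calibrating a multinomial logit mode-choice model by maximum likelihood; the attributes (cost, time) and the three mode labels are hypothetical placeholders for the survey's actual variables:

```python
# Multinomial logit estimated by maximum likelihood.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 1300                                  # interviews, per the abstract
X = sm.add_constant(np.column_stack([rng.normal(50, 15, n),   # travel cost
                                     rng.normal(3, 1, n)]))   # travel time
mode = rng.integers(0, 3, n)              # 0=car, 1=bus, 2=air (toy labels)

model = sm.MNLogit(mode, X).fit()         # maximum likelihood estimation
print(model.summary())
```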
Abstract: A 7-step method (with 25 sub-steps) to assess the risk of
air pollutants is introduced. These steps are: pre-considerations,
sampling, statistical analysis, exposure matrix and likelihood,
dose-response matrix and likelihood, total risk evaluation, and
discussion of findings. All mentioned words and expressions are well
understood; however, almost all steps have been modified, improved,
and coupled in such a way that a comprehensive method has been
prepared. Accordingly, SADRA (Statistical Analysis-Driven Risk
Assessment) emphasizes extensive and ongoing application of
analytical statistics in traditional risk assessment models. A sulfur
dioxide case study validates the claim and provides a good
illustration of this method.
Abstract: In this paper we propose mixtures of two different
distributions, namely Exponential-Gamma, Exponential-Weibull, and
Gamma-Weibull, to model heterogeneous survival data. Various
properties of the proposed mixtures of two different distributions are
discussed. Maximum likelihood estimates of the parameters are
obtained by using the EM algorithm. An illustrative example based on
real data is also given.
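A minimal sketch of EM for one of the pairs named above, an exponential-gamma mixture; the M-step for the gamma component is done numerically, and the data are simulated stand-ins, not the paper's real data:

```python
# EM for a two-component mixture of different families (exponential + gamma).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import expon, gamma as gamma_dist

rng = np.random.default_rng(10)
x = np.concatenate([rng.exponential(1.0, 300), rng.gamma(5.0, 2.0, 300)])

w, lam, a, scale = 0.5, 1.0, 2.0, 1.0        # initial guesses
for _ in range(30):
    # E-step: responsibilities of the exponential component.
    f1 = expon.pdf(x, scale=1 / lam)
    f2 = gamma_dist.pdf(x, a, scale=scale)
    r = w * f1 / (w * f1 + (1 - w) * f2)
    # M-step: closed form for the weight and exponential rate ...
    w = r.mean()
    lam = r.sum() / (r * x).sum()
    # ... numeric weighted MLE for the gamma parameters.
    nll = lambda p: -((1 - r) * gamma_dist.logpdf(
        x, np.exp(p[0]), scale=np.exp(p[1]))).sum()
    a, scale = np.exp(minimize(nll, [np.log(a), np.log(scale)],
                               method="Nelder-Mead").x)

print(f"weight={w:.2f}, exp rate={lam:.2f}, "
      f"gamma shape={a:.2f}, scale={scale:.2f}")
```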
Abstract: As originally designed for wired networks, the TCP (transmission control protocol) congestion control mechanism is triggered into action when packet loss is detected. The implicit assumption that packet loss is mostly due to network congestion does not hold well in a mobile ad hoc network (MANET), where there is a comparatively high likelihood of packet loss due to channel errors, node mobility, etc. Such non-congestion packet loss, when dealt with by the congestion control mechanism, causes poor TCP performance in MANETs. In this study, we continue to investigate the impact of the interaction between transport protocols and on-demand routing protocols on the performance and stability of 802.11 multihop networks. We evaluate the important wireless networking events that cause routing changes, and propose a cross-layer method to delay unnecessary routing changes; it only requires adding a sensitivity parameter α, which represents the on-demand routing protocol's reaction to MAC-layer link failures. Our proposal is applicable to the plain 802.11 networking environment, and the simulation results show that this method can remarkably improve the stability and performance of TCP without any modification to the TCP and MAC protocols.
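A minimal sketch of the cross-layer idea described above: a link-failure event is propagated to the routing layer only after more than α consecutive MAC-layer transmission failures, damping transient losses. The class name and event interface are illustrative assumptions, not the simulator's actual code:

```python
# Cross-layer filter that delays routing-layer reaction to MAC failures.
class LinkFailureFilter:
    def __init__(self, alpha: int = 3):
        self.alpha = alpha          # sensitivity to MAC-layer link failure
        self.consecutive_fails = 0

    def on_mac_event(self, tx_ok: bool) -> bool:
        """Return True only when the routing layer should react."""
        if tx_ok:
            self.consecutive_fails = 0
            return False
        self.consecutive_fails += 1
        return self.consecutive_fails > self.alpha

f = LinkFailureFilter(alpha=3)
events = [False, False, True, False, False, False, False]  # MAC outcomes
print([f.on_mac_event(ok) for ok in events])   # only the last triggers
```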
Abstract: This study examines the issue of recommendation
sources from the perspectives of gender and consumers' perceived
risk, and validates a model for the antecedents of consumer online
purchases. Quantitative data were collected via
questionnaires from 396 undergraduate students aged 18-24, and a
multiple regression analysis was conducted to identify causal
relationships. Empirical findings established the link between
recommendation sources (word-of-mouth, advertising, and
recommendation systems) and the likelihood of making online
purchases and demonstrated the role of gender and perceived risk as
moderators in this context. The results showed that the effects of
word-of-mouth on online purchase intentions were stronger than those
of advertising and recommendation systems. In addition, female
consumers have less experience with online purchases, so they may be
more likely than males to refer to recommendations during the
decision-making process. The findings of the study will help
marketers to address the recommendation factors that influence
consumers' intention to purchase and to improve firm performance to
meet consumer needs.
Abstract: In recent years, the use of vector variance as a
measure of multivariate variability has received much attention in a
wide range of statistics. This paper deals with a more economical
measure of multivariate variability, defined as the vector variance
minus all duplicated elements. For high-dimensional data, this
increases the computational efficiency by almost 50% compared to the
original vector variance. Its sampling distribution is investigated to
make its applications possible.
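A minimal sketch of the two measures: the vector variance Tr(S²), which for a symmetric covariance matrix S equals the sum of squares of all its entries, and an "economical" version that drops the duplicated off-diagonal terms (interpreting the abstract's definition as summing over the upper triangle, an assumption):

```python
# Vector variance vs. the duplication-free version.
import numpy as np

rng = np.random.default_rng(11)
X = rng.normal(size=(200, 5))
S = np.cov(X.T)

vv = np.trace(S @ S)                        # classical vector variance
iu = np.triu_indices_from(S)                # upper triangle incl. diagonal
vv_econ = (S[iu] ** 2).sum()                # duplicates removed

print(f"vector variance: {vv:.3f}, economical version: {vv_econ:.3f}")
```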