Abstract: Given that entrepreneurship is a very significant factor in regional development, it is necessary to approach that development systematically with measures of regional policy. According to the international classification The Nomenclature of Territorial Units for Statistics (NUTS II), there are three regions in Croatia. The paper analyzes indicators of entrepreneurial activity at the national level of Croatia, taking into consideration the results of referent research. The level of regional development is shown based on an analysis of entrepreneurs' operations. The results of the analysis show a very unfavorable situation in entrepreneurial activity at the national level of Croatia. The origin of this situation is to be found in an environment of pronounced inequality in regional development, which is caused by the absence of a strategically directed regional policy. The paper makes recommendations which could contribute to the reduction of regional inequality in Croatia.
Abstract: The concentrations of As, Hg, Co, Cr and Cd were
tested for each soil sample, and their spatial patterns were analyzed
by the semivariogram approach of geostatistics and geographical
information system technology. Multivariate statistical approaches
(principal component analysis and cluster analysis) were used to
identify heavy metal sources and their spatial patterns. Principal
component analysis, coupled with the correlations between heavy metals,
showed that the primary inputs of As, Hg and Cd were anthropogenic,
while Co and Cr were associated with pedogenic factors. Ordinary
kriging was carried out to map the spatial patterns of the heavy
metals. The identified high-pollution sources were related to the use
of urban and industrial wastewater. The results of this study are
helpful for environmental pollution risk assessment and for decision
making on industrial adjustment and the remediation of soil pollution.
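As a minimal sketch of the source-identification step described above, the following applies principal component analysis (eigendecomposition of the correlation matrix) to synthetic concentration data in which three elements share one latent "anthropogenic" factor and two share a "pedogenic" one; the values are illustrative assumptions, not the study's measurements.

```python
import numpy as np

# Synthetic concentrations (rows: samples; cols: As, Hg, Cd, Co, Cr).
# Purely illustrative — not the study's data.
rng = np.random.default_rng(0)
n = 200
anthro = rng.lognormal(0.0, 0.5, n)   # shared anthropogenic input
pedo = rng.normal(0.0, 1.0, n)        # shared pedogenic (parent-rock) input
X = np.column_stack([
    anthro + rng.normal(0, 0.1, n),   # As
    anthro + rng.normal(0, 0.1, n),   # Hg
    anthro + rng.normal(0, 0.1, n),   # Cd
    pedo + rng.normal(0, 0.1, n),     # Co
    pedo + rng.normal(0, 0.1, n),     # Cr
])

# Standardize, then eigendecompose the correlation matrix (classic PCA).
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]     # sort components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings on PC1: elements sharing a source load together on one component.
loadings_pc1 = eigvecs[:, 0] * np.sqrt(eigvals[0])
explained = eigvals / eigvals.sum()
```

With two dominant latent sources, the first two components absorb most of the variance, which is the pattern used above to separate anthropogenic from pedogenic inputs.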
Abstract: Data of wave height and wind speed were collected
from three existing oil fields in South China Sea – offshore
Peninsular Malaysia, Sarawak and Sabah regions. Extreme values
and other significant data were employed for analysis. The data were
recorded from 1999 until 2008. The results show that offshore
structures are susceptible to unacceptable motions initiated by wind
and waves with worst structural impacts caused by extreme wave
heights. To protect offshore structures from damage, there is a need
to quantify descriptive statistics and determine spectra envelope of
wind speed and wave height, and to ascertain the frequency content
of each spectrum for offshore structures in the South China Sea
shallow waters using measured time series. The results indicate that
the process is nonstationary; it is converted to a stationary process
by first differencing the time series. The descriptive statistics show
that both wind speed and wave height have a significant influence on
offshore structures during the northeast monsoon, with a high mean wind
speed of 13.5195 knots (σ = 6.3566 knots) and a high mean wave height
of 2.3597 m (σ = 0.8690 m). The spectra show no clear dominant peak,
and the peaks fluctuate randomly. Each wind speed spectrum and wave
height spectrum has its own identifiable pattern. The wind speed
spectrum tends to grow gradually in the lower frequency range and
roughly doubles toward the higher frequency range, with a mean peak
frequency range of 0.4104 Hz to 0.4721 Hz, while the wave height
spectrum tends to grow sharply in the low frequency range and then
fluctuates and decreases slightly in the high frequency range, with a
mean peak frequency range of 0.2911 Hz to 0.3425 Hz.
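The first-differencing step mentioned above can be sketched as follows: a series that is nonstationary in the mean (here a synthetic trend-plus-noise series, not the measured wave data) becomes approximately mean-stationary after differencing.

```python
import random

random.seed(42)
n = 1000
# Synthetic series: linear trend plus Gaussian noise (illustrative only).
series = [0.005 * t + random.gauss(0.0, 0.3) for t in range(n)]

# First differencing y[t] = x[t] - x[t-1] removes a linear trend.
diffed = [series[t] - series[t - 1] for t in range(1, n)]

def mean(xs):
    return sum(xs) / len(xs)

# Crude stationarity check: compare the means of the two halves.
half_raw = n // 2
half_diff = len(diffed) // 2
drift_raw = abs(mean(series[half_raw:]) - mean(series[:half_raw]))
drift_diff = abs(mean(diffed[half_diff:]) - mean(diffed[:half_diff]))
```

The drift in the mean of the differenced series is orders of magnitude smaller than in the raw series, which is the property the spectral analysis above relies on.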
Abstract: In this paper, a new probability density function (pdf)
is proposed to model the statistics of wavelet coefficients, and a
simple Kalman filter is derived from the new pdf using Bayesian
estimation theory. Specifically, we decompose the speckled image
into wavelet subbands, apply the Kalman filter to the high-frequency
subbands, and reconstruct a despeckled image from the modified
detail coefficients. Experimental results demonstrate that our method
compares favorably to several other despeckling methods on test
synthetic aperture radar (SAR) images.
Abstract: Optimization of filter banks based on knowledge of the input statistics has been of interest for a long time. Finite impulse response (FIR) compaction filters are used in the design of optimal signal-adapted orthonormal FIR filter banks. In this paper we discuss three different approaches for the design of interpolated finite impulse response (IFIR) compaction filters. In the first method, the magnitude squared response satisfies the Nyquist constraint approximately. In the second and third methods the Nyquist constraint is satisfied exactly. These methods yield FIR compaction filters whose response is comparable with that of existing methods. At the same time, IFIR filters enjoy a significant saving in the number of multipliers and can be implemented efficiently. Since the eigenfilter approach is used here, the method is less complex. The design of IFIR filters in the least-squares sense is presented.
Abstract: Information theory and statistics play an important role in the biological sciences when information measures are used for the study of diversity and equitability. In this communication, we develop the link among the three disciplines and prove that sampling distributions can be used to develop new information measures. Our study is interdisciplinary and finds its applications in biological systems.
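The standard information measures of diversity and equitability referred to above (not the paper's new measures) can be sketched as the Shannon index H and Pielou's evenness J; the community counts below are made-up examples.

```python
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

def equitability(counts):
    """Pielou's evenness J = H / H_max, where H_max = ln(number of species)."""
    s = sum(1 for c in counts if c > 0)
    if s <= 1:
        return 0.0
    return shannon_diversity(counts) / math.log(s)

# A perfectly even community has J = 1; a dominated one has J closer to 0.
even = [10, 10, 10, 10]
uneven = [37, 1, 1, 1]
```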
Abstract: This paper presents an exploration of the structure of the corporate governance network and interlocking directorates in the Czech Republic. First, a literature overview and the basic terminology of network theory are presented. Further in the text, statistics and other calculations relevant to corporate governance networks are presented. For this purpose an empirical data set consisting of 2,906 joint stock companies in the Czech Republic was examined. The industries with the highest average number of interlocks per company were healthcare, and energy and utilities. There is no observable link between the financial performance of a company and the number of its interlocks. Interlocks with financial companies are also very rare.
Abstract: An important step in studying the statistics of
fingerprint minutia features is to reliably extract those features
from the fingerprint images. A new, reliable computational method for
minutiae feature extraction from fingerprint images is presented. A
fingerprint image is treated as a textured image, and an orientation
flow field of the ridges is computed for it. To locate ridges
accurately, a new ridge-orientation-based computation method is
proposed. After ridge segmentation, a new computational method is
proposed for smoothing the ridges. The ridge skeleton image is
obtained and then smoothed using morphological operators to detect
the features. A post-processing stage eliminates a large number of
false features from the detected set of minutiae. The detected
features are observed to be reliable and accurate.
Abstract: This paper deals with the localization of wideband sources. We develop a new approach for estimating the parameters of wideband sources. This method is based on the higher order statistics of the recorded data, in order to eliminate the Gaussian components from the signals received on the various hydrophones; the sea-bottom noise is assumed to be Gaussian. Thanks to a coherent signal subspace algorithm based on the cumulant matrix of the received data instead of the cross-spectral matrix, the correlated wideband sources are accurately located even in a very noisy environment. We demonstrate the performance of the proposed algorithm on real data recorded during an underwater acoustics experiment.
Abstract: Web usage mining algorithms have been widely
utilized for modeling user web navigation behavior. In this study we
advance a model for mining users' navigation patterns. The model
builds a user model based on the expectation-maximization (EM)
algorithm. The EM algorithm is used in statistics for finding maximum
likelihood estimates of parameters in probabilistic models, where the
model depends on unobserved latent variables. The experimental
results show that as the number of clusters decreases, the log
likelihood converges toward lower values, and the probability of the
largest cluster decreases as the number of clusters increases in each
treatment.
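A minimal sketch of the EM iteration the abstract refers to, fitting a two-component one-dimensional Gaussian mixture and tracking the log-likelihood (which EM guarantees will not decrease); the data are synthetic, and this is not the paper's navigation model.

```python
import math
import random

random.seed(1)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(5, 1) for _ in range(200)]

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Initial guesses for mixture weights, means, and variances.
w, mu, var = [0.5, 0.5], [1.0, 4.0], [1.0, 1.0]
log_likelihoods = []
for _ in range(30):
    # E-step: responsibility of each component for each point.
    resp = []
    ll = 0.0
    for x in data:
        p = [w[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
        total = sum(p)
        ll += math.log(total)
        resp.append([pk / total for pk in p])
    log_likelihoods.append(ll)
    # M-step: re-estimate parameters from the responsibilities.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
```

The recovered component means converge toward the true centers (0 and 5), and the stored log-likelihood trace is the quantity whose convergence behavior the abstract discusses.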
Abstract: The saturated hydraulic conductivity of soil is an
important property in processes involving water and solute flow in
soils. It is difficult to measure and can be highly variable,
requiring a large number of replicate samples. In this study, 60 sets
of soil samples were collected in the Saqhez region of Kurdistan
province, Iran. Statistics such as the correlation coefficient (R),
root mean square error (RMSE), mean bias error (MBE) and mean
absolute error (MAE) were used to evaluate multiple linear regression
models as the number of data points varied. The multiple linear
regression models were evaluated when only the percentages of sand,
silt, and clay content (SSC) were used as inputs, and when SSC and
bulk density, Bd, (SSC+Bd) were used as inputs. For the 50-point
dataset, the R, RMSE, MBE and MAE values for the SSC method were
0.925, 15.29, -1.03 and 12.51, and for the SSC+Bd method were 0.927,
15.28, -1.11 and 12.92, respectively, for the relationships obtained
from the multiple linear regressions on the data. For the 10-point
dataset, the R, RMSE, MBE and MAE values for the SSC method were
0.725, 19.62, -9.87 and 18.91, and for the SSC+Bd method were 0.618,
24.69, -17.37 and 22.16, respectively, which shows that as the number
of data points increases, the precision of the estimated saturated
hydraulic conductivity increases.
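The four evaluation statistics named above have standard definitions, sketched below; the observed/predicted values are invented for illustration, not the study's hydraulic-conductivity data.

```python
import math

def r_coefficient(obs, pred):
    """Pearson correlation coefficient R."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mbe(obs, pred):
    """Mean bias error: negative values mean under-prediction on average."""
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(p - o) for o, p in zip(obs, pred)) / len(obs)

obs = [12.0, 18.5, 7.2, 25.1, 14.8]    # hypothetical measured values
pred = [11.4, 19.9, 8.0, 23.6, 15.5]   # hypothetical model predictions
```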
Abstract: The persistent problem of rural and urban poverty in
Nigeria, resulting from the people's lack of sustainable livelihood
activities due to the non-diversification of the economy, necessitated
this study. One hundred snail farmers were randomly selected in the
Akure North and Akure South Local Government areas of Ondo State,
Southwest Nigeria, where snail farming is widely practised. Data were
collected through questionnaire administration and on-site observation
of farms. The data obtained were subjected to descriptive statistics,
Student's t-test and regression analysis. The cost-benefit ratio
(CBR) and rate of return on investment (RORI) were calculated in
order to determine the poverty alleviation potential of snail farming
in the study areas. Although snail farming was profitable and viable,
the income from it was below the poverty line. With time, more
knowledge of its farming activities, and more people taking up snail
production, its poverty alleviation and reduction potential will
increase.
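The two profitability measures named above follow their conventional definitions, sketched below; the cost and revenue figures are hypothetical, not the surveyed farms' data.

```python
def cost_benefit_ratio(total_benefit, total_cost):
    """CBR: benefit returned per unit of cost; CBR > 1 means profitable."""
    return total_benefit / total_cost

def rate_of_return_on_investment(total_benefit, total_cost):
    """RORI (%): net return as a percentage of the amount invested."""
    return (total_benefit - total_cost) / total_cost * 100.0

revenue = 150_000.0   # hypothetical gross revenue per farm
cost = 120_000.0      # hypothetical total production cost
cbr = cost_benefit_ratio(revenue, cost)
rori = rate_of_return_on_investment(revenue, cost)
```

With these hypothetical figures the enterprise is profitable (CBR = 1.25, RORI = 25%), which is the kind of comparison against a poverty-line benchmark the study makes.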
Abstract: The use of High Order Statistics (HOS) analysis is
expected to provide many candidate features that can be selected for
pattern recognition. More candidate features can be extracted by
simple manipulation through a specific mathematical function prior to
the HOS analysis. A feature extraction method using HOS analysis
combined with the Difference to the Nth-Power manipulation has been
examined in an application to Automatic Modulation Recognition (AMR),
performing scheme recognition of three digital modulation signals
(QPSK, 16QAM and 64QAM) in an AWGN transmission channel. Simulation
results are reported for HOS analysis up to order 12 and Difference
to the Nth-Power manipulation up to N = 4. The AMR accuracy rate
obtained with the Simple Decision classifier is 90% at SNR > 10 dB,
while with the Voted Decision method it is 96% at SNR > 2 dB.
Abstract: One of the primary uses of higher order statistics in
signal processing has been the detection and estimation of
non-Gaussian signals in Gaussian noise of unknown covariance. This is
motivated by the ability of higher order statistics to suppress additive
Gaussian noise. In this paper, several methods to test for non-
Gaussianity of a given process are presented. These methods include
histogram plot, kurtosis test, and hypothesis testing using cumulants
and bispectrum of the available sequence. The hypothesis testing is
performed by constructing a statistic to test whether the bispectrum
of the given signal is non-zero. A zero bispectrum is not a proof of
Gaussianity. Hence, other tests such as the kurtosis test should be
employed. Examples are given to demonstrate the performance of the
presented methods.
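The kurtosis test mentioned above can be sketched as follows: the sample excess kurtosis of a Gaussian sequence is near zero, so a value far from zero is evidence of non-Gaussianity. The sequences below are synthetic, not the paper's examples.

```python
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: m4 / m2^2 - 3 (zero for a Gaussian)."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    return m4 / (m2 ** 2) - 3.0

random.seed(7)
gaussian = [random.gauss(0, 1) for _ in range(20000)]
uniform = [random.uniform(-1, 1) for _ in range(20000)]   # non-Gaussian

k_gauss = excess_kurtosis(gaussian)   # expected near 0
k_unif = excess_kurtosis(uniform)     # expected near -1.2 (sub-Gaussian)
```

As the abstract cautions for the bispectrum, a single statistic is not conclusive on its own; in practice the kurtosis value would be compared against its sampling variability before declaring non-Gaussianity.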
Abstract: A 7-step method (with 25 sub-steps) to assess risk of
air pollutants is introduced. These steps are: pre-considerations,
sampling, statistical analysis, exposure matrix and likelihood,
dose-response matrix and likelihood, total risk evaluation, and
discussion of findings. All the terms mentioned are well understood;
however, almost all the steps have been modified, improved, and
coupled in such a way that a comprehensive method results.
Accordingly, SADRA (Statistical Analysis-Driven Risk
Assessment) emphasizes extensive and ongoing application of
analytical statistics in traditional risk assessment models. A Sulfur
Dioxide case study validates the claim and provides a good
illustration for this method.
Abstract: Chronic diseases have become prevalent in Taiwan in
recent years, along with economic growth and changes in lifestyle.
According to governmental statistics, hypertension-related disease is
the tenth leading cause of death, with 1,816 deaths attributed
directly to hypertension in 2010. Several other causes of death among
the top ten have been shown to be strongly associated with
hypertension, such as heart disease, cardiovascular disease, and
diabetes. Hypertension, or high blood pressure, is one of the major
indicators of chronic disease and is generally perceived as a major
cause of mortality. The literature generally suggests that regular
physical exercise is helpful in preventing the onset, or easing the
progress, of hypertension. This paper reports in detail the process
and outcomes of an improvement project of physical exercise
intervention specifically for hypertension patients. Physiological
measurements were taken before and after the project, including
weight, waistline, cholesterol (HDL & LDL), blood tests, and
self-perceived health status. The intervention was a six-week
exercise program consisting of three 30-minute tutored physical
exercise sessions per week. The project achieved several gains in
changing the subjects' behavior in terms of many important
biophysical indexes. Around 20% of the participants significantly
improved their cholesterol levels and BMI and changed unhealthy
behaviors. The results of the project were encouraging and should be
a good reference for other groups.
Abstract: Baseball is unique among sports in Taiwan. Baseball has
become a “symbol of the Taiwanese spirit and Taiwan's national
sport”. Taiwan's first professional sports league, the Chinese
Professional Baseball League (CPBL), was established in 1989.
Starters pitch many more innings over the course of a season, and for
a century teams have made all their best pitchers starters. In this
study, we attempt to determine the on-field performance of these
pitchers and which of them won the most CPBL games in 2009. We
utilize the discriminant analysis approach to solve the problem,
examining winning pitchers and their statistics to reliably identify
the best starting pitchers. The data employed in this paper include
innings pitched (IP), earned run average (ERA) and walks plus hits
per inning pitched (WPHIP), provided by the official website of the
CPBL. The results show that Aaron Rakers was the best starting
pitcher in the CPBL. The top 10 CPBL starting pitchers won 8 to 14
games in the 2009 season, whereas Fisher discriminant analysis
predicted that they would win 9 to 20 games, 1 to 7 games more than
their actual counts for the 2009 season.
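A minimal sketch of Fisher discriminant analysis of the kind used above: it finds the projection direction that best separates two classes (here, stronger and weaker starters). The feature rows (IP-, ERA- and WPHIP-like values) are invented for illustration and are not the CPBL statistics.

```python
import numpy as np

rng = np.random.default_rng(3)
# Class 1: "strong starters" (high IP, low ERA, low WPHIP); class 2: the rest.
strong = rng.normal([180.0, 3.0, 1.1], [15.0, 0.4, 0.1], size=(40, 3))
weak = rng.normal([120.0, 5.0, 1.5], [15.0, 0.4, 0.1], size=(40, 3))

m1, m2 = strong.mean(axis=0), weak.mean(axis=0)
# Within-class scatter matrix S_w = S_1 + S_2.
s1 = (strong - m1).T @ (strong - m1)
s2 = (weak - m2).T @ (weak - m2)
sw = s1 + s2
# Fisher direction: w = S_w^{-1} (m1 - m2), maximizing between-class
# scatter relative to within-class scatter.
w = np.linalg.solve(sw, m1 - m2)

# Classify by projecting onto w and thresholding at the midpoint
# of the two class-mean projections.
threshold = 0.5 * (strong @ w).mean() + 0.5 * (weak @ w).mean()
pred_strong = (strong @ w) > threshold
pred_weak = (weak @ w) <= threshold
accuracy = (pred_strong.sum() + pred_weak.sum()) / 80.0
```

On well-separated synthetic classes the discriminant classifies nearly all pitchers correctly, which illustrates why the approach can rank starters reliably from a few per-pitcher statistics.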
Abstract: In recent years, the use of vector variance as a
measure of multivariate variability has received much attention in a
wide range of statistics. This paper deals with a more economical
measure of multivariate variability, defined as the vector variance
minus all duplicated elements. For high dimensional data, this
increases computational efficiency by almost 50% compared to the
original vector variance. Its sampling distribution is investigated
to make its applications possible.
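A minimal sketch of the idea above, assuming the common definition of vector variance VV = Tr(Σ²) = Σᵢⱼ σᵢⱼ². Because the covariance matrix Σ is symmetric, each off-diagonal term appears twice, so summing only over i ≤ j visits roughly half the elements while losing no information; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 8))
sigma = np.cov(X, rowvar=False)

# Full vector variance: sum of squares over all p*p covariance entries.
vv_full = np.sum(sigma ** 2)

# Economical version: only the p*(p+1)/2 entries with i <= j.
iu = np.triu_indices(sigma.shape[0])
vv_half = np.sum(sigma[iu] ** 2)

# The full value is recoverable: double the upper-triangular sum
# and subtract the diagonal (which must be counted only once).
diag_sq = np.sum(np.diag(sigma) ** 2)
vv_recovered = 2.0 * vv_half - diag_sq
```

For a p-dimensional covariance matrix this replaces p² squared terms with p(p+1)/2, approaching the ~50% saving the abstract cites as p grows.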
Abstract: The paper presents an on-line recognition machine
(RM) for continuous/isolated, dynamic and static gestures that arise
in Flight Deck Officer (FDO) training. RM is based on a generic pattern
recognition framework. Gestures are represented as templates using
summary statistics. The proposed recognition algorithm exploits temporal
and spatial characteristics of gestures via dynamic programming
and Markovian process. The algorithm predicts corresponding index
of incremental input data in the templates in an on-line mode.
Accumulated consistency in the sequence of prediction provides a
similarity measurement (Score) between input data and the templates.
The algorithm provides an intuitive mechanism for automatic detection
of start/end frames of continuous gestures. In the present paper,
we consider isolated gestures. The performance of RM is evaluated
using four datasets - artificial (W TTest), hand motion (Yang) and
FDO (tracker, vision-based). RM achieves comparable results which
are in agreement with other on-line and off-line algorithms such as
hidden Markov model (HMM) and dynamic time warping (DTW).
The proposed algorithm has the additional advantage of providing
timely feedback for training purposes.
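Dynamic time warping (DTW), one of the baselines RM is compared against above, can be sketched as follows: it aligns two sequences of different lengths by a minimum-cost warping path. The gesture traces here are toy data, not the paper's datasets.

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Step pattern: diagonal match, insertion, or deletion.
            d[i][j] = cost + min(d[i - 1][j - 1], d[i - 1][j], d[i][j - 1])
    return d[n][m]

# A time-stretched copy of a gesture trace aligns perfectly with the
# original, while an unrelated trace does not.
gesture = [0.0, 0.5, 1.0, 0.5, 0.0]
stretched = [0.0, 0.0, 0.5, 0.5, 1.0, 0.5, 0.0]
other = [1.0, 1.0, 1.0, 1.0, 1.0]
```

This tolerance to local time-stretching is what makes DTW (and the Markovian prediction in RM) suitable for gestures performed at varying speeds.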
Abstract: We present a method to create special domain
collections from news sites. The method only requires a single
sample article as a seed. No prior corpus statistics are needed and the
method is applicable to multiple languages. We examine various
similarity measures and the creation of document collections for
English and Japanese. The main contributions are as follows. First,
the algorithm can build special domain collections from as little as
one sample document. Second, unlike other algorithms it does not
require a second “general” corpus to compute statistics. Third, in our
testing the algorithm outperformed others in creating collections
made up of highly relevant articles.
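One similarity measure of the kind examined above can be sketched as cosine similarity over raw term-frequency vectors, which notably requires no "general" background corpus; the seed text, candidate articles, and tokenization rule below are illustrative assumptions.

```python
import math
import re
from collections import Counter

def term_vector(text):
    """Bag-of-words term-frequency vector over lowercased word tokens."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(v1, v2):
    dot = sum(v1[t] * v2[t] for t in v1)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

seed = "volcano eruption lava ash plume evacuation"
candidates = [
    "lava and ash from the volcano eruption forced an evacuation",
    "the stock market closed higher on quarterly earnings",
]
scores = [cosine_similarity(term_vector(seed), term_vector(c))
          for c in candidates]
```

An on-topic candidate scores well above an off-topic one, so thresholding such scores against the single seed article is one way a collection like the ones described can be grown.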