Abstract: This work proposes data-driven, multiscale quantitative
measures to reveal the underlying complexity of the
electroencephalogram (EEG), applied to a rodent model of
hypoxic-ischemic brain injury and recovery. Because real EEG
recordings are nonlinear and non-stationary across different
frequencies or scales, an approach more suitable than the
conventional single-scale tools is needed for analyzing EEG data.
Here, we present a new framework of complexity measures
considering changing dynamics over multiple oscillatory scales. The
proposed multiscale complexity is obtained by calculating entropies of
the probability distributions of the intrinsic mode functions extracted
by the empirical mode decomposition (EMD) of EEG. To quantify
EEG recording of a rat model of hypoxic-ischemic brain injury
following cardiac arrest, the multiscale version of Tsallis entropy is
examined. To validate the proposed complexity measure, actual EEG
recordings from rats (n=9) experiencing 7 min cardiac arrest followed
by resuscitation were analyzed. Experimental results demonstrate that
the use of the multiscale Tsallis entropy leads to better discrimination
of the injury levels and improved correlations with the neurological
deficit evaluation after 72 hours after cardiac arrest, thus suggesting an
effective metric as a prognostic tool.
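As a rough illustration of the entropy step only (not the authors' pipeline; the IMF inputs, histogram binning, and entropic index q below are assumptions), the Tsallis entropy of each IMF's amplitude distribution could be sketched as:

```python
import numpy as np

def tsallis_entropy(p, q=2.0):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability bins contribute nothing
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def multiscale_tsallis(imfs, q=2.0, bins=16):
    """One Tsallis entropy per oscillatory scale: histogram each IMF's
    amplitudes into a probability distribution, then apply S_q."""
    entropies = []
    for imf in imfs:
        hist, _ = np.histogram(imf, bins=bins)
        p = hist / hist.sum()
        entropies.append(tsallis_entropy(p, q))
    return entropies
```

With q = 2 the measure reduces to 1 minus the sum of squared bin probabilities, so broader amplitude distributions score higher.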
Abstract: In this research the effect of moisture at three levels
(47, 57, and 67% w.b.) on the physical properties of the Pofaki pea
variety, including dimensions, geometric mean diameter, volume,
sphericity index, and surface area, was determined. The influence
of different moisture levels (47, 57, and 67% w.b.), in two loading
orientations (longitudinal and transverse) and at three loading speeds
(4, 6, and 8 mm min⁻¹), on the mechanical properties of pea, such as
maximum deformation, rupture force, rupture energy, toughness, and
the power required to break the pea, was investigated. It was observed in the
physical properties that moisture changes were significant at the 1%
level for dimensions, geometric mean diameter, volume, sphericity
index, and surface area. For the mechanical properties, moisture
changes were significant at the 1% level for maximum deformation,
rupture force, rupture energy, toughness, and breaking power.
Loading speed was significant at the 1% level for maximum
deformation, rupture force, and rupture energy, and at the 5% level
for toughness. Loading orientation was significant at the 1% level for
maximum deformation, rupture force, rupture energy, and toughness,
and at the 5% level for power. The interaction of speed and
orientation was significant for rupture energy at the 1% level and for
toughness at the 5% probability level. The interaction of moisture and
speed was significant for rupture force and rupture energy at the 1%
level and for toughness at the 5% probability level. The interaction of
orientation and moisture was significant at the 1% level for rupture
energy and toughness.
Abstract: We investigate relaxation dynamics of a quantum
dipole emitter (QDE), e.g., a molecule or quantum dot, located near a
metal nanoparticle (MNP) exhibiting a dipolar localized surface
plasmon (LSP) resonance at the frequency of the QDE radiative
transition. It is shown that the QDE-MNP characteristic relaxation
time is much shorter than the QDE relaxation time in free space but
much longer than the LSP lifetime. It is also shown that energy
dissipation in the QDE-MNP system is relatively weak, with the
probability of photon emission being about 0.75, a number which,
rather surprisingly, does not explicitly depend on the metal
absorption characteristics. The degree of entanglement measured by
the concurrence takes its maximum value when the distances between
the QDEs and the metal nanoparticle are approximately equal.
Abstract: This paper introduces an original method for
guaranteed estimation of the accuracy for an ensemble of Lipschitz
classifiers. The solution was obtained as a finite closed set of
alternative hypotheses, which contains an object of classification with
probability of not less than the specified value. Thus, the
classification is represented by a set of hypothetical classes. In this
case, the smaller the cardinality of the discrete set of hypothetical
classes is, the higher the classification accuracy. Experiments have
shown that increasing the cardinality of the classifier ensemble
reduces the cardinality of this set of hypothetical classes. The
problem of the guaranteed estimation of the accuracy for an ensemble
of Lipschitz classifiers is relevant in multichannel classification of
target events in C-OTDR monitoring systems. Results of the practical
application of the suggested approach to accuracy control in C-OTDR
monitoring systems are presented.
Abstract: Previous studies on financial distress prediction choose
the conventional failing and non-failing dichotomy; however, the
distressed extent differs substantially among different financial
distress events. To address this problem, the categories “non-distressed”,
“slightly distressed”, and “reorganization and bankruptcy” are used in our article
to approximate the continuum of corporate financial health. This paper
explains different financial distress events using the two-stage method.
First, this investigation adopts firm-specific financial ratios, corporate
governance and market factors to measure the probability of various
financial distress events based on multinomial logit models.
Specifically, the bootstrapping simulation is performed to examine the
difference of estimated misclassifying cost (EMC). Second, this work
further applies macroeconomic factors to establish the credit cycle
index and determines the distressed cut-off indicator of the two-stage
models using such index. Two different models, one-stage and
two-stage prediction models are developed to forecast financial
distress, and the results acquired from different models are compared
with each other and with the collected data. The findings show that
the one-stage model has a lower misclassification error rate, and is
thus more accurate, than the two-stage model.
Abstract: This paper presents two techniques, local feature
extraction using the image spectrum and low-frequency spectrum
modelling using GMMs, to capture the underlying statistical
information and improve the performance of a face recognition
system. Local spectrum features are extracted using overlapping
sub-block windows mapped onto the face image. For each block, the
spatial domain is transformed to the frequency domain using the
DFT. Low-frequency coefficients are preserved by applying a
rectangular mask to the spectrum of the facial image and discarding
the high-frequency coefficients. Low-frequency information is
non-Gaussian in the feature space, and by using a combination of several
Gaussian functions with different statistical properties, the best
feature representation can be modelled as a probability density
function. The recognition process is performed using the maximum
likelihood value computed from pre-calculated GMM components.
The method is tested on the FERET datasets and achieves a 92%
recognition rate.
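A minimal sketch of the low-frequency masking step described above, under assumed block and mask sizes (the actual window geometry, mask dimensions, and the GMM stage are not specified here):

```python
import numpy as np

def low_freq_features(block, keep=2):
    """DFT a face sub-block, then keep only the low-frequency
    coefficients selected by a rectangular mask around the
    (shifted) zero-frequency component."""
    spec = np.fft.fftshift(np.fft.fft2(block))  # center the zero frequency
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    mask = np.zeros_like(spec, dtype=bool)
    # rectangular low-pass mask of side 2*keep around the spectrum center
    mask[cy - keep:cy + keep, cx - keep:cx + keep] = True
    return np.abs(spec[mask])  # magnitudes of retained low-frequency coefficients
```

The retained magnitudes would then serve as the feature vector fed to the Gaussian mixture model.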
Abstract: IEEE 802.11a/b/g standards provide multiple
transmission rates, which can be changed dynamically according to the
channel condition. Cooperative communications were introduced to
improve the overall performance of wireless LANs with the help of
relay nodes with higher transmission rates. The cooperative
communications are based on the fact that the transmission is much
faster when sending data packets to a destination node through a relay
node with higher transmission rate, rather than sending data directly to
the destination node at low transmission rate. To apply the cooperative
communications in wireless LAN, several MAC protocols have been
proposed. Some of these can result in collisions among relay nodes in a
dense network. To solve this problem, we propose a new protocol in
which relay nodes are grouped based on their transmission rates, and
only relay nodes in the highest group attempt to gain channel
access. Performance evaluation is conducted using simulation, and
shows that the proposed protocol significantly outperforms the
previous protocol in terms of throughput and collision probability.
Abstract: Building loss estimation methodologies, which have
advanced considerably in recent decades, are usually used to
estimate the social and economic impacts resulting from seismic
structural damage. In accordance with these methods, this paper presents the
evaluation of an annual loss probability of a reinforced concrete
moment resisting frame designed according to Korean Building Code.
The annual loss probability is defined by (1) a fragility curve obtained
from a capacity spectrum method which is similar to a method adopted
from HAZUS, and (2) a seismic hazard curve derived from annual
frequencies of exceedance per peak ground acceleration. Seismic
fragilities are computed to calculate the annual loss probability of a
certain structure using functions depending on structural capacity,
seismic demand, structural response and the probability of exceeding
damage state thresholds. This study carried out a nonlinear static
analysis to obtain the capacity of a RC moment resisting frame
selected as a prototype building. The analysis results show that the
probability of extensive structural damage in the prototype building
is expected to be 0.01% in a year.
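The combination of a fragility curve with a hazard curve described above amounts to a discrete convolution over ground-motion levels; the curves and grid below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def annual_damage_probability(fragility, hazard_freq):
    """Annual probability of reaching a damage state:
    sum over PGA bins of P(damage | a) times the annual
    frequency of ground motions falling in that bin, where
    hazard_freq[i] is the annual frequency of exceeding the
    i-th PGA level (a decreasing curve on the same grid)."""
    bin_freq = -np.diff(hazard_freq)                   # frequency of each PGA bin
    frag_mid = 0.5 * (fragility[:-1] + fragility[1:])  # fragility at bin midpoints
    return float(np.sum(frag_mid * bin_freq))
```

With a fragility of 1 everywhere, the result collapses to the total annual frequency of any ground motion on the grid, which is a useful sanity check on the discretization.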
Abstract: An attempt has been made in the present
communication to elucidate the efficacy of robust ANOVA methods
to analyse horticultural field experimental data in the presence of
outliers. The results obtained support the use of robust ANOVA
methods, as there was a substantial reduction in the error mean
square, and hence in the probability of committing a Type I error, as
compared to the regular approach.
Abstract: The performance of different filtering approaches depends
on the modeling of the dynamical system and on the algorithm
structure. For modeling and smoothing the data, the evaluation of the
posterior distribution in each filtering approach should be chosen
carefully. In this paper, different filtering approaches, namely the
Kalman filter, EKF, UKF, and EKS, and the RTS smoother, are
simulated on trajectory-tracking tasks, and the accuracy and
limitations of these approaches are explained. The model probability
under the different filters is then compared, and finally the effect of
the noise variance on the estimation is described with simulation
results.
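As a generic illustration of the simplest member of this family (a scalar Kalman filter with assumed random-walk dynamics and noise variances, not the paper's models):

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: the state is a random walk with
    process variance q, observed through measurements with variance r."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q            # predict: random-walk state, covariance grows
        k = p / (p + r)      # Kalman gain balances model vs. measurement
        x = x + k * (z - x)  # update state with the measurement residual
        p = (1.0 - k) * p    # shrink covariance after the update
        estimates.append(x)
    return np.array(estimates)
```

The EKF and UKF generalize the predict/update pair to nonlinear models, while the RTS smoother adds a backward pass over these filtered estimates.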
Abstract: The idea of the asynchronous transmission in
wavelength division multiplexing (WDM) ring MANs is studied in
this paper. In particular, we present an efficient access technique to
coordinate the collision-free transmission of variable-size IP
traffic in WDM ring core networks. Each node is equipped with a
tunable transmitter and a tunable receiver. In this way, all the
wavelengths are exploited for both transmission and reception. In
order to evaluate the performance measures of average throughput,
queuing delay and packet dropping probability at the buffers, a
simulation model that assumes symmetric access rights among the
nodes is developed based on Poisson statistics. Extensive numerical
results show that, apart from high bandwidth exploitation over a
wide range of offered load, the proposed protocol achieves fairness
of queuing delay and of dropping events among the different
packet-size categories.
Abstract: Vertical handover (VHO) among different
communication technologies, ensuring uninterrupted service
continuity, is one of the most important performance parameters in
heterogeneous network environments. In an integrated Universal
Mobile Telecommunication System (UMTS) and Wireless Local
Area Network (WLAN), the WLAN is given an inherent priority over
UMTS because of its high data rates at low cost. Mobile users
therefore want to be associated with the WLAN for as much of the
time as possible while roaming, to enjoy the best possible services at
low cost. This motivates reducing the number of VHOs. In this work the
reduction of the number of VHOs with respect to a varying number of
WLAN Access Points (APs) in an integrated UMTS and WLAN
network is investigated through simulation, to provide the best
possible cost-effective service to users. The simulation has been
carried out for an area of (7800 × 9006) m², where the COST-231 Hata model
and 3GPP (TR 101 112 V 3.1.0) specified models are used for
WLAN and UMTS path loss models respectively. The handover
decision is triggered based on the received signal level as compared
to the fade margin. Fade margin gives a probabilistic measure of
the reliability of the communication link. A relationship between the
number of WLAN APs and the number of VHOs is also established
in this work.
Abstract: This paper focuses on the assessment of the air
pollution and morbidity relationship in Tunisia. Air pollution is
measured by ozone air concentration and the morbidity is measured
by the number of respiratory-related restricted activity days during
the 2-week period prior to the interview. Socioeconomic data are also
collected in order to adjust for any confounding covariates. Our
sample is composed of 407 Tunisian respondents; 44.7% are women,
the average age is 35.2, nearly 69% live in a house built after
1980, and 27.8% have reported at least one day of respiratory-related
restricted activity. The model consists of the regression of the
number of respiratory-related restricted activity days on the air
quality measure and the socioeconomic covariates. In order to correct
for zero-inflation and heterogeneity, we estimate several models
(Poisson, negative binomial, zero inflated Poisson, Poisson hurdle,
negative binomial hurdle and finite mixture Poisson models).
Bootstrapping and post-stratification techniques are used in order to
correct for any sample bias. According to the Akaike information
criterion, the hurdle negative binomial model has the best goodness
of fit. The main result indicates that, after adjusting for
socioeconomic data, the ozone concentration increases the probability
of a positive number of restricted activity days.
Abstract: This paper presents optimization of the makespan for an
‘n’-job, ‘m’-machine flexible job shop scheduling problem with
sequence-dependent setup times using a genetic algorithm (GA)
approach. A restart scheme has also been applied to prevent
premature convergence. Two case studies are taken into
consideration. Results are obtained by considering crossover
probability (pc = 0.85) and mutation probability (pm = 0.15). Five
simulation runs are performed for each case study, and the minimum
value among them is taken as the optimal makespan. Results indicate that
optimal makespan can be achieved with more than one sequence of
jobs in a production order.
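The crossover and mutation probabilities mentioned above act as gates on permutation operators; a hedged sketch with hypothetical operators (order crossover and swap mutation, which the abstract does not specify) might look like:

```python
import random

def mutate(seq, pm=0.15, rng=random):
    """With probability pm, swap two random positions of a job sequence."""
    seq = list(seq)
    if rng.random() < pm:
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

def order_crossover(p1, p2, pc=0.85, rng=random):
    """With probability pc, copy a random slice of parent p1 into the
    child and fill the remaining positions in the order they appear
    in parent p2, preserving a valid job permutation."""
    if rng.random() >= pc:
        return list(p1)  # no crossover: child is a copy of p1
    a, b = sorted(rng.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child[a:b]]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child
```

Both operators preserve the job multiset, so every offspring remains a feasible sequence for makespan evaluation.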
Abstract: The exact theoretical expression describing the
probability distribution of nonlinear sea-surface elevations derived
from the second-order narrowband model has a cumbersome form
that requires numerical computation and is not well suited to
theoretical or practical applications. Here, the same narrowband
model is re-examined to develop a simpler closed-form approximation
suitable for theoretical and practical applications. The salient features of the
approximate form are explored, and its relative validity is verified
with comparisons to other readily available approximations, and
oceanic data.
Abstract: The IEEE 802.22 working group aims to use the
Digital Video Broadcasting-Terrestrial (DVB-T) bands for data
communication in rural areas without interfering with the TV broadcast.
In this paper, we arrive at a closed-form expression for average
detection probability of Fusion center (FC) with multiple antenna
over the κ − μ fading channel model. We consider a centralized
cooperative multiple-antenna network for reporting. The DVB-T
samples forwarded by the secondary users (SUs) are combined using
a maximum ratio combiner at the FC, and energy detection is
performed to make the decision. Since the fading effects of the
channel degrade the detection probability of the FC, a generalized
independent and identically distributed (IID) κ − μ channel and an
additive white Gaussian noise (AWGN) channel are considered for
reporting and sensing, respectively. The proposed system's
performance is verified through
simulation results.
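A generic energy detector of the kind referenced above (the threshold and sample model are placeholders; the paper's κ − μ fading analysis is not reproduced here):

```python
import numpy as np

def energy_detect(samples, threshold):
    """Energy detection: declare the primary signal present when the
    average received energy exceeds the decision threshold."""
    test_statistic = float(np.mean(np.abs(samples) ** 2))
    return test_statistic > threshold
```

In a cooperative setting, each SU's samples would first be combined (e.g., by maximum ratio combining) at the fusion center before this decision rule is applied.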
Abstract: Underwater acoustic network is one of the rapidly
growing areas of research and finds different applications for
monitoring and collecting various data for environmental studies.
Communication among dynamic nodes and the high error probability
of the acoustic medium force Underwater Sensor Networks (UWSNs)
to consume more energy than traditional sensor networks. Developing
an energy-efficient routing protocol is a fundamental and key
challenge, because all the sensor nodes are battery-powered and
cannot be easily replaced in UWSNs. This paper surveys recent
routing techniques that mainly focus on energy efficiency.
Abstract: Tumor is an uncontrolled growth of tissues in any part
of the body. Tumors are of different types and they have different
characteristics and treatments. A brain tumor is inherently serious and
life-threatening because it grows within the limited space of the
intracranial cavity (the space formed inside the skull). Locating the
tumor within an MR (magnetic resonance) image of the brain is an
integral part of the treatment of brain tumors. This segmentation task requires
classification of each voxel as either tumor or non-tumor, based on
the description of the voxel under consideration. Many studies are
going on in the medical field using Markov Random Fields (MRF) in
segmentation of MR images. Even though the segmentation process
is better, computing the probability and estimation of parameters is
difficult. In order to overcome the aforementioned issues, Conditional
Random Field (CRF) is used in this paper for segmentation, along
with modified artificial bee colony optimization and the modified
fuzzy possibilistic c-means (MFPCM) algorithm. This work mainly
focuses on reducing the computational complexities found in existing
methods and on achieving higher accuracy. The efficiency of this
work is evaluated using parameters such as region non-uniformity,
correlation, and computation time. The
experimental results are compared with the existing methods such as
MRF with improved Genetic Algorithm (GA) and MRF-Artificial
Bee Colony (MRF-ABC) algorithm.
Abstract: In this paper the CVA computation for an interest rate
swap is presented based on its rating. Ratings and default
probabilities given by Moody’s Investors Service are used to
calculate the CVA for a specific swap with different maturities. With
this computation, the influence of rating variation on the CVA can be
shown. Application
is made to the analysis of Greek CDS variation during the period of
the Greek crisis between 2008 and 2011. The main point is the
determination of the correlation between the fluctuation of the Greek
CDS cumulative value and the variation of the swap CVA due to
rating changes.
Abstract: At-site flood frequency analysis is used to estimate
flood quantiles when at-site record length is reasonably long. In
Australia, FLIKE software has been introduced for at-site flood
frequency analysis. The advantage of FLIKE is that, for a given
application, the user can compare a number of most commonly
adopted probability distributions and parameter estimation methods
relatively quickly using a windows interface. The new version of
FLIKE has been incorporated with the multiple Grubbs and Beck test
which can identify multiple numbers of potentially influential low
flows. This paper presents a case study considering six catchments in
eastern Australia which compares two outlier identification tests
(original Grubbs and Beck test and multiple Grubbs and Beck test)
and two commonly applied probability distributions (Generalized
Extreme Value (GEV) and Log Pearson type 3 (LP3)) using FLIKE
software. It has been found that the multiple Grubbs and Beck test
when used with LP3 distribution provides more accurate flood
quantile estimates than when LP3 distribution is used with the
original Grubbs and Beck test. Between these two methods, the
differences in flood quantile estimates have been found to be up to
61% for the six study catchments. It has also been found that GEV
distribution (with L moments) and LP3 distribution with the multiple
Grubbs and Beck test provide quite similar results in most of the
cases; however, a difference up to 38% has been noted for flood
quantiles for annual exceedance probability (AEP) of 1 in 100 for one
catchment. This finding needs to be confirmed with a greater number
of stations across other Australian states.