Abstract: Regional earthquake early warning (EEW) systems are not well suited to Taiwan, where most destructive seismic hazards arise from inland earthquakes. For such events, the lead-time provided by a regional EEW system before the destructive wave arrives can shrink to zero. An on-site EEW system, by contrast, can provide more lead-time in regions close to the epicenter, since it requires seismic information only from the target site. Instead of leveraging information from several stations, an on-site system extracts P-wave features from the first few seconds of vertical ground acceleration recorded at a single station and predicts the intensity of the oncoming earthquake at that station from these features. Because seismometers can be triggered by non-earthquake events, such as the passing of a truck or other human activity, a seismometer was installed at three different locations on the same site to reduce the likelihood of false alarms, and the performance of the EEW system at each sensor location was compared. The results show that the ground of the first floor of a school building may be a good choice, since false alarms are reduced there and the installation and maintenance costs are the lowest.
Abstract: Attribute or feature selection is one of the basic strategies for improving the performance of data classification tasks and, at the same time, reducing the complexity of classifiers; it is particularly fundamental when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven to be a very effective choice for consistently reducing the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children - II ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology that integrates feature selection for unsupervised classification, model evaluation, decision making (choosing the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.
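The wrapper idea in the abstract above — letting a classifier score candidate feature subsets during the search — can be reduced to a minimal single-objective sketch. The data, the nearest-centroid scorer, and the hill-climbing search below are all illustrative stand-ins; the paper's actual pipeline uses EM, C4.5, and the multi-objective algorithms ENORA and NSGA-II.

```python
import random

def project(x, feats):
    return [x[f] for f in feats]

def centroids(X, y, feats):
    # class centroids restricted to the selected feature subset
    cents = {}
    for c in set(y):
        pts = [project(X[i], feats) for i in range(len(X)) if y[i] == c]
        cents[c] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def accuracy(X, y, feats):
    # resubstitution accuracy of a nearest-centroid classifier (toy wrapper objective)
    cents = centroids(X, y, feats)
    def pred(x):
        xs = project(x, feats)
        return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(xs, cents[c])))
    return sum(pred(x) == t for x, t in zip(X, y)) / len(y)

def wrapper_select(X, y, n_feats, iters=100, seed=0):
    # random-mutation hill climbing over feature subsets: the classifier itself
    # scores each candidate subset, which is what makes this a "wrapper" method
    rng = random.Random(seed)
    best = [rng.random() < 0.5 for _ in range(n_feats)]
    if not any(best):
        best[0] = True
    best_acc = accuracy(X, y, [i for i, b in enumerate(best) if b])
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(n_feats)] ^= True
        feats = [i for i, b in enumerate(cand) if b]
        if not feats:
            continue
        acc = accuracy(X, y, feats)
        if acc >= best_acc:
            best, best_acc = cand, acc
    return [i for i, b in enumerate(best) if b], best_acc

# feature 0 separates the classes; feature 1 is noise
X = [[0.0, 3.0], [0.1, -2.0], [0.2, 1.0], [1.0, 2.5], [1.1, -1.0], [1.2, 0.5]]
y = [0, 0, 0, 1, 1, 1]
feats, acc = wrapper_select(X, y, n_feats=2)
print(feats, acc)
```

The search reliably keeps the informative feature 0, since dropping it lowers the wrapper score; a multi-objective version would keep a Pareto front of subsets rather than a single best one.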
Abstract: Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance in its application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler: given the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
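The shape flexibility of the Weibull distribution mentioned above can be checked numerically: with density f(x) = (k/λ)(x/λ)^(k-1) exp(-(x/λ)^k), the shape parameter k < 1, = 1, > 1 gives decreasing, constant, and increasing hazard rates. The sketch below uses only the standard two-parameter Weibull, not the paper's bi-Weibull diffusion model.

```python
import math
import random

def weibull_pdf(x, k, lam):
    # f(x) = (k/lam) * (x/lam)**(k-1) * exp(-(x/lam)**k), for x >= 0
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

def weibull_sample(k, lam, rng):
    # inverse-transform sampling: x = lam * (-ln(1 - u))**(1/k)
    u = rng.random()
    return lam * (-math.log(1.0 - u)) ** (1.0 / k)

rng = random.Random(42)
for k in (0.5, 1.0, 2.0):  # decreasing, constant, increasing hazard
    mean_mc = sum(weibull_sample(k, 1.0, rng) for _ in range(20000)) / 20000
    mean_th = math.gamma(1.0 + 1.0 / k)  # E[X] = lam * Gamma(1 + 1/k)
    print(k, round(mean_mc, 2), round(mean_th, 2))
```

With k = 1 the density reduces to the exponential, and the Monte Carlo means match the closed-form λΓ(1 + 1/k), which is one way the distribution "assumes the characteristics" of several families.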
Abstract: The objective of this paper is to study the work of children and adolescents and the vicious circle of poverty from the perspective of Gunnar Myrdal's theory of circular cumulative causation. The aim is to show that if a person starts working in the juvenile phase of life, they will be classified as poor or extremely poor as an adult, which can be observed in the case of Brazil, more specifically in the north and northeast. To do this, the methodology used was statistical and econometric analysis applying a probit model. The main results show that if people reside in the northeastern region of Brazil, have a low educational level, and start their professional life before the age of 18, the likelihood that they will be poor or extremely poor increases. There is a consensus in the literature that one of the causes of the intergenerational transmission of poverty is child labor, because those who start their professional life while still in childhood or adolescence end up sacrificing their studies. Because of their low level of education, these children or adolescents are forced to perform low-paid functions and abandon school, becoming, in the future, people classified as poor or extremely poor. As a result of poverty, parents may be forced to send their children out to work when they are young, so that in the future the children also become poor adults, a process characterized as the "vicious circle of poverty."
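The probit specification used above models P(poor = 1 | x) = Φ(β₀ + β₁x), with Φ the standard normal CDF. A minimal sketch on synthetic data follows; the coefficients, covariate, and grid search are illustrative only (a real analysis, like the paper's, would use proper survey data and Newton-Raphson or a statistics package).

```python
import math
import random

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_loglik(b0, b1, xs, ys):
    # log-likelihood of a probit model with a single covariate
    ll = 0.0
    for x, y in zip(xs, ys):
        p = min(max(norm_cdf(b0 + b1 * x), 1e-12), 1.0 - 1e-12)
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll

rng = random.Random(0)
true_b0, true_b1 = -0.5, 1.5  # b1 > 0: the factor raises the probability of the outcome
xs = [rng.gauss(0.0, 1.0) for _ in range(800)]
ys = [1 if rng.random() < norm_cdf(true_b0 + true_b1 * x) else 0 for x in xs]

# coarse grid search for the maximum likelihood estimates
grid = [i / 4.0 for i in range(-10, 11)]
b0_hat, b1_hat = max(((b0, b1) for b0 in grid for b1 in grid),
                     key=lambda c: probit_loglik(c[0], c[1], xs, ys))
print(b0_hat, b1_hat)
```

The recovered coefficients land near the true (−0.5, 1.5) used to simulate the data, illustrating how a positive coefficient on "started working before 18" would translate into a higher predicted probability of poverty.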
Abstract: The purpose of this article is to find a method of comparing designs for ordinal regression models using quantile dispersion graphs in the presence of linear predictor misspecification. The true relationship between the response variable and the corresponding control variables is usually unknown. The experimenter assumes a certain form for the linear predictor of the ordinal regression model, but the assumed form may not always be correct. Thus, the maximum likelihood estimates (MLE) of the unknown parameters of the model may be biased due to misspecification of the linear predictor. In this article, the uncertainty in the linear predictor is represented by an unknown function. An algorithm is provided to estimate the unknown function at the design points where observations are available, and the function is then estimated at all points of the design region using multivariate parametric kriging. The comparison of the designs is based on a scalar-valued function of the mean squared error of prediction (MSEP) matrix, which incorporates both the variance and the bias of the prediction caused by the misspecification of the linear predictor. The designs are compared using the quantile dispersion graphs approach; the graphs also visually depict the robustness of the designs to changes in the parameter values. Numerical examples are presented to illustrate the proposed methodology.
Abstract: In recent decades, rapid and improper changes in land-use have been associated with consequences such as natural resource degradation and environmental pollution. Detecting changes in land-use is one of the tools for natural resource management and for assessing changes in ecosystems. The aim of this research is to study the land-use changes in the Haraz basin, an area of 677,000 hectares, over a 15-year period (1996 to 2011) using LANDSAT data. The quality of the images was first evaluated, and various enhancement methods for creating synthetic bands were used in the analysis. Separate training sites were selected for each image. The images of each period were then classified into 9 classes using the supervised classification method with the maximum likelihood algorithm. Finally, the changes were extracted in a GIS environment. The results show that these changes are an alarming signal for the future status of the Haraz basin: 27% of the area has changed, reflecting the conversion of rangelands to bare land and dry farming, and of dense forest to sparse forest, horticulture, farmland, and residential areas.
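Maximum likelihood classification, as used in the abstract above, assigns each pixel to the class whose fitted (typically Gaussian) spectral model gives it the highest likelihood. A minimal sketch with diagonal covariances on toy two-band data follows; the band values and class names are illustrative, not taken from the study.

```python
import math

def fit_class(samples):
    # per-band mean and variance: a diagonal-covariance Gaussian class model
    n = len(samples)
    nbands = len(samples[0])
    means = [sum(s[b] for s in samples) / n for b in range(nbands)]
    vars_ = [sum((s[b] - means[b]) ** 2 for s in samples) / n for b in range(nbands)]
    return means, [max(v, 1e-6) for v in vars_]  # floor variance for stability

def log_likelihood(pixel, model):
    means, vars_ = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
               for x, m, v in zip(pixel, means, vars_))

def classify(pixel, models):
    # maximum likelihood rule: pick the class with the highest log-likelihood
    return max(models, key=lambda c: log_likelihood(pixel, models[c]))

# toy training sites: two bands, e.g. red and near-infrared reflectance
training = {
    "forest":    [[0.05, 0.60], [0.06, 0.55], [0.04, 0.65]],
    "bare_land": [[0.30, 0.35], [0.35, 0.30], [0.32, 0.33]],
}
models = {c: fit_class(s) for c, s in training.items()}
print(classify([0.05, 0.58], models))  # pixel near the forest signature
```

A full remote-sensing workflow fits one such model per class from the training sites of each image and then applies the rule pixel by pixel, usually with full covariance matrices rather than the diagonal ones assumed here.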
Abstract: The oral cavity can be the site of early manifestations of mucocutaneous disorders (MD) or the only site of occurrence of these disorders. It can also exhibit oral lesions with simultaneous associated skin lesions. MD involving the oral mucosa commonly present with signs such as ulcers, vesicles, and bullae. The unique environment of the oral cavity may modify these signs of disease, making clinical diagnosis an arduous task. In addition to this unique environment, the overlapping signs of the various mucocutaneous disorders make the clinical diagnosis even more intricate. The aim of this review is to present the oral signs of dermatological disorders with common oral involvement and to emphasize their importance in the early detection of systemic disorders, as well as to highlight the necessity of an oral examination by the dermatologist when examining skin lesions. Prior to the oral examination, it is imperative for dermatologists and dental clinicians to have knowledge of oral anatomy, of the impact of various diseases on the oral mucosa, and of the characteristic features of the various oral mucocutaneous lesions. An initial clinical oral examination may help in the early diagnosis of MD, whereas failure to identify the oral manifestations may reduce the likelihood of early treatment and lead to more serious problems. This paper reviews the oral manifestations of immune-mediated dermatological disorders with common oral manifestations.
Abstract: Sludge originates from the process of wastewater treatment. It is the byproduct of wastewater treatment, containing concentrated heavy metals, poorly biodegradable trace organic compounds, and potentially pathogenic organisms (viruses, bacteria, etc.), which are usually difficult to treat or dispose of. China, like other countries, is no stranger to the challenges posed by the increase of wastewater. The treatment and disposal of sludge have been a problem for most cities in China, a problem exacerbated by issues such as lack of technology and funding. Suitable methods for the local climatic conditions are still unavailable for modern cities in China. Against this background, this paper describes the methods used for the treatment and disposal of sludge from industries and suggests a suitable method for treatment and disposal in Chongqing, China. The research found that the highest treatment rate of sludge in Chongqing was 10.08%, and that the industrial waste piping system is not separated from the domestic system. Considering the proliferation of industry and urbanization, the production of sludge in Chongqing is likely to increase, and if it is not properly managed, this may lead to adverse health and environmental effects. Disposal costs and methods for Chongqing were also included in this paper's analysis. The research showed that incineration is the most expensive method of sludge disposal in Chongqing and in China generally. Subsequent research therefore considered alternatives such as composting, which represents a relatively cheap waste disposal method given the vast population, current technology, and economic conditions of Chongqing, as well as of China at large.
Abstract: Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN, partly because of the large number of parameters and large datasets required in the calibration process. To overcome this practical issue, a framework for the automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration, and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in Gold Coast. For comparison, simulation outcomes for the same three catchments obtained with the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) was found reasonable for the three study catchments. The main benefit of ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated prediction uncertainty can be obtained, whereas MIKE URBAN provides only a point estimate. Based on the results of the analysis, the developed ABC framework appears to perform well for automatic calibration.
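The ABC idea used above can be reduced to a few lines: draw parameters from the prior, run the simulator, and keep only the draws whose simulated summary statistic lands close to the observed one. The sketch below uses a toy Gaussian-mean problem, not the time-area catchment model; the tolerance, prior, and summary are all illustrative choices.

```python
import random
import statistics

def abc_rejection(observed_mean, n_obs, prior, simulate, eps, draws, rng):
    # keep prior draws whose simulated summary lies within eps of the data's
    accepted = []
    for _ in range(draws):
        theta = prior(rng)
        sim = simulate(theta, n_obs, rng)
        if abs(statistics.mean(sim) - observed_mean) < eps:
            accepted.append(theta)
    return accepted  # an approximate posterior sample

rng = random.Random(7)
data = [rng.gauss(3.0, 1.0) for _ in range(50)]  # stand-in "observed" series
post = abc_rejection(
    observed_mean=statistics.mean(data),
    n_obs=50,
    prior=lambda r: r.uniform(0.0, 10.0),           # flat prior on the mean
    simulate=lambda th, n, r: [r.gauss(th, 1.0) for _ in range(n)],
    eps=0.2,
    draws=5000,
    rng=rng,
)
print(len(post), statistics.mean(post))
```

Because the output is a sample from an (approximate) posterior rather than a single value, credible intervals for predictions fall out directly, which is exactly the advantage over a point-estimate calibration noted in the abstract.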
Abstract: Digital technologies offer many opportunities in the design and implementation of brand communication and advertising. Augmented reality (AR) is an innovative technology in marketing communication that builds on the fact that virtual interaction with a product ad offers additional value to consumers. AR enables consumers to obtain (almost) real product experiences by way of virtual information even before the purchase of a product. The aim of AR applications in advertising is the in-depth examination of product characteristics to enhance both product knowledge and brand knowledge. Interactive ad design provides observers with an intense examination of a specific advertising message and therefore leads to better brand knowledge. The elaboration likelihood model and the central route to persuasion strongly support this argumentation. Nevertheless, AR in brand communication is still at an initial stage, and scientific findings about the impact of AR on information processing and brand attitude are therefore rare. The aim of this paper is to empirically investigate the potential of AR applications in combination with traditional print advertising. To that effect, an experimental design with different levels of interactivity is built to measure the impact of the interactivity of an ad on different variables of advertising effectiveness.
Abstract: This research provides a technical account of estimating transition probabilities using a time-homogeneous Markov jump process applied to South African HIV/AIDS data from Statistics South Africa. It employs a maximum likelihood estimation (MLE) model to explore the possible influence of transition probabilities on mortality cases, based on actual Statistics South Africa data. This was conducted via an integrated demographic and epidemiological model of the South African HIV/AIDS epidemic. The model was fitted to age-specific HIV prevalence data and recorded death data using the MLE model. Though previous model results suggest that HIV in South Africa has declined and AIDS mortality rates fell from 2002 to 2013, our results differ evidently from the generally accepted HIV models for South Africa (Spectrum/EPP and ASSA2008). However, supplementary research is needed to enhance the demographic parameters in the model and to apply it to each of the nine provinces of South Africa.
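For a time-homogeneous Markov jump process with generator matrix Q, the transition probability matrix over time t is P(t) = exp(Qt). A minimal sketch with a truncated power series on a toy three-state illness-death model follows; the states and rates are illustrative, not the South African estimates.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_exp(Q, t, terms=30):
    # P(t) = exp(Qt) = sum over k of (Qt)^k / k!, truncated after `terms` terms
    n = len(Q)
    Qt = [[q * t for q in row] for row in Q]
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, Qt)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

# states: 0 = healthy, 1 = ill, 2 = dead (absorbing); each row of Q sums to zero
Q = [[-0.15, 0.10, 0.05],
     [0.00, -0.30, 0.30],
     [0.00, 0.00, 0.00]]
P = mat_exp(Q, t=2.0)
print([round(p, 4) for p in P[0]])  # transition probabilities from "healthy" over t = 2
```

Because each row of Q sums to zero, every row of P(t) sums to one, and the absorbing "dead" state stays absorbing; MLE fitting would then adjust the off-diagonal rates of Q to match observed prevalence and death counts.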
Abstract: Purpose: The key aim of the research was to identify the secondary stressors experienced by businesses affected by single or repeated flooding, to determine to what extent businesses were affected by these stressors, and to assess any resulting impact on health. Additionally, the research aimed to establish the likelihood of businesses being re-exposed to the secondary stressors by assessing awareness of flood risk, implementation of property protection measures, and the level of community resilience. Design/methodology/approach: The chosen research method involved the distribution of a questionnaire survey to businesses affected by either single or repeated flood events. The questionnaire included the Impact of Event Scale (a 15-item self-report measure which assesses subjective distress caused by traumatic events). Findings: 55 completed questionnaires were returned by flood impacted businesses. 89% of the businesses had sustained internal flooding, while 11% had experienced external flooding. The results established that the key secondary stressors experienced by businesses, in order of priority, were: flood damage, fear of recurring flooding, prevention of access to the premises/closure, loss of income, repair works, length of closure, and insurance issues. There was a lack of preparedness for potential future floods and consequent vulnerability to the emergence of secondary stressors among flood affected businesses, as flood resistance and flood resilience measures had only been implemented by 11% and 13% respectively. In relation to the psychological repercussions, the Impact of Event scores suggested potential prevalence of post-traumatic stress disorder (PTSD) among 8 out of 55 respondents (15%). Originality/value: The results improve understanding of the enduring repercussions of flood events on businesses, indicating that not only residents may be susceptible to the detrimental health impacts of flood events, and that single flood events may be just as likely as recurring flooding to contribute to ongoing stress. Lack of financial resources is a possible explanation for the lack of implementation of property protection measures among businesses, despite 49% experiencing flooding on multiple occasions. It is therefore recommended that policymakers consider potential sources of financial support or grants towards flood defences for flood impacted businesses. Any form of assistance should be made available to businesses at the earliest opportunity, as there was no significant association between the time of the last flood event and the likelihood of experiencing PTSD symptoms.
Abstract: We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. We consider an efficient computational method that can be used to obtain the approximate posteriors for the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigate the process of inducing a posterior distribution for the various parameters and the latent function by using variational Bayesian approximations and importance sampling, and we then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.
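Importance sampling, as used above for intractable posteriors, reweights draws from a tractable proposal by the ratio of target to proposal densities. The self-normalized sketch below estimates E[X²] under a standard normal target using a wider Gaussian proposal; the toy target stands in for the GP classification posterior, which it is not.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_estimate(f, target_pdf, proposal_pdf, proposal_sample, n, rng):
    # E_target[f(X)] ~= sum(w_i * f(x_i)) / sum(w_i), w_i = target(x_i)/proposal(x_i)
    num = den = 0.0
    for _ in range(n):
        x = proposal_sample(rng)
        w = target_pdf(x) / proposal_pdf(x)
        num += w * f(x)
        den += w
    return num / den

rng = random.Random(3)
est = importance_estimate(
    f=lambda x: x * x,                                  # E[X^2] = 1 under N(0, 1)
    target_pdf=lambda x: normal_pdf(x, 0.0, 1.0),
    proposal_pdf=lambda x: normal_pdf(x, 0.0, 2.0),
    proposal_sample=lambda r: r.gauss(0.0, 2.0),
    n=20000,
    rng=rng,
)
print(round(est, 3))
```

The self-normalized form only needs the target density up to a constant, which is why the technique pairs naturally with unnormalized posteriors such as those arising in GP classification.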
Abstract: In this paper, we propose a variational EM inference algorithm for the multi-class Gaussian process classification model that can be used in the field of human behavior recognition. The algorithm simultaneously derives both a posterior distribution of the latent function and estimators of the hyper-parameters in a multiclass Gaussian process classification model. It is based on the Laplace approximation (LA) technique and the variational EM framework, and proceeds in two steps, called the expectation and maximization steps. First, in the expectation step, using the Bayesian formula and the LA technique, we derive an approximate posterior distribution of the latent function indicating the possibility that each observation belongs to a certain class. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimator for the hyper-parameters of the covariance matrix necessary to define the prior distribution of the latent function. These two steps are repeated iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely, the KTH human action data set. Experimental results reveal that the proposed algorithm shows good performance on this data set.
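The Laplace approximation step above replaces an intractable posterior with a Gaussian centered at the posterior mode, with variance given by the negative inverse of the second derivative at that mode. A minimal one-dimensional sketch follows, for a Bernoulli likelihood with a Gaussian prior on a single latent value; this toy setting stands in for the full multi-class GP, which requires a matrix-valued version of the same computation.

```python
import math

def sigmoid(f):
    return 1.0 / (1.0 + math.exp(-f))

def laplace_approx(ys, prior_var=1.0, iters=50):
    # Newton's method for the posterior mode of the latent f, then a
    # Gaussian with variance = -1 / (second derivative of log-posterior at the mode)
    f = 0.0
    for _ in range(iters):
        p = sigmoid(f)
        grad = sum(y - p for y in ys) - f / prior_var
        hess = -len(ys) * p * (1.0 - p) - 1.0 / prior_var
        f -= grad / hess
    p = sigmoid(f)
    var = 1.0 / (len(ys) * p * (1.0 - p) + 1.0 / prior_var)
    return f, var  # mean and variance of the Gaussian approximation

ys = [1, 1, 1, 0, 1, 1, 0, 1]  # mostly successes, so the mode should be positive
mode, var = laplace_approx(ys)
print(round(mode, 3), round(var, 3))
```

In a variational EM loop, this E-step Gaussian would then feed the M-step, where the covariance hyper-parameters are re-estimated by maximum likelihood, exactly as the abstract describes.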
Abstract: The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), as it handles equi-, over-, and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify, which restricts likelihood-based estimation methodology. The joint generalized quasi-likelihood approach (GQL-I) was instead considered, but it is rather computationally intensive and may even fail to estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments, and it is shown to yield estimates as efficient as GQL-I while being far more computationally stable.
Abstract: Routing in ad hoc networks is a challenge, as nodes are mobile and links are constantly created and broken. Present on-demand ad hoc routing algorithms initiate route discovery only after a path breaks, incurring significant cost to detect the disconnection and establish a new route. In the proposed scheme, when a path is about to break, the source is warned of the likelihood of a disconnection and initiates path discovery early, avoiding disconnection altogether. A path is considered about to break when its link availability decreases. This study modifies Ad hoc On-demand Multipath Distance Vector routing (AOMDV) so that route handoff occurs through link availability estimation.
Abstract: Objectives: To determine the nutritional status and risk factors associated with women practicing geophagia in QwaQwa, South Africa. Materials and Methods: An observational epidemiological study design was adopted which included an exposed (geophagia) and a non-exposed (control) group. A food frequency questionnaire, anthropometric measurements, and blood sampling were applied to determine the nutritional status of the participants. Logistic regression analysis was performed in order to identify factors likely to be associated with the practice of geophagia. Results: The mean total energy intake for the geophagia group (G) and the control group (C) was 10324.31 ± 2755.00 kJ and 10763.94 ± 2556.30 kJ respectively. Both groups fell within the overweight category according to the mean body mass index (BMI) of each group (G = 25.59 kg/m2; C = 25.14 kg/m2). The mean serum iron level of the geophagia group (6.929 μmol/l) was significantly lower than that of the control group (13.75 μmol/l) (p = 0.000). Serum transferrin (G = 3.23 g/l; C = 2.7054 g/l) and serum transferrin saturation (G = 8.05%; C = 18.74%) levels also differed significantly between the groups (p = 0.00). Factors associated with the practice of geophagia included haemoglobin (odds ratio (OR): 14.50), serum iron (OR: 9.80), serum ferritin (OR: 3.75), serum transferrin (OR: 6.92), and transferrin saturation (OR: 14.50). A significant negative association (p = 0.014) was found between being a wage-earner and the practice of geophagia (OR: 0.143; CI: 0.027; 0.755). These findings seem to indicate that a permanent income may decrease the likelihood of practising geophagia. Key Findings: Geophagia was confirmed to be a risk factor for iron deficiency in this community. The significantly strong association between geophagia and iron deficiency emphasizes the importance of identifying the practice of geophagia in women, especially during their child-bearing years.
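The odds ratios reported above come from logistic regression; for a single binary factor, the odds ratio and its Woolf 95% confidence interval can be read directly off a 2x2 table. The counts below are hypothetical, chosen only to illustrate the arithmetic, and are not the study's data.

```python
import math

def odds_ratio(a, b, c, d):
    # 2x2 table: a = exposed cases, b = exposed non-cases,
    #            c = unexposed cases, d = unexposed non-cases
    return (a * d) / (b * c)

def or_ci95(a, b, c, d):
    # Woolf's method: a normal 95% interval on the log-odds-ratio scale
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)

# hypothetical counts: iron deficiency among geophagic vs control women
a, b, c, d = 30, 10, 12, 28
print(round(odds_ratio(a, b, c, d), 2),
      tuple(round(x, 2) for x in or_ci95(a, b, c, d)))
```

An interval lying entirely above 1 (as here) indicates a positive association between the exposure and the outcome; the study's adjusted ORs come from a regression model rather than raw tables, so they additionally control for covariates.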
Abstract: We introduce a new model called the Marshall-Olkin Rayleigh distribution, which extends the Rayleigh distribution using the Marshall-Olkin transformation and has both increasing and decreasing shapes for the hazard rate function. Various structural properties of the new distribution are derived, including explicit expressions for the moments, the generating and quantile functions, some entropy measures, and the order statistics. The model parameters are estimated by the method of maximum likelihood, and the observed information matrix is determined. The potential of the new model is illustrated by means of a simulation study.
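The Marshall-Olkin transformation applied above replaces a baseline survival function S0 with S(x) = αS0(x) / (1 − (1 − α)S0(x)); for a Rayleigh baseline S0(x) = exp(−x²/(2σ²)), the quantile function is available in closed form, which makes simulation straightforward. The parameterization below is one common convention and should be checked against the paper's.

```python
import math
import random

def mo_rayleigh_quantile(q, alpha, sigma):
    # invert F(x) = q via S(x) = alpha*S0 / (1 - (1 - alpha)*S0),
    # with Rayleigh baseline S0(x) = exp(-x^2 / (2 sigma^2))
    u = 1.0 - q
    s0 = u / (alpha + u * (1.0 - alpha))
    return sigma * math.sqrt(-2.0 * math.log(s0))

def mo_rayleigh_sample(alpha, sigma, rng):
    # inverse-transform sampling from the quantile function
    return mo_rayleigh_quantile(rng.random(), alpha, sigma)

# alpha = 1 recovers the plain Rayleigh distribution: median = sigma*sqrt(2 ln 2)
med_plain = mo_rayleigh_quantile(0.5, 1.0, 1.0)
print(round(med_plain, 4), round(math.sqrt(2.0 * math.log(2.0)), 4))

# empirical median of 10000 draws vs the closed-form quantile, alpha = 0.5
rng = random.Random(11)
xs = [mo_rayleigh_sample(0.5, 1.0, rng) for _ in range(10000)]
print(round(sorted(xs)[5000], 3), round(mo_rayleigh_quantile(0.5, 0.5, 1.0), 3))
```

Samplers of this kind are exactly what the paper's simulation study needs: draw from the model at known parameters, fit by maximum likelihood, and check how well the estimates recover the truth.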
Abstract: Multiple-input multiple-output (MIMO) radar has received increasing attention in recent years. MIMO radar has many advantages over conventional phased array radar, such as improved target detection, resolution enhancement, and interference suppression. In this paper, results are presented from a simulation study of MIMO uniformly-spaced linear array (ULA) antennas. The performance is investigated under varied parameters, including array size, pseudo-random noise (PN) sequence length, number of snapshots, and signal-to-noise ratio (SNR). The results for MIMO are compared to those of a traditional array antenna.
Abstract: The Aurès region is one of the arid and semi-arid areas that have suffered climate crises and the overexploitation of natural resources, which have led to significant land degradation. The use of remote sensing data allowed us to analyze the land and its spatiotemporal changes in the Aurès between 1987 and 2013. For this work, we adopted an analysis method based on the exploitation of Landsat TM 1987 and Landsat OLI 2013 satellite images, using supervised maximum likelihood classification coupled with field surveys from the missions of May and September 2013. Using ENVI EX software, by superposing the ground cover maps of 1987 and 2013, one can extract a spatial change map of the different land cover units. The results show that between 1987 and 2013 the vegetation suffered negative changes, namely the significant degradation of forests and steppe rangelands, while sandy soils and bare land recorded a considerable increase. The spatial change map of land cover units between 1987 and 2013 allows us to understand the expansive or regressive orientation of vegetation and soil: it shows that dense forests are giving way to clear forests, that steppe vegetation is developing from degraded and bare forest vegetation, and that sandy soils are gaining large steppe surfaces, which explains their remarkable extension. The analysis of remote sensing data highlights the profound changes in our environment over time and allows quantitative monitoring of the risk of desertification.