Abstract: In this paper, the potential use of an exponential
hidden Markov model to model a hidden pavement deterioration
process, i.e. one that is not directly measurable, is investigated. It is
assumed that the evolution of the physical condition, which is the
hidden process, and the evolution of the values of pavement distress
indicators, can be adequately described using discrete condition states
and modeled as Markov processes. It is also assumed that condition
data can be collected by visual inspections over time and represented
continuously using an exponential distribution. The advantage of
using such a model in the decision-making process is illustrated through an empirical study using real-world data.
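As a minimal illustration of the kind of model described above (a sketch with hypothetical transition probabilities, emission rates, and observations, not the authors' implementation), the following Python fragment evaluates the likelihood of a sequence of continuous distress readings under a hidden Markov model with exponential emission densities:

import numpy as np

# Hypothetical 3-state deterioration model: good -> fair -> poor.
A = np.array([[0.90, 0.08, 0.02],       # state transition probabilities
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
pi = np.array([1.0, 0.0, 0.0])           # initial state distribution
rates = np.array([2.0, 1.0, 0.4])        # exponential emission rate per state

def exp_pdf(x, lam):
    return lam * np.exp(-lam * x)

def forward_likelihood(obs):
    """Forward algorithm: P(obs) under the exponential HMM."""
    alpha = pi * exp_pdf(obs[0], rates)
    for x in obs[1:]:
        alpha = (alpha @ A) * exp_pdf(x, rates)
    return alpha.sum()

# Hypothetical distress-indicator readings from successive inspections.
print(forward_likelihood(np.array([0.3, 0.6, 1.1, 2.4])))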
Abstract: A new technique of topological multi-scale analysis is
introduced. By performing a clustering recursively to build a
hierarchy, and analyzing the co-scale and intra-scale similarities, an
Iterated Function System can be extracted from any data set. The study
of fractals shows that this method is efficient at extracting self-similarities and can find elegant solutions to the inverse problem of
building fractals. The theoretical aspects and practical
implementations are discussed, together with examples of analyses of
simple fractals.
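For context on what an extracted Iterated Function System encodes, the sketch below (not from the paper) renders the attractor of a given IFS with the standard chaos game; the Sierpinski-triangle maps used here are a textbook example rather than the paper's data:

import numpy as np

# Three affine contractions of the Sierpinski triangle: x -> 0.5*x + t_i.
translations = np.array([[0.0, 0.0], [0.5, 0.0], [0.25, 0.5]])

def chaos_game(n_points=50_000, seed=0):
    """Render the IFS attractor by iterating randomly chosen maps."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    points = np.empty((n_points, 2))
    for i in range(n_points):
        x = 0.5 * x + translations[rng.integers(len(translations))]
        points[i] = x
    return points

pts = chaos_game()
print(pts.shape)   # a point cloud approximating the fractal attractor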
Abstract: This study evaluates the effects of a farmers' training program on the adoption of improved farming practices, the output of rice farming, and the income and profit from rice farming, employing ex-post non-experimental data from Sierra Leone. It was established that participating in the farmers' training program increased the likelihood of adopting the improved farming practices introduced in the study area. The training program was also found to have increased the proceeds from rice production considerably. These results are in line with the assumption that one of the main constraints on the growth of agricultural output, particularly rice cultivation, in most African states is the lack of efficient extension programs.
Abstract: In this paper we aim to find the optimum multiwavelet for compression of electrocardiogram (ECG) signals and then select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the performance of the different multiwavelets in compressing ECG signals, in addition to factors known in the compression literature such as Compression Ratio (CR), Percent Root Difference (PRD), Distortion (D), and Root Mean Square Error (RMSE), we also employed the Cross Correlation (CC) criterion, for studying the morphological relation between the reconstructed and the original ECG signal, and the Signal to reconstruction Noise Ratio (SNR). The simulation results show the Cardinal Balanced Multiwavelet (cardbal2) with identity (Id) prefiltering to be the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
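As a point of reference for the evaluation criteria listed above (not the paper's code), the sketch below computes CR, PRD, RMSE, CC, and reconstruction SNR for a toy signal; the PRD form without baseline removal is assumed:

import numpy as np

def compression_metrics(original, reconstructed, original_bits, compressed_bits):
    """Common ECG compression quality measures."""
    err = original - reconstructed
    cr = original_bits / compressed_bits                          # Compression Ratio
    prd = 100.0 * np.sqrt(np.sum(err**2) / np.sum(original**2))   # Percent Root Difference
    rmse = np.sqrt(np.mean(err**2))                               # Root Mean Square Error
    cc = np.corrcoef(original, reconstructed)[0, 1]               # Cross Correlation
    snr = 10.0 * np.log10(np.sum(original**2) / np.sum(err**2))   # reconstruction SNR (dB)
    return cr, prd, rmse, cc, snr

# Hypothetical usage: a toy signal and a slightly perturbed reconstruction.
x = np.sin(np.linspace(0, 8 * np.pi, 1000))
x_rec = x + 0.01 * np.random.default_rng(0).standard_normal(x.size)
print(compression_metrics(x, x_rec, original_bits=11 * x.size, compressed_bits=2000))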
Abstract: The paper presents a one-dimensional transient
mathematical model of compressible non-isothermal multicomponent
fluid mixture flow in a pipe. The set of mass, momentum, and enthalpy conservation equations for the gas phase is
solved in the model. Thermo-physical properties of multi-component
gas mixture are calculated by solving the Equation of State (EOS)
model. The Soave-Redlich-Kwong (SRK-EOS) model is chosen. Gas
mixture viscosity is calculated on the basis of the Lee-Gonzalez-Eakin (LGE) correlation. Numerical analysis of the rapid gas decompression process in rich and base natural gases is performed on the basis of the proposed mathematical model. The model is successfully validated against the experimental data [1]. The proposed mathematical model shows very good agreement with the experimental data [1] over a wide range of pressure values and predicts the decompression in rich and base gas mixtures much better than the analytical and mathematical models available in the open literature.
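For reference, the Soave-Redlich-Kwong equation of state named above is commonly written in the standard form below (a textbook statement, not reproduced from the paper):
\[
P = \frac{RT}{V_m - b} - \frac{a\,\alpha(T)}{V_m (V_m + b)}, \qquad
a = 0.42748\,\frac{R^2 T_c^2}{P_c}, \qquad
b = 0.08664\,\frac{R T_c}{P_c},
\]
\[
\alpha(T) = \Bigl[\,1 + \bigl(0.480 + 1.574\,\omega - 0.176\,\omega^2\bigr)\bigl(1 - \sqrt{T/T_c}\bigr)\Bigr]^2,
\]
where \(T_c\) and \(P_c\) are the critical temperature and pressure and \(\omega\) is the acentric factor; mixing rules extend it to multi-component mixtures.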
Abstract: Iran is one of the greatest producers of date in the
world. However, due to a lack of information about its viscoelastic properties, much of the production is downgraded during harvesting and postharvest processes. In this study, the effects of temperature and moisture content of the product on stress relaxation characteristics were investigated. To this end, freshly harvested dates (kabkab) at the tamar stage were placed in a controlled environment chamber
to obtain different temperature levels (25, 35, 45, and 55 °C) and
moisture contents (8.5, 8.7, 9.2, 15.3, 20, 32.2 %d.b.). A texture
analyzer TAXT2 (Stable Microsystems, UK) was used to apply
uniaxial compression tests. A chamber capable of controlling temperature was designed and fabricated around the plunger of the texture analyzer to control the temperature during the experiment. As a new approach, a
CCD camera (A4tech, 30 fps) was mounted on a cylindrical glass
probe to scan and record the contact area between the date and the disk.
Afterwards, the pictures were analyzed using the Image Processing Toolbox of MATLAB. Individual date fruits were uniaxially compressed at a speed of 1 mm/s. A constant strain of 30% of the fruit thickness was applied to the horizontally oriented fruit. To select a suitable model for describing the stress relaxation of date, the experimental data were fitted with three well-known stress relaxation models: the generalized Maxwell, Nussinovitch, and Peleg models. The constants in these models were determined and correlated with the temperature and moisture content of the product using non-linear regression analysis. It was found that the generalized Maxwell and Nussinovitch models describe the viscoelastic characteristics of date fruits more appropriately than the Peleg model.
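For reference, two of the stress relaxation models named above are commonly written in the following standard forms (general statements of the models, not the constants fitted in this study):
\[
\sigma(t) = \sigma_e + \sum_{i=1}^{n} \sigma_i\, e^{-t/\tau_i} \qquad \text{(generalized Maxwell)},
\]
\[
\frac{F_0\, t}{F_0 - F(t)} = k_1 + k_2\, t \qquad \text{(Peleg)},
\]
where \(\sigma_e\) is the equilibrium stress, \(\tau_i\) are relaxation times, \(F_0\) is the initial force, \(F(t)\) is the force at time \(t\), and \(k_1, k_2\) are fitted constants.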
Abstract: The ideal sinc filter, which ignores the noise statistics, is often applied to generate an arbitrary sample of a bandlimited signal from uniformly sampled data. In this article, an optimal interpolator is proposed; it reaches a minimum mean square error (MMSE)
at its output in the presence of noise. The resulting interpolator is
thus a Wiener filter, and both the optimal infinite impulse response
(IIR) and finite impulse response (FIR) filters are presented. The
mean square errors (MSEs) for interpolators with impulse responses of different lengths are obtained by computer simulations; they show that the MSEs of the proposed interpolators with a reasonable length are improved by about 0.4 dB under flat power spectra in a noisy environment with a signal-to-noise power ratio (SNR) of 10 dB. As expected, the results also demonstrate improvements in the MSEs of the optimal interpolator over the ideal sinc filter for various fractional delays, under a fixed-length impulse response.
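The sketch below is an illustrative construction under stated assumptions (a unit-power signal with a flat spectrum occupying a fraction of the Nyquist band, plus white noise), not the paper's derivation; it builds the FIR Wiener interpolator for a fractional delay and the truncated ideal sinc filter it is compared against:

import numpy as np

def mmse_fir_interpolator(delay, n_taps=16, snr_db=10.0, bandwidth=0.8):
    """FIR Wiener (MMSE) interpolator for a fractional delay.

    Assumes a unit-power signal with flat spectrum over `bandwidth` of the
    Nyquist band, so r_s[k] = sinc(bandwidth * k), observed in white noise.
    """
    noise_var = 10.0 ** (-snr_db / 10.0)
    k = np.arange(n_taps) - n_taps // 2                   # tap positions (samples)
    R = np.sinc(bandwidth * (k[:, None] - k[None, :])) + noise_var * np.eye(n_taps)
    p = np.sinc(bandwidth * (k - delay))                  # cross-correlation with x(n - delay)
    return k, np.linalg.solve(R, p)                       # Wiener solution h = R^{-1} p

def sinc_interpolator(delay, n_taps=16):
    """Truncated ideal sinc filter for the same fractional delay."""
    k = np.arange(n_taps) - n_taps // 2
    return k, np.sinc(k - delay)

_, h_mmse = mmse_fir_interpolator(delay=0.3)
_, h_sinc = sinc_interpolator(delay=0.3)
print(np.round(h_mmse, 3))
print(np.round(h_sinc, 3))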
Abstract: In the recent past, there has been an increasing interest
in applying evolutionary methods to Knowledge Discovery in
Databases (KDD) and a number of successful applications of Genetic
Algorithms (GA) and Genetic Programming (GP) to KDD have been
demonstrated. The most predominant representation of the
discovered knowledge is the standard Production Rules (PRs) in the
form If P Then D. The PRs, however, are unable to handle
exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski & Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an
augmented production rule of the form:
If P Then D Unless C, where C (Censor) is an exception to the rule.
Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception conditions when the resources needed to establish their presence are tight or there is simply no information available as to whether they hold or not. Thus, the 'If P Then D' part of the CPR expresses
important information, while the Unless C part acts only as a switch
and changes the polarity of D to ~D.
This paper presents a classification algorithm based on an evolutionary approach that discovers comprehensible rules with exceptions in the
form of CPRs.
The proposed approach has flexible chromosome encoding, where
each chromosome corresponds to a CPR. Appropriate genetic
operators are suggested and a fitness function is proposed that
incorporates the basic constraints on CPRs. Experimental results are
presented to demonstrate the performance of the proposed algorithm.
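As a purely illustrative sketch of what a CPR chromosome might encode and how a fitness-style score could be computed from data (the encoding and the score below are hypothetical, not the paper's operators or fitness function):

from dataclasses import dataclass
from typing import Callable, Sequence

Record = dict  # one data record, e.g. {"outlook": "sunny", "class": "yes"}

@dataclass
class CPR:
    """Censored production rule: If P Then D Unless C."""
    premise: Callable[[Record], bool]    # P
    decision: str                        # D (predicted class)
    censor: Callable[[Record], bool]     # C (exception condition)

def score(rule: CPR, data: Sequence[Record], label_key: str = "class") -> float:
    """Toy quality measure: accuracy of 'If P Then D' on covered records,
    rewarding censors that explain away the misclassified ones."""
    covered = [r for r in data if rule.premise(r)]
    if not covered:
        return 0.0
    correct = sum(r[label_key] == rule.decision for r in covered)
    explained = sum(r[label_key] != rule.decision and rule.censor(r) for r in covered)
    return (correct + explained) / len(covered)

# Hypothetical rule: "If outlook is sunny Then play, Unless humidity is high".
data = [{"outlook": "sunny", "humidity": "high", "class": "no"},
        {"outlook": "sunny", "humidity": "normal", "class": "yes"}]
rule = CPR(lambda r: r["outlook"] == "sunny", "yes", lambda r: r["humidity"] == "high")
print(score(rule, data))   # 1.0 for this toy example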
Abstract: In this paper, we investigate a blind channel estimation method for Multi-carrier CDMA systems that uses a subspace decomposition technique. The technique exploits the orthogonality between the noise subspace and the received user codes to obtain the channel of each user. In the past, the Singular Value Decomposition (SVD) was used for this purpose, but the SVD has high computational complexity, so in this paper a different algorithm, the URV decomposition, which serves as an intermediary between the QR decomposition and the SVD, replaces the SVD to track the noise subspace of the received data. The URV decomposition offers almost the same estimation performance as the SVD, but with less computational complexity.
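The following sketch illustrates the orthogonality idea with a generic subspace estimator under simplifying assumptions (it uses an eigendecomposition rather than the URV decomposition, and the matrices are toy placeholders, not the paper's algorithm): the noise subspace is taken from the received-data covariance, and the channel is the vector that makes the code-filtered signal most nearly orthogonal to it.

import numpy as np

def subspace_channel_estimate(R, C, signal_dim):
    """Estimate a channel h from the orthogonality of the noise subspace and the user code.

    R          : (N, N) covariance matrix of the received data
    C          : (N, L) code/channel structure matrix, so the signal lies in span(C @ h)
    signal_dim : dimension of the signal subspace
    """
    eigval, eigvec = np.linalg.eigh(R)                 # ascending eigenvalues
    Un = eigvec[:, : R.shape[0] - signal_dim]          # noise subspace
    Q = C.conj().T @ Un @ Un.conj().T @ C              # h minimizes h^H Q h, ||h|| = 1
    qval, qvec = np.linalg.eigh(Q)
    return qvec[:, 0]                                  # estimate up to a scalar ambiguity

# Toy usage: rank-1 signal model with a random code matrix.
rng = np.random.default_rng(0)
N, L = 16, 4
C = rng.standard_normal((N, L))
h_true = rng.standard_normal(L)
h_true /= np.linalg.norm(h_true)
s = C @ h_true
R = np.outer(s, s) + 0.01 * np.eye(N)                  # signal covariance + noise floor
h_hat = subspace_channel_estimate(R, C, signal_dim=1)
print(abs(h_hat @ h_true))                             # close to 1 (up to sign)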
Abstract: Principal component analysis is often combined with state-of-the-art classification algorithms to recognize human faces. However, principal component analysis can only capture those features contributing to the global characteristics of the data because it is a global feature selection algorithm. It misses the features contributing to the local characteristics of the data because each principal component only contains some level of the global characteristics of the data.
In this study, we present a novel face recognition approach using non-negative principal component analysis, which adds a non-negativity constraint to improve data locality and help elucidate latent data structures. Experiments are performed on the Cambridge ORL face database. We demonstrate the strong performance of the algorithm in recognizing human faces in comparison with the PCA and NREMF approaches.
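For orientation, the sketch below shows the conventional baseline such studies compare against, PCA (eigenfaces) followed by a nearest-neighbour classifier; the data here are placeholders for flattened face images, and the non-negative PCA variant itself is not reproduced:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def eigenface_baseline(X, y, n_components=40, seed=0):
    """PCA projection followed by 1-nearest-neighbour face recognition."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=seed)
    pca = PCA(n_components=n_components, whiten=True).fit(X_tr)
    knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_tr), y_tr)
    return knn.score(pca.transform(X_te), y_te)

# Hypothetical usage: X holds flattened face images (one row per image),
# y holds subject labels, e.g. as loaded from the ORL database.
rng = np.random.default_rng(0)
X = rng.random((80, 64 * 64))            # placeholder instead of real faces
y = np.repeat(np.arange(8), 10)
print(eigenface_baseline(X, y))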
Abstract: Shot boundary detection is a fundamental step in the organization of large video data. In this paper, we propose a new method for detecting and classifying gradual shot transitions in video, exploiting the advantages of fractal analysis and an AIS-based classifier. The proposed features are the "vertical intercept" and "fractal dimension" of each video frame, computed from Fourier transform coefficients. We also use a classifier based on the Clonal Selection Algorithm. We implemented our solution and evaluated it on the TRECVID 2006 benchmark dataset.
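As an illustration of how such features are commonly derived from Fourier coefficients (a standard spectral-slope estimator, not necessarily the authors' exact formulation), the sketch below fits a line to the log radially-averaged power spectrum of a frame; the intercept is the "vertical intercept" feature and the slope gives a fractal-dimension estimate, where D = (8 + slope)/2 assumes a fractional-Brownian-surface image model:

import numpy as np

def spectral_features(frame):
    """Return (vertical intercept, fractal dimension estimate) for a grayscale frame."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2
    h, w = frame.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)    # radial frequency bin per pixel
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    valid = counts > 0
    freqs = np.nonzero(valid)[0]
    radial = sums[valid] / counts[valid]                  # radially averaged spectrum
    mask = freqs > 0                                      # skip the DC bin
    slope, intercept = np.polyfit(np.log(freqs[mask]), np.log(radial[mask]), 1)
    return intercept, (8.0 + slope) / 2.0

# Hypothetical usage on a random "frame"; real input would be video frames.
frame = np.random.default_rng(0).random((64, 64))
print(spectral_features(frame))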
Abstract: Many factors affect the success of Machine Learning
(ML) on a given task. The representation and quality of the instance
data is first and foremost. If there is much irrelevant and redundant
information present or noisy and unreliable data, then knowledge
discovery during the training phase is more difficult. It is well known
that data preparation and filtering steps take a considerable amount of
processing time in ML problems. Data pre-processing includes data
cleaning, normalization, transformation, feature extraction and
selection, etc. The product of data pre-processing is the final training
set. It would be convenient if a single sequence of data pre-processing algorithms had the best performance for every data set, but this is not the case. Thus, we present the most well-known algorithms for each step of data pre-processing so that one can achieve the best performance for their data set.
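A minimal sketch of chaining typical pre-processing steps into one sequence (illustrative choices of imputation, scaling, and feature selection with scikit-learn, not recommendations from the paper):

from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

# Cleaning (imputation), normalization (scaling), feature selection, then a learner.
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),
    ("model", LogisticRegression(max_iter=1000)),
])

X, y = load_breast_cancer(return_X_y=True)
print(cross_val_score(pipe, X, y, cv=5).mean())   # evaluate the whole sequence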
Abstract: A Wireless Sensor Network (WSN) comprises sensor nodes which are designed to sense the environment and transmit the sensed data back to the base station via multi-hop routing in order to reconstruct physical phenomena. Since the sensed physical phenomena exhibit significant temporal and spatial redundancy, it is necessary to use Redundancy Suppression Algorithms (RSAs) at the sensor nodes to lower energy consumption by reducing the transmission of redundant data. A conventional RSA is the threshold-based RSA, which sets a threshold to suppress redundant data. Although many temporal and spatial RSAs have been proposed, temporal-spatial RSAs are seldom proposed because it is difficult to determine when to utilize temporal or spatial RSAs. In this paper, we propose a novel temporal-spatial redundancy suppression algorithm, the Codebook-based Redundancy Suppression Mechanism (CRSM). CRSM adopts vector quantization to generate a codebook, which is easily used to implement temporal-spatial RSA. CRSM not only achieves power saving and reliability for the WSN, but also provides predictability of the network lifetime. Simulation results show that the network lifetime achieved by CRSM exceeds that of other RSAs by at least 23%.
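A minimal sketch of the codebook idea (a generic vector-quantization scheme with hypothetical parameters, not CRSM itself): a node quantizes each window of readings to the nearest codeword and transmits only the codeword index, falling back to the raw window when the quantization error exceeds a threshold.

import numpy as np
from sklearn.cluster import KMeans

def build_codebook(history, window=8, n_codewords=16, seed=0):
    """Learn a codebook of typical reading patterns via vector quantization."""
    vectors = history.reshape(-1, window)
    return KMeans(n_clusters=n_codewords, n_init=10, random_state=seed).fit(vectors)

def encode_window(codebook, readings, max_error=0.5):
    """Return a small codeword index when close enough, else the raw readings."""
    idx = int(codebook.predict(readings.reshape(1, -1))[0])
    err = np.linalg.norm(readings - codebook.cluster_centers_[idx])
    return ("index", idx) if err <= max_error else ("raw", readings)

# Hypothetical usage with synthetic temperature-like data.
rng = np.random.default_rng(0)
history = 20 + np.sin(np.linspace(0, 50, 800)) + 0.1 * rng.standard_normal(800)
cb = build_codebook(history)
print(encode_window(cb, history[:8]))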
Abstract: The paper discusses the results obtained in predicting the reinforcement in a singly reinforced beam using Neural Networks (NN), Support Vector Machines (SVM-s), and Tree Based Models. The major advantage of SVM-s over NN is that they minimize a bound on the generalization error of the model rather than a bound on the mean square error over the data set, as done in NN. The tree-based approach divides the problem into a small number of sub-problems to reach a conclusion. A data set was created for different beam parameters, with the reinforcement calculated using the limit state method, for model creation and validation. The results from this study suggest a remarkably good performance of the tree-based and SVM-s models. Further, this study found that these two techniques work well, and even better than Neural Network methods. A comparison of predicted values with actual values suggests a very good correlation coefficient with all four techniques.
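An illustrative sketch of such a model comparison on synthetic data (the parameter ranges and the limit-state expression below, in the IS 456 form for a singly reinforced rectangular section, are our assumptions, not the paper's data set):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def ast_limit_state(mu_knm, fck, fy, b, d):
    """Steel area (mm^2) of a singly reinforced section (IS 456-style expression)."""
    mu = mu_knm * 1e6                                    # kN*m -> N*mm
    return 0.5 * (fck / fy) * (1 - np.sqrt(1 - 4.6 * mu / (fck * b * d**2))) * b * d

# Hypothetical parameter ranges used to generate synthetic training data.
rng = np.random.default_rng(0)
n = 500
b = rng.uniform(200, 350, n)
d = rng.uniform(350, 600, n)
fck, fy = 20.0, 415.0
mu = rng.uniform(0.2, 0.95, n) * 0.138 * fck * b * d**2 / 1e6   # keep sections singly reinforced
X = np.column_stack([mu, b, d])
y = ast_limit_state(mu, fck, fy, b, d) / 100.0                   # steel area in cm^2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "SVM": make_pipeline(StandardScaler(), SVR(C=100.0)),
    "Tree": DecisionTreeRegressor(random_state=0),
    "NN": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(32,),
                                                       max_iter=5000, random_state=0)),
}
for name, model in models.items():
    print(name, round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))   # R^2 on the test set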
Abstract: Rule Discovery is an important technique for mining
knowledge from large databases. Use of objective measures for
discovering interesting rules leads to another data mining problem,
although of reduced complexity. Data mining researchers have
studied subjective measures of interestingness to reduce the volume
of discovered rules to ultimately improve the overall efficiency of the KDD process.
In this paper we study novelty of the discovered rules as a
subjective measure of interestingness. We propose a hybrid approach
based on both objective and subjective measures to quantify novelty
of the discovered rules in terms of their deviations from the known
rules (knowledge). We analyze the types of deviation that can arise
between two rules and categorize the discovered rules according to a user-specified threshold. We implement the proposed framework
and experiment with some public datasets. The experimental results
are promising.
Abstract: The fundamental motivation of this paper is how gaze estimation can be utilized effectively in game applications. In games, precise gaze estimation is not always important when aiming at targets; the ability to move a cursor onto an aimed target accurately is also significant. Incidentally, from a game production point of view, expressing head movement and gaze movement separately sometimes becomes advantageous for conveying a sense of presence. A representative example is panning a background image according to head movement while moving a cursor according to gaze movement. On the other hand, the widely used technique for point-of-gaze (POG) estimation is based on the relative position between the center of the corneal reflections of infrared light sources and the center of the pupil. However, computing the center of the pupil requires relatively complicated image processing, so the resulting delay is a concern, since minimizing the delay of input data is one of the most significant requirements in games. In this paper, a method to estimate head movement using only the corneal reflections of two infrared light sources in different locations is proposed. Furthermore, a method to control a cursor using gaze movement as well as head movement is proposed. The proposed methods are evaluated with game-like applications; as a result, performance similar to that of conventional methods is confirmed, and aiming control with lower computational cost and stress-free, intuitive operation is obtained.
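A highly simplified sketch of the geometric idea (our illustration, not the paper's algorithm): with two fixed infrared sources, the midpoint of the two corneal glints tracks head translation in the image and the inter-glint distance tracks changes in viewing distance, without requiring pupil segmentation.

import numpy as np

def head_movement_from_glints(glints_now, glints_ref):
    """Estimate head translation and a relative distance change from two glints.

    glints_* : (2, 2) arrays of (x, y) image positions of the two corneal
               reflections in the current and reference frames.
    Returns (dx, dy, scale), where scale > 1 suggests the head moved closer.
    """
    mid_now, mid_ref = glints_now.mean(axis=0), glints_ref.mean(axis=0)
    sep_now = np.linalg.norm(glints_now[0] - glints_now[1])
    sep_ref = np.linalg.norm(glints_ref[0] - glints_ref[1])
    dx, dy = mid_now - mid_ref
    return dx, dy, sep_now / sep_ref

# Hypothetical glint coordinates (pixels) in a reference and a current frame.
ref = np.array([[310.0, 240.0], [330.0, 241.0]])
now = np.array([[318.0, 236.0], [339.0, 237.0]])
print(head_movement_from_glints(now, ref))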
Abstract: This research focuses on the effects of the weight percentage and size variation of the MgFeSi added, the gating system design, and the reaction chamber design on the inmold process. By using the inmold process, the well-known problem of fading is avoided because the liquid iron reacts with
magnesium in the mold and not, as usual, in the ladle. During
the pouring operation, liquid metal passes through the
chamber containing the magnesium, where the reaction of the
metal with magnesium proceeds in the absence of atmospheric
oxygen [1]. In this paper, the results of the microstructural characterization of ductile iron for these parameters are presented.
The mechanisms of the inmold process are also described [2].
The data obtained from this research will assist in producing
the vehicle parts and other machinery parts for different
industrial zones and government industries and in transferring
the technology to all industrial zones in Myanmar. Moreover, the inmold technology offers many advantages over traditional treatment methods from technical, environmental, and economic points of view. The main objective of
this research is to produce ductile iron castings in all industrial
sectors in Myanmar more easily with lower costs. It will also
assist the sharing of knowledge and experience related to the
ductile iron production.
Abstract: This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output-layer unit. Compared to more recent neural networks such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When the learning objects are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics of an output-layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, focusing on their learnability prior to learning in the multilayer neural network, and this paper demonstrates the validity of the multi-stage learning method. Specifically, this paper verifies by computer experiments that both the learning accuracy and the learning time of the BP method, used as the learning rule within the multi-stage learning method, are improved. In learning, oscillatory phenomena of the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which oscillatory phenomena occur during learning. Furthermore, by observing behavior during learning, the authors discuss the reasons why the errors of some data remain large even after learning.
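A toy sketch of the staged idea (our illustration with hypothetical thresholds, not the authors' procedure): rank the training samples by the error of a short preliminary fit, continue training on the easily learnable subset first, and only then on the full set.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.5 * np.sign(X[:, 0])        # target with a hard discontinuity

net = MLPRegressor(hidden_layer_sizes=(20,), learning_rate_init=0.01, random_state=0)

# Stage 0: a short preliminary pass to measure per-sample errors.
for _ in range(50):
    net.partial_fit(X, y)
easy = np.abs(net.predict(X) - y) < np.median(np.abs(net.predict(X) - y))

# Stage 1: refine on the easily learnable data first.
for _ in range(200):
    net.partial_fit(X[easy], y[easy])

# Stage 2: then train on the whole set, including the harder samples.
for _ in range(200):
    net.partial_fit(X, y)

print(np.mean((net.predict(X) - y) ** 2))            # final training MSE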
Abstract: This paper describes a smart energy monitoring system with a wireless sensor network for monitoring electrical usage in a smart house. The proposed system is composed of wireless plugs and an energy-control wallpad server. The wireless plug integrates an AC power socket, a relay to switch the socket ON/OFF, a Hall-effect sensor to sense the current of the load appliance, and a Kmote. The Kmote is a wireless communication interface based on TinyOS. We evaluated the wireless plug in a laboratory and analyzed and present energy consumption data collected from household electrical appliances over three months.
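A minimal sketch of how the current samples from a Hall-effect sensor could be turned into the reported consumption figures (the nominal mains voltage and unity power factor are simplifying assumptions, not details from the paper):

import numpy as np

def energy_kwh(current_samples_a, sample_period_s, mains_voltage=230.0, power_factor=1.0):
    """Accumulate energy (kWh) from RMS current samples of one plug."""
    power_w = mains_voltage * np.asarray(current_samples_a) * power_factor
    return np.sum(power_w) * sample_period_s / 3.6e6   # W*s -> kWh

# Hypothetical readings: an appliance drawing about 2 A, sampled every 10 s for an hour.
samples = 2.0 + 0.05 * np.random.default_rng(0).standard_normal(360)
print(round(energy_kwh(samples, sample_period_s=10.0), 3))   # roughly 0.46 kWh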
Abstract: Today many developers use Java components collected from the Internet as external LIBs to design and develop their own software. However, unknown security bugs may exist in these components; for example, SQL injection bugs may come from components that perform no specific checks on user-supplied input strings. Checking for these bugs is very difficult without source code. So a novel method for checking bugs in Java bytecode, based on points-to dataflow analysis, is needed; it differs from the common analysis techniques based on vulnerability pattern checks. It can be used as an assistant tool for the security analysis of Java bytecode from unknown software that will be used as external LIBs.
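As a conceptual illustration only (a toy taint-propagation pass over a made-up three-address representation, written in Python rather than over real Java bytecode, and not the paper's points-to analysis):

# Toy dataflow: propagate "tainted" facts from a user input to a SQL sink.
# Each instruction is (op, target, sources); the names and ops are hypothetical.
program = [
    ("input",  "userName", []),                    # value originating from the user
    ("concat", "query", ["prefix", "userName"]),   # taint flows through concatenation
    ("call",   "executeQuery", ["query"]),         # sink: SQL execution
]

def find_tainted_sinks(instructions, sinks=("executeQuery",)):
    tainted = set()
    findings = []
    for op, target, sources in instructions:
        if op == "input":
            tainted.add(target)                    # mark user-controlled values
        elif any(s in tainted for s in sources):
            tainted.add(target)                    # propagate along assignments
        if op == "call" and target in sinks and any(s in tainted for s in sources):
            findings.append((target, sources))     # tainted data reaches a sink
    return findings

print(find_tainted_sinks(program))   # [('executeQuery', ['query'])]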