Abstract: In the context of spectrum surveillance, a method to
recover the spreading code of a spread spectrum signal is presented,
where the receiver has no knowledge of the transmitter's spreading
sequence.
The approach is based on a genetic algorithm (GA), which is forced to
model the received signal. Genetic algorithms (GAs) are well known
for their robustness in solving complex optimization problems.
Experimental results show that the method provides a good
estimation, even when the signal power is below the noise power.
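As a toy illustration of the idea (not the authors' algorithm, which must cope with noise stronger than the signal), the following sketch evolves a population of ±1 chip sequences toward a hidden spreading sequence, using plain correlation as the fitness function; all parameters are arbitrary choices:

```python
import random

def ga_recover(target, pop_size=40, generations=200, seed=0):
    """Toy GA: evolve +/-1 chip sequences to maximize correlation with a
    hidden spreading sequence (the fitness stands in for signal modeling)."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(chips):
        return sum(a * b for a, b in zip(chips, target))  # correlation

    pop = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]       # one-point crossover
            if rng.random() < 0.2:            # occasional chip-flip mutation
                i = rng.randrange(n)
                child[i] = -child[i]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

With a noiseless correlation fitness the population converges quickly; the paper's contribution is precisely that a GA-driven model still works when the signal power is below the noise power.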
Abstract: In this work, we study the problem of determining
the minimum scheduling length that can satisfy end-to-end (ETE)
traffic demand in scheduling-based multihop WSNs with cooperative
multiple-input multiple-output (MIMO) transmission scheme. Specifically,
we present a cross-layer formulation for the joint routing,
scheduling and stream control problem by incorporating various
power and rate adaptation schemes, and taking into account an
antenna beam pattern model and the signal-to-interference-plus-noise
ratio (SINR) constraint at the receiver. In this context, we also
propose column generation (CG) solutions to avoid the complexity of
enumerating all possible sets of scheduling links.
Abstract: An artificial neural network (ANN) approach was used to model the energy consumption of wheat production. This study was conducted over 35,300 hectares of irrigated and dry land wheat fields in Canterbury in the 2007-2008 harvest year. Several direct and indirect factors were used to create an artificial neural network model to predict energy use in wheat production. The final model can predict energy consumption from farm conditions (size of wheat area and number of paddocks), farmers' social characteristics (education), and energy inputs (N and P use, fungicide consumption, seed consumption, and irrigation frequency); it predicts energy use in Canterbury wheat farms with an error margin of ±7% (±1600 MJ/ha).
Abstract: In order to enhance the contrast in regions where the pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, producing overly bright or dark pixels, and local equalization schemes produce unexpected discontinuities at the boundaries of the blocks. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram within limited extents of equalization determined by its mean and variance. The final image is the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since these regions are equalized separately. The paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images with various contrast characteristics, and the results are compared to conventional approaches to show its superiority.
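A minimal sketch of the sub-histogram idea, assuming a split at the mean brightness and range-limited equalization of each part (the paper's actual scheme also uses variance-based limits and a weighted sum of equalized images, which this sketch omits):

```python
def bi_histogram_equalize(pixels, levels=256):
    """Split the histogram at the mean brightness and equalize each part
    within its own intensity range, so neither part spills into the other."""
    mean = sum(pixels) // len(pixels)

    def equalize(sub, lo, hi):
        """Map intensities of `sub` onto [lo, hi] via the cumulative histogram."""
        if not sub:
            return {}
        hist = [0] * levels
        for p in sub:
            hist[p] += 1
        cdf, total, mapping = 0, len(sub), {}
        for v in range(levels):
            if hist[v]:
                cdf += hist[v]
                mapping[v] = lo + round((hi - lo) * cdf / total)
        return mapping

    low_map = equalize([p for p in pixels if p <= mean], 0, mean)
    high_map = equalize([p for p in pixels if p > mean], mean + 1, levels - 1)
    return [low_map[p] if p <= mean else high_map[p] for p in pixels]
```

Because each sub-histogram is stretched only over its own side of the mean, dark pixels cannot be mapped into the bright range and vice versa, which is the intuition behind avoiding over-equalization.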
Abstract: Automatic reusability appraisal is helpful in
evaluating the quality of developed or developing reusable software
components and in identification of reusable components from
existing legacy systems, which can save the cost of developing
software from scratch. But the issue of how to identify reusable
components from existing systems has remained relatively
unexplored. In this research work, structural attributes of software
components are explored using software metrics and quality of the
software is inferred by different Neural Network based approaches,
taking the metric values as input. The calculated reusability value
makes it possible to identify good-quality code automatically. It is
found that the determined reusability value is close to the manual
analysis previously performed by programmers or repository managers. So, the
developed system can be used to enhance the productivity and
quality of software development.
Abstract: Wireless mesh networks based on IEEE 802.11
technology are a scalable and efficient solution for next generation
wireless networking to provide wide-area wideband internet access to
a significant number of users. These wireless mesh networks may be
deployed by different authorities and without any planning, so they
potentially overlap partially or completely in the same service area.
The aim of this work is to design a new model to enhance the
throughput of unplanned wireless mesh network deployments using
Partitioning Hierarchical Clustering (PHC), since the unplanned
deployment of WMNs degrades their performance. We use a throughput
optimization approach to model the unplanned WMN deployment problem
on a PHC-based architecture, and we use bridge nodes that allow
interworking traffic between these WMNs as a solution to the
performance degradation.
Abstract: In this paper, a uniform calculus-based approach for
synthesizing monitors checking correctness properties specified by a
large variety of logics at runtime is provided, including future and past
time logics, interval logics, state machine and parameterized temporal
logics. We present a calculus mechanism to synthesize monitors from
the logical specification for the incremental analysis of execution
traces during testing and real runs. The monitor detects both good
and bad prefixes of a particular kind, namely those that are
informative for the property under investigation. We also elaborate
the procedure of realizing the calculus as monitors.
Abstract: With the exponential rise in the number of multimedia
applications available, the best-effort service provided by the Internet
today is insufficient. Researchers have been working on new
architectures like the Next Generation Network (NGN) which, by
definition, will ensure Quality of Service (QoS) in an all-IP based
network [1]. For this approach to become a reality, reservation of
bandwidth is required per application per user. WiMAX (Worldwide
Interoperability for Microwave Access) is a wireless communication
technology which has predefined levels of QoS which can be
provided to the user [4]. IPv6 has been created as the successor for
IPv4 and resolves issues like the availability of IP addresses and
QoS. This paper provides a design to use the power of WiMAX as an
NSP (Network Service Provider) for NGN using IPv6. The use of the
Traffic Class (TC) field and the Flow Label (FL) field of IPv6 has
been explained for making QoS requests and grants [6], [7]. Using
these fields, the processing time is reduced and routing is simplified.
Also, we define the functioning of the ASN gateway and the NGN
gateway (NGNG), which are edge-node interfaces in the NGN-WiMAX
design. These gateways ensure QoS management through built-in
functions and certain physical resources and networking capabilities.
Abstract: Patients with diabetes are susceptible to chronic foot
wounds which may be difficult to manage and slow to heal.
Diagnosis and treatment currently rely on the subjective judgement of
experienced professionals. An objective method of tissue assessment
is required. In this paper, a data fusion approach was taken to wound
tissue classification. The supervised Maximum Likelihood and
unsupervised Multi-Modal Expectation Maximisation algorithms
were used to classify tissues within simulated wound models by
weighting the contributions of both colour and 3D depth information.
It was found that, at low weightings, depth information could show
significant improvements in classification accuracy when compared
to classification by colour alone, particularly when using the
maximum likelihood method. However, larger weightings were
found to have an entirely negative effect on accuracy.
Abstract: For a given specific problem, designing an efficient algorithm has long been a matter of study. However, an alternative approach orthogonal to this one exists, called reduction. In general, for a given specific problem, the reduction approach studies how to convert the original problem into subproblems. This paper proposes a formal modeling language to support this reduction approach in order to build a solver quickly. We show three examples from the wide area of learning problems. The benefit is fast prototyping of algorithms for a given new problem. It is noted that our formal modeling language is not intended to provide an efficient notation for data mining applications, but to assist a designer who develops solvers in machine learning.
Abstract: In this study, a fuzzy similarity approach for Arabic
web pages classification is presented. The approach uses a fuzzy
term-category relation by manipulating membership degree for the
training data and the degree value for a test web page. Six measures
are used and compared in this study. These measures include:
Einstein, Algebraic, Hamacher, MinMax, Special case fuzzy and
Bounded Difference approaches. These measures are applied and
compared using 50 different Arabic web pages. The Einstein measure
gave the best performance among the measures tested. An analysis of
these measures and concluding remarks are presented in this study.
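The measure names correspond to standard fuzzy conjunction (t-norm) operators; assuming those are the operators intended, they can be written as follows (the paper's full term-category similarity computation is not reproduced here):

```python
def einstein(a, b):
    """Einstein product t-norm."""
    return (a * b) / (2.0 - (a + b - a * b))

def algebraic(a, b):
    """Algebraic (probabilistic) product t-norm."""
    return a * b

def hamacher(a, b):
    """Hamacher product t-norm (defined as 0 at a = b = 0)."""
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def min_max(a, b):
    """Minimum t-norm (the 'MinMax' operator)."""
    return min(a, b)

def bounded_difference(a, b):
    """Bounded difference (Lukasiewicz) t-norm."""
    return max(0.0, a + b - 1.0)
```

All five agree on the boundary (combining with a full membership of 1.0 returns the other degree unchanged) but differ in how sharply they penalize partial memberships, which is what the comparison in the study measures.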
Abstract: The objective of this research was to investigate the biodegradation of water hyacinth (Eichhornia crassipes) to produce bioethanol, using dilute-acid pretreatment (1% sulfuric acid), which results in high hemicellulose decomposition, and using the yeast Pachysolen tannophilus as the bioethanol-producing strain. A maximum ethanol yield of 1.14 g/L (yield coefficient 0.24 g g-1; productivity 0.015 g l-1 h-1) was comparable to the predicted value of 32.05 g/L obtained by Central Composite Design (CCD). The maximum ethanol yield coefficient was comparable to those obtained through enzymatic saccharification and fermentation of acid hydrolysate using a fully equipped fermentor. Although the maximum ethanol concentration was low at lab scale, improvement of the lignocellulosic ethanol yield is necessary for large-scale production.
Abstract: This paper addresses the problem of determining the current 3D location of a moving object and robustly tracking it from a sequence of camera images. The approach presented here uses a particle filter and does not perform any explicit triangulation. Only the color of the object to be tracked is required, not any precise motion model. The observation model we have developed avoids color filtering of the entire image. That, and the Monte Carlo techniques inside the particle filter, provide real-time performance. Experiments with two real cameras are presented and lessons learned are discussed. The approach scales easily to more than two cameras and new sensor cues.
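A minimal 1D sketch of the particle filter loop (predict, weight, resample); the paper's observation model works on camera color cues in 3D, whereas this toy version weights particles by a Gaussian likelihood of a scalar observation:

```python
import math
import random

def particle_filter_track(observations, n_particles=500,
                          motion_noise=1.0, obs_noise=2.0, seed=0):
    """Minimal 1D particle filter: predict with random motion, weight each
    particle by a Gaussian observation likelihood, then resample."""
    rng = random.Random(seed)
    particles = [rng.uniform(-10.0, 10.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: diffuse particles, standing in for an unknown motion model.
        particles = [p + rng.gauss(0.0, motion_noise) for p in particles]
        # Update: weight each particle by the likelihood of observation z.
        weights = [math.exp(-((p - z) ** 2) / (2.0 * obs_noise ** 2))
                   for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Resample proportionally to the weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates
```

The mean of the resampled particles serves as the state estimate at each step; no triangulation appears anywhere, mirroring the paper's claim.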
Abstract: This paper investigates the application of the Particle Swarm Optimization (PSO) technique for coordinated design of a Power System Stabilizer (PSS) and a Thyristor Controlled Series Compensator (TCSC)-based controller to enhance power system stability. The design problem of the PSS and TCSC-based controllers is formulated as a time-domain-based optimization problem. The PSO algorithm is employed to search for optimal controller parameters. By minimizing the time-domain-based objective function, which involves the deviation in the oscillatory rotor speed of the generator, the stability performance of the system is improved. To compare the capabilities of the PSS and the TCSC-based controller, both are first designed independently and then in a coordinated manner. The proposed controllers are tested on a weakly connected power system. Eigenvalue analysis and non-linear simulation results are presented to show the effectiveness of the coordinated design approach over individual design. The simulation results show that the proposed controllers are effective in damping low-frequency oscillations resulting from various small disturbances such as changes in mechanical power input and reference voltage setting.
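For readers unfamiliar with PSO, a generic sketch of the optimizer follows, minimizing a simple test function; the paper instead evaluates candidate controller parameters through a time-domain objective, and the inertia and acceleration constants here are arbitrary:

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, and the swarm tracks a global best that pulls all particles."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `f` would run a time-domain simulation and return the accumulated rotor-speed deviation for a candidate PSS/TCSC parameter vector.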
Abstract: A dissimilarity measure between the empirical
characteristic functions of the subsamples associated to the different
classes in a multivariate data set is proposed. This measure can be
efficiently computed, and it depends on all the cases of each class. It
may be used to find groups of similar classes, which could be joined
for further analysis, or it could be employed to perform an
agglomerative hierarchical cluster analysis of the set of classes. The
final tree can serve to build a family of binary classification models,
offering an alternative approach to the multi-class SVM problem. We
have tested this dendrogram-based SVM approach against the
one-against-one SVM approach over four publicly available data sets,
three of them being microarray data. Both performances have been
found equivalent, but the first solution requires a smaller number of
binary SVM models.
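An illustrative way to compute such a dissimilarity for 1D samples, assuming the empirical characteristic function is compared on a fixed frequency grid (the paper's exact measure and its efficient multivariate computation may differ):

```python
import cmath

def ecf(sample, t_points):
    """Empirical characteristic function of a 1D sample at given frequencies:
    phi(t) = (1/n) * sum_j exp(i * t * x_j)."""
    n = len(sample)
    return [sum(cmath.exp(1j * t * x) for x in sample) / n for t in t_points]

def ecf_dissimilarity(sample_a, sample_b, t_points=None):
    """Mean squared modulus of the ECF difference over a frequency grid --
    an illustrative stand-in for the paper's measure."""
    if t_points is None:
        t_points = [k / 4 for k in range(-8, 9)]  # small symmetric grid
    fa = ecf(sample_a, t_points)
    fb = ecf(sample_b, t_points)
    return sum(abs(a - b) ** 2 for a, b in zip(fa, fb)) / len(t_points)
```

Because the ECF is an average over all cases of a class, the measure depends on every sample point, matching the property stated in the abstract; such pairwise dissimilarities could then feed a standard agglomerative clustering routine to build the dendrogram.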
Abstract: This paper compares six approaches of object serialization
from qualitative and quantitative aspects. Those are object
serialization in Java, IDL, XStream, Protocol Buffers, Apache Avro,
and MessagePack. Using each approach, a common example is
serialized to a file and the size of the file is measured. The
qualitative comparison investigates whether a schema definition is
required, whether a schema compiler is required, whether
serialization is ASCII- or binary-based, and which programming
languages are supported. It is clear that there is no single best
solution: each solution performs well in the context for which it was
developed.
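The quantitative methodology can be mirrored with Python's stdlib serializers as a hypothetical analogue (the paper's actual comparison covers Java serialization, IDL, XStream, Protocol Buffers, Apache Avro, and MessagePack):

```python
import json
import pickle

def serialized_sizes(obj):
    """Serialize one common example with each format and measure its size
    in bytes, mirroring the paper's quantitative methodology."""
    return {
        "json": len(json.dumps(obj).encode("utf-8")),  # text-based, schema-free
        "pickle": len(pickle.dumps(obj)),              # binary, schema-free
    }

# A common example record, serialized once per format.
record = {"id": 1, "name": "example", "scores": [0.5, 0.75, 1.0]}
sizes = serialized_sizes(record)
```

The same harness extends to any format with a bytes-producing encoder, which is all the size comparison requires; the qualitative axes (schema definition, schema compiler, text vs. binary, language support) must be tabulated by hand.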
Abstract: Avionics software architecture has transitioned from a
federated architecture to integrated modular avionics (IMA). ARINC
653 (Avionics Application Standard Software Interface) is a software
specification for space and time partitioning in safety-critical
avionics real-time operating systems. Methods to transform abstract
avionics application logic into an executable model have been
proposed, but with little consideration of the code-generation input
and output models specific to the ARINC 653 platform or of the
inner-task synchronous dynamic interaction order. In this paper, we
propose an AADL-based model-driven design methodology to
automatically generate a Cµ executable model on the ARINC 653
platform from the ARINC 653 architecture, defined as AADL653, in
order to facilitate the development of avionics software built on an
ARINC 653 OS. This paper presents the mapping rules between AADL653
elements and Cµ language elements, defines the code-generation rules,
and designs an automatic Cµ code generator. We then use a case study
to illustrate our approach. Finally, we discuss related work and
future research directions.
Abstract: Many research works are carried out on the analysis of
traces in a digital learning environment. These studies produce large
volumes of usage tracks from the various actions performed by a
user. However, exploiting these data to compare and improve
performance raises several issues, and several recent works address
this problem. This research studies a series of questions about the
format and description of the data to be shared. Our goal is to share
thoughts on these issues by presenting our experience in the analysis
of trace-based log files, comparing several automatic classification
approaches applied to e-learning platforms. Finally, the obtained
results are discussed.
Abstract: In this paper, application of artificial neural networks
in typical disease diagnosis has been investigated. The real procedure
of medical diagnosis which usually is employed by physicians was
analyzed and converted to a machine-implementable format. Then,
after selecting some symptoms of eight different diseases, a data set
containing the information of a few hundred cases was constructed and
applied to an MLP neural network. The results of the experiments and
the advantages of using a fuzzy approach are discussed as well. The
outcomes suggest the role of effective symptom selection and the
advantages of data fuzzification in a neural-network-based automatic
medical diagnosis system.
Abstract: The focus in this work is to assess which method
allows better forecasting of malaria cases in Bujumbura (Burundi)
when taking into account association between climatic factors and
the disease. For the period 1996-2007, real monthly data on both
malaria epidemiology and climate in Bujumbura are described and
analyzed. We propose a hierarchical approach to achieve our
objective. We first fit a Generalized Additive Model to malaria cases
to obtain an accurate predictor, which is then used to predict future
observations. Various well-known forecasting methods are compared
leading to different results. Based on the in-sample mean absolute
percentage error (MAPE), the exponential smoothing state space model
with multiplicative error and multiplicative seasonality performed
best.
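The comparison criterion and the simplest member of the exponential smoothing family can be sketched as follows (generic illustrations, not the authors' GAM-based pipeline or the full multiplicative state space model):

```python
def mape(actual, forecast):
    """Mean absolute percentage error (in %), the in-sample criterion used
    to compare forecasting methods."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def ses_forecasts(series, alpha=0.3):
    """One-step-ahead forecasts from simple exponential smoothing, a
    baseline member of the exponential smoothing state space family."""
    level = series[0]
    forecasts = []
    for y in series[1:]:
        forecasts.append(level)  # forecast for y, made before observing it
        level = alpha * y + (1 - alpha) * level
    return forecasts  # aligned with series[1:]
```

The study's models additionally carry multiplicative error and seasonal components, but the evaluation loop is the same: generate one-step-ahead forecasts and score them with MAPE.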