Abstract: An empirical study of web applications that use
software frameworks is presented here. The analysis is based on two
approaches. In the first, developers who use such frameworks
assign weights, based on their experience, to parameters such as
database connection. In the second, a performance testing tool,
OpenSTA, is used to measure start time and similar metrics. From
this analysis, it is concluded that open source
software is superior to proprietary software. The motivation behind
this research is to examine ways in which a quantitative assessment
can be made of software in general and frameworks in particular.
Concepts such as metrics and architectural styles are discussed along
with previously published research.
Abstract: In this article, we propose a new method for the project
selection problem using the fuzzy AHP and TOPSIS techniques.
After reviewing four common methods of comparing investment
alternatives (net present value, rate of return, benefit-cost analysis
and payback period), we use them as criteria in the AHP tree. In this
methodology, we first calculate the weight of each criterion using the
Analytic Hierarchy Process improved with fuzzy set theory. Then, by
applying the TOPSIS algorithm, the projects are assessed. The
results are illustrated with a numerical example.
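The assessment step can be sketched in code. The following is a minimal, illustrative TOPSIS implementation, not the authors' fuzzy-AHP-weighted variant: it assumes the criterion weights have already been obtained (e.g. from the fuzzy AHP step) and ranks alternatives by closeness to the ideal solution. The function name and the example criterion ordering are hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with the TOPSIS method.

    matrix  : (alternatives x criteria) decision matrix
    weights : criterion weights summing to 1 (e.g. from fuzzy AHP)
    benefit : True for benefit criteria, False for cost criteria
    """
    # Vector-normalize each criterion column, then apply the weights.
    v = matrix / np.linalg.norm(matrix, axis=0) * weights
    # Ideal and anti-ideal points depend on the criterion type.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)              # closeness, higher is better
```

With the four criteria of the abstract, net present value, rate of return, and benefit-cost ratio would be benefit criteria and payback period a cost criterion.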
Abstract: The least mean square (LMS) algorithm is one of the
most well-known algorithms for mobile communication systems
due to its implementation simplicity. However, the main limitation
is its relatively slow convergence rate. In this paper, a booster
using the concept of Markov chains is proposed to speed up the
convergence rate of LMS algorithms. The nature of Markov
chains makes it possible to exploit the past information in the
updating process. Moreover, since the transition matrix has a
smaller variance than that of the weight itself by the central limit
theorem, the weight transition matrix converges faster than the
weight itself. Accordingly, the proposed Markov-chain based
booster can track variations in signal characteristics and, at the
same time, accelerate the convergence of LMS algorithms.
Simulation results show that the LMS algorithm converges faster
and approaches the Wiener solution more closely when the
Markov-chain based booster is applied. The mean square error is
also remarkably reduced while the convergence rate is improved.
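For reference, the baseline that the proposed booster accelerates is the standard LMS update. The sketch below implements plain LMS only (the Markov-chain booster itself is not reproduced here); the filter length, step size, and the system-identification usage are illustrative assumptions.

```python
import numpy as np

def lms(x, d, taps=8, mu=0.01):
    """Standard LMS adaptive filter: w <- w + mu * e[n] * u[n]."""
    w = np.zeros(taps)
    e = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # regressor, newest sample first
        y = w @ u                          # filter output
        e[n] = d[n] - y                    # error against desired response
        w = w + mu * e[n] * u              # stochastic-gradient update
    return w, e
```

A typical use is identifying an unknown FIR channel: feed the channel input as `x` and the channel output as the desired signal `d`, and the weights converge toward the channel taps, slowly for small step sizes, which is the limitation the booster targets.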
Abstract: In this paper, a new technique for fast painting with
different colors is presented. The idea of painting relies on applying
masks with different colors to the background. Fast painting is
achieved by applying these masks in the frequency domain instead of
spatial (time) domain. New colors can be generated automatically as a
result from the cross correlation operation. This idea was applied
successfully for faster specific data (face, object, pattern, and code)
detection using neural algorithms. Here, instead of performing cross
correlation between the input data (e.g., an image or a stream of
sequential data) and the weights of neural networks, the cross
correlation is performed between the colored masks and the
background. Furthermore, this approach is developed to reduce the
computation steps required by the painting operation. The principle of
divide and conquer strategy is applied through background
decomposition: each background is divided into small
sub-backgrounds, and each sub-background is then processed
separately by a single faster painting algorithm. Moreover, the
fastest painting is achieved by using parallel processing techniques
to paint the resulting sub-backgrounds with the same number of
faster painting algorithms. In contrast to using only a faster
painting algorithm, the
speed up ratio is increased with the size of the background when using
faster painting algorithm and background decomposition. Simulation
results show that painting in the frequency domain is faster than that in
the spatial domain.
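The speed advantage of working in the frequency domain comes from the cross-correlation theorem: correlation in the spatial domain becomes pointwise multiplication in the frequency domain. The 1-D sketch below is an illustrative assumption (the paper deals with image backgrounds and colored masks); it contrasts the O(N log N) FFT route with direct O(N·M) correlation.

```python
import numpy as np

def xcorr_freq(background, mask):
    """Circular cross-correlation of a 1-D background with a mask via FFT.

    Cross-correlation theorem: corr = IFFT(FFT(bg) * conj(FFT(mask))),
    costing O(N log N) instead of O(N * M) for the sliding dot product.
    """
    n = len(background)
    return np.real(np.fft.ifft(np.fft.fft(background) *
                               np.conj(np.fft.fft(mask, n))))

def xcorr_direct(background, mask):
    """Direct circular cross-correlation, for comparison."""
    n = len(background)
    m = np.zeros(n)
    m[:len(mask)] = mask                  # zero-pad mask to background size
    return np.array([np.dot(background, np.roll(m, s)) for s in range(n)])
```

Both routines return the same correlation sequence; the FFT version is the one whose cost grows only logarithmically faster than the background size, which is the source of the reported speed-up ratios.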
Abstract: The aim of this work was to investigate the potential of soil microorganisms and the burhead plant, as well as the combination of soil microorganisms and plants, to remediate monoethylene glycol (MEG), diethylene glycol (DEG), and triethylene glycol (TEG) in synthetic wastewater. The results showed that a system containing both the burhead plant and soil microorganisms had the highest efficiency in EG removal: around 100% of MEG and DEG and 85% of TEG were removed within 15 days of the experiments. The burhead plant had higher removal efficiency than soil microorganisms for MEG and DEG but the same efficiency for TEG in the study systems. The removal rate of EGs in the study system was related to the molecular weight of the compounds: MEG, the smallest glycol, was removed faster than DEG and TEG by both the burhead plant and soil microorganisms.
Abstract: In the last few years, three multivariate spectral
analysis techniques, namely Principal Component Analysis (PCA),
Independent Component Analysis (ICA) and Non-negative Matrix
Factorization (NMF) have emerged as effective tools for oscillation
detection and isolation. While the first method is used in determining
the number of oscillatory sources, the latter two methods
are used to identify source signatures by formulating the detection
problem as a source identification problem in the spectral domain.
In this paper, we present a critical drawback of the underlying linear
(mixing) model which strongly limits the ability of the associated
source separation methods to determine the number of sources
and/or identify the physical source signatures. It is shown that the
assumed mixing model is only valid if each unit of the process gives
equal weighting (all-pass filter) to all oscillatory components in its
inputs. This is in contrast to the fact that each unit, in general, acts
as a filter with non-uniform frequency response. Thus, the model
can only facilitate correct identification of a source with a single
frequency component, which is again unrealistic. To overcome
this deficiency, an iterative post-processing algorithm that correctly
identifies the physical source(s) is developed. An additional issue
with the existing methods is that they lack a procedure to pre-screen
non-oscillatory/noisy measurements which obscure the identification
of oscillatory sources. In this regard, a pre-screening procedure
is prescribed based on the notion of sparseness index to eliminate
the noisy and non-oscillatory measurements from the data set used
for analysis.
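As an illustration of the spectral-domain formulation discussed above, the sketch below estimates the number of oscillatory sources by applying PCA to the power spectra of the measurements. This is only a schematic version of the standard PCA step; the energy threshold and the synthetic two-source example are assumptions, and the paper's iterative post-processing and sparseness-index pre-screening are not reproduced.

```python
import numpy as np

def count_oscillatory_sources(signals, energy=0.95):
    """Estimate the number of oscillatory sources by PCA of power spectra.

    signals : (n_measurements, n_samples) array of process measurements.
    Returns the number of principal components needed to capture the
    given fraction of the spectral variance.
    """
    # Work in the spectral domain, as in the source-identification setup.
    spectra = np.abs(np.fft.rfft(signals, axis=1)) ** 2
    spectra = spectra - spectra.mean(axis=0)   # center across measurements
    s = np.linalg.svd(spectra, compute_uv=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(frac, energy)) + 1
```

When two sinusoidal sources at distinct frequencies are linearly mixed into several measurements, the power spectra span a two-dimensional subspace and the count comes out as two; the drawback highlighted in the abstract is precisely that real process units filter each frequency differently, breaking this idealized mixing picture.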
Abstract: Knowledge of patterns of genetic diversity enhances
the efficiency of germplasm conservation and improvement. In this
study 96 Iranian landraces of Triticum turgidum originating from
different geographical areas of Iran, along with 18 durum cultivars
from ten countries were evaluated for variation in morphological and
high molecular weight glutenin subunit (HMW-GS) composition.
The first two principal components clearly separated the Iranian
landraces from cultivars. Three alleles were present at the Glu-A1
locus and 11 alleles at Glu-B1. In both cultivars and landraces of
durum wheat, the null allele (Glu-A1c) was observed more
frequently than the Glu-A1a and Glu-A1b alleles. Two alleles,
namely Glu-B1a (subunit 7) and Glu-B1e (subunit 20), were the
most frequent alleles at the Glu-B1 locus. The results showed that
the evaluated Iranian landraces formed an interesting source of
favourable glutenin subunits that might be very desirable in breeding
activities for improving pasta-making quality.
Abstract: To help the client to select a competent agent
construction enterprise (ACE), this study aims to investigate the
selection standards by using the Fuzzy Analytic Hierarchy Process
(FAHP) and build an evaluation mathematical model with Grey
Relational Analysis (GRA). Based on the literature review, four
ordered levels are established within the model, taking into
consideration the various agent construction models used in practice.
Then, the process of applying FAHP and GRA is discussed in detail.
Finally, through a case study, this paper illustrates how to apply
these methods to obtain the weight of each standard and the final
assessment result.
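The GRA evaluation step can be sketched as follows. This is a generic larger-is-better Grey Relational Analysis scoring, not the paper's specific four-level model; the criterion weights are assumed to come from the FAHP step, the distinguishing coefficient is set to the conventional 0.5, and constant criterion columns are assumed not to occur (they would cause a division by zero in the normalization).

```python
import numpy as np

def grey_relational_grade(matrix, weights, rho=0.5):
    """Grey Relational Analysis scoring of candidate enterprises.

    matrix  : (candidates x criteria), larger-is-better scores
    weights : criterion weights (e.g. from FAHP), summing to 1
    rho     : distinguishing coefficient, conventionally 0.5
    """
    # Normalize each criterion to [0, 1]; columns must not be constant.
    x = (matrix - matrix.min(axis=0)) / (matrix.max(axis=0) - matrix.min(axis=0))
    ref = x.max(axis=0)                      # ideal reference sequence
    delta = np.abs(ref - x)                  # deviation from the reference
    # Grey relational coefficients, then the weighted grade per candidate.
    coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coef @ weights
```

A candidate that dominates every criterion attains a grade of exactly 1, and candidates are ranked by descending grade.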
Abstract: This paper deals with efficient quadrature formulas for functions that are observed only at fixed sampling points. The approach that we develop is derived from efficient continuous quadrature formulas, such as Gauss-Legendre or Clenshaw-Curtis quadrature. We select nodes at the sampling positions that are as close as possible to those of the associated classical quadrature, and we update the quadrature weights accordingly. We supply the theoretical quadrature error formula for this new approach and show through examples its potential gain.
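The node-selection and weight-update idea can be sketched as follows, under illustrative assumptions: Gauss-Legendre serves as the reference rule, the nearest available sampling point is chosen for each classical node, and the weights are recomputed by moment matching so that polynomials up to degree n - 1 are integrated exactly. This is a sketch of the general construction, not the paper's error analysis.

```python
import numpy as np

def quadrature_on_samples(samples, n, a=-1.0, b=1.0):
    """Quadrature rule restricted to fixed sampling points.

    Chooses the sampling positions closest to the n Gauss-Legendre
    nodes on [a, b], then recomputes the weights by moment matching so
    that polynomials up to degree n - 1 are integrated exactly.
    """
    gl_nodes, _ = np.polynomial.legendre.leggauss(n)
    gl_nodes = 0.5 * (b - a) * gl_nodes + 0.5 * (a + b)  # map to [a, b]
    # Nearest available sample for each classical node (assumed distinct).
    idx = np.unique([np.argmin(np.abs(samples - t)) for t in gl_nodes])
    nodes = samples[idx]
    # Moment matching: sum_j w_j * nodes_j**k = integral of x**k on [a, b].
    k = np.arange(len(nodes))
    vander = nodes[None, :] ** k[:, None]
    moments = (b ** (k + 1) - a ** (k + 1)) / (k + 1)
    weights = np.linalg.solve(vander, moments)
    return nodes, weights
```

On a dense uniform grid the selected nodes sit close to the classical ones, so the rule retains polynomial exactness by construction while accuracy on smooth functions stays near that of the underlying Gauss-Legendre rule.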
Abstract: The existence of maximal durations drastically modifies performance evaluation in Discrete Event Systems (DES). The same particularity may be found in systems where the associated constraints do not concern time. For example, weight measurements in the chemical industry are used to control the quantity of consumed raw materials. This parameter also plays a fundamental part in product quality, as the correct transformation process is based upon a given percentage of each essence. Weight regulation therefore increases the global productivity of the system by decreasing the quantity of rejected products. In this paper we present an approach that combines two theories with different characteristics, fuzzy systems and Petri nets, to describe the behaviour. An industrial application to a tobacco manufacturing plant, where the critical parameter is the weight, is presented as an illustration.
Abstract: Selection of a project among a set of possible
alternatives is a difficult task that the decision maker (DM) has to
face. In this paper, using a fuzzy TOPSIS technique, we propose a
new method for the project selection problem. After reviewing four
common methods of comparing investment alternatives (net present
value, rate of return, benefit-cost analysis and payback period), we
use them as criteria in the TOPSIS technique. First, we calculate the
weight of each criterion by pairwise comparison, and then we apply
the improved TOPSIS assessment for project selection.