Abstract: Project selection problems in management
information systems (MIS) are often treated as multi-criteria
decision-making (MCDM) problems. These problems involve
two aspects: interdependencies among criteria and
candidate projects, and the qualitative and quantitative factors of projects.
However, most existing methods reported in the literature consider these
aspects separately, even though the two should be
incorporated simultaneously. For this reason, we propose a hybrid method using
the analytic network process (ANP) and fuzzy logic to represent
both aspects. We then propose a goal programming model to
optimize the project selection problem as interpreted by this
hybrid concept. Finally, a numerical example is presented for
verification purposes.
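The goal programming step can be illustrated with a minimal sketch. The project data, goals and deviation weights below are assumptions for illustration only (the abstract does not give them); the sketch enumerates project subsets and picks the one minimizing the weighted deviation from a budget goal and a fuzzy benefit goal:

```python
from itertools import combinations

# Hypothetical candidate projects: (name, cost, fuzzy benefit score in [0, 1]).
projects = [("P1", 40, 0.8), ("P2", 30, 0.6), ("P3", 25, 0.7), ("P4", 50, 0.9)]
BUDGET_GOAL = 80      # target total cost (assumed)
BENEFIT_GOAL = 1.5    # target total benefit score (assumed)

def deviation(subset):
    """Weighted sum of deviations from the budget and benefit goals."""
    cost = sum(p[1] for p in subset)
    benefit = sum(p[2] for p in subset)
    over_budget = max(0, cost - BUDGET_GOAL)        # penalize overspending
    under_benefit = max(0, BENEFIT_GOAL - benefit)  # penalize missed benefit
    return over_budget + 100 * under_benefit        # shortfall weighted higher

best = min((s for r in range(1, len(projects) + 1)
            for s in combinations(projects, r)), key=deviation)
print([p[0] for p in best])  # → ['P1', 'P3']
```

In practice, ANP-derived weights and fuzzy scores would replace the hard-coded values, and a mathematical programming solver would replace the brute-force enumeration for larger portfolios.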
Abstract: ANN-ARIMA, which combines the autoregressive integrated moving average (ARIMA) model and the artificial neural network (ANN) model, is a valuable tool for modeling and forecasting nonlinear time series, yet the over-fitting problem is more likely to occur in neural network models. This paper provides a hybrid methodology, called BS-RBFAR, that combines a radial basis function (RBF) neural network and an autoregressive (AR) model based on the binomial smoothing (BS) technique, which is efficient in data processing. The method is examined using the Canadian lynx data. Empirical results indicate that the over-fitting problem can be eased using an RBF neural network based on binomial smoothing (called BS-RBF), and that the hybrid BS-RBFAR model can be an effective way to improve on the forecasting accuracy achieved by BS-RBF alone.
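The binomial smoothing (BS) preprocessing step can be sketched as repeated convolution with the (1, 2, 1)/4 binomial kernel; the sample series below is an assumption for illustration:

```python
def binomial_smooth(series, passes=1):
    """One pass convolves with the (1, 2, 1)/4 binomial kernel; repeated
    passes approximate a Gaussian filter. Endpoints are kept unchanged."""
    s = list(series)
    for _ in range(passes):
        s = [s[0]] + [(s[i - 1] + 2 * s[i] + s[i + 1]) / 4
                      for i in range(1, len(s) - 1)] + [s[-1]]
    return s

noisy = [1.0, 3.0, 2.0, 5.0, 4.0, 6.0]
print(binomial_smooth(noisy))  # → [1.0, 2.25, 3.0, 4.0, 4.75, 6.0]
```

The smoothed series is monotone where the raw series oscillated, which is the property that eases over-fitting when the RBF network is trained on it.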
Abstract: Because support interference corrections are not properly
understood, engineers mostly rely on expensive dummy measurements
or CFD calculations. This paper presents a hybrid method, based on uncorrected wind tunnel measurements and fast calculation techniques,
to calculate wall interference, support interference and residual interference (when, e.g., a support member
closely approaches the wind tunnel walls) for any type of wind tunnel and support configuration. The method provides a simple formula
for the calculation of the interference gradient. This gradient is
based on the uncorrected measurements and a successive calculation
of the slopes of the interference-free aerodynamic coefficients. For the latter purpose, a new vortex-lattice routine is developed that corrects
the slopes for viscous effects. A test case of a measurement on a wing proves the value of this hybrid method, as trends and orders of
magnitude of the interference are correctly determined.
Abstract: A new hybrid coding method for compressing
animated polygonal meshes is presented. This paper assumes
a simple representation of the geometric data: a temporal
sequence of polygonal meshes, one for each discrete frame of the
animated sequence. The method utilizes delta coding and an
octree-based method. In this hybrid method, both the octree
approach and the delta coding approach are applied to each
single frame in the animation sequence in parallel, and the
approach that generates the smaller encoded file size is chosen
to encode the current frame. Given the same quality
requirement, the hybrid coding method achieves a much
higher compression ratio than the octree-only method or the
delta-only method. The hybrid approach can represent 3D
animated sequences with higher compression factors while
maintaining reasonable quality. It is easy to implement and has
a low-cost encoding process and a fast decoding process, which
make it a better choice for real-time applications.
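The per-frame choice between the two coders can be sketched as follows. For brevity the octree coder is replaced here by a raw float coder, and the 1-byte quantization step of 0.01 and the toy vertex data are assumptions:

```python
import struct

def encode_raw(frame):
    """Baseline coder: pack vertex coordinates directly as 32-bit floats."""
    return struct.pack(f"{len(frame)}f", *frame)

def encode_delta(frame, prev):
    """Delta coder: quantize differences from the previous frame to 1 byte."""
    deltas = [round((a - b) * 100) for a, b in zip(frame, prev)]
    if any(not -128 <= d <= 127 for d in deltas):
        return None  # deltas too large for 1-byte quantization
    return struct.pack(f"{len(deltas)}b", *deltas)

def encode_sequence(frames):
    """Per frame, keep whichever encoding is smaller (tag 0 = raw, 1 = delta)."""
    out, prev = [], None
    for frame in frames:
        raw = encode_raw(frame)
        delta = encode_delta(frame, prev) if prev is not None else None
        if delta is not None and len(delta) < len(raw):
            out.append((1, delta))
        else:
            out.append((0, raw))
        prev = frame
    return out

frames = [[0.0, 1.0, 2.0], [0.01, 1.02, 2.0], [5.0, 9.0, 7.0]]
coded = encode_sequence(frames)
print([tag for tag, _ in coded])  # → [0, 1, 0]
```

The first frame and the large-motion third frame fall back to the baseline coder, while the nearly static second frame takes the much smaller delta encoding — exactly the per-frame selection the abstract describes.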
Abstract: Microarray data profile gene expression on a whole-genome
scale and therefore provide a good way to study associations
between gene expression and the occurrence or progression of cancer.
More and more researchers have realized that microarray data are helpful
for predicting cancer samples. However, the dimension of the gene
expression space is much larger than the sample size, which makes this
task very difficult. Therefore, identifying the significant genes
that cause cancer has become an urgent, hot and hard research
topic. Many feature selection algorithms proposed in
the past focus on improving cancer predictive accuracy at the
expense of ignoring the correlations between features. In this
work, a novel framework (named SGS) is presented for stable gene
selection and efficient cancer prediction. The proposed framework
first performs a clustering algorithm to find gene groups in which
the genes have high correlation coefficients, then
selects the significant genes in each group with the Bayesian Lasso and
the important gene groups with the group Lasso, and finally builds a prediction
model on the shrunken gene space with an efficient classification
algorithm (such as SVM, 1NN or regression). Experimental
results on real-world data show that the proposed framework often
outperforms existing feature selection and prediction methods,
such as SAM, IG and Lasso-type prediction models.
Abstract: Names are important in many societies, even in technologically oriented ones which use, e.g., ID systems to identify individual people. Names such as surnames are the most important, as they are used in many processes such as identifying people and genealogical research. On the other hand, variation in names can be a major problem for the identification of and search for people, e.g. in web search or for security reasons. Name matching presumes a priori that a recorded name written in one alphabet reflects the phonetic identity of two samples, or some transcription error in copying a previously recorded name; to this we add the assumption that the two names refer to the same person. This paper describes name variations and gives a basic description of various name matching algorithms developed to overcome name variation and to find reasonable variants of names, which can be used to further reduce mismatches in record linkage and name search. The implementation contains algorithms for computing a range of fuzzy matches based on different types of algorithms, e.g. composite and hybrid methods, allowing us to test and measure the algorithms for accuracy. NYSIIS, LIG2 and Phonex have been shown to perform well and provide sufficient flexibility to be included in the linkage/matching process for optimising name searching.
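As an illustration of the phonetic family these algorithms belong to, here is a simplified American Soundex — not NYSIIS, LIG2 or Phonex themselves, whose rules are more elaborate (in particular, the standard h/w separator rule is omitted here):

```python
def soundex(name):
    """Simplified American Soundex: first letter plus three digit codes."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    digits = [codes.get(c, "") for c in name]  # vowels/h/w code to ""
    out = []
    for i, d in enumerate(digits):
        # Keep a code unless it repeats the immediately preceding code;
        # an intervening vowel ("" in digits) breaks the repetition.
        if d and (i == 0 or d != digits[i - 1]):
            out.append(d)
    if digits[0]:
        out = out[1:]  # the first letter is kept literally, not as a digit
    return (name[0].upper() + "".join(out) + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))  # → R163 R163
print(soundex("Smith"), soundex("Smyth"))    # → S530 S530
```

Spelling variants of the same spoken name collapse to one code, which is what lets a linkage process retrieve candidate matches despite surname variation.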
Abstract: Developing an accurate classifier for high-dimensional microarray datasets is a challenging task due to the small available sample size. Therefore, it is important to determine a set of relevant genes that classify the data well. Traditionally, gene selection methods select the top-ranked genes according to their discriminatory power. Often these genes are correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method using feature ranking and a wrapper method (a genetic algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes that provides maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results than those found in the literature in terms of both classification accuracy and the number of genes selected.
Abstract: Medical Decision Support Systems (MDSSs) are sophisticated, intelligent systems that can provide inference despite lack of information and uncertainty. In such systems, various soft computing methods are used to model the uncertainty, such as Bayesian networks, rough sets, artificial neural networks, fuzzy logic, inductive logic programming and genetic algorithms, as well as hybrid methods formed by combining several of these methods. In this study, symptom-disease relationships are presented by a framework modeled with formal concept analysis, with diseases as objects and symptoms as attributes. After a concept lattice is formed, Bayes' theorem can be used to determine the relationships between attributes and objects. A discernibility relation, which forms the basis of rough sets, can be applied to the attribute data sets in order to reduce attributes and decrease the complexity of computation.
Abstract: This paper proposes a hybrid method for eyes localization
in facial images. The novelty is in combining techniques
that utilise colour, edge and illumination cues to improve accuracy.
The method is based on the observation that eye regions have a dark
colour, a high density of edges and low illumination compared
to other parts of the face. The first step in the method is to extract
connected regions from facial images using colour, edge density and
illumination cues separately. Some of the regions are then removed
by applying rules that are based on the general geometry and shape
of eyes. The remaining connected regions obtained through these
three cues are then combined in a systematic way to enhance the
identification of the candidate regions for the eyes. The geometry
and shape based rules are then applied again to further remove the
false eye regions. The proposed method was tested using images from
the PICS facial image database. The proposed method achieves
accuracies of 93.7% and 87% for initial blob extraction and final eye
detection, respectively.
Abstract: This paper presents a hybrid approach for solving the n-queens problem by a combination of PSO and SA. PSO is a population-based heuristic method that sometimes becomes trapped in local optima; SA can be used to escape them. Although SA suffers from many iterations and long convergence times on some problems, with good tuning of initial parameters, such as the temperature and the length of the temperature stages, SA guarantees convergence. In this article we use discrete PSO (due to the nature of the n-queens problem) to reach a good local optimum, and then use SA to escape from it. The experimental results show that our hybrid method converges to a result faster than the SA method alone, especially for high-dimensional n-queens problems.
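The SA stage on its own can be sketched as follows for n-queens. The cooling schedule, temperature floor and seed are illustrative assumptions, and the discrete PSO stage that would supply the starting board is omitted — a random permutation is used instead:

```python
import math
import random

def conflicts(board):
    """Number of attacking queen pairs; board[i] is the column of the queen
    in row i, and boards are permutations, so only diagonals can clash."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if abs(board[i] - board[j]) == j - i)

def anneal(n, temp=2.0, cooling=0.999, floor=0.5, max_steps=200_000, seed=1):
    """Simulated annealing over column permutations of an n-queens board."""
    rng = random.Random(seed)
    board = list(range(n))
    rng.shuffle(board)
    cost = conflicts(board)
    for _ in range(max_steps):
        if cost == 0:
            break
        i, j = rng.sample(range(n), 2)
        board[i], board[j] = board[j], board[i]      # candidate move: swap rows
        new_cost = conflicts(board)
        # Metropolis rule: always accept improvements, sometimes accept
        # worsening moves to escape local optima.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            board[i], board[j] = board[j], board[i]  # reject: undo the swap
        temp = max(temp * cooling, floor)            # cool down, keep a floor
    return board

solution = anneal(8)
print(solution, conflicts(solution))
```

Restricting moves to swaps keeps the board a permutation, so row and column conflicts never arise and the cost function only has to count diagonal attacks.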
Abstract: The issue of unintentional islanding in PV grid
interconnection remains a challenge in grid-connected
photovoltaic (PV) systems. This paper presents an overview of
popularly used anti-islanding detection methods practically applied
in PV grid-connected systems. Anti-islanding methods generally can
be classified into four major groups: passive methods,
active methods, hybrid methods and communication-based methods.
Active methods have been the preferred detection technique over the
years due to very small non-detected zone (NDZ) in small scale
distribution generation. Passive methods are comparatively simpler
than active methods in terms of circuitry and operations. However,
they suffer from a large NDZ that significantly reduces their performance.
Communication-based methods inherit the advantages of active and
passive methods with reduced drawbacks. Hybrid methods, which
evolved from the combination of active and passive methods,
have been proven by many researchers to achieve accurate anti-islanding
detection. For each of the studied anti-islanding methods, the
operation analysis is described while the advantages and
disadvantages are compared and discussed. It is difficult to pinpoint a
generic method for a specific application, because most of the
methods discussed are governed by the nature of application and
system-dependent elements. This study concludes that the setup and
operation cost is the vital factor in anti-islanding method selection,
in order to achieve a minimal compromise between cost and system
quality.
Abstract: The development of signal compression
algorithms is making impressive progress. These algorithms are
continuously improved by new tools and aim to reduce, on average,
the number of bits necessary for the signal representation while
minimizing the reconstruction error. The following article proposes
the compression of the Arabic speech signal by a hybrid method
combining the wavelet transform and linear prediction. The
adopted approach rests, on the one hand, on the decomposition of the
original signal by means of analysis filters, followed by the
compression stage, and, on the other hand, on the application of
order-5 linear prediction to the compressed signal coefficients. The aim of
this approach is the estimation of the prediction error, which is
coded and transmitted. The decoding operation is then used to
reconstruct the original signal. Thus, an adequate choice of the
filter bank used in the transform is necessary to increase the
compression rate while inducing a distortion that is imperceptible
from an auditory point of view.
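The decomposition stage can be illustrated with the simplest possible filter bank, a one-level Haar transform; the paper's actual filter bank and the order-5 linear prediction of the coefficients are not reproduced here, and the sample signal is an assumption:

```python
def haar_step(signal):
    """One level of the Haar analysis filter bank: pairwise averages
    (approximation band) and differences (detail band). Compression
    keeps the approximation and quantizes or discards small details."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect-reconstruction synthesis: interleave a+d and a-d."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_step(x)
print(a, d)  # the approximation carries most of the energy; details are small
```

Because the detail coefficients are small, they tolerate coarse quantization (or removal) with little audible distortion — the same trade-off the abstract describes for the speech filter bank.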
Abstract: The gasoline octane number is the standard measure of
the anti-knock properties of a motor fuel in platforming processes, which
constitute one of the important unit operations in oil refineries; it can be
determined with online measurement or with CFR (Cooperative Fuel
Research) engines. Online measurement of the octane number can
be done using direct octane number analyzers, but these are too
expensive, so a feasible alternative must be found, such as ANFIS
estimators.
ANFIS is a system in which a neural network is incorporated into a fuzzy
system, using data automatically via the learning algorithms of NNs.
ANFIS constructs an input-output mapping based both on human
knowledge and on generated input-output data pairs.
In this research, 31 industrial data sets are used (21 for training
and the rest for generalization). The results show that,
according to this simulation, the hybrid training algorithm in
ANFIS gives good agreement between the industrial data and the simulated
results.
Abstract: High-quality requirements analysis is one of the most
crucial activities to ensure the success of a software project, so
requirements verification for software systems is becoming more and more
important in Requirements Engineering (RE); it is one of the most
helpful strategies for improving the quality of a software system.
Related works show that requirement elicitation and analysis can be
facilitated by ontological approaches and semantic web technologies.
In this paper, we propose a hybrid method that aims to verify
requirements with structural and formal semantics to detect
interactions. The proposed method is twofold: one is for modeling
requirements with the semantic web language OWL, to construct a
semantic context; the other is a set of interaction detection rules which
are derived from scenario-based analysis and represented with
the semantic web rule language (SWRL). SWRL-based rules work
with rule engines such as Jess to reason in the semantic context of
requirements and thus detect interactions. The benefits of the proposed
method lie in three aspects: the method (i) provides systematic steps
for modeling requirements with an ontological approach, (ii) offers
synergy of requirements elicitation and domain engineering for
knowledge sharing, and (iii) the proposed rules can systematically assist
in requirements interaction detection.
Abstract: As is known, one of the priority directions of research
in the natural sciences is the introduction into practice of applied
branches of contemporary mathematics, such as approximate and
numerical methods for solving integral equations. We face the solution of
integral equations when studying many natural phenomena, and
quadrature methods are mainly applied to solve them numerically.
Taking into account some deficiencies of quadrature methods for
finding the solution of integral equations, some scientists have suggested
multistep methods with constant coefficients. Unlike those papers,
here we consider the application of hybrid methods to the numerical
solution of the Volterra integral equation. The efficiency of the suggested
method is proved and a concrete method with accuracy order p = 4
is constructed. This method is more precise than the corresponding
known methods.
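The abstract contrasts the hybrid method with classical quadrature; that quadrature baseline can be sketched as a trapezoidal-rule solver for a second-kind Volterra equation y(t) = f(t) + ∫₀ᵗ K(t,s) y(s) ds (the order p = 4 hybrid method itself is not specified in the abstract and is not reproduced here):

```python
import math

def volterra_trapezoid(f, K, t_end, h):
    """Solve y(t) = f(t) + integral_0^t K(t, s) y(s) ds on [0, t_end]
    with the composite trapezoidal rule, marching forward in t."""
    n_steps = round(t_end / h)
    t = [i * h for i in range(n_steps + 1)]
    y = [f(t[0])]  # at t = 0 the integral vanishes
    for n in range(1, n_steps + 1):
        acc = 0.5 * K(t[n], t[0]) * y[0]
        acc += sum(K(t[n], t[j]) * y[j] for j in range(1, n))
        # y_n appears on both sides; solve the implicit linear equation for it.
        y.append((f(t[n]) + h * acc) / (1 - 0.5 * h * K(t[n], t[n])))
    return t, y

# y(t) = 1 + integral_0^t y(s) ds has the exact solution y(t) = exp(t).
t, y = volterra_trapezoid(lambda t: 1.0, lambda t, s: 1.0, 1.0, 0.01)
print(abs(y[-1] - math.e))  # O(h^2) error, on the order of 1e-5
```

The trapezoidal rule is order 2; the point of an order p = 4 method is that halving h cuts the error by 16 rather than 4, at comparable cost per step.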
Abstract: This article outlines a hybrid method, incorporating
multiple techniques into an evaluation process, in order to select
competitive suppliers in a supply chain. It enables a purchaser to do
single sourcing and multiple sourcing by calculating a combined
supplier score, which accounts for both qualitative and quantitative
factors that have an impact on supply chain performance.
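A minimal sketch of such a combined supplier score follows; the criteria, weights and supplier data are hypothetical, since the article's actual evaluation techniques are not specified in the abstract:

```python
# Hypothetical supplier data: a qualitative rating in [0, 1] (e.g. from
# expert judgement) plus quantitative metrics (unit price, defect rate).
suppliers = {
    "A": {"quality_rating": 0.9, "price": 12.0, "defect_rate": 0.02},
    "B": {"quality_rating": 0.7, "price": 9.0,  "defect_rate": 0.05},
    "C": {"quality_rating": 0.8, "price": 10.0, "defect_rate": 0.01},
}
WEIGHTS = {"quality_rating": 0.4, "price": 0.3, "defect_rate": 0.3}  # sum to 1

def combined_score(name):
    """Weighted sum of min/max-normalized criteria; cost-type criteria
    (price, defect rate) are inverted so higher is always better."""
    score = 0.0
    for crit, w in WEIGHTS.items():
        vals = [s[crit] for s in suppliers.values()]
        lo, hi = min(vals), max(vals)
        norm = (suppliers[name][crit] - lo) / (hi - lo)
        if crit in ("price", "defect_rate"):   # cost criteria: lower is better
            norm = 1.0 - norm
        score += w * norm
    return score

ranking = sorted(suppliers, key=combined_score, reverse=True)
print(ranking)  # → ['C', 'A', 'B']
```

Single sourcing would pick the top-ranked supplier; multiple sourcing could allocate order volume in proportion to these scores.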
Abstract: A power cable is widely used for power supply in
power distributing networks and power transmission lines. Due to
limitations in the production, delivery and installation of power cables,
they are produced and delivered in several separate lengths. A cable
run consists of two cable terminations and an arbitrary number of
cable joints, depending on the cable route length. Electrical stress
control is needed to prevent a dielectric breakdown at the end of the
insulation shield in both the air and cable insulation. Reliability of
cable joint depends on its materials, design, installation and operating
environment. The paper describes design and performance results for
newly modeled cable joints. The design concepts, based on numerical
calculations, must be correct. An Equivalent Electrodes
Method/Boundary Elements Method-hybrid approach that allows
electromagnetic field calculations in multilayer dielectric media,
including inhomogeneous regions, is presented.