Abstract: Internet Access Technologies (IAT) provide the means through which the Internet is accessed. The choice of a suitable access technology is becoming an increasingly important issue for ISP clients. Currently, the choice of an IAT rests on the discretion and intuition of the managers concerned and on reliance on ISPs. In this paper we propose a model, and design the accompanying algorithms, for Internet access technology selection. The proposed model introduces three ranking approaches: concurrent ranking, stepwise ranking and weighted ranking. The model ranks the IATs by distance measures computed in ascending order, while a global ranking system assigns weights to each IAT according to its position under each ranking technique, determines each IAT's total weight, and ranks the IATs by total weight in descending order. The final output is an objective ranking of the IATs in descending order.
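The global ranking step described above can be sketched as follows; the position-to-weight scheme (weight n - p for position p among n technologies) and the technology names are illustrative assumptions, not the paper's exact scheme.

```python
def global_rank(rankings):
    """rankings: list of orderings, each a list of IAT names, best first.
    An IAT at position p in an ordering of n items earns weight n - p.
    Total weights are summed across orderings; the result is sorted
    in descending order of total weight."""
    totals = {}
    for ordering in rankings:
        n = len(ordering)
        for pos, iat in enumerate(ordering):
            totals[iat] = totals.get(iat, 0) + (n - pos)
    return sorted(totals, key=lambda iat: totals[iat], reverse=True)

# Hypothetical outputs of the three ranking approaches.
concurrent = ["ADSL", "WiMAX", "Fiber"]
stepwise   = ["Fiber", "ADSL", "WiMAX"]
weighted   = ["Fiber", "WiMAX", "ADSL"]
final = global_rank([concurrent, stepwise, weighted])
```

Here Fiber earns total weight 1 + 3 + 3 = 7, ADSL 6, WiMAX 5, so the final ordering is Fiber, ADSL, WiMAX.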
Abstract: In this paper, we propose a fuzzy aggregate production planning (APP) model for the blending problem in a brass factory, i.e., the problem of computing the optimal amounts of raw materials for the total production of several types of brass in a period. The model has deterministic parameters as well as imprecise parameters that follow triangular possibility distributions. The brass casting APP model cannot always be solved by the common approaches used in the literature; therefore, a mathematical model is presented for solving this problem. In the proposed model, Lai and Hwang's fuzzy ranking concept is relaxed by using one constraint instead of three. An application of the brass casting APP model in a brass factory shows that the proposed model successfully solves the multi-blend problem in the casting process and determines the optimal raw material purchasing policies.
Abstract: The growing volume of information on the Internet creates an increasing need for new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated under various circumstances. Our approach combines conceptual, statistical and linguistic methods; this combination preserves ranking precision without losing speed. The approach exploits natural language processing techniques to extract phrases and stem words. An ontology-based conceptual method is then used to annotate documents and expand the query. For query expansion, the spread activation algorithm is improved so that the expansion can be performed along various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm so that expansion is based on a weighted combination of different conceptual relationships, and (5) allowing variable document vector dimensions. A ranking system called ORank has been developed to implement and test the proposed model; the test results are included at the end of the paper.
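As a rough illustration of the weighted spread activation idea in feature (4), the following sketch propagates activation from query concepts over a small concept graph, scaling by a per-relation-type weight and a global decay factor. The graph, relation weights, decay and threshold are all invented for the example; the paper's improved algorithm is certainly more elaborate.

```python
import collections

def spread_activation(graph, seeds, rel_weights, decay=0.5, steps=2, threshold=0.1):
    """graph: {concept: [(neighbor, relation), ...]}.
    Seed (query) concepts start with activation 1.0; each step propagates
    activation to neighbors, scaled by the relation weight and the decay.
    Concepts whose activation stays above the threshold form the expansion."""
    activation = collections.defaultdict(float)
    for c in seeds:
        activation[c] = 1.0
    frontier = dict(activation)
    for _ in range(steps):
        new = collections.defaultdict(float)
        for concept, act in frontier.items():
            for neighbor, rel in graph.get(concept, []):
                new[neighbor] += act * decay * rel_weights.get(rel, 0.0)
        for concept, act in new.items():
            activation[concept] = max(activation[concept], act)
        frontier = new
    return {c: a for c, a in activation.items() if a >= threshold}

# Toy ontology fragment with two relation types.
graph = {
    "car": [("vehicle", "is-a"), ("wheel", "part-of")],
    "vehicle": [("transport", "is-a")],
}
expanded = spread_activation(graph, ["car"], {"is-a": 0.9, "part-of": 0.6})
```

A query on "car" thus expands to related concepts ("vehicle", "wheel", "transport"), each carrying an activation usable as an expansion-term weight.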
Abstract: Supplier selection, in real situations, is affected by several qualitative and quantitative factors and is one of the most important activities of a purchasing department. Since decision makers (DMs) do not have precise, exact and complete information when evaluating suppliers against the criteria, supplier selection becomes more difficult; in this case, grey theory helps us deal with the uncertainty. Here, we apply the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to evaluate and select the best supplier using interval fuzzy numbers. In this article, we compare TOPSIS with some other approaches and demonstrate that the TOPSIS concept is well suited to ranking suppliers and selecting the right one.
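The core TOPSIS computation the abstract builds on can be sketched with crisp numbers; the paper itself works with interval grey/fuzzy numbers, which this simplification omits, and the supplier scores below are invented.

```python
import math

def topsis(matrix, weights, benefit):
    """matrix[i][j]: score of alternative i on criterion j.
    benefit[j] is True for benefit criteria, False for cost criteria.
    Returns the relative closeness of each alternative to the ideal solution."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Positive-ideal and negative-ideal solutions per criterion.
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) if benefit[j]
            else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three suppliers scored on quality (benefit) and price (cost).
scores = topsis([[7, 300], [9, 400], [8, 350]], [0.6, 0.4], [True, False])
best = scores.index(max(scores))
```

The alternative with the largest closeness score is the recommended supplier.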
Abstract: Location selection is one of the most important decision-making processes and requires several criteria to be considered in light of the mission and the strategy. This study's objective is to provide a decision support model to help a bank select the most appropriate location for a branch, based on a case study in Turkey. The bank's goal is to select the most appropriate city for opening a branch among six alternatives in south-eastern Turkey. The model in this study consists of five main criteria (Demographic, Socio-Economic, Sectoral Employment, Banking, and Trade Potential) and twenty-one sub-criteria that represent the bank's mission and strategy. Because of the multi-criteria structure of the problem and the fuzziness of the criteria comparisons, fuzzy AHP is used to weight the criteria, and the TOPSIS method is used to rank the alternatives.
Abstract: Many real-world optimization problems involve multiple conflicting objectives, and the use of evolutionary algorithms to solve such problems has attracted much attention recently. This paper investigates the application of a multi-objective optimization technique to the design of a Thyristor Controlled Series Compensator (TCSC)-based controller to enhance the performance of a power system. The design objective is to improve both rotor angle stability and the system voltage profile. A Genetic Algorithm (GA) based solution technique is applied to generate a Pareto set of globally optimal solutions to the given multi-objective optimization problem. Further, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto solution set. Simulation results are presented to show the effectiveness and robustness of the proposed approach.
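The fuzzy membership value assignment used to pick the best compromise solution is commonly realized with linear memberships over the Pareto set; the sketch below assumes that standard formulation for minimization objectives, not necessarily the paper's exact variant, and uses an invented Pareto set.

```python
def best_compromise(pareto):
    """pareto: list of objective vectors (minimization assumed).
    Each objective value gets a linear membership: 1 at that objective's
    minimum over the set, 0 at its maximum. The solution with the largest
    normalized membership sum is the best compromise."""
    n_obj = len(pareto[0])
    mins = [min(sol[k] for sol in pareto) for k in range(n_obj)]
    maxs = [max(sol[k] for sol in pareto) for k in range(n_obj)]
    sums = []
    for sol in pareto:
        mu = []
        for k in range(n_obj):
            span = maxs[k] - mins[k]
            mu.append(1.0 if span == 0 else (maxs[k] - sol[k]) / span)
        sums.append(sum(mu))
    total = sum(sums)
    norm = [s / total for s in sums]
    return norm.index(max(norm))

# Toy two-objective Pareto front; the middle point balances both objectives.
pareto = [(1.0, 9.0), (3.0, 4.0), (8.0, 1.0)]
idx = best_compromise(pareto)
```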
Abstract: Video-on-demand (VOD) systems are designed using content delivery networks (CDNs) to minimize the overall operational cost and to maximize scalability. Estimating the viewing pattern (i.e., the relationship between the number of viewings and the ranking of VOD contents) plays an important role in minimizing the total operational cost and maximizing the performance of VOD systems. In this paper, we have analyzed a large body of commercial VOD viewing data and found that the viewing rank distribution fits well with the parabolic fractal distribution. A weighted linear model fitting function is used to estimate the parameters (coefficients) of the parabolic fractal distribution. This paper presents an analytical basis for designing an optimal hierarchical VOD content distribution system in terms of its cost and performance.
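The parabolic fractal distribution is linear-plus-quadratic in log-log space, log(views) = a + b*log(rank) + c*log(rank)^2, so its coefficients can be estimated by (weighted) polynomial least squares. A minimal sketch on synthetic data, with NumPy's polyfit standing in for the paper's weighted linear model fitting; the coefficients and data are invented.

```python
import numpy as np

def fit_parabolic_fractal(ranks, views, weights=None):
    """Fit log10(views) = a + b*log10(rank) + c*log10(rank)^2 by
    (optionally weighted) least squares; returns (a, b, c)."""
    x = np.log10(ranks)
    y = np.log10(views)
    # np.polyfit returns coefficients highest power first: [c, b, a].
    c, b, a = np.polyfit(x, y, 2, w=weights)
    return a, b, c

# Synthetic viewing counts that follow an exact parabolic fractal law.
ranks = np.arange(1, 101)
views = 10 ** (5 - 0.8 * np.log10(ranks) - 0.1 * np.log10(ranks) ** 2)
a, b, c = fit_parabolic_fractal(ranks, views)
```

On noisy real data, passing per-rank weights (e.g. emphasizing the head of the distribution) is what makes the fit "weighted".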
Abstract: Automatic segmentation of skin lesions is the first step
towards the automated analysis of malignant melanoma. Although
numerous segmentation methods have been developed, few studies
have focused on determining the most effective color space for
melanoma application. This paper proposes an automatic segmentation
algorithm based on color space analysis and clustering-based histogram
thresholding, a process which is able to determine the optimal
color channel for detecting the borders in dermoscopy images. The
algorithm is tested on a set of 30 high resolution dermoscopy images.
A comprehensive evaluation of the results is provided, in which borders manually drawn by four dermatologists are compared to the automated borders detected by the proposed algorithm, using three previously used metrics (accuracy, sensitivity, and specificity) and a new metric of similarity. By performing ROC analysis and ranking the metrics,
it is demonstrated that the best results are obtained with the X and
XoYoR color channels, resulting in an accuracy of approximately
97%. The proposed method is also compared with two state-of-the-art skin lesion segmentation methods.
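As a rough stand-in for the clustering-based histogram thresholding named above, the following sketch runs a two-cluster (k = 2) assignment on a single color channel and takes the midpoint of the final centroids as the threshold. The toy pixel values are invented; the paper's actual procedure operates on histograms of high-resolution dermoscopy images.

```python
def cluster_threshold(values, iters=20):
    """Two-cluster thresholding of a 1-D channel: alternate between
    assigning values to the nearer of two centroids and updating the
    centroids, then return the midpoint of the final centroids."""
    c1, c2 = min(values), max(values)
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        if g1:
            c1 = sum(g1) / len(g1)
        if g2:
            c2 = sum(g2) / len(g2)
    return (c1 + c2) / 2

# Bimodal toy channel: dark lesion pixels around 40, skin around 200.
channel = [38, 40, 42, 41, 39, 198, 200, 202, 201, 199]
t = cluster_threshold(channel)
```

Pixels below the threshold would be labeled as lesion, those above as surrounding skin.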
Abstract: The aim of this paper is to rank the impact of Object-Oriented (OO) metrics in fault prediction modeling using Artificial Neural Networks (ANNs). Past studies on the empirical validation of object-oriented metrics as fault predictors using ANNs have focused on the predictive quality of neural networks versus standard statistical techniques. In this empirical study we turn our attention to the capability of ANNs to rank the impact of these explanatory metrics on fault proneness, since the ANN data-analysis approach offers no obvious method for ranking the impact of individual metrics. Five ANN-based techniques that rank object-oriented metrics for predicting the fault proneness of classes are studied: (i) the overall connection weights method, (ii) Garson's method, (iii) the partial derivatives method, (iv) the input perturbation method, and (v) the classical stepwise method. We develop and evaluate different prediction models based on the rankings of the metrics produced by the individual techniques. The models based on the overall connection weights and partial derivatives methods were found to be the most accurate.
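Of the five techniques, the overall connection weights method is the simplest to state: for a one-hidden-layer network, the importance of input i is the sum over hidden units of the product of the input-to-hidden and hidden-to-output weights. A minimal sketch with invented weights and hypothetical metric names, not the paper's trained network:

```python
def connection_weight_importance(w_ih, w_ho):
    """Overall connection weights method for a one-hidden-layer net.
    w_ih[i][h]: input-to-hidden weights; w_ho[h]: hidden-to-output weights.
    Importance of input i = sum over h of w_ih[i][h] * w_ho[h];
    a larger |importance| means a stronger influence on the output."""
    return [sum(w_ih[i][h] * w_ho[h] for h in range(len(w_ho)))
            for i in range(len(w_ih))]

# Two OO metrics (say, coupling and cohesion; hypothetical) into three hidden units.
w_ih = [[0.8, -0.2, 0.5],
        [0.1,  0.3, -0.4]]
w_ho = [0.6, 0.9, -0.3]
imp = connection_weight_importance(w_ih, w_ho)
ranked = sorted(range(len(imp)), key=lambda i: abs(imp[i]), reverse=True)
```

`ranked` lists the metric indices from most to least influential, which is exactly the ordering the prediction models are then built from.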
Abstract: In this paper, we present two new ranking and unranking algorithms for k-ary trees represented by x-sequences in Gray code order. These algorithms are based on a Gray code generation algorithm developed by Ahrabian et al., in which a recursive backtracking algorithm for generating the x-sequences corresponding to k-ary trees in Gray code order was presented; that generation algorithm is in turn based on Vajnovszki's algorithm for generating binary trees in Gray code order. To the best of our knowledge, no ranking and unranking algorithms have previously been given for x-sequences in this ordering. We present ranking and unranking algorithms with O(kn²) time complexity for x-sequences in this Gray code ordering.
Abstract: Nowadays, social media are important tools for web resource discovery. The performance and capabilities of web searches are vital, especially for search results from social research-paper bookmarking. This paper proposes a new ranking method, CSTRank, that combines similarity ranking with paper posted time. The paper posted time is a static ranking signal used to improve search results; in this study, it is combined with similarity ranking to produce a better ranking than methods such as similarity ranking alone (SimRank). The retrieval performance of the combined rankings is evaluated using mean NDCG values. The experiments imply that CSTRank with a 90:10 weight-score ratio can improve the efficiency of research paper search on social bookmarking websites.
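NDCG, the evaluation measure used above, can be computed as follows; this is the standard log2-discounted formulation, and the relevance grades in the example are invented.

```python
import math

def ndcg(relevances, k=None):
    """NDCG@k for one ranked result list; relevances are graded relevance
    judgments in ranked order. DCG discounts position i by log2(i + 2);
    normalizing by the ideal (sorted) DCG yields a value in [0, 1]."""
    k = k or len(relevances)
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# A ranking that places a relevance-3 paper second instead of first.
score = ndcg([2, 3, 0, 1])
```

Averaging such scores over all test queries gives the mean NDCG values the evaluation reports.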
Abstract: In this paper, we present a methodology for finding
authoritative researchers by analyzing academic Web sites. We show
a case study in which we concentrate on the Web sites of a set of Czech computer science departments. We analyze the relations between
them via hyperlinks and find the most important ones using several
common ranking algorithms. We then examine the contents of the
research papers present on these sites and determine the most
authoritative Czech authors.
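The abstract does not name the ranking algorithms used; PageRank is a typical choice for this kind of hyperlink analysis, and a minimal power-iteration sketch looks like this (the toy site graph is invented):

```python
def pagerank(links, damping=0.85, iters=50):
    """Minimal PageRank power iteration over a hyperlink graph
    links: {page: [outlinked pages]}. Returns a rank per page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += damping * share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Toy department-site graph: A and C both link to B, B links to C.
rank = pagerank({"A": ["B"], "B": ["C"], "C": ["B"]})
```

The most heavily linked-to site ends up with the highest rank, identifying the "most important" nodes in the hyperlink graph.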
Abstract: This paper presents an approach to hybridizing two or more artificial intelligence (AI) techniques, which are used to fuzzify the work-stress level ranking and categorize the rating accordingly. A hybrid of two or more techniques has been considered in this case because combining different techniques may neutralize each other's weaknesses, generating a superior hybrid solution. Recent research has shown a need for more valid and reliable tools for assessing work stress; artificial intelligence techniques have therefore been applied here to provide a solution to a psychological application. An overview of the novel and autonomous interactive model for analysing work stress that has been developed using multi-agent systems is also presented in this paper, and the establishment of the intelligent multi-agent decision analyser (IMADA), using a hybridized technique of neural networks and fuzzy logic within the multi-agent-based framework, is described.
Abstract: Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance-based has emerged as the dominant solution to the face recognition problem. Many comparative studies of the performance of appearance-based methods have already been presented in the literature, not rarely with inconclusive and often with contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance-based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance-based methods, namely principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance-based methods to various image degradations that can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.
Abstract: In the proposed method for Web page ranking, a novel theoretic model is introduced and tested on examples of order relationships among IP addresses. Ranking is induced using a convexity feature, which is learned from these examples by a self-organizing procedure. We model self-organizing learning from IP data as a semi-random convex polygon procedure in which the vertices correspond to IP addresses. Based on recent developments in our regularization theory for convex polygons and the corresponding Euclidean-distance-based classification methods, we develop an algorithmic framework for learning ranking functions grounded in computational geometric theory. We show that our algorithm is generic and present experimental results demonstrating the potential of our approach. In addition, we illustrate the generality of the approach by showing its possible use as a visualization tool for data from diverse domains, such as public administration and education.
Abstract: Developing an accurate classifier for high-dimensional microarray datasets is a challenging task due to the small sample sizes available. It is therefore important to determine a set of relevant genes that classify the data well. Traditional gene selection methods often select the top-ranked genes according to their discriminatory power; these genes are often correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method combining feature ranking with a wrapper method (a genetic algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes providing maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results than those found in the literature in terms of both classification accuracy and the number of genes selected.
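The fitness idea, maximum accuracy with the smallest gene subset, can be sketched as a weighted sum; the weighting `alpha` and the exact form are assumptions for illustration, not the paper's definition.

```python
def fitness(accuracy, n_selected, n_total, alpha=0.9):
    """Illustrative GA fitness for gene selection: reward classification
    accuracy, penalize the fraction of genes kept. alpha (assumed here
    to be 0.9) trades accuracy against subset size."""
    return alpha * accuracy + (1 - alpha) * (1 - n_selected / n_total)

# Same accuracy, fewer genes -> higher fitness, so the GA prefers
# the smaller subset when accuracy is tied.
f_small = fitness(0.95, 10, 2000)
f_large = fitness(0.95, 200, 2000)
```

In the full method, each chromosome is a bit mask over the candidate genes and `accuracy` comes from cross-validating the multiclass SVM on the selected subset.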
Abstract: The paper contains a review of the literature offering a critical analysis of the methodologies of university ranking systems. Furthermore, the initiatives supported by the European Commission (U-Map, U-Multirank) and the CHE Ranking are described, with special attention paid to tendencies in the development of ranking systems. According to the author, ranking organizations should abandon the classic form of ranking, namely a hierarchical ordering of universities from "the best" to "the worst". In the empirical part of this paper, using k-means clustering, a method of cluster analysis, the author presents classifications of the top universities from the Shanghai Jiao Tong University's (SJTU) Academic Ranking of World Universities (ARWU).
Abstract: In the last few years, the Semantic Web has gained scientific acceptance as a means of identifying relationships in knowledge bases, widely known as semantic associations. Querying complex relationships between entities is a strong requirement for many applications in analytical domains; in bioinformatics, for example, it is critical to extract exchanges between proteins. Currently, such queries typically return the paths between connected entities in the data graph. However, they do not always meet the user's need for the best association, or for a limited set of best associations, because they consider all existing paths but ignore path evaluation. In this paper, we present an approach for supporting association discovery queries. Our proposal includes (i) a query language, PmSPRQL, which provides multiparadigm query expressions for association extraction, and (ii) quantification measures that ease the process of association ranking. The originality of our proposal is demonstrated by a performance evaluation of our approach on real-world datasets.
Abstract: A novel calibration approach that aims to reduce ASM2d parameter subsets and decrease model complexity is presented. The approach does not require high computational demand, and it reduces the number of modeling parameters required for ASM calibration by employing a sensitivity-and-iteration methodology. Parameter sensitivity is the crucial factor, and the iteration methodology enables refinement of the simulated parameter values: over the iteration process, parameter values are determined in descending order of their sensitivities, and the number of iterations required equals the number of model parameters in the parameter significance ranking. The approach was applied to the ASM2d model to evaluate EBPR phosphorus removal, and it was successful. The simulation results provide the calibration parameters, which included YPAO, YPO4, YPHA, qPHA, qPP, μPAO, bPAO, bPP, bPHA, KPS, YA, μAUT, bAUT, KO2 AUT, and KNH4 AUT; these parameters corresponded well to the available experimental data.
Abstract: Although the usefulness of fuzzy databases has been pointed out in several works, they are not yet fully developed in numerous domains. A task that is mostly disregarded, and which is the topic of this paper, is the determination of suitable inequalities for fuzzy sets in fuzzy query languages. This paper examines which kinds of fuzzy inequalities exist at all. Afterwards, different procedures that appear theoretically appropriate are presented; by applying them to various examples, their strengths and weaknesses are revealed. Furthermore, an algorithm for the efficient computation of the selected fuzzy inequality is given.
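One standard way to realize a fuzzy inequality, plausibly among the kinds such a study would examine, is the possibility degree Pos(A >= B) for triangular fuzzy numbers; the formula below is the common textbook definition, not necessarily the procedure the paper selects.

```python
def pos_geq(a, b):
    """Possibility degree Pos(A >= B) for triangular fuzzy numbers
    A = (a1, a2, a3) and B = (b1, b2, b3): 1 when A's peak is at least
    B's peak, 0 when A lies entirely below B, and a linear interpolation
    based on where the right side of A crosses the left side of B otherwise."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    if a2 >= b2:
        return 1.0
    if b1 >= a3:
        return 0.0
    return (a3 - b1) / ((a3 - b1) + (b2 - a2))

# A = (1, 2, 4) partially overlaps B = (2, 3, 5).
p = pos_geq((1, 2, 4), (2, 3, 5))
```

In a fuzzy query language, such a degree can serve directly as the truth value of a `>=` predicate between two fuzzy attribute values.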