Abstract: This study performs a comparative analysis of the 21 Greek universities in terms of the public funding awarded to cover their operating expenditure. First, it introduces a DEA/MCDM model that allocates the funds among four expenditure factors in the most favorable way for each university. Then, it presents a common, consensual assessment model to reallocate the amounts while keeping the total public budget at the same level. The analysis shows that a number of universities cannot justify their public funding in terms of their size and operational workload. For these universities, an appropriate reduction of their public funding is estimated as a future target. Due to the lack of precise data for a number of expenditure criteria, the analysis is based on a mixed crisp-ordinal data set.
Abstract: Text categorization is the problem of classifying text
documents into a set of predefined classes. In this paper, we
investigated three approaches to building a meta-classifier in order to
increase the classification accuracy. The basic idea is to learn a
meta-classifier that optimally selects the best component classifier for each
data point. The experimental results show that combining classifiers
can significantly improve the accuracy of classification and that our
meta-classification strategy gives better results than each individual
classifier. For 7,083 Reuters text documents, we obtained
classification accuracies of up to 92.04%.
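The per-point selection idea described above can be sketched as follows; the toy component classifiers, the 1-nearest-neighbor meta-selector and the sample data are hypothetical illustrations, not the paper's actual classifiers or dataset.

```python
# Hypothetical sketch: a meta-classifier that learns, per data point, which
# component classifier to trust, then routes new points accordingly.

def clf_a(x):          # component classifier A: thresholds feature 0
    return 1 if x[0] > 0.5 else 0

def clf_b(x):          # component classifier B: thresholds feature 1
    return 1 if x[1] > 0.5 else 0

def train_meta(train):
    """For each training point, record which component classifier is correct."""
    memory = []
    for x, y in train:
        best = 0 if clf_a(x) == y else 1
        memory.append((x, best))
    return memory

def predict(memory, x):
    """Pick the component preferred by the nearest training point (1-NN)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, best = min(memory, key=lambda m: dist(m[0], x))
    return clf_a(x) if best == 0 else clf_b(x)

train = [((0.9, 0.1), 1), ((0.1, 0.9), 1), ((0.2, 0.2), 0)]
memory = train_meta(train)
print(predict(memory, (0.8, 0.2)))  # → 1 (nearest the first point, so clf_a is used)
```

The meta-level thus acts as a selector rather than a voter: each test point is handled by whichever base classifier performed best in its neighborhood.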
Abstract: This study reveals that anti-immigrant policies in
Europe result from a process of securitization, and that, within this
process, radical right parties have been formulating discourses and
approaches through a construction process by using some common
security themes. These security themes can be classified as national
security, economic security, cultural security and internal security.
The frequency with which radical right parties use these themes may
vary according to the specific historical, social and cultural
characteristics of a particular country.
Abstract: This paper presents a simplified version of Data Envelopment Analysis (DEA) - a conventional approach to evaluating the performance and ranking of competitive objects characterized by two groups of factors acting in opposite directions: inputs and outputs. DEA with a Perfect Object (DEA PO) augments the group of actual objects with a virtual Perfect Object - one having the greatest outputs and smallest inputs. This allows an explicit analytical solution to be obtained and a step to be made toward absolute efficiency. This paper develops the approach further and introduces a DEA model with Partially Perfect Objects (DEA PPO), which consecutively eliminates the smallest relative inputs or greatest relative outputs and applies DEA PO to the reduced collections of indicators. The partial efficiency scores are then combined into a weighted efficiency score. The computational scheme remains as simple as that of DEA PO, but DEA PPO has the advantage of taking into account all of the inputs and outputs of each actual object. Firm evaluation is considered as an example.
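As an illustration of the Perfect Object idea, the following sketch constructs the virtual object with the greatest outputs and smallest inputs and scores actual objects against it; the equal-weight normalization and the sample firms are assumptions for illustration, not the paper's explicit analytical DEA PO solution.

```python
# Illustrative sketch only: scoring objects against a virtual Perfect Object
# (greatest outputs, smallest inputs). The aggregation below is an assumed
# simplification, not the paper's DEA PO formula.

def perfect_object(objects):
    """Virtual object with every input minimal and every output maximal."""
    n_in, n_out = len(objects[0]["in"]), len(objects[0]["out"])
    return {
        "in": [min(o["in"][k] for o in objects) for k in range(n_in)],
        "out": [max(o["out"][k] for o in objects) for k in range(n_out)],
    }

def efficiency(obj, perfect):
    """Average output ratio times average input ratio; the Perfect Object scores 1."""
    out_ratio = sum(o / p for o, p in zip(obj["out"], perfect["out"])) / len(obj["out"])
    in_ratio = sum(p / x for x, p in zip(obj["in"], perfect["in"])) / len(obj["in"])
    return out_ratio * in_ratio

# Hypothetical firms with two inputs (e.g. staff, capital) and two outputs.
firms = [
    {"in": [10, 4], "out": [100, 20]},
    {"in": [8, 5], "out": [80, 30]},
]
po = perfect_object(firms)
scores = [efficiency(f, po) for f in firms]
print([round(s, 2) for s in scores])  # → [0.75, 0.81]
```

Because every ratio is taken against the Perfect Object, each score lies in (0, 1] and only the Perfect Object itself reaches 1.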
Abstract: This paper presents a new steganography approach suitable for Arabic texts, which can be classified among steganography feature-coding methods. The approach hides secret information bits within the letters, benefiting from their inherited points. To mark the specific letters holding secret bits, the scheme considers two features: the existence of points in the letters and the redundant Arabic extension character. We use pointed letters with extension to hold the secret bit 'one' and un-pointed letters with extension to hold 'zero'. This steganography technique is also attractive for other languages with scripts similar to Arabic, such as Persian and Urdu.
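The encoding rule described above can be sketched as follows; the partial letter sets and the simplified handling of Arabic joining rules are assumptions for illustration only.

```python
# Illustrative sketch of the rule in the abstract: a kashida (Arabic tatweel,
# U+0640) after a pointed letter hides bit '1', after an un-pointed letter
# bit '0'. The letter sets below are partial examples, and real Arabic
# joining/connectivity rules are ignored for simplicity.

KASHIDA = "\u0640"
POINTED = set("بتثجخذزشضظغفقنية")   # letters carrying dots (partial list)
UNPOINTED = set("احدرسصطعكلمهو")    # letters without dots (partial list)

def embed(cover, bits):
    """Insert a kashida after letters whose pointedness matches the next bit."""
    out, i = [], 0
    for ch in cover:
        out.append(ch)
        if i < len(bits):
            wants_pointed = bits[i] == "1"
            if (wants_pointed and ch in POINTED) or (not wants_pointed and ch in UNPOINTED):
                out.append(KASHIDA)
                i += 1
    return "".join(out)

def extract(stego):
    """Recover bits by checking which letter each kashida follows."""
    bits = []
    for prev, ch in zip(stego, stego[1:]):
        if ch == KASHIDA:
            bits.append("1" if prev in POINTED else "0")
    return "".join(bits)

stego = embed("بسم الله", "10")
print(extract(stego))  # → 10
```

Since the kashida is purely cosmetic in Arabic typography, the stego text reads the same as the cover text; capacity depends on how many suitably pointed or un-pointed letters the cover contains.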
Abstract: Electrospinning is a broadly used technology to obtain
polymeric nanofibers ranging from several micrometers down to
several hundred nanometers for a wide range of applications. It offers
unique capabilities to produce nanofibers with controllable porous
structure. With smaller pores and higher surface area than regular
fibers, electrospun fibers have been successfully applied in various
fields such as nanocatalysis, tissue engineering scaffolds, protective
clothing, filtration, biomedical and pharmaceutical applications, optical
electronics, healthcare, biotechnology, defense and security, and environmental
engineering. In this study, polyurethane nanofibers were obtained
under different electrospinning parameters. Fiber morphology and
diameter distribution were investigated as functions of the
process parameters.
Abstract: This paper reports a case study on how a conceptual
and analytical thinking approach was used in the Art and Design Department at Multimedia University (Malaysia) to address the
issues of one nation and its impact on society through artworks. The art project was designed for students to increase their know-how
and develop creative thinking in design and communication. Goals of the design project were: (1) to develop creative thinking in design
and communication, (2) to increase student understanding of the
process of problem solving for design work, and (3) to use design
elements and principles to generate interest, attention and emotional responses. An exhibition entitled "One Nation" was showcased to
local and international viewers consisting of the general public, professionals, academics, artists and students. Findings indicate that the project supported several visual art standards and
generated awareness in society. This project may be of interest to
current and future art educators and others interested in the potential
of utilizing global issues as content for art, community and environmental studies for the purpose of art education.
Abstract: Software Development Risks Identification (SDRI),
using Fault Tree Analysis (FTA), is a proposed technique to identify
not only the risk factors but also the causes of the appearance of the
risk factors in the software development life cycle. The method is based
on analyzing the probable causes of software development failures
before they become problems and adversely affect a project. It uses
fault tree analysis to determine the probability of particular
system-level failures, as defined by the Taxonomy for Sources of
Software Development Risk, by modeling how an undesired state of a
system arises when Boolean logic combines a series of lower-level
events. The major purpose of this paper is to use the probabilistic
calculations of the fault tree analysis approach to determine all
possible causes that lead to software development risk occurrence.
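The gate-level probability calculations referred to above can be sketched as follows, assuming independent basic events; the example tree and its probabilities are hypothetical, not taken from the paper.

```python
# Illustrative sketch: fault tree analysis combines basic-event probabilities
# through Boolean gates. Assuming independent events, an AND gate multiplies
# probabilities and an OR gate uses 1 - prod(1 - p).

def p_and(probs):
    """Probability that all input events occur (AND gate)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(probs):
    """Probability that at least one input event occurs (OR gate)."""
    none = 1.0
    for p in probs:
        none *= 1.0 - p
    return 1.0 - none

# Hypothetical tree: the top-level risk occurs if (unclear requirements AND
# weak design review) OR key-staff turnover.
top_event = p_or([p_and([0.3, 0.5]), 0.1])
print(round(top_event, 3))  # → 0.235
```

Traversing such a tree bottom-up yields the top-event probability from the basic-event probabilities, which is the quantitative side of the SDRI technique described above.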
Abstract: Software estimation accuracy is among the greatest
challenges for software developers. This study aimed at building and
evaluating a neuro-fuzzy model to estimate software projects
development time. Forty-one modules developed from ten
programs were used as the dataset. Our proposed approach is compared
with fuzzy logic and neural network models, and results show that the
value of MMRE (Mean of Magnitude of Relative Error) applying
neuro-fuzzy was substantially lower than MMRE applying fuzzy
logic and neural network.
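The MMRE measure named above can be computed as follows; the sample effort values are invented for illustration.

```python
# MMRE (Mean Magnitude of Relative Error), as named in the abstract: the
# mean of |actual - estimated| / actual over all projects. Lower is better.

def mmre(actual, estimated):
    """Mean magnitude of relative error over paired actual/estimated values."""
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

actual = [10.0, 20.0, 40.0]      # hypothetical development times
estimated = [12.0, 18.0, 44.0]   # hypothetical model estimates
print(round(mmre(actual, estimated), 4))  # → 0.1333
```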
Abstract: This paper proposes an approach using a genetic algorithm to compute region-based image similarity. An image is represented by a set of segmented regions reflecting its color and texture properties, and is thus associated with a family of image features corresponding to those regions. The resemblance of two images is then defined as the overall similarity between two families of image features, quantified by a similarity measure that integrates the properties of all the regions in the images. A genetic algorithm is applied to determine the most plausible matching. The performance of the proposed method is illustrated using examples from a database of general-purpose images, and is shown to produce good results.
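The genetic-algorithm matching described above can be sketched as follows; the inverse-distance fitness, swap mutation and truncation selection are simplified assumptions, not the paper's exact operators or features.

```python
# Illustrative sketch only: a tiny genetic algorithm searching for the region
# matching that maximizes overall similarity between two images, each given
# as a list of region feature vectors.
import random

def region_sim(f, g):
    """Similarity of two region feature vectors (inverse squared distance)."""
    return 1.0 / (1.0 + sum((a - b) ** 2 for a, b in zip(f, g)))

def fitness(perm, img1, img2):
    """Overall similarity: region i of img1 matched to region perm[i] of img2."""
    return sum(region_sim(img1[i], img2[j]) for i, j in enumerate(perm))

def evolve(img1, img2, pop_size=30, gens=100, seed=0):
    """Evolve permutations with truncation selection and swap mutation."""
    rng = random.Random(seed)
    n = len(img1)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: -fitness(p, img1, img2))
        survivors = pop[: pop_size // 2]          # elitist truncation
        children = []
        for p in survivors:
            child = p[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, img1, img2))

img1 = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]   # hypothetical region features
img2 = [(2.0, 2.0), (0.0, 0.0), (1.0, 1.0)]   # same regions, shuffled
best = evolve(img1, img2)
```

For these toy inputs the best matching simply undoes the shuffle; with real color/texture features the same search handles inexact, many-region correspondences.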
Abstract: Reconfigurable optical add/drop multiplexers
(ROADMs) can be classified into three categories based on their
underlying switching technologies. Category I consists of a single
large optical switch; category II is composed of a number of small
optical switches aligned in parallel; and category III has a single
optical switch and only one wavelength being added/dropped. In this
paper, to evaluate the wavelength-routing capability of category-II
ROADMs in dynamic optical networks, dynamic traffic models
are designed based on the Bernoulli and Poisson distributions for
smooth and regular types of traffic. Through analytical and
simulation results, the routing power of category-II ROADM
networks is determined for the two traffic models.
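The two traffic models named above can be sketched as follows; the arrival rates and the slot-based formulation are assumptions for illustration, not the paper's parameters.

```python
# Illustrative sketch only: generating per-slot connection requests under
# Bernoulli and Poisson traffic models. A Bernoulli source emits at most one
# request per slot (smooth traffic), while a Poisson source allows bursts.
import math
import random

def bernoulli_arrivals(p, slots, seed=0):
    """One arrival per slot with probability p, else none."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(slots)]

def poisson_arrivals(lam, slots, seed=0):
    """Poisson-distributed arrival counts per slot (Knuth's algorithm)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(slots):
        threshold = math.exp(-lam)
        k, prod = 0, rng.random()
        while prod > threshold:
            k += 1
            prod *= rng.random()
        counts.append(k)
    return counts

smooth = bernoulli_arrivals(0.5, 200)
bursty = poisson_arrivals(2.0, 500)
```

Feeding such arrival streams into a ROADM network simulator is one way to compare blocking behavior under smooth versus bursty demand.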
Abstract: Limited competition has been a serious concern in infrastructure procurement. Importantly, however, there are normally a number of potential bidders initially showing interest in proposed projects. This paper focuses on the question of why these initially interested bidders fade out. An empirical problem is that no bids of fading-out firms are observable: they may decide not to enter the process at the beginning of the tendering, or may be technically disqualified at any point in the selection process. The paper applies a double selection model to procurement data from road development projects in developing countries and shows that competition ends up restricted, because bidders are self-selective and auctioneers also tend to limit participation depending on the size of contracts. Limited competition would likely lead to high infrastructure procurement costs, threatening fiscal sustainability and economic growth.
Abstract: Contamination of heavy metals in tin tailings has
caused an interest in the scientific approach of their remediation. One
of the approaches is through phytoremediation, which is using tree
species to extract the heavy metals from the contaminated soils. Tin
tailings comprise slime and sand tailings. This paper reports only
on the findings for four timber species, namely Acacia mangium,
Hopea odorata, Intsia palembanica and Swietenia macrophylla, regarding
the removal of cadmium (Cd) and lead (Pb) from the slime tailings.
The methods employed for sampling and soil analysis are established
methods. Six trees of each species were randomly selected from a
0.25 ha plot for extraction and determination of their heavy metal
content. The soil samples were systematically collected according to a
5 x 5 m grid in each plot. Results showed that the concentrations of
heavy metals in soils and trees varied according to species. Higher
concentrations of heavy metals were found in the stems than in the
primary roots of all species. A. mangium accumulated the highest
total amount of Pb on a per-hectare basis.
Abstract: Research on the effectiveness of environmental
assessment (EA) is a milestone effort to evaluate the state of the field,
involving many contributors from numerous countries over more
than two decades. In the 1960s, there was a surge of interest among
modern industrialized countries over the unexpected adverse effects
of technical invention. This interest led to the adoption of approaches
for assessing and predicting the impacts of technology and
development on social and economic conditions, public health and
safety, stability and the environment. These approaches consist of
risk assessment, technology assessment, environmental impact
assessment and cost-benefit analysis. In this research contribution,
the authors describe the research status of environmental assessment
in cumulative environmental systems. This article discusses methods
for cumulative effect assessment (CEA).
Abstract: Rapid economic development and population growth
in Malaysia have accelerated the generation of solid waste. This issue
puts pressure on Malaysia to implement effective management of
municipal solid waste (MSW), due to the increased cost of landfill.
This paper discusses optimal planning of waste-to-energy (WTE)
using a combinatorial simulation and optimization model based on a
mixed integer linear programming (MILP) approach. The proposed
multi-period model is tested in Iskandar Malaysia (IM) as a case study
for a period of 12 years (2011-2025) to illustrate the economic
potential and tradeoffs involved. In this paper, three
scenarios are used to demonstrate the applicability of the
model: (1) the incineration scenario, (2) the landfill scenario
and (3) the optimal scenario. The model revealed that the minimum
cost of electricity generation from 9,995,855 tonnes of MSW is
estimated at USD 387 million, with a total electricity generation of
50 MW/yr in the optimal scenario.
Abstract: Although many methods for ranking fuzzy numbers
have been discussed broadly so far, most of them contain shortcomings,
such as the need for complicated calculations, inconsistency
with human intuition and lack of discrimination. The motivation of
this study is to develop a model for ranking fuzzy numbers based
on the lexicographical ordering which provides decision-makers with
a simple and efficient algorithm to generate an ordering founded on
a precedence. The main emphasis here is put on the ease of use
and reliability. The effectiveness of the proposed method is finally
demonstrated through a comprehensive comparison of different
ranking methods with the present one.
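A lexicographical ranking of the kind described above can be sketched for triangular fuzzy numbers as follows; the particular precedence used here (centroid, then mode, then spread) is an assumed example, not the paper's actual ordering.

```python
# Illustrative sketch only: ranking triangular fuzzy numbers (a, b, c) by a
# lexicographical key. Ties on the first criterion fall through to the next,
# which is what makes the ordering simple yet discriminating.

def lex_key(tfn):
    """Assumed precedence: centroid, then mode, then (negated) spread."""
    a, b, c = tfn
    centroid = (a + b + c) / 3.0
    spread = c - a
    return (centroid, b, -spread)   # larger spread ranks lower on ties

def rank(fuzzy_numbers):
    """Return the fuzzy numbers sorted from lowest to highest."""
    return sorted(fuzzy_numbers, key=lex_key)

nums = [(1, 2, 3), (0, 2, 4), (2, 3, 4)]
print(rank(nums))  # → [(0, 2, 4), (1, 2, 3), (2, 3, 4)]
```

Here (1, 2, 3) and (0, 2, 4) share the same centroid and mode, so the wider number is ranked lower by the spread tie-breaker; such cascading criteria avoid the indiscrimination problem the abstract mentions.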
Abstract: This paper examines the problem of designing a robust H∞ state-feedback controller for a class of nonlinear two-time-scale systems with Markovian jumps described by a Takagi-Sugeno (TS) fuzzy model. Based on a linear matrix inequality (LMI) approach, LMI-based sufficient conditions for the uncertain Markovian jump nonlinear two-time-scale systems to have an H∞ performance are derived. The proposed approach does not involve the separation of states into slow and fast ones, and it can be applied not only to standard but also to nonstandard nonlinear two-time-scale systems. A numerical example is provided to illustrate the design developed in this paper.
Abstract: In this article we explore the application of a formal
proof system to verification problems in cryptography. Cryptographic
properties concerning correctness or security of some cryptographic
algorithms are of great interest. Besides some basic lemmata, we
explore an implementation of a complex function that is used in
cryptography. More precisely, we describe formal properties of this
implementation that we prove by computer. We describe formalized
probability distributions (σ-algebras, probability spaces and conditional
probabilities). These are given in the formal language of the
formal proof system Isabelle/HOL. Moreover, we computer-prove
Bayes' formula. In addition, we describe an application of the presented
formalized probability distributions to cryptography. Furthermore,
this article shows that computer proofs of complex cryptographic
functions are possible by presenting an implementation of the Miller-
Rabin primality test that admits formal verification. Our achievements
are a step towards computer verification of cryptographic primitives.
They describe a basis for computer verification in cryptography.
Computer verification can be applied to further problems in cryptographic
research, if the corresponding basic mathematical knowledge
is available in a database.
Abstract: In this paper, we propose a robust scheme for face alignment and recognition under various influences. For face representation, illumination and variable expressions are important factors, as they affect the accuracy of facial localization and face recognition. To overcome these problems, we propose a robust approach consisting of two phases. The first phase preprocesses face images by means of the proposed illumination normalization method, and the facial features can be located more efficiently and quickly based on the proposed image blending. In addition, based on template matching, we further improve the active shape model (called IASM) to locate the face shape more precisely, which raises the recognition rate in the next phase. The second phase performs feature extraction using principal component analysis and face recognition using support vector machine classifiers. The results show that the proposed method achieves good facial localization and face recognition under varied illumination and local distortion.
Abstract: In this study, the locations and areas of commercial
accumulations were detected by using digital yellow page data. An
original buffering method that can accurately create polygons of
commercial accumulations is proposed in this paper. By using this
method, the distribution of commercial accumulations can be easily
created and monitored over a wide area. The locations, areas, and
time-series changes of commercial accumulations in the South Kanto
region can be monitored by integrating polygons of commercial
accumulations with the time-series data of digital yellow page data.
The circumstances of commercial accumulations were shown to vary
according to area, that is, highly urbanized regions such as the city
center of Tokyo and prefectural capitals, suburban areas near large
cities, and suburban and rural areas.