Abstract: In this paper, we propose a supervised method for
color image classification based on a multilevel sigmoidal neural
network (MSNN) model. In this method, images are classified into
five categories, i.e., “Car”, “Building”, “Mountain”, “Farm” and
“Coast”. This classification is performed without any segmentation
processes. To verify the learning capabilities of the proposed method,
we compare our MSNN model with the traditional Sigmoidal Neural
Network (SNN) model. The comparison shows that the MSNN model
outperforms the traditional SNN model in both training run time and
classification rate. Both color moments and a multilevel wavelet
decomposition technique are used to extract features from the images.
The proposed method has been tested on a variety of real and
synthetic images.
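Color moments are commonly taken as the mean, standard deviation, and skewness of each color channel; the paper's exact formulation is not reproduced here. A minimal sketch under that assumption (function and data names are illustrative, not from the paper):

```python
import math

def color_moments(pixels):
    """Compute mean, standard deviation, and skewness per RGB channel.

    pixels: flat list of (r, g, b) tuples with values in 0..255.
    Returns a 9-element feature vector (3 moments x 3 channels).
    """
    features = []
    n = len(pixels)
    for ch in range(3):
        values = [p[ch] for p in pixels]
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / n
        std = math.sqrt(var)
        # Signed cube root of the third central moment (skewness measure).
        third = sum((v - mean) ** 3 for v in values) / n
        skew = math.copysign(abs(third) ** (1 / 3), third)
        features.extend([mean, std, skew])
    return features

# Example: a tiny 2x2 "image" as a flat pixel list.
fv = color_moments([(10, 20, 30), (10, 20, 30), (50, 60, 70), (50, 60, 70)])
```

Such a 9-element vector would typically be concatenated with the wavelet-decomposition features before classification.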
Abstract: This paper introduces a novel approach to estimate the
clique potentials of Gibbs Markov random field (GMRF) models
using the Support Vector Machines (SVM) algorithm and the Mean
Field (MF) theory. The proposed approach is based on modeling the
potential function associated with each clique shape of the GMRF
model as a Gaussian-shaped kernel. In turn, the energy function of
the GMRF will be in the form of a weighted sum of Gaussian
kernels. This formulation of the GMRF model motivates the use of the
SVM, with Mean Field theory applied for its learning, to estimate
the energy function. The approach has been tested on
synthetic texture images and is shown to provide satisfactory results
in retrieving the synthesizing parameters.
Abstract: Throughout this paper, a relatively new technique, the Tabu search variable selection model, is elaborated, showing how it can be efficiently applied within the financial world whenever researchers come across the selection of a subset of variables from a whole set of descriptive variables under analysis. In the field of financial prediction, researchers often have to select a subset of variables from a larger set to solve different types of problems, such as corporate bankruptcy prediction, personal bankruptcy prediction, mortgage and credit scoring, and the Arbitrage Pricing Model (APM). Consequently, to demonstrate how the method operates and to illustrate its usefulness as well as its superiority over other commonly used methods, the Tabu search algorithm for variable selection is compared to two main alternative search procedures, namely stepwise regression and the maximum R² improvement method. The Tabu search is then implemented in finance, where it attempts to predict corporate bankruptcy by selecting the most appropriate financial ratios and thus creating its own prediction score equation. In comparison to other methods, notably the Altman Z-Score model, the Tabu search model produces a higher success rate in correctly predicting the failure of firms or the continued operation of existing entities.
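The paper's exact scoring function and tabu rules are not given here; a minimal sketch of tabu search over variable subsets, assuming a caller-supplied score function to maximize (all names and the toy score are illustrative):

```python
def tabu_search(n_vars, score, iters=50, tenure=5):
    """Tabu search over subsets of n_vars variables.

    score: maps a frozenset of variable indices to a number to
           maximize (e.g. a model fit criterion).
    Moves toggle one variable in or out; a recently toggled variable
    is tabu for `tenure` iterations unless the move improves on the
    best score so far (the aspiration criterion).
    """
    current = frozenset()
    best, best_score = current, score(current)
    tabu = {}  # variable index -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for v in range(n_vars):
            neighbor = current ^ {v}  # toggle variable v
            s = score(neighbor)
            if tabu.get(v, -1) < it or s > best_score:  # aspiration
                candidates.append((s, v, neighbor))
        if not candidates:
            break
        s, v, neighbor = max(candidates)  # best admissible move
        current = neighbor
        tabu[v] = it + tenure
        if s > best_score:
            best, best_score = neighbor, s
    return best, best_score

# Toy score: the subset {0, 2} scores highest (zero symmetric difference).
target = frozenset({0, 2})
subset, val = tabu_search(4, lambda s: -len(s ^ target))
```

Unlike stepwise regression, the tabu list lets the search accept temporarily worsening moves and escape local optima in the subset space.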
Abstract: Abu Dhabi is one of the fastest-developing cities in the region. On top of all the current and future environmental challenges, Abu Dhabi aims to be among the top governments in the world in sustainable development. It plans to create an attractive, livable and sustainably managed urban environment in which all necessary services and infrastructure are provided in a sustainable and timely manner. Abu Dhabi is engaged in a difficult challenge to develop credible environmental indicators that would assess its ambitious environmental targets. The aim of those indicators is to provide reliable guidance to decision makers and the public concerning key factors that determine the state of the urban environment, and to identify major areas for policy intervention. In order to ensure sustainable development in the UAE in general, and in Abu Dhabi City in particular, relevant and contextual environmental indicators need to be carefully considered. These indicators provide a gauge, at a national government scale, of how close countries are to their established environmental policy goals. Environmental indicators assist city decision-making in such areas as the identification of significant environmental aspects and the observation of environmental performance trends. They can help to find ways of reducing environmental pollution and improving eco-efficiency. This paper outlines recent strategies implemented in Abu Dhabi that aim to improve the sustainable performance of the city's built environment. The paper explores the variety of current and possible indicators at different levels and their roles in the development of the city.
Abstract: This paper focuses on analyzing medical diagnostic data using classification rules from data mining and context reduction from Formal Concept Analysis. It helps in finding redundancies among the various medical examination tests used in the diagnosis of a disease. Classification rules have been derived from positive and negative association rules using the concept lattice structure of Formal Concept Analysis. The context reduction technique of Formal Concept Analysis, along with the classification rules, has been used to find redundancies among the various medical examination tests. It also determines whether expensive medical tests can be replaced by cheaper ones.
Abstract: A great deal of research work in the field of information
systems security has been based on a positivist paradigm. Applying
the reductionism concept of the positivist paradigm to information
security means missing the bigger picture; this lack of holism could
be one of the reasons why security is still overlooked, comes as an
afterthought, or is perceived from a purely technical dimension. We
need to reshape our thinking and attitudes towards security,
especially in a complex and dynamic environment such as e-Business,
to develop a holistic understanding of e-Business security in
relation to its context as well as considering all the stakeholders
in the problem area. In this paper we argue for the suitability of,
and need for, a more inductive, interpretive approach and qualitative
research methods to investigate e-Business security. Our discussion
is based on a holistic framework of enquiry, the nature of the
research problem, the underlying theoretical lens and the complexity
of the e-Business environment. Finally, we present a research
strategy for developing a holistic framework for understanding
e-Business security problems in the context of developing countries,
based on an interdisciplinary inquiry which considers their needs and
requirements.
Abstract: In the context of channel coding, Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the transmitted bits sent through a noisy channel. To ensure a reliable transmission, we apply a mapping to the bits, called a code. This code induces artificial correlations between the bits to send, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called the Tanner graph, is used by most decoding algorithms, such as Belief Propagation or Gallager-B. The GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of the GBP over the other algorithms is the freedom in the construction of this graph. In this article, we explain a particular construction for specific graph topologies that yields good GBP performance. Moreover, we investigate the behavior of the GBP, considered as a dynamical system, in order to understand how it evolves as a function of time and of the noise power of the channel. To this end we make use of classical measures and we introduce a new measure, called the hyperspheres method, that enables measuring the size of the attractors.
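The GBP and its region-graph construction are beyond a short sketch, but the way a code's parity checks couple bits (the edges of the Tanner graph) can be illustrated with a minimal hard-decision syndrome decoder for the Hamming(7,4) code. This is plain syndrome decoding, not the GBP; the codeword below is an illustrative choice:

```python
# Parity-check matrix H of the Hamming(7,4) code: each row is one
# parity check (a check node of the Tanner graph); columns are bits.
# Column i of H is the binary representation of i, read bottom-up.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """Evaluate each parity check over GF(2)."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def decode(word):
    """Correct a single bit error: the syndrome, read as a binary
    number, is the 1-based position of the flipped bit."""
    s = syndrome(word)
    pos = s[0] + 2 * s[1] + 4 * s[2]
    out = list(word)
    if pos:
        out[pos - 1] ^= 1
    return out

codeword = [1, 0, 1, 1, 0, 1, 0]   # a valid codeword: all checks are 0
received = list(codeword)
received[4] ^= 1                    # the channel flips bit 5
decoded = decode(received)
```

Belief Propagation replaces these hard parity checks with probabilistic messages exchanged along the same Tanner-graph edges, and the GBP generalizes the messages to regions of that graph.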
Abstract: Wavelets have provided researchers with significant
positive results in the texture defect detection domain. The weak
point of wavelets is that they are one-dimensional by nature, so
they are not efficient enough to describe and analyze
two-dimensional functions. In this paper we present a new method to
detect defects in texture images using the curvelet transform.
Simulation results of the proposed method on a set of standard
texture images confirm its correctness. Comparing the obtained
results indicates the superior ability of the curvelet transform,
relative to the wavelet transform, in describing discontinuities in
two-dimensional functions.
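The one-dimensional nature of wavelets can be seen in a single-level Haar decomposition, which splits a signal into averages and differences along one axis only; this is a generic illustration, not the paper's detection method:

```python
import math

def haar_step(signal):
    """One level of the 1D Haar wavelet transform.

    Returns (approximation, detail): scaled pairwise sums and
    differences. Applying this to a 2D image requires running it
    separably over rows and then columns, which represents curved
    edges poorly compared to curvelets.
    """
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

# Flat regions give zero detail; an oscillating region gives
# large detail coefficients, which is what defect detectors use.
approx, detail = haar_step([4, 4, 2, 2, 6, 0, 6, 0])
```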
Abstract: Much has been written about the difficulties students
have with producing traditional dissertations. This includes both
native English speakers (L1) and students with English as a second
language (L2). The main emphasis of these papers has been on the
structure of the dissertation, but in all cases, even when electronic
versions are discussed, the dissertation is still in what most would
regard as a traditional written form.
Master of Science Degrees in computing disciplines require
students to gain technical proficiency and apply their knowledge to a
range of scenarios. The premise of this paper is this: if a
dissertation is a means of showing that such a student has met the
criteria for a pass, which should be based on the learning outcomes
of the dissertation module, does meeting those outcomes require a
student to demonstrate their skills in a solely text-based form,
particularly in a highly technical research project? Could a student
instead produce a series of related artifacts which form a cohesive
package that meets the learning outcomes of the dissertation?
Abstract: Transliteration is frequently used, especially in writing geographic denominations, personal names (onyms), etc. Proper names (onyms) of all languages must sound similar in translated works as well as in scientific projects and works written in the mother tongue, because we become acquainted with a nation, its history, culture, traditions and other spiritual values through the onyms of that nation. Therefore it is necessary to systematize the different transliterations of onyms of foreign languages. This paper is dedicated to the problem of designing a scheme for transliterating Kazakh onyms into Arabic. In order to achieve this goal we use the scientific, or practical, type of transliteration. Because this type of transliteration provides easy reading and writing of the source language's texts in the target language without any diacritical symbols, it is limited to the target language's alphabetic system.
Abstract: LabVIEW and SIMULINK are two of the most widely used
graphical programming environments for designing digital signal
processing and control systems. Unlike conventional text-based
programming languages such as C, Cµ and MATLAB, graphical
programming involves block-based code development, allowing a
more efficient mechanism to build and analyze control systems. In
this paper a LabVIEW environment has been employed as a
graphical user interface for monitoring the operation of a controlled
distillation column, by visualizing both the closed loop performance
and the user-selected control conditions, while the column dynamics
have been modeled under the SIMULINK environment. This tool has
been applied to the PID based decoupled control of a binary
distillation column. By means of such integrated environments, the
control designer is able to monitor and control the plant behavior
and optimize the response when both the quality improvement of the
distillation products and operational efficiency are considered.
Abstract: Geographical Information Systems are an integral part
of planning in modern technical systems. Nowadays they are referred
to as Spatial Decision Support Systems, as they allow the synergy of
database management systems and models within a single user
interface, and they are important tools in spatial design for
evaluating policies and programs at all levels of administration.
This work refers to the creation of a Geographical Information
System in the context of broader research on the area of influence
of a station of the new metro under construction in the Greek city
of Thessaloniki, which included statistical and multivariate data
analysis, diagrammatic representation, mapping and interpretation of
the results.
Abstract: This paper presents work characterizing finite element
performance boundaries within which live, interactive finite element
modeling is feasible on current and emerging systems. These results
are based on wide-ranging tests performed using a prototype finite
element program implemented specifically for this study, thereby enabling
the unified investigation of numerous direct and iterative solver
strategies and implementations in a variety of modeling contexts.
The results are intended to be useful for researchers interested in
interactive analysis by providing baseline performance estimates, to
give guidance in matching solution strategies to problem domains,
and to spur further work addressing the challenge of extending the
present boundaries.
Abstract: This research uses computational linguistics, an area of study that employs a computer to process natural language, and aims at discerning the patterns that exist in the declarative sentences used in technical texts. The approach is mathematical, and the focus is on instructional texts found on web pages. The technique developed by the author, named the MAYA Semantic Technique, is used here and organized into four stages. In the first stage, the parts of speech in each sentence are identified. In the second stage, the subject of the sentence is determined. In the third stage, MAYA performs a frequency analysis on the remaining words to determine the verb and its object. In the fourth stage, MAYA does a statistical analysis to determine the content of the web page. The advantage of the MAYA Semantic Technique lies in its use of mathematical principles to represent grammatical operations, which assists both processing and accuracy when performed on unambiguous text. The MAYA Semantic Technique is part of a proposed architecture for an entire web-based intelligent tutoring system. On a sample set of sentences, partial semantics derived using the MAYA Semantic Technique were approximately 80% accurate. The system currently processes technical text in one domain, namely Cµ programming. In this domain all the keywords and programming concepts are known and understood.
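The frequency analysis of stage three is not specified in detail above; as a toy illustration of such a step, one can rank the non-subject words of a sentence by their frequency in a domain corpus (the corpus and all names below are invented for illustration, not taken from MAYA):

```python
from collections import Counter

# Hypothetical word frequencies drawn from a tiny technical corpus.
corpus = ("declare the variable assign the value declare the "
          "function call the function").split()
freq = Counter(corpus)

def rank_by_frequency(words):
    """Sort candidate words by descending corpus frequency.

    In a stage-three-style analysis, the top-ranked candidates
    would be considered for the verb and its object.
    """
    return sorted(words, key=lambda w: freq[w], reverse=True)

# Remaining words of a sentence after the subject has been removed.
ranked = rank_by_frequency(["declare", "value", "function"])
```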
Abstract: Text categorization - the assignment of natural language documents to one or more predefined categories based on their semantic content - is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses the use of decision tree classifiers to initialize multilayer neural networks for improving text categorization accuracy. An adaptation of the algorithm is proposed in which a decision tree, from the root node to a final leaf, is used for the initialization of the multilayer neural network. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with that of a multilayer neural network initialized by the traditional random method, and with decision tree classifiers.
Abstract: Gluconic acid is a chemical product of interest in
industries such as the detergent, leather, photographic and textile
industries, and especially in the food and pharmaceutical
industries. Fermentation is an
advantageous process to produce gluconic acid. Mathematical
modeling is important in the design and operation of fermentation
process. In fact, kinetic data must be available for modeling. The
kinetic parameters of gluconic acid production by Aspergillus niger
in batch culture were studied in this research at initial substrate
concentrations of 150, 200 and 250 g/l. The kinetic models used were
the logistic equation for growth, the Luedeking-Piret equation for
gluconic acid formation, and a Luedeking-Piret-like equation for
glucose consumption. The kinetic parameters in the models were
obtained by nonlinear least squares curve fitting.
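In their standard textbook forms, with the conventional symbols ($X$ biomass, $P$ gluconic acid, $S$ glucose; the paper's fitted parameter values are not reproduced here), these three models read:

```latex
\begin{align}
  \frac{dX}{dt} &= \mu_{m} X \left(1 - \frac{X}{X_{m}}\right)
    && \text{(logistic growth)} \\
  \frac{dP}{dt} &= \alpha \frac{dX}{dt} + \beta X
    && \text{(Luedeking--Piret)} \\
  -\frac{dS}{dt} &= \gamma \frac{dX}{dt} + \delta X
    && \text{(Luedeking--Piret-like)}
\end{align}
```

Here $\mu_{m}$ is the maximum specific growth rate, $X_{m}$ the maximum biomass concentration, and $\alpha, \beta, \gamma, \delta$ the growth-associated and non-growth-associated coefficients estimated by the curve fitting.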
Abstract: Processing data by computers and performing reasoning
tasks is an important aim in Computer Science. The Semantic Web is
one step towards it. Using ontologies to enrich information
semantically is the current trend. Huge amounts of domain-specific,
unstructured online data need to be expressed in a
machine-understandable and semantically searchable format.
Currently users are often forced to search manually in the results
returned by keyword-based search services. They also want to use
their native languages to express what they are searching for. In
this paper, an
ontology-based automated question answering system on software
test documents domain is presented. The system allows users to enter
a question about the domain by means of natural language and
returns the exact answer to the question. Conversion of the natural
language question into an ontology-based query is the challenging
part of the system. To achieve this, a new algorithm for converting
free-text questions into ontology-based search engine queries is
proposed. The algorithm is based on identifying the suitable
question type and parsing the words of the question sentence.
Abstract: One of the common problems encountered in software
engineering is addressing and responding to the changing nature of
requirements. While several approaches have been devised to address
this issue, ranging from instilling resistance to changing requirements
in order to mitigate impact to project schedules, to developing an
agile mindset towards requirements, the approach discussed in this
paper is one of conceptualizing the delta in requirements and
modeling it, in order to plan a response to it. To provide some
context here, change is first formally identified and categorized as
either formal change or informal change. While agile methodology
facilitates informal change, the approach discussed in this paper
seeks to develop the idea of facilitating formal change. Collecting
and documenting meta-requirements that represent the phenomenon of
change would be a proactive measure towards building a realistic
understanding of the requirements entity, which can further be
harnessed in the software engineering process.
Abstract: Location-aware computing is a type of pervasive computing
that utilizes the user's location as a dominant factor for providing
urban services and application-related usages. One of the important
urban services is navigation instruction for wayfinders in a city,
especially when the user is a tourist. The services presented to
tourists should provide adapted, location-aware instructions. In
order to achieve this goal, the main challenge is to
find spatial relevant objects and location-dependent information. The
aim of this paper is the development of a reusable location-aware
model to handle spatial relevancy parameters in urban location-aware
systems. In this way we utilized ontology as an approach which could
manage spatial relevancy by defining a generic model. Our
contribution is the introduction of an ontological model based on the
directed interval algebra principles. Indeed, it is assumed that the
basic elements of our ontology are the spatial intervals for the user
and his/her related contexts. The relationships between them would
model the spatial relevancy parameters. The implementation language
for the model is OWL, a web ontology language. The achieved
results show that our proposed location-aware model and the
application adaptation strategies provide appropriate services for the
user.
Abstract: The purpose of this research is to study the concepts
of multiple Cartesian product, variety of multiple algebras and to
present some examples. In the theory of multiple algebras, like other
theories, deriving new things and concepts from the things and
concepts available in the context is important. For example, the
first multiple algebras were obtained from the quotient of a group
modulo the equivalence relation defined by one of its subgroups.
Grätzer showed that every multiple algebra can be obtained as the
quotient of a universal algebra modulo a given equivalence relation.
The purpose of this study is the examination of multiple algebras
and the basic relations defined on them, as well as an introduction
to some algebraic structures derived from multiple algebras. Among
the structures obtained from multiple algebras, this article studies
submultiple algebras, quotients of multiple algebras and the
Cartesian product of multiple algebras.