Abstract: This paper describes a novel approach for deriving modules from protein-protein interaction networks, which combines functional information with topological properties of the network. The approach is based on a weighted clustering coefficient that uses weights representing the functional similarities between the proteins. These weights are calculated from the semantic similarity between the proteins, which is in turn based on their Gene Ontology terms. We recently proposed an algorithm for the identification of functional modules, called SWEMODE (Semantic WEights for MODule Elucidation), that identifies dense sub-graphs containing functionally similar proteins. The rationale underlying this approach is that each module can be reduced to a set of triangles (protein triplets connected to each other). Here, we propose considering the semantic similarity weights of all triangle-forming edges between proteins. We also apply varying semantic similarity thresholds between neighbours of each node that are not neighbours of each other (and hence do not form a triangle), to derive new potential triangles to include in the module-defining procedure. The results show an improvement over the purely topological approach, in terms of the number of predicted modules that match known complexes.
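As an illustration of the kind of measure involved, the sketch below computes a weighted clustering coefficient over semantic-similarity edge weights. The function name and the particular weighted variant (an average-of-incident-weights form in the spirit of Barrat et al.) are illustrative assumptions, not the exact SWEMODE definition.

```python
from itertools import combinations

def weighted_clustering_coefficient(adj, weights, v):
    """Weighted clustering coefficient of node v.

    adj     : dict mapping node -> set of neighbour nodes
    weights : dict mapping frozenset({u, w}) -> semantic similarity in [0, 1]

    Illustrative variant: each neighbour pair contributes the average
    weight of its two edges incident to v; pairs that close a triangle
    count toward the numerator.
    """
    neighbours = adj[v]
    if len(neighbours) < 2:
        return 0.0
    triangle_weight = 0.0
    possible = 0.0
    for u, w in combinations(neighbours, 2):
        pair = (weights[frozenset((v, u))] + weights[frozenset((v, w))]) / 2
        possible += pair
        if w in adj[u]:          # u and w are connected: triangle v-u-w
            triangle_weight += pair
    return triangle_weight / possible if possible else 0.0
```

A fully connected triple with unit weights scores 1.0; a star whose leaves are unconnected scores 0.0.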
Abstract: Conceptualization strengthens intelligent systems in generalization skill, effective knowledge representation, real-time inference, and the handling of uncertain and indefinite situations, in addition to facilitating knowledge communication for learning agents situated in the real world. Concept learning introduces a form of abstraction by which the continuous state space is organized into entities called concepts, which are connected to the action space and thereby provide a compact representation of it. Among computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror-neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on a probabilistic framework is proposed. This approach exploits and extends the mirror neuron's role in conceptualization for a reinforcement learning agent in nondeterministic environments. In the proposed method, instead of building a huge numerical knowledge base, the concepts are learnt gradually from rewards through interaction with the environment. Moreover, the probabilistic formation of the concepts is employed to deal with the uncertain and dynamic nature of real problems, in addition to providing the ability to generalize. These characteristics as a whole distinguish the proposed learning algorithm from both a pure classification algorithm and typical reinforcement learning. Simulation results show the advantages of the proposed framework in terms of convergence speed as well as generalization and asymptotic behavior, owing to its use of both successful and failed attempts through the received rewards. Experimental results, in turn, show the applicability and effectiveness of the proposed method in continuous and noisy environments for a real robotic task such as maze navigation, as well as the benefits of implementing an incremental learning scenario in artificial agents.
Abstract: The Far From Most Strings Problem (FFMSP) is to obtain a string that is far from as many as possible of a given set of strings. All the input and output strings are of the same length, and two strings are said to be far if their Hamming distance is greater than or equal to a given positive integer. FFMSP belongs to the class of sequence consensus problems, which have applications in molecular biology. The problem is NP-hard; it does not admit a constant-ratio approximation either, unless P = NP. Therefore, in addition to exact and approximate algorithms, (meta)heuristic algorithms have been proposed for the problem in recent years. Meanwhile, hybrid algorithms have been proposed and successfully used for many hard problems in a variety of domains. In this paper, a new metaheuristic algorithm, called Constructive Beam and Local Search (CBLS), is investigated for the problem; it is a hybridization of constructive beam search and local search algorithms. More specifically, the proposed algorithm consists of two phases: the first obtains several candidate solutions via constructive beam search, and the second applies local search to the candidate solutions obtained by the first. The best solution found is returned as the final solution to the problem. The proposed algorithm is also similar to memetic algorithms in the sense that both use local search to further improve individual solutions. The CBLS algorithm is compared with the most recently published algorithm for the problem, GRASP, with significantly positive results; the improvement is by orders of magnitude in most cases.
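The FFMSP objective described above is simple to state in code. The sketch below (function names are illustrative) evaluates a candidate string against the input set, counting how many strings are "far" under the threshold t:

```python
def hamming(s1, s2):
    """Hamming distance between two equal-length strings."""
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

def ffmsp_objective(candidate, strings, t):
    """Number of input strings at Hamming distance >= t from the
    candidate.  FFMSP seeks a candidate maximising this count."""
    return sum(hamming(candidate, s) >= t for s in strings)
```

For example, with strings {"AAA", "AAT", "TTT"} and t = 2, the candidate "TTT" is far from the first two strings but not from itself, giving an objective value of 2.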
Abstract: The paper shows how the CASMAS modeling language, and its associated pervasive computing architecture, can be used to facilitate continuity of care by providing members of patient-centered communities of care with support for cooperation and knowledge sharing through the use of electronic documents and digital devices. We consider a scenario of clearly fragmented care to show how proper mechanisms can be defined to facilitate a better integration of practices and information across heterogeneous care networks. The scenario is articulated in terms of architectural components and cooperation-oriented mechanisms that make the support reactive to the evolution of the context in which these communities operate.
Abstract: Content-Based Image Retrieval has been a major area of research in recent years. Efficient image retrieval with high precision requires an approach that combines both the color and texture features of the image. In this paper we propose a method for enhancing the capabilities of texture-based feature extraction and further demonstrate the use of these enhanced texture features in Texture-Based Color Image Retrieval.
Abstract: This paper presents a simple and effective method for approximate indexing of instances for instance-based learning. The method uses an interval tree to determine a good starting search point for the nearest neighbor. The search stops when an early-stopping criterion is met. The method proved very effective, especially when only the first nearest neighbor is required.
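The abstract does not detail the index structure, so the sketch below substitutes a sorted array with binary search for the interval tree (an assumption) to illustrate the overall idea: locate a good starting point, expand outward, and stop searching a direction early once the gap along the sort key already exceeds the best distance found.

```python
import bisect
import math

def approx_nearest(instances, query):
    """1-NN sketch: instances sorted by their first attribute stand in
    for the paper's interval tree; the search expands outward from the
    start point and prunes a direction once the projected gap along
    the sort key reaches the best distance found so far."""
    pts = sorted(instances, key=lambda p: p[0])
    keys = [p[0] for p in pts]
    i = bisect.bisect_left(keys, query[0])          # good starting point
    best, best_d = None, math.inf
    lo, hi = i - 1, i
    while lo >= 0 or hi < len(pts):
        for j in (lo, hi):
            if 0 <= j < len(pts):
                if abs(keys[j] - query[0]) >= best_d:   # early stop
                    if j == lo:
                        lo = -1                     # exhaust left side
                    else:
                        hi = len(pts)               # exhaust right side
                    continue
                d = math.dist(pts[j], query)
                if d < best_d:
                    best, best_d = pts[j], d
        lo -= 1
        hi += 1
    return best
```

Since the projected gap lower-bounds the Euclidean distance, this particular stopping rule happens to be safe for the first nearest neighbor; looser criteria trade exactness for speed, which is the regime the paper targets.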
Abstract: In this paper, a new method is proposed to find the fuzzy optimal solution of fuzzy assignment problems by representing all the parameters as triangular fuzzy numbers. The advantages of the proposed method are also discussed. To illustrate the proposed method, a fuzzy assignment problem is solved by using it and the obtained results are discussed. The proposed method is easy to understand and to apply for finding the fuzzy optimal solution of fuzzy assignment problems occurring in real-life situations.
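For readers unfamiliar with the arithmetic involved, a minimal sketch of triangular fuzzy numbers follows. The ranking (defuzzification) function shown is one common choice from the literature, not necessarily the one used in the paper:

```python
def tfn_add(a, b):
    """Add two triangular fuzzy numbers (l, m, u) component-wise."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def tfn_rank(a):
    """Ranking score (l + 2m + u) / 4 -- a common defuzzification
    used to compare fuzzy costs when choosing an assignment."""
    l, m, u = a
    return (l + 2 * m + u) / 4
```

With such a ranking, a classical assignment method (e.g. the Hungarian algorithm) can be run on the defuzzified cost matrix while the fuzzy costs are carried along for the final answer.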
Abstract: Over recent years, the number of building integrated photovoltaic (BIPV) installations for home systems has been increasing in Malaysia. The paper concerns an analysis - as part of current Research and Development (R&D) efforts - to integrate photovoltaics as an architectural feature of a detached house in the new satellite township of Putrajaya, Malaysia. The analysis was undertaken using calculation and simulation tools to optimize the performance of a BIPV home system. In this study, a simulation analysis was undertaken for selected bungalow units based on long-term recorded weather data for the city of Kuala Lumpur. The simulation and calculation took into consideration the PV panels' tilt and direction, shading effects and economic considerations. A simulation of the performance of a grid-connected BIPV house in Kuala Lumpur was undertaken. This case study uses 60 PV modules with a total power output of 2.7 kW, giving an average PV electricity output of 255 kWh/month.
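A quick sanity check on the reported figures (assuming a 30-day month) can be worked out as follows:

```python
# Sanity check on the reported BIPV figures (30-day month assumed).
rated_kw = 2.7            # installed PV capacity, 60 modules
monthly_kwh = 255.0       # average monthly PV electricity output

# Equivalent peak-sun hours per day and overall capacity factor.
peak_sun_hours_per_day = monthly_kwh / rated_kw / 30
capacity_factor = monthly_kwh / (rated_kw * 24 * 30)
```

This works out to roughly 3.1 peak-sun hours per day and a capacity factor near 13%, which is plausible for a tropical climate like Kuala Lumpur's once system losses are included.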
Abstract: This paper describes a novel method for automatic estimation of the contours of weld defects in radiographic images. Generally, contour detection is the first operation applied in a visual recognition system. Our approach can be described as a region-based maximum likelihood formulation of parametric deformable contours. This formulation provides robustness against poor image quality, and allows simultaneous estimation of the contour parameters together with other parameters of the model. Implementation is performed by a deterministic iterative algorithm with minimal user intervention. Results testify to the very good performance of the approach, especially on synthetic weld defect images.
Abstract: One of the fastest-growing areas in the embedded community is multimedia devices. Multimedia devices incorporate a number of complicated functions in their operation, such as motion estimation. A multitude of different implementations have been proposed to reduce motion estimation complexity, such as spiral search. We have studied implementations of spiral search and identified areas of improvement. We propose a modified spiral search algorithm with lower computational complexity than the original spiral search. We have implemented our algorithm on an embedded ARM-based architecture with a custom memory hierarchy. The resulting system yields an energy consumption reduction of up to 64% and a performance increase of up to 77%, with a small average penalty of 2.3 dB in video quality compared with the original spiral search algorithm.
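The baseline algorithm the paper modifies can be sketched as follows. Block size, search radius and the SAD cost are the standard ingredients of spiral-search block matching; the paper's specific pruning improvements are not reproduced here.

```python
def sad(cur, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between the n x n block of `cur`
    at (bx, by) and the block of `ref` displaced by (dx, dy)."""
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(n) for x in range(n))

def spiral_search(cur, ref, bx, by, n, radius):
    """Classic spiral search for motion estimation: candidate
    displacements are visited in order of increasing distance from
    (0, 0), so the zero-motion vector is checked first.  The caller
    must keep displaced blocks within the reference frame."""
    candidates = sorted(((dx, dy)
                         for dx in range(-radius, radius + 1)
                         for dy in range(-radius, radius + 1)),
                        key=lambda v: v[0] ** 2 + v[1] ** 2)
    best, best_cost = (0, 0), float("inf")
    for dx, dy in candidates:
        cost = sad(cur, ref, bx, by, dx, dy, n)
        if cost < best_cost:
            best, best_cost = (dx, dy), cost
    return best
```

Visiting candidates in spiral order pays off when combined with a cost threshold for early termination, since the true motion vector is usually close to zero displacement.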
Abstract: Image watermarking has become an important tool for intellectual property protection and authentication. In this paper a watermarking technique is suggested that incorporates two watermarks in a host image for improved protection and robustness. A watermark in the form of a PN sequence (called the secondary watermark) is embedded in the wavelet domain of a primary watermark before the latter is embedded in the host image. The technique has been tested using the Lena image as host and the Cameraman image as the primary watermark. The embedded PN sequence was detectable through correlation among five other sequences, and a PSNR of 44.1065 dB was measured. Furthermore, to test the robustness of the technique, the watermarked image was exposed to four types of attacks, namely compression, low-pass filtering, salt-and-pepper noise and luminance change. In all cases the secondary watermark was easy to detect even when the primary one was severely distorted.
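The detection step described above is essentially spread-spectrum correlation detection. The sketch below abstracts away the wavelet transform (an assumption) and shows correlation-based selection among candidate PN sequences, with additive embedding assumed:

```python
def correlation(a, b):
    """Normalised (Pearson) correlation between two equal-length
    numeric sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def detect_pn(marked, candidates):
    """Return the index of the candidate PN sequence most correlated
    with the (watermarked) coefficients.  Embedding is assumed to be
    additive spread spectrum: marked = coeffs + alpha * pn."""
    return max(range(len(candidates)),
               key=lambda i: correlation(marked, candidates[i]))
```

With independent +/-1 PN sequences, the embedded sequence stands out sharply in correlation while the others stay near zero, which is why the secondary watermark survives attacks that distort the primary one.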
Abstract: This paper explores a new method to improve the teaching of algorithmics to beginners. It is well known that algorithmics is a difficult field for teachers to teach and a complex one for learners to assimilate. These difficulties are due to intrinsic characteristics of the field and to the way most teachers apprehend its foundations. Moreover, in a Technology Enhanced Learning (TEL) environment, assessment, which is important and indispensable, is the most delicate phase to implement, because of all the problems it generates (noise...). Our objective lies at the confluence of these two axes. For this purpose, EASEL focuses essentially on elaborating an approach for assessing algorithmic competences in a TEL environment. This approach consists in modeling an algorithmic solution in terms of basic, elementary operations that let learners construct their own solution steps with full autonomy and independently of any programming language. The approach supports a trilateral assessment: summative, formative and diagnostic.
Abstract: Regression testing is a maintenance activity applied to modified software to provide confidence that the changed parts are correct and that the unchanged parts have not been adversely affected by the modifications. Regression test selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting modified programs. This paper presents the first general regression-test-selection technique that is based on code and allows selecting test cases for programs written in any programming language; it also handles incomplete programs. We further describe RTSDiff, a regression-test-selection system that implements the proposed technique. The results of empirical studies performed in four programming languages (Java, C#, Cµ and Visual Basic) show the technique's efficiency and effectiveness in reducing the size of the test suite.
Abstract: MANEMO is the integration of Network Mobility (NEMO) and Mobile Ad Hoc Networks (MANETs). A MANEMO node has an interface to both a MANET and a NEMO network, and therefore should choose the optimal interface for packet delivery; however, such a handover between interfaces will introduce packet loss. We define the steps necessary for a MANEMO handover, using Mobile IP and NEMO to signal the new binding to the relevant Home Agent(s). The handover steps aim to minimize packet loss by avoiding waiting for Duplicate Address Detection and Neighbour Unreachability Detection. We present expressions for handover delay and packet loss, and then use numerical examples to evaluate a MANEMO handover. The analysis shows how the packet loss depends on the level of nesting within NEMO, the delay between Home Agents and the load on the MANET, and hence can be used to develop optimal MANEMO handover algorithms.
Abstract: Dynamic location referencing is an important technology for shielding map differences. The method references objects of the road network by utilizing a condensed selection of their real-world geographic properties stored in a digital map database, which overcomes the deficiencies of pre-coded location referencing methods. High attribute-completeness requirements and complicated reference-point selection algorithms are the main problems in recent research. Therefore, a dynamic location referencing algorithm is proposed that combines intersection points compulsorily selected at the extremities with road link points selected according to a link partition principle. An experimental system based on this approach was implemented. Tests using the Beijing digital map database showed satisfactory results and thus verified the feasibility and practicability of the method.
Abstract: Data warehousing success rates are not high enough. User dissatisfaction and failure to adhere to time frames and budgets are too common. Most traditional information systems practices are rooted in hard systems thinking. Today, the great systems thinkers are forgotten by information systems developers. A data warehouse is still a system, and it is worth investigating whether systems thinkers such as Churchman can enhance our practices today. This paper investigates data warehouse development practices from a systems thinking perspective. An empirical investigation is conducted in order to understand the everyday practices of data warehousing professionals from a systems perspective. The paper presents a model for the application of Churchman's systems approach in data warehouse development.
Abstract: The shortest path routing problem is a multiobjective nonlinear optimization problem with constraints. The problem has been addressed by considering Quality of Service parameters, with the delay and cost objectives treated separately or as a weighted sum. Multiobjective evolutionary algorithms can find multiple Pareto-optimal solutions in a single run, and this ability makes them attractive for solving problems with multiple, conflicting objectives. This paper uses an elitist multiobjective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm (NSGA) for solving the dynamic shortest path routing problem in computer networks. A priority-based encoding scheme is proposed for population initialization. Elitism ensures that the best solution does not deteriorate in subsequent generations. Results for a sample test network demonstrate the ability of the proposed approach to generate well-distributed Pareto-optimal solutions of the dynamic routing problem in a single run. The results obtained by NSGA are compared with a single-objective weighting-factor method for which a Genetic Algorithm (GA) was applied.
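Priority-based encoding can be illustrated with a small decoder. The exact decoding rule used in the paper may differ, but the following is the standard form for path-encoding genetic algorithms: each node carries a priority gene, and a chromosome is decoded by greedily following the highest-priority unvisited neighbour.

```python
def decode_path(priorities, adj, src, dst):
    """Decode a priority vector into a path: starting from `src`,
    repeatedly move to the unvisited neighbour with the highest
    priority until `dst` is reached, or no move is possible
    (an infeasible chromosome, returned as None)."""
    path, node, visited = [src], src, {src}
    while node != dst:
        choices = [n for n in adj[node] if n not in visited]
        if not choices:
            return None
        node = max(choices, key=lambda n: priorities[n])
        visited.add(node)
        path.append(node)
    return path
```

Because any priority vector decodes to some walk through the network, crossover and mutation on priorities always yield decodable offspring, which is the main appeal of this encoding for routing problems.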
Abstract: This paper explores the scalability issues associated with solving the Named Entity Recognition (NER) problem using Support Vector Machines (SVM) and high-dimensional features. The performance results of a set of experiments conducted using binary and multi-class SVM with increasing training data sizes are examined. The NER domain chosen for these experiments is that of biomedical publications, selected for its importance and inherent challenges. A simple machine learning approach is used that eliminates prior language knowledge such as part-of-speech or noun-phrase tagging, thereby allowing for applicability across languages. No domain-specific knowledge is included. The accuracy measures achieved are comparable to those obtained using more complex approaches, which motivates investigating ways to improve the scalability of multi-class SVM in order to make the solution more practical and usable. Improving the training time of multi-class SVM would make support vector machines a more viable and practical machine learning solution for real-world problems with large datasets. An initial prototype achieves a great improvement in training time at the expense of increased memory requirements.
Abstract: The neuro-fuzzy hybridization scheme has attracted research interest in pattern classification over the past decade. The present paper proposes a novel Modified Adaptive Fuzzy Inference Engine (MAFIE) for pattern classification. A modified Apriori algorithm is utilized to derive a minimal set of decision rules from input-output data sets. A TSK-type fuzzy inference system is constructed through the automatic generation of membership functions and rules by the fuzzy c-means clustering and Apriori algorithm techniques, respectively. The generated adaptive fuzzy inference engine is adjusted by least-squares fitting and a conjugate gradient descent algorithm towards better performance with a minimal set of rules. The proposed MAFIE is able to reduce the number of rules, which otherwise grows exponentially as more input variables are involved. The performance of the proposed MAFIE is compared with other existing pattern classification schemes using Fisher's Iris and the Wisconsin breast cancer data sets, and is shown to be very competitive.
Abstract: Domain-specific languages describe specific solutions to problems in the application domain. Traditionally, they form a solution by composing black-box abstractions, which usually involves non-deep transformations over the target model. In this paper we argue that it is potentially powerful to operate with grey-box abstractions to build a domain-specific software system. We present parametric code templates as grey-box abstractions, together with conceptual tools to encapsulate and manipulate these templates. Manipulations introduce template merging routines and can be defined in a generic way, which involves reasoning mechanisms at the level of code templates. We introduce the Neurath Modelling Language (NML), which operates with parametric code templates and specifies a visualisation mapping mechanism for target models. Finally, we provide an example of constructing a domain-specific software system with predefined NML elements.