Abstract: This paper presents a new methodology for studying power and energy consumption in mechatronic systems early in the development process. The approach uses two modeling languages to represent and simulate embedded control software and electromechanical subsystems in the discrete-event and continuous-time domains, respectively, within a single co-model. This co-model enables an accurate representation of power and energy consumption and facilitates the analysis and development of both the software and electromechanical subsystems in parallel. It makes engineers aware of the energy implications of different design alternatives and enables trade-off analysis from the beginning of the analysis and design activities.
Abstract: Knowledge is a key asset for any organisation seeking to sustain competitive advantage, but it is difficult to identify and represent the knowledge needed to perform activities in business processes. Effective knowledge management and support for relevant business activities have a major impact on the performance of the organisation as a whole, because knowledge serves to direct, coordinate and control actions within business processes. This study introduces organisational morphology, a norm-based approach applying semiotic theories, which emphasises the representation of knowledge in norms. The approach classifies activities into three categories: substantive, communication and control activities. All activities are directed by norms; hence three types of norms exist, each associated with one category of activities. The paper describes the approach briefly and illustrates its application through a case study of academic activities in higher education institutions. The results show that the approach provides an effective way to profile business knowledge, and that the profile enables the understanding and specification of the business requirements of an organisation.
Abstract: In order to answer the general question "What does a simple agent with a limited lifetime require for constructing a useful representation of the environment?", we propose a robot platform including the simplest probabilistic sensory and motor layers. We then use the platform as a test-bed for evaluating the navigational capabilities of the robot with different "brains". We claim that protocognitive behavior is not a consequence of highly sophisticated sensory-motor organs but instead emerges through an increase of internal complexity and reutilization of minimal sensory information. We show that the most fundamental robot element, short-term memory, is essential in obstacle avoidance. However, in the simplest conditions with no obstacles, the straightforward memoryless robot is usually superior. We also demonstrate how low-level action planning, involving essentially nonlinear dynamics, provides a considerable gain in robot performance by dynamically changing the robot's strategy. Still, for a very short lifetime the brainless robot is superior. Accordingly, we suggest that small organisms (or agents) with short lifetimes do not require complex brains and can even benefit from simple brain-like (reflex) structures. To some extent this may mean that the controlling blocks of modern robots are too complicated relative to their lifetimes and mechanical abilities.
Abstract: Process planning and production scheduling play important roles in manufacturing systems. In this paper, a multi-objective mixed integer linear programming model is presented for the integrated process planning and scheduling of multiple products. The aim is to find a set of high-quality trade-off solutions. This is a combinatorial optimization problem with a substantially large solution space, so it is highly difficult to find the best solutions with exact search methods. To address this, a PSO-based algorithm is proposed that fully utilizes PSO's capability for exploratory search and fast convergence. To fit the continuous PSO to the discretely modeled problem, a dedicated solution representation is used in the algorithm. Numerical experiments have been performed to demonstrate the effectiveness of the proposed algorithm.
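The abstract does not detail the solution representation; a common way to fit a continuous PSO to a discrete scheduling problem is the random-key encoding, sketched below on a toy single-machine instance (the instance, objective, and all parameter values are illustrative, not the authors' model):

```python
import random

# Toy single-machine instance (illustrative, not the paper's model):
# minimize total weighted completion time. p = processing times, w = weights.
p = [4, 2, 5, 3]
w = [1, 3, 2, 4]

def decode(position):
    """Random-key decoding: rank the continuous keys to get a job permutation."""
    return sorted(range(len(position)), key=lambda j: position[j])

def cost(perm):
    t = total = 0
    for j in perm:
        t += p[j]
        total += w[j] * t
    return total

def pso(n_particles=20, iters=60, seed=1):
    rnd = random.Random(seed)
    n = len(p)
    X = [[rnd.random() for _ in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    P = [x[:] for x in X]                                  # personal bests
    pbest = [cost(decode(x)) for x in X]
    gi = min(range(n_particles), key=lambda i: pbest[i])
    g, gbest = P[gi][:], pbest[gi]                         # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                V[i][d] = (0.7 * V[i][d]
                           + 1.4 * rnd.random() * (P[i][d] - X[i][d])
                           + 1.4 * rnd.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            c = cost(decode(X[i]))
            if c < pbest[i]:
                pbest[i], P[i] = c, X[i][:]
                if c < gbest:
                    gbest, g = c, X[i][:]
    return decode(g), gbest

best_perm, best_cost = pso()
```

The particles move in a continuous key space, and only the decoding step produces a discrete permutation, so the standard velocity/position updates apply unchanged.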
Abstract: As a result of the daily workflow in the design development departments of companies, databases containing huge numbers of 3D geometric models are generated. For a given problem, engineers create CAD drawings based on their design ideas and evaluate the performance of the resulting design, e.g. by computational simulations. Usually, new geometries are built either by utilizing and modifying sets of existing components or by adding single newly designed parts to a more complex design.
The present paper addresses the two facets of automatically acquiring components from large design databases and providing the engineer with a reasonable overview of the parts. A unified framework based on topographic non-negative matrix factorization (TNMF) is proposed which solves both aspects simultaneously. First, meaningful components are extracted from a given database into a parts-based representation in an unsupervised manner. Second, the extracted components are organized and visualized on square-lattice 2D maps. It is shown on the example of turbine-like geometries that these maps efficiently provide a well-structured overview of the database content and, at the same time, define a measure of spatial similarity allowing easy access to and reuse of components in the process of design development.
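The topographic variant is not specified in the abstract; the core of any such parts-based factorization is plain NMF, for which the Lee-Seung multiplicative updates can be sketched in a few lines (matrix sizes and data are illustrative):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, rank, iters=200, seed=0, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates: V ~ W H, all entries >= 0."""
    rnd = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rnd.random() for _ in range(rank)] for _ in range(n)]
    H = [[rnd.random() for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H

def error(V, W, H):
    """Squared Frobenius reconstruction error of the factorization."""
    R = matmul(W, H)
    return sum((V[i][j] - R[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))
```

Because the updates are multiplicative, non-negativity of W and H is preserved automatically, which is what yields the additive, parts-based representation.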
Abstract: XML is an important standard for data exchange and representation. Since relational databases are mature systems, using them to support XML data can bring advantages. However, storing XML in a relational database introduces obvious redundancy that wastes disk space, bandwidth, and disk I/O when querying XML data. For efficient storage and querying of XML, it is necessary to use compressed XML data in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is well suited to XPath query processing, and the compression method preserves this feature. Besides traditional relational database techniques, additional query processing technologies on compressed relations and for the special structure of XML are presented. Technologies for XQuery processing in the compressed relational database are also presented.
Abstract: Decision support systems are usually based on
multidimensional structures which use the concept of hypercube.
Dimensions are the axes on which facts are analyzed and form a
space where a fact is located by a set of coordinates at the
intersections of members of dimensions. Conventional
multidimensional structures deal with discrete facts linked to discrete
dimensions. However, when dealing with natural continuous
phenomena the discrete representation is not adequate. There is a
need to integrate spatiotemporal continuity within multidimensional
structures to enable analysis and exploration of continuous field data.
Research issues that lead to the integration of spatiotemporal
continuity in multidimensional structures are numerous. In this paper, we discuss research issues related to the integration of continuity in multidimensional structures and briefly present a multidimensional model for continuous field data. We also define new aggregation operations. The model and the associated operations and measures are validated by a prototype.
Abstract: Feature-based registration is an effective technique for clinical use, because it can greatly reduce computational costs. However, this technique, which estimates the transformation using feature points extracted from two images, may cause misalignments. To address this limitation, we propose to extract the salient edges and control points (CPs) of medical images using the efficient multiresolution representation of the nonsubsampled contourlet transform (NSCT), which finds the best feature points. The MR images were first decomposed using the NSCT, and then edges and CPs were extracted from the bandpass directional subbands of the NSCT coefficients using a set of proposed rules. After edge and CP extraction, mutual information was adopted for the registration of the feature points, and the translation parameters were calculated using particle swarm optimization (PSO). The experimental results showed that the proposed method achieves highly accurate registration of medical CT-MR images.
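The mutual-information criterion used for the registration step can be computed from a joint intensity histogram; a minimal sketch (illustrative, not the authors' implementation):

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Mutual information (nats) between two equally sized intensity lists,
    estimated from their marginal and joint histograms."""
    assert len(a) == len(b)
    n = len(a)
    pa, pb = Counter(a), Counter(b)       # marginal counts
    pab = Counter(zip(a, b))              # joint counts
    mi = 0.0
    for (x, y), c in pab.items():
        pxy = c / n
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts rescaled by n
        mi += pxy * math.log(c * n / (pa[x] * pb[y]))
    return mi
```

In registration, this score is maximized over candidate translations: identical images give the entropy of the image, while unrelated images score near zero.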
Abstract: With the tremendous growth of World Wide Web
(WWW) data, there is an emerging need for effective information
retrieval at the document level. Several query languages such as
XML-QL, XPath, XQL, Quilt and XQuery are proposed in recent
years to provide a faster way of querying XML data, but they still lack
generality and efficiency. Our approach towards evolving a framework
for querying semistructured documents is based on formal query
algebra. Two elements are introduced in the proposed framework:
first, a generic and flexible data model for logical representation of
semistructured data and second, a set of operators for the manipulation
of objects defined in the data model. In addition to accommodating
several peculiarities of semistructured data, our model offers novel
features such as bidirectional paths for navigational querying and
partitions for data transformation that are not available in other
proposals.
Abstract: In this paper, we propose an approach for the classification of fingerprint databases. It is based on the fact that a fingerprint image is composed of regular texture regions that can be successfully represented by co-occurrence matrices. We first extract features based on certain characteristics of the co-occurrence matrix and then use these features to train a neural network for classifying fingerprints into four common classes. The obtained results, compared with existing approaches, demonstrate the superior performance of our proposed approach.
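A gray-level co-occurrence matrix and the classic texture features derived from it can be sketched as follows (which characteristics the paper actually uses is not stated in the abstract; energy, contrast, and homogeneity below are common choices):

```python
from collections import defaultdict

def glcm(image, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel displacement."""
    counts = defaultdict(int)
    rows, cols = len(image), len(image[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(image[r][c], image[r2][c2])] += 1
                total += 1
    return {pair: v / total for pair, v in counts.items()}

def features(P):
    """Common co-occurrence features: energy, contrast, homogeneity."""
    energy = sum(p * p for p in P.values())
    contrast = sum(p * (i - j) ** 2 for (i, j), p in P.items())
    homogeneity = sum(p / (1 + abs(i - j)) for (i, j), p in P.items())
    return energy, contrast, homogeneity
```

Such feature vectors, computed for several displacements, would then form the input of the neural network classifier.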
Abstract: Cellular automata have been used for the design of cryptosystems. Recently, some secret sharing schemes based on linear memory cellular automata have been introduced which are used for both text and images. In this paper, we illustrate that these secret sharing schemes are vulnerable to collusion by dishonest participants. We propose a cheating model for secret sharing schemes based on linear memory cellular automata. For this purpose we present a novel uniform model for the representation of all secret sharing schemes based on cellular automata. Participants can cheat by sending bogus shares or bogus transition rules. Cheaters can cooperate to corrupt a shared secret and compute a cheating value added to it. Honest participants are not aware of the cheating and take the incorrect secret to be the valid one. We prove that cheaters can recover the valid secret by removing the cheating value from the corrupted secret, and we provide methods of calculating the cheating value.
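The linear memory cellular automata underlying these schemes are reversible, which is what makes share-based recovery possible in the first place; a minimal sketch with a rule-90-style XOR neighbourhood and one step of memory (illustrative, not the exact scheme under attack):

```python
def step(cur, prev):
    """One step of a linear memory CA on a ring: XOR of the two neighbours
    in the current configuration, XORed with the previous configuration."""
    n = len(cur)
    return [cur[(i - 1) % n] ^ cur[(i + 1) % n] ^ prev[i] for i in range(n)]

def evolve(c0, c1, steps):
    """Run the CA forward; the pair of consecutive configurations is the state."""
    a, b = c0, c1
    for _ in range(steps):
        a, b = b, step(b, a)
    return a, b

def reverse(a, b, steps):
    """Reversibility: applying the same local rule to the swapped pair
    steps the evolution backwards, recovering earlier configurations."""
    for _ in range(steps):
        a, b = step(a, b), a
    return a, b
```

In a memory-CA secret sharing scheme, the secret is hidden in an initial configuration and the shares are later configurations; reversibility is what lets a qualified set of participants run the automaton backwards to the secret, and it is equally what a colluding set exploits to compute and strip a cheating value.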
Abstract: XML is becoming a de facto standard for online data exchange. Existing XML filtering techniques based on a publish/subscribe model focus on highly structured data marked up with XML tags. These techniques are efficient in filtering documents of data-centric XML but are not effective in filtering the element contents of document-centric XML. In this paper, we propose an extended XPath specification which includes the special matching character '%' used in the LIKE operation of SQL, in order to overcome the difficulty of writing queries that adequately filter element contents under the previous XPath specification. We also present a novel technique for filtering a collection of document-centric XML documents, called Pfilter, which is able to exploit the extended XPath specification. We report several performance studies of efficiency and scalability using the multi-query processing time (MQPT).
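The '%' wildcard in a value predicate can be realised by translating the SQL-LIKE pattern into an anchored regular expression before matching element content; a minimal sketch (function names are illustrative, not the Pfilter implementation):

```python
import re

def like_to_regex(pattern):
    """Translate an SQL-LIKE pattern with '%' into an anchored regex:
    literal parts are escaped, each '%' becomes '.*'."""
    parts = [re.escape(p) for p in pattern.split('%')]
    return re.compile('^' + '.*'.join(parts) + '$', re.DOTALL)

def filter_elements(contents, pattern):
    """Return the element contents that match the extended value pattern."""
    rx = like_to_regex(pattern)
    return [c for c in contents if rx.match(c)]
```

For example, the pattern `XML%exchange` matches any element content that starts with "XML" and ends with "exchange", which a plain equality predicate in the earlier XPath specification could not express.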
Abstract: Source code retrieval is of immense importance in the software engineering field. The complex tasks of retrieving and extracting information from source code documents are vital in the development cycle of large software systems. The two main subtasks which result from these activities are code duplication prevention and plagiarism detection. In this paper, we propose a source code retrieval system based on a two-level fingerprint representation capturing, respectively, the structural and the semantic information within a source code. A sequence alignment technique is applied to these fingerprints in order to quantify the similarity between source code portions. The specific purpose of the system is to detect plagiarism and duplicated code between programs written in different programming languages belonging to the same class, such as C, C++, Java and C#. These four languages are supported by the current version of the system, which is designed such that it may be easily adapted to any programming language.
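An alignment-based similarity between two token fingerprints can be sketched with a longest-common-subsequence alignment (a simplification of the paper's technique; the token lists are illustrative):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token sequences,
    computed by standard dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j],
                                                               dp[i][j - 1])
    return dp[len(a)][len(b)]

def similarity(a, b):
    """Alignment-based similarity in [0, 1] between two fingerprints."""
    if not a or not b:
        return 0.0
    return 2.0 * lcs_len(a, b) / (len(a) + len(b))
```

Mapping programs in different languages onto a shared token alphabet first is what makes such cross-language comparison possible.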
Abstract: This paper presents a unified theory for local (Savitzky-Golay) and global polynomial smoothing. The algebraic framework can represent any polynomial approximation and is seamless from low-degree local to high-degree global approximations. The representation of the smoothing operator as a projection onto orthonormal basis functions enables the computation of: the covariance matrix for noise propagation through the filter; the noise gain; and the frequency response of the polynomial filters. A virtually perfect Gram polynomial basis is synthesized, whereby polynomials of degree d = 1000 can be synthesized without significant errors. The perfect basis ensures that the filters are strictly polynomial preserving. Given n points and a support length ls = 2m + 1, the smoothing operator is strictly linear phase for the points xi, i = m+1, ..., n-m. The method is demonstrated on geometric surface data lying on an invariant 2D lattice.
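The projection view of the smoothing operator can be checked directly: the filter weights applied at the window centre are one row of the least-squares projection onto the polynomial basis. A minimal sketch for degree 2 and support length 5, using a plain monomial basis rather than the paper's Gram basis:

```python
def solve(M, rhs):
    """Solve M z = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (A[r][n] - sum(A[r][c] * z[c] for c in range(r + 1, n))) / A[r][r]
    return z

def savgol_center_weights(m, degree):
    """Weights applied at the centre of a window of length 2m+1: the centre
    row of the projection P = A (A^T A)^{-1} A^T onto {1, x, ..., x^degree}.
    Since x = 0 at the centre, this is the first row of (A^T A)^{-1} A^T."""
    xs = list(range(-m, m + 1))
    A = [[x ** k for k in range(degree + 1)] for x in xs]
    M = [[sum(A[i][p] * A[i][q] for i in range(len(xs)))
          for q in range(degree + 1)] for p in range(degree + 1)]
    z = solve(M, [1.0] + [0.0] * degree)           # first row of (A^T A)^{-1}
    return [sum(z[k] * x ** k for k in range(degree + 1)) for x in xs]
```

For m = 2 and degree 2 this reproduces the classical Savitzky-Golay coefficients (-3, 12, 17, 12, -3)/35, and the weights sum to 1, confirming the polynomial-preserving property for constants.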
Abstract: Representation and description of object shapes by the slopes of their contours or borders are proposed. The idea is to capture the essence of the features that make it easier for a shape to be stored, transmitted, compared and recognized. These features must be independent of translation, rotation and scaling of the shape. An approach is proposed to obtain high performance and efficiency and to merge the boundaries into a sequence of straight line segments with the fewest possible segments. Evaluation of the performance of the proposed method is based on its comparison with an established method of object shape description.
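A slope-based contour description can be made independent of translation and rotation by storing the turning angles between successive segments rather than absolute slopes; a minimal sketch (illustrative, not the paper's exact descriptor):

```python
import math

def turning_angles(points):
    """Signed turning angle (degrees) at each vertex of a closed polygon:
    the change in segment direction, which is translation- and
    rotation-invariant (and scale-invariant, being an angle)."""
    n = len(points)
    angles = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = math.degrees(a2 - a1)
        angles.append((d + 180) % 360 - 180)   # wrap into [-180, 180)
    return angles

def rotate(points, deg):
    t = math.radians(deg)
    return [(x * math.cos(t) - y * math.sin(t), x * math.sin(t) + y * math.cos(t))
            for x, y in points]
```

A unit square, for instance, yields the angle sequence (90, 90, 90, 90) whether or not it has been rotated, so two such sequences can be compared directly.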
Abstract: Traditional principal component analysis (PCA) techniques for face recognition are based on batch-mode training using a pre-available image set. Real-world applications require that the training set be dynamic and of an evolving nature, where, within the framework of continuous learning, new training images are continuously added to the original set; this would trigger a costly continuous re-computation of the eigenspace representation via repeating an entire batch-based training that includes the old and new images. Incremental PCA methods allow adding new images and updating the PCA representation. In this paper, two incremental PCA approaches, CCIPCA and IPCA, are examined and compared. In addition, different learning and testing strategies are proposed and applied to the two algorithms. The results suggest that batch PCA is inferior to both incremental approaches, and that the CCIPCA variants are practically equivalent.
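The candid covariance-free (CCIPCA) update for the leading eigenvector needs no covariance matrix: each new sample pulls the running estimate toward itself in proportion to its projection onto the estimate. A single-component sketch on illustrative synthetic data (the full algorithm deflates the samples to obtain further components):

```python
import math
import random

def ccipca_first_component(samples):
    """CCIPCA update for the leading eigenvector (single component):
    v_n = ((n-1)/n) v_{n-1} + (1/n) u_n (u_n . v_hat), v_hat = v/|v|."""
    v = list(samples[0])                  # initialize with the first sample
    for n, u in enumerate(samples[1:], start=2):
        norm = math.sqrt(sum(c * c for c in v)) or 1.0
        proj = sum(ui * vi / norm for ui, vi in zip(u, v))
        v = [(n - 1) / n * vi + 1 / n * proj * ui for vi, ui in zip(v, u)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

rnd = random.Random(0)
# Zero-mean 2D data elongated along the x axis: dominant eigenvector ~ (+/-1, 0).
data = [(rnd.gauss(0, 3), rnd.gauss(0, 0.3)) for _ in range(2000)]
v = ccipca_first_component(data)
```

Each image that arrives updates the estimate in O(d) time, which is exactly what avoids the batch re-computation described above.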
Abstract: Railway stations are prone to emergencies for various reasons, and proper monitoring of railway stations is of immense importance from several angles. A Petri-net representation of a web-service-based emergency management system is proposed in this paper, which will help in monitoring the situation of trains, tracks, signals, etc., so that in case of any emergency the necessary resources can be dispatched.
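A Petri-net model of such a system reduces to places, transitions, and a token-moving firing rule; a minimal sketch with illustrative emergency-management places (not the paper's actual net):

```python
# A transition is enabled when every input place holds a token; firing it
# consumes one token per input place and produces one per output place.
marking = {"emergency_detected": 1, "resource_available": 1,
           "resource_dispatched": 0}

transitions = {
    "dispatch": {"in": ["emergency_detected", "resource_available"],
                 "out": ["resource_dispatched"]},
}

def enabled(t, m):
    return all(m[p] > 0 for p in transitions[t]["in"])

def fire(t, m):
    assert enabled(t, m), f"transition {t} is not enabled"
    m = dict(m)
    for p in transitions[t]["in"]:
        m[p] -= 1
    for p in transitions[t]["out"]:
        m[p] += 1
    return m
```

The reachable markings of such a net are what one would analyse to verify that, say, a detected emergency always leads to a dispatch.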
Abstract: Bioinformatics and cheminformatics are computer-based disciplines providing tools for the acquisition, storage, processing, analysis and integration of biological and chemical data, and for the development of potential applications of such data. A chemical database is a database designed exclusively to store chemical information. NMRShiftDB is one of the main databases used to represent chemical structures in 2D or 3D form. The SMILES format is one of many ways to write a chemical structure in a linear format. In this study we extracted antimicrobial structures in SMILES format from NMRShiftDB and stored them, with their corresponding information, in our Local Data Warehouse. Additionally, we developed a search tool that responds to a user's query using the JME Editor tool, which allows the user to draw or edit molecules and converts the drawn structure into SMILES format. We applied the Quick Search algorithm to search for antimicrobial structures in our Local Data Warehouse.
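The Quick Search (Sunday) algorithm skips ahead using the character just past the current window; a minimal sketch searching within a SMILES string (the example string is aspirin's SMILES, used purely for illustration):

```python
def quick_search(text, pattern):
    """Sunday's Quick Search: index of the first occurrence of pattern
    in text, or -1. On a mismatch, the shift is determined by the
    character immediately after the current window."""
    n, m = len(text), len(pattern)
    if m == 0:
        return 0
    # Shift table: distance from the rightmost occurrence of each
    # pattern character to just past the end of the pattern.
    shift = {c: m - i for i, c in enumerate(pattern)}
    i = 0
    while i + m <= n:
        if text[i:i + m] == pattern:
            return i
        if i + m == n:
            return -1
        i += shift.get(text[i + m], m + 1)
    return -1
```

Because the shift can be as large as the pattern length plus one, the algorithm is fast in practice on the short substructure patterns typical of SMILES queries.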
Abstract: Most real-life signals are time-varying in nature. For the proper characterization of such signals, a time-frequency representation is required. The STFT (short-time Fourier transform) is a classical tool used for this purpose. The limitation of the STFT is its fixed time-frequency resolution. Thus, an enhanced version of the STFT, based on cross-level sampling, is devised. It can adapt the sampling frequency and the window function length by following the local variations of the input signal. Therefore, it provides an adaptive-resolution time-frequency representation of the input. The computational complexity of the proposed STFT is derived and compared to that of the classical one. The results show a significant gain in computational efficiency and hence in processing power. The processing error of the proposed technique is also discussed.
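The classical fixed-resolution STFT, against which the cross-level-sampling version is compared, slides a window over the signal and takes a DFT of each frame; a minimal sketch (naive O(N^2) DFT kept for clarity; window and frame parameters are illustrative):

```python
import math

def stft_frame_magnitude(frame):
    """Magnitude spectrum of one Hann-windowed frame via a naive DFT."""
    N = len(frame)
    windowed = [x * 0.5 * (1 - math.cos(2 * math.pi * n / N))
                for n, x in enumerate(frame)]
    mags = []
    for k in range(N // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * n / N)
                 for n, x in enumerate(windowed))
        im = -sum(x * math.sin(2 * math.pi * k * n / N)
                  for n, x in enumerate(windowed))
        mags.append(math.hypot(re, im))
    return mags

def stft(signal, frame_len, hop):
    """Fixed-resolution STFT: one magnitude spectrum per hop."""
    return [stft_frame_magnitude(signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, hop)]
```

The fixed `frame_len` is precisely the limitation the abstract addresses: the proposed method instead lets the signal's own level crossings drive the sampling and window length.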
Abstract: Ontology-based modelling of multi-formatted software application content is a challenging area in content management. When the number of software content units is huge and in a continuous process of change, content change management is important. The management of content in this context requires targeted access and manipulation methods. We present a novel
approach to deal with model-driven content-centric information
systems and access to their content. At the core of our approach is an
ontology-based semantic annotation technique for diversely
formatted content that can improve the accuracy of access and
systems evolution. Domain ontologies represent domain-specific
concepts and conform to metamodels. Different ontologies - from
application domain ontologies to software ontologies - capture and
model the different properties and perspectives on a software content
unit. Interdependencies between domain ontologies, the artifacts and
the content are captured through a trace model. The annotation traces are formalised, and a graph-based system is selected for their representation.