Abstract: The standard investigational method for obstructive
sleep apnea syndrome (OSAS) diagnosis is polysomnography (PSG),
which consists of a simultaneous, usually overnight recording of
multiple electro-physiological signals related to sleep and
wakefulness. This is an expensive, encumbering protocol that is not
readily repeated, so there is a need for simpler and easily
implemented screening and detection techniques. Identification of
apnea/hypopnea events in the screening recordings is the key factor
for the diagnosis of OSAS. The analysis of a single-lead
electrocardiographic (ECG) signal for OSAS diagnosis, which may
be done with portable devices at the patient's home, has been a major
challenge of recent years. A novel artificial neural network (ANN) based
approach for feature extraction and automatic identification of
respiratory events in ECG signals is presented in this paper. A
nonlinear principal component analysis (NLPCA) method was
considered for feature extraction and support vector machine for
classification/recognition. An alternative representation of the
respiratory events by means of Kohonen type neural network is
discussed. Our prospective study was based on OSAS patients of the
Clinical Hospital of Pneumology from Iaşi, Romania, males and
females, as well as on non-OSAS investigated human subjects. Our
computational analysis includes a learning phase based on cross-signal
PSG annotation.
Abstract: As more people from non-technical backgrounds
are becoming directly involved with large-scale ontology
development, the focal point of ontology research has shifted
from the more theoretical ontology issues to problems
associated with the actual use of ontologies in real-world,
large-scale collaborative applications. Recently the National
Science Foundation funded a large collaborative ontology
development project for which a new formal ontology model,
the Ontology Abstract Machine (OAM), was developed to
satisfy some unique functional and data representation
requirements. This paper introduces the OAM model and the
related algorithms that enable maintenance of an ontology that
supports node-based user access. The successful software
implementation of the OAM model and its subsequent acceptance by
a large research community demonstrate its validity and its
real-world application value.
Abstract: In the text categorization problem, the most widely used
method for document representation is based on word-frequency
vectors, known as the Vector Space Model (VSM). This representation
relies only on the words occurring in documents and therefore loses
any "word context" information found in the document. In this article
we compare the classical method of document representation with a
method called the Suffix Tree Document Model (STDM), which
represents documents in suffix tree format. For the STDM model we
propose a new approach to document representation and a new
formula for computing the similarity between two documents: we
build the suffix tree for only two documents at a time. This approach
is faster, has lower memory consumption, and uses the entire
document representation without requiring methods for discarding
nodes. We also propose, for this method, a formula for computing the
similarity between documents that substantially improves clustering
quality. This representation method was validated using Hierarchical
Agglomerative Clustering (HAC). In this context we also examine
the influence of stemming in the document preprocessing step and
highlight the difference between similarity and dissimilarity measures
for finding "closer" documents.
Abstract: This article describes Uruk, the virtual museum of
Iraq that we developed for visual exploration and retrieval of image
collections. The system largely exploits the loosely structured
hierarchy of XML documents, which provides a useful representation
method for storing semi-structured or unstructured data that does not
easily fit into existing databases. The system offers users the
capability to mine and manage the XML-based image collections
through a web-based Graphical User Interface (GUI). In a typical
interactive session, the user can browse a visual
structural summary of the XML database in order to select interesting
elements. Using this intermediate result, queries combining structure
and textual references can be composed and presented to the system.
After query evaluation, the full set of answers is presented in a visual
and structured way.
Abstract: This paper gives an overview of how an OWL
ontology has been created to represent template knowledge models
defined in CML that are provided by CommonKADS.
CommonKADS is a mature knowledge engineering methodology
which proposes the use of template knowledge models for knowledge
modelling. The aim of developing this ontology is to present the
template knowledge models in a knowledge representation language
that can be easily understood and shared in the knowledge
engineering community. Hence OWL is used, as it has become a
standard for ontologies and already has user-friendly tools for
viewing and editing.
Abstract: This research paper presents methods to assess the performance of the Wigner-Ville Distribution (WVD) for time-frequency representation of non-stationary signals, in comparison with other representations such as the STFT and the spectrogram. The simultaneous time-frequency resolution of the WVD is one of the important properties that makes it preferable for the analysis and detection of linear FM and transient signals. Two algorithms are proposed here to assess the resolution and to compare signal detection performance. The first method is based on the measurement of the area under the time-frequency plot, in the case of linear FM signal analysis. The second method is based on instantaneous power calculation and is used in the case of transient, non-stationary signals. The implementation of both methods is explained briefly with suitable diagrams. The accuracy of the measurements is validated to show the better performance of the WVD representation in comparison with the STFT and spectrograms.
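As a rough sketch of the distribution being assessed, the discrete Wigner-Ville distribution of an analytic signal can be computed from the instantaneous autocorrelation. The function below is a minimal illustration, not the paper's measurement algorithm; note that with unit lag steps the frequency axis is scaled by two:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a complex (analytic) signal.

    For each time index n, form the instantaneous autocorrelation
    r(tau) = x[n+tau] * conj(x[n-tau]) and take its FFT over tau.
    With unit lag steps, a tone at frequency bin f peaks at bin 2f.
    """
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)
        tau = np.arange(-taumax, taumax + 1)
        r = x[n + tau] * np.conj(x[n - tau])
        row = np.zeros(N, dtype=complex)
        row[tau % N] = r          # place lags on the FFT grid
        W[:, n] = np.fft.fft(row).real
    return W
```

Columns of `W` give the energy distribution over frequency at each instant, which is the quantity the area-under-plot and instantaneous-power measurements operate on.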
Abstract: Term Extraction, a key data preparation step in Text
Mining, extracts the terms, i.e. relevant collocations of words,
attached to specific concepts (e.g. genetic-algorithms and
decision-trees are terms associated with the concept "Machine
Learning"). In
this paper, the task of extracting interesting collocations is achieved
through a supervised learning algorithm, exploiting a few
collocations manually labelled as interesting/not interesting. From
these examples, the ROGER algorithm learns a numerical function,
inducing some ranking on the collocations. This ranking is optimized
using genetic algorithms, maximizing the trade-off between the false
positive and true positive rates (Area Under the ROC curve). This
approach uses a particular representation for the word collocations,
namely the vector of values corresponding to the standard statistical
interestingness measures attached to this collocation. As this
representation is general (across corpora and natural languages),
generality tests were performed by applying the ranking
function learned from an English corpus in biology to a French
corpus of curricula vitae, and vice versa, showing good
robustness of the approach compared to the state-of-the-art Support
Vector Machine (SVM).
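The AUC criterion that the ranking is optimized against can be computed directly from a scoring function via the rank-sum formulation; a minimal sketch of the standard definition (not the ROGER algorithm itself):

```python
def auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly
    drawn positive example is ranked above a randomly drawn
    negative one (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A genetic search over ranking-function parameters, as in the abstract, would use this value as the fitness of each candidate.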
Abstract: A system for market identification (SMI) is presented.
The resulting representations are multivariable dynamic demand
models. The market specifics are analyzed. Appropriate models and
identification techniques are chosen. Multivariate static and dynamic
models are used to represent the market behavior. The steps of the
first stage of SMI, named data preprocessing, are outlined. Next,
the second stage, model estimation, is considered in more
detail. Stepwise linear regression (SWR) is used to determine the
significant cross-effects and the orders of the model polynomials. The
estimates of the model parameters are obtained by a numerically stable
estimator. Real market data is used to analyze SMI performance.
The main conclusion is related to the applicability of multivariate
dynamic models for representation of market systems.
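Stepwise regression for selecting significant cross-effects can be sketched as a greedy forward search. The following is a minimal illustration (forward selection by residual sum of squares, without the significance tests a full SWR implementation would apply):

```python
import numpy as np

def forward_stepwise(X, y, max_terms):
    """Greedy forward selection: repeatedly add the regressor
    whose inclusion most reduces the residual sum of squares."""
    n, p = X.shape
    selected = []
    for _ in range(max_terms):
        best, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = np.sum((y - X[:, cols] @ beta) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        selected.append(best)
    return selected
```

In a market model, the columns of `X` would be lagged demand and price terms, and the selected columns determine the significant cross-effects and polynomial orders.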
Abstract: Sparse representation, which can represent high-dimensional
data effectively, has been successfully used in computer vision
and pattern recognition problems. However, it does not consider the
label information of data samples. To overcome this limitation,
we develop a novel dimensionality reduction algorithm, namely
discriminatively regularized sparse subspace learning (DR-SSL), in this
paper. The proposed DR-SSL algorithm can not only make use of
the sparse representation to model the data, but can also effectively
employ the label information to guide the dimensionality
reduction procedure. In addition, the presented algorithm can effectively
deal with the out-of-sample problem. The experiments on gene-expression
data sets show that the proposed algorithm is an effective tool for
dimensionality reduction and gene-expression data classification.
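The sparse representation step underlying such methods is typically an l1-regularized coding problem; below is a minimal numpy sketch using iterative soft-thresholding (ISTA), as an illustration of sparse coding in general, not of the DR-SSL algorithm itself:

```python
import numpy as np

def sparse_code(D, x, lam, iters=200):
    """ISTA for min_a 0.5 * ||x - D a||^2 + lam * ||a||_1.

    Gradient step on the quadratic term followed by soft-thresholding,
    with step size 1/L for L the squared spectral norm of D.
    """
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = a + D.T @ (x - D @ a) / L          # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return a
```

The codes `a` obtained per sample are the sparse representation that a discriminative regularizer would then couple to the label information.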
Abstract: The job shop scheduling problem (JSSP) is well known as one of the most difficult combinatorial optimization problems. This paper presents a hybrid genetic algorithm for the JSSP with the objective of minimizing makespan. The efficiency of the genetic algorithm is enhanced by integrating it with a local search method. The chromosome representation of the problem is based on operations. Schedules are constructed using a procedure that generates full active schedules. In each generation, a local search heuristic based on Nowicki and Smutnicki's neighborhood is applied to improve the solutions. The approach is tested on a set of standard instances taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
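An operation-based chromosome can be decoded into a schedule by dispatching each operation at its earliest feasible time; a minimal sketch of such a decoder, with an illustrative data layout not taken from the paper:

```python
def makespan(jobs, sequence):
    """Decode an operation-based chromosome into its makespan.

    jobs[j] is a list of (machine, duration) operations for job j;
    sequence is a permutation in which each job index j appears
    len(jobs[j]) times -- the k-th occurrence of j dispatches job j's
    k-th operation at the earliest time both the job and its machine
    are free.
    """
    next_op = [0] * len(jobs)      # next operation index per job
    job_ready = [0] * len(jobs)    # when each job's last op finishes
    mach_ready = {}                # when each machine becomes free
    for j in sequence:
        machine, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0))
        job_ready[j] = start + dur
        mach_ready[machine] = start + dur
        next_op[j] += 1
    return max(job_ready)
```

Because every permutation with the right multiplicities decodes to a feasible schedule, genetic operators never produce invalid offspring, which is the usual appeal of this representation.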
Abstract: Effective knowledge support relies on providing
operation-relevant knowledge to workers promptly and accurately. A
knowledge flow represents an individual's or a group's
knowledge-needs and referencing behavior of codified knowledge
during operation performance. The flow has been utilized to facilitate
organizational knowledge support by illustrating workers'
knowledge-needs systematically and precisely. However,
conventional knowledge-flow models cannot work well in cooperative
teams, in which team members usually have diverse knowledge-needs
in terms of their roles. The reason is that those models provide only a single
view to all participants and do not reflect individual knowledge-needs
in flows. Hence, we propose a role-based knowledge-flow view model
in this work. The model builds knowledge-flow views (or virtual
knowledge flows) by creating appropriate virtual knowledge nodes
and generalizing knowledge concepts to required concept levels. The
customized views can represent an individual role's knowledge-needs
in a teamwork context. The novel model indicates knowledge-needs in
a condensed representation from a role's perspective and enhances the
efficiency of cooperative knowledge support in organizations.
Abstract: In this paper, the relative performances on spectral
classification of short exon and intron sequences of the human and
eleven model organisms are studied. In the simulations, all
combinations of sixteen one-sequence numerical representations, four
threshold values, and four window lengths are considered. Sequences
of 150-base length are chosen and for each organism, a total of
16,000 sequences are used for training and testing. Results indicate
that an appropriate combination of one-sequence numerical
representation, threshold value, and window length is essential for
arriving at top spectral classification results. For fixed-length
sequences, the precisions on exon and intron classification obtained
for different organisms are not the same because of their genomic
differences. In general, precision increases as sequence length
increases.
Abstract: In the globalization process, when the struggle for the minds and values of people is taking place, the impact of virtual space can cause unexpected effects and consequences in the process of young people's adjustment to this world. Its special significance lies in its unconscious influence on the underlying process of meaning-making; the values preached in virtual space are therefore much more effective and affect both personal characteristics and the peculiarities of the adjustment process. Related to this, the challenge is to identify the factors influencing the reflection characteristics of virtual subjects and to measure their impact on the personal characteristics of students.
Abstract: In this paper we propose a computational model for the representation and processing of morpho-phonological phenomena in a natural language such as Modern Greek. We aim at a unified treatment of inflection, compounding, and word-internal phonological changes, in a model that is used for both analysis and generation. After discussing certain difficulties caused by well-known finite-state approaches, such as Koskenniemi's two-level model [7], when applied to a computational treatment of compounding, we argue that a morphology-based model provides a more adequate account of word-internal phenomena. Contrary to the finite-state approaches, which cannot handle hierarchical word constituency in a satisfactory way, we propose a unification-based word grammar as the nucleus of our strategy, which takes into consideration word representations based on affixation and [stem stem] or [stem word] compounds. In our formalism, feature-passing operations are formulated with the use of the unification device, and phonological rules modeling the correspondence between lexical and surface forms apply at morpheme boundaries. Throughout the paper, examples from Modern Greek illustrate our approach. Morpheme structures, stress, and morphologically conditioned phoneme changes are analyzed and generated in a principled way.
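The unification device used for feature passing can be illustrated with a minimal feature-structure unifier over nested dictionaries. This is a simplified sketch; a real word-grammar unifier would also handle reentrancy and variables:

```python
def unify(f1, f2):
    """Unify two feature structures represented as nested dicts.

    Atomic values must match exactly; shared features are unified
    recursively; unification fails (returns None) on any conflict.
    """
    if isinstance(f1, dict) and isinstance(f2, dict):
        out = dict(f1)
        for key, val in f2.items():
            if key in out:
                sub = unify(out[key], val)
                if sub is None:
                    return None        # conflicting values
                out[key] = sub
            else:
                out[key] = val
        return out
    return f1 if f1 == f2 else None
```

In a word grammar, agreement between a stem and an affix (or between the members of a [stem stem] compound) amounts to requiring that their feature structures unify.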
Abstract: This paper focuses upon three such painters working in
France from this time and their representations both of their host
country in which they found themselves displaced, and of their
homeland which they represent through refracted memories from their
new perspective in Europe. What is their representation of France and
China/Taiwan? Is it Otherness or an origin?
This paper also attempts to explore the three artists' diasporic lives
and to redefine their transnational identities. For Hou Chin-lang, the
significance of his multiple-split images serves to highlight the intricate
relationships between his work and the surrounding family, and to
reveal his identity of his Taiwan “homeland". Yin Xin takes paintings
from the Western canon and subjects them to a process of
transformation through Chinese imagery. In the same period, Lin
Li-ling transforms the transnational spirit of Yin Xin into symbolic
codes with neutered female bodies and tattoos, thus creating images
that challenge the boundaries of both gender and nationality.
Abstract: General requirements for knowledge representation in
the form of logic rules, applicable to design and control of industrial
processes, are formulated. Characteristic behavior of decision trees
(DTs) and rough sets theory (RST) in rules extraction from recorded
data is discussed and illustrated with simple examples. The
significance of the models' drawbacks was evaluated using
simulated and industrial data sets. It is concluded that the performance of
DTs may be considerably poorer in several important aspects,
compared to RST, particularly when not only a characterization of a
problem is required, but also detailed and precise rules are needed,
according to actual, specific problems to be solved.
Abstract: Concept maps can be generated manually or
automatically. It is important to recognize differences of the two
types of concept maps. The automatically generated concept maps
are dynamic, interactive, and full of associations between the terms
on the maps and the underlying documents. Through a specific
concept mapping system, Visual Concept Explorer (VCE), this paper
discusses how automatically generated concept maps are different
from manually generated concept maps and how different
applications and learning opportunities might be created with the
automatically generated concept maps. The paper presents several
examples of learning strategies that take advantage of the
automatically generated concept maps for concept learning and
exploration.
Abstract: Knowledge sharing in general and the contextual
access to knowledge in particular, still represent a key challenge in
the knowledge management framework. Researchers on semantic
web and human machine interface study techniques to enhance this
access. For instance, in semantic web, the information retrieval is
based on domain ontology. In human machine interface, keeping
track of user's activity provides some elements of the context that can
guide the access to information. We suggest an approach based on
these two key guidelines, whilst avoiding some of their weaknesses.
The approach permits a representation of both the context and the
design rationale of a project for an efficient access to knowledge. In
fact, the method consists of an information retrieval environment
that, on the one hand, can infer knowledge modeled as a semantic
network and, on the other hand, is based on the context and the
objectives of a specific activity (the design). The environment we
defined can also be used to gather similar project elements in order to
build classifications of tasks, problems, arguments, etc. produced in a
company. These classifications can show the evolution of design
strategies in the company.
Abstract: Existing literature on design reasoning seems to give
one-sided accounts of expert design behaviour based on
internal processing. In the same way, ecological theories seem to
focus one-sidedly on external elements, resulting in a lack of a
unifying design cognition theory. Although current extended design
cognition studies acknowledge the intellectual interaction between
internal and external resources, there still seems to be insufficient
understanding of the complexities involved in such interactive
processes. As such, this paper proposes a novel multi-directional
model for design researchers to map the complex and dynamic
conduct-controlling behaviour, in which both the computational and
ecological perspectives are integrated in a vertical manner. A clear
distinction between identified intentional and emerging physical
drivers, and the relationships between them during the early phases
of experts' design process, is demonstrated by presenting a case
study in which the model was employed.
Abstract: This document details the process of developing a
wireless device that captures the basic movements of the foot (plantar
flexion, dorsal flexion, abduction, adduction) and the knee
movement (flexion). It implements a motion capture system using
hardware based on optical fiber sensors, due to the advantages in
terms of scope, noise immunity and speed of data transmission and
reception. The operating principle used by this system is the detection
and transmission of joint movement by mechanical elements and
their respective measurement by optical ones (in this case infrared).
Likewise, Visual Basic software is used for reception, analysis and
signal processing of data acquired by the device, generating a 3D
graphical representation in real time of each movement. The result is
a boot in charge of capturing the movement, a transmission module
(implementing XBee technology) and a receiver module that
receives the information and sends it to the PC for processing.
The main aim of this device is to assist in fields such as
bioengineering and medicine by improving the quality of
life and movement analysis.