Abstract: The model-based approach to user interface design
relies on developing separate models capturing various aspects of
users, tasks, the application domain, and presentation and dialog structures.
This paper presents a task modeling approach for user interface
design and aims at exploring mappings between task, domain and
presentation models. The basic idea of our approach is to identify
typical configurations in task and domain models and to investigate
how they relate to each other. A special emphasis is put on
application-specific functions and on mappings between domain objects
and operational task structures. In this respect, we will address two
layers in task decomposition: a functional (planning) layer and an
operational layer.
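To make the two layers concrete, the following toy sketch (in Python; the structure and names are invented for illustration and are not the paper's notation) shows a task tree whose functional (planning) tasks decompose into operational tasks mapped to domain objects:

```python
from dataclasses import dataclass, field
from typing import Iterator, List, Tuple

@dataclass
class Task:
    name: str
    layer: str                      # "functional" (planning) or "operational"
    domain_object: str = ""         # mapped domain object, if operational
    subtasks: List["Task"] = field(default_factory=list)

# hypothetical example task model
book_flight = Task("book flight", "functional", subtasks=[
    Task("select itinerary", "functional", subtasks=[
        Task("enter origin", "operational", "Airport"),
        Task("enter date", "operational", "Date"),
    ]),
    Task("pay", "operational", "CreditCard"),
])

def operational_mappings(task: Task) -> Iterator[Tuple[str, str]]:
    """Collect (task, domain object) pairs from the operational layer."""
    if task.layer == "operational":
        yield task.name, task.domain_object
    for sub in task.subtasks:
        yield from operational_mappings(sub)

print(list(operational_mappings(book_flight)))
```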
Abstract: This paper presents methods to assess the performance of the Wigner-Ville Distribution (WVD) for time-frequency representation of non-stationary signals, in comparison with other representations such as the STFT and the spectrogram. The simultaneous time-frequency resolution of the WVD is one of the key properties that makes it preferable for the analysis and detection of linear FM and transient signals. Two algorithms are proposed here to assess the resolution and to compare signal-detection performance. The first method is based on measuring the area under the time-frequency plot and is applied to the analysis of a linear FM signal. The second method is based on instantaneous power calculation and is used for transient, non-stationary signals. The implementation of both methods is explained briefly with suitable diagrams. The accuracy of the measurements is validated, showing the better performance of the WVD representation in comparison with the STFT and the spectrogram.
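As a rough illustration of this kind of comparison (a sketch assuming numpy and scipy, not the authors' implementation), the following computes a discrete pseudo Wigner-Ville distribution and a spectrogram for a linear chirp; measuring the energy spread around the chirp ridge in each representation gives a simple area-based resolution measure in the spirit of the first method:

```python
import numpy as np
from scipy.signal import hilbert, spectrogram

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))   # chirp sweeping 50 -> 250 Hz

z = hilbert(x)        # analytic signal suppresses negative-frequency cross-terms
N = len(z)
wvd = np.zeros((N, N))
for n in range(N):
    tau_max = min(n, N - 1 - n)                   # largest symmetric lag at sample n
    tau = np.arange(-tau_max, tau_max + 1)
    kernel = np.zeros(N, dtype=complex)
    kernel[tau % N] = z[n + tau] * np.conj(z[n - tau])  # instantaneous autocorrelation
    wvd[:, n] = np.real(np.fft.fft(kernel))             # FFT over lag -> frequency axis

f_spec, t_spec, Sxx = spectrogram(x, fs=fs, nperseg=128)
# Comparing the area above a fixed threshold in `wvd` versus `Sxx`
# around the chirp ridge gives a simple resolution measure.
```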
Abstract: Term Extraction, a key data preparation step in Text
Mining, extracts the terms, i.e., relevant collocations of words,
attached to specific concepts (e.g., genetic-algorithms and
decision-trees are terms associated with the concept “Machine Learning”). In
this paper, the task of extracting interesting collocations is achieved
through a supervised learning algorithm, exploiting a few
collocations manually labelled as interesting/not interesting. From
these examples, the ROGER algorithm learns a numerical function,
inducing some ranking on the collocations. This ranking is optimized
using genetic algorithms, maximizing the trade-off between the false
positive and true positive rates (Area Under the ROC curve). This
approach uses a particular representation for the word collocations,
namely the vector of values corresponding to the standard statistical
interestingness measures attached to this collocation. As this
representation is general (over corpora and natural languages),
generality tests were performed by applying the ranking function
learned from an English corpus in Biology to a French corpus of
curricula vitae, and vice versa, showing good robustness of the
approach compared to the state-of-the-art Support Vector Machine (SVM).
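A minimal sketch of the core idea (assuming numpy; ROGER's actual representation, operators, and data are not reproduced here): evolve a linear ranking function over the interestingness-measure vectors so as to maximize the AUC on synthetic labelled examples.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))       # 5 interestingness measures per collocation
y = (X @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + rng.normal(size=200)) > 0

def auc(w):
    """AUC of the ranking induced by scores X @ w: probability that a
    positive example is ranked above a negative one."""
    s = X @ w
    pos, neg = s[y], s[~y]
    return (pos[:, None] > neg[None, :]).mean()

pop = rng.normal(size=(30, X.shape[1]))         # population of weight vectors
for gen in range(100):
    fitness = np.array([auc(w) for w in pop])
    parents = pop[np.argsort(fitness)[-10:]]    # keep the 10 best
    children = parents[rng.integers(0, 10, 20)] + 0.1 * rng.normal(size=(20, 5))
    pop = np.vstack([parents, children])

best = pop[np.argmax([auc(w) for w in pop])]
print("AUC of evolved ranking:", auc(best))
```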
Abstract: A system for market identification (SMI) is presented.
The resulting representations are multivariable dynamic demand
models. The market specifics are analyzed. Appropriate models and
identification techniques are chosen. Multivariate static and dynamic
models are used to represent the market behavior. The steps of the
first stage of SMI, named data preprocessing, are outlined. Next,
the second stage, which is the model estimation, is considered in
more detail. Stepwise linear regression (SWR) is used to determine the
significant cross-effects and the orders of the model polynomials. The
estimates of the model parameters are obtained by a numerically stable
estimator. Real market data is used to analyze SMI performance.
The main conclusion is related to the applicability of multivariate
dynamic models for representation of market systems.
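The model-estimation stage can be illustrated with a small forward stepwise regression sketch (assuming numpy; the data and stopping threshold are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, candidates = 120, 8
X = rng.normal(size=(n, candidates))      # candidate regressors (prices, lags, cross-effects)
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.normal(size=n)

selected = []
residual = y.copy()
for _ in range(candidates):
    remaining = [j for j in range(candidates) if j not in selected]
    # pick the candidate most correlated with the current residual
    best = max(remaining, key=lambda j: abs(np.corrcoef(X[:, j], residual)[0, 1]))
    trial = selected + [best]
    beta = np.linalg.lstsq(X[:, trial], y, rcond=None)[0]
    new_residual = y - X[:, trial] @ beta
    # stepwise criterion: keep the term only if it reduces SSE noticeably
    if selected and np.sum(new_residual ** 2) > 0.99 * np.sum(residual ** 2):
        break
    selected, residual = trial, new_residual

print("selected regressors:", selected)   # typically columns 1 and 4
```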
Abstract: Sparse representation, which can represent high-dimensional
data effectively, has been successfully used in computer vision
and pattern recognition problems. However, it does not consider the
label information of data samples. To overcome this limitation,
we develop a novel dimensionality reduction algorithm, namely
discriminatively regularized sparse subspace learning (DR-SSL), in this
paper. The proposed DR-SSL algorithm can not only make use of
sparse representation to model the data, but can also effectively
employ the label information to guide the dimensionality reduction
procedure. In addition, the presented algorithm can effectively deal
with the out-of-sample problem. The experiments on gene-expression
data sets show that the proposed algorithm is an effective tool for
dimensionality reduction and gene-expression data classification.
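A rough sketch of the ingredients such an algorithm combines (assuming numpy and scikit-learn; this illustrates sparse, label-regularized subspace learning generically and is not the DR-SSL algorithm itself): sparse codes give a reconstruction graph, labels give a discriminative graph, and a linear projection, which handles out-of-sample points, is obtained from an eigenproblem.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 100))          # 60 samples, 100 genes (synthetic)
y = np.repeat([0, 1, 2], 20)            # class labels

# Sparse reconstruction weights: code each sample over the others.
n = X.shape[0]
S = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)
    coder = Lasso(alpha=0.05, max_iter=5000).fit(X[others].T, X[i])
    S[i, others] = coder.coef_

# Discriminative regularizer: pull same-label samples together.
D = (y[:, None] == y[None, :]).astype(float)

# Combined graph Laplacian; small identity keeps the problem well-conditioned.
W = 0.5 * (np.abs(S) + np.abs(S).T) + D
L = np.diag(W.sum(1)) - W
M = X.T @ L @ X + 1e-3 * np.eye(X.shape[1])

# Projection = eigenvectors with the smallest eigenvalues of M.
vals, vecs = np.linalg.eigh(M)
P = vecs[:, :2]                         # project to 2-D
Z_train = X @ P
Z_new = rng.normal(size=(1, 100)) @ P   # out-of-sample embedding is a product
```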
Abstract: The job shop scheduling problem (JSSP) is well known as one of the most difficult combinatorial optimization problems. This paper presents a hybrid genetic algorithm for the JSSP with the objective of minimizing makespan. The efficiency of the genetic algorithm is enhanced by integrating it with a local search method. The chromosome representation of the problem is based on operations. Schedules are constructed using a procedure that generates full active schedules. In each generation, a local search heuristic based on Nowicki and Smutnicki's neighborhood is applied to improve the solutions. The approach is tested on a set of standard instances taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
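For illustration, a minimal sketch (assumptions mine, not the paper's code) of how an operation-based chromosome is decoded into a schedule; note the paper generates full active schedules, whereas this sketch shows the simpler semi-active decoding:

```python
from typing import List, Tuple

# jobs[j] = list of (machine, processing_time) in technological order;
# a toy 3-job, 3-machine instance
jobs: List[List[Tuple[int, int]]] = [
    [(0, 3), (1, 2), (2, 2)],
    [(0, 2), (2, 1), (1, 4)],
    [(1, 4), (2, 3), (0, 1)],
]

def decode(chromosome: List[int]) -> int:
    """Return the makespan of the semi-active schedule for a chromosome:
    a job sequence with repetitions, where each occurrence of a job id
    schedules that job's next operation as early as possible."""
    next_op = [0] * len(jobs)        # next operation index per job
    job_ready = [0] * len(jobs)      # completion time of each job's last op
    mach_ready = [0] * 3             # completion time on each machine
    for j in chromosome:
        machine, p = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready[machine])
        job_ready[j] = mach_ready[machine] = start + p
        next_op[j] += 1
    return max(job_ready)

# each job id appears once per operation (three operations per job here)
print(decode([0, 1, 2, 0, 2, 1, 0, 1, 2]))
```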
Abstract: Effective knowledge support relies on providing
operation-relevant knowledge to workers promptly and accurately. A
knowledge flow represents an individual's or a group's
knowledge-needs and referencing behavior of codified knowledge
during operation performance. The flow has been utilized to facilitate
organizational knowledge support by illustrating workers'
knowledge-needs systematically and precisely. However,
conventional knowledge-flow models cannot work well in cooperative
teams, in which team members usually have diverse knowledge-needs in
terms of roles. The reason is that those models only provide one single
view to all participants and do not reflect individual knowledge-needs
in flows. Hence, we propose a role-based knowledge-flow view model
in this work. The model builds knowledge-flow views (or virtual
knowledge flows) by creating appropriate virtual knowledge nodes
and generalizing knowledge concepts to required concept levels. The
customized views can represent an individual role's knowledge-needs
in a teamwork context. The novel model indicates knowledge-needs in
condensed representation from a role's perspective and enhances the
efficiency of cooperative knowledge support in organizations.
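An illustrative sketch (all structures and names hypothetical) of the view-building idea: a concrete knowledge flow is generalized to the concept level a given role needs, and repeated nodes are merged into virtual knowledge nodes:

```python
concept_parent = {                       # a tiny concept hierarchy
    "JavaGenerics": "Java", "Java": "Programming",
    "UnitTesting": "Testing", "Testing": "QualityAssurance",
}

def generalize(concept: str, levels: int) -> str:
    """Climb `levels` steps up the concept hierarchy."""
    for _ in range(levels):
        concept = concept_parent.get(concept, concept)
    return concept

# codified-knowledge references over time, in operation order
flow = ["JavaGenerics", "UnitTesting", "JavaGenerics"]

def view_for_role(flow, levels):
    """Virtual knowledge flow: generalized nodes, consecutive repeats merged."""
    view = []
    for c in flow:
        g = generalize(c, levels)
        if not view or view[-1] != g:    # merge duplicates into one virtual node
            view.append(g)
    return view

print(view_for_role(flow, 0))   # developer view: detailed concepts
print(view_for_role(flow, 2))   # manager view: condensed concepts
```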
Abstract: In this paper, the relative performance on spectral
classification of short exon and intron sequences of the human and
eleven model organisms is studied. In the simulations, all
combinations of sixteen one-sequence numerical representations, four
threshold values, and four window lengths are considered. Sequences
of 150-base length are chosen, and for each organism a total of
16,000 sequences are used for training and testing. Results indicate
that an appropriate combination of one-sequence numerical
representation, threshold value, and window length is essential for
arriving at top spectral classification results. For fixed-length
sequences, the precisions on exon and intron classification obtained
for different organisms are not the same because of their genomic
differences. In general, precision increases as sequence length
increases.
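For context, the classic period-3 spectral feature underlying such exon/intron classification can be sketched as follows (assuming numpy; the paper's sixteen representations, threshold values, and window lengths are not reproduced):

```python
import numpy as np

def voss_indicators(seq: str) -> np.ndarray:
    """Voss representation: one binary indicator sequence per base."""
    return np.array([[c == b for c in seq] for b in "ACGT"], dtype=float)

def period3_score(seq: str, window: int = 150) -> float:
    """Total spectral power at k = window/3 over the four indicators;
    exons show a peak at frequency 1/3 due to codon structure."""
    u = voss_indicators(seq[:window])
    spectra = np.abs(np.fft.fft(u, axis=1)) ** 2
    return float(spectra[:, window // 3].sum())

rng = np.random.default_rng(0)
intron_like = "".join(rng.choice(list("ACGT"), size=150))
exon_like = "ATG" * 50                    # exaggerated codon periodicity
print(period3_score(exon_like), ">", period3_score(intron_like))
# Thresholding this score (cf. the paper's threshold values) yields a
# simple exon/intron decision for fixed-length windows.
```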
Abstract: In the globalization process, when a struggle for the minds and values of people is taking place, the virtual space can cause unexpected effects and consequences in the process of young people's adjustment to this world. Its special significance is defined by its unconscious influence on the underlying process of meaning-making; therefore, the values it preaches are much more effective and affect both personal characteristics and the peculiarities of the adjustment process. The challenge, then, is to identify the factors influencing the reflection characteristics of virtual subjects and to measure their impact on the personal characteristics of students.
Abstract: In this paper, we propose a computational model for the representation and processing of morpho-phonological phenomena in a natural language such as Modern Greek. We aim at a unified treatment of inflection, compounding, and word-internal phonological changes, in a model that is used for both analysis and generation. After discussing certain difficulties caused by well-known finite-state approaches, such as Koskenniemi's two-level model [7], when applied to a computational treatment of compounding, we argue that a morphology-based model provides a more adequate account of word-internal phenomena. Contrary to the finite-state approaches, which cannot handle hierarchical word constituency in a satisfactory way, we propose a unification-based word grammar as the nucleus of our strategy, which takes into consideration word representations based on affixation and on [stem stem] or [stem word] compounds. In our formalism, feature-passing operations are formulated with the use of the unification device, and phonological rules modeling the correspondence between lexical and surface forms apply at morpheme boundaries. Examples from Modern Greek illustrate our approach. Morpheme structures, stress, and morphologically conditioned phoneme changes are analyzed and generated in a principled way.
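The unification device at the heart of such a word grammar can be sketched in a few lines (a toy illustration, not the paper's formalism):

```python
def unify(fs1: dict, fs2: dict):
    """Unify two flat feature structures: return the merged structure,
    or None when a shared feature clashes."""
    result = dict(fs1)
    for feat, val in fs2.items():
        if feat in result and result[feat] != val:
            return None                   # feature clash: unification fails
        result[feat] = val
    return result

# In a [stem stem] compound, the head stem passes its features to the
# whole word (feature passing via unification).
stem_head = {"cat": "noun", "gender": "neut", "stress": "penult"}
compound = unify({"cat": "noun"}, stem_head)      # head features percolate
print(compound)
print(unify({"gender": "fem"}, stem_head))        # None: gender clash
```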
Abstract: This paper focuses on three such painters working in
France during this period and their representations both of the host
country in which they found themselves displaced and of the
homeland they depict through refracted memories from their
new perspective in Europe. What is their representation of France and
of China/Taiwan? Is it Otherness or an origin?
This paper also attempts to explore the three artists' diasporic lives
and to redefine their transnational identities. For Hou Chin-lang, the
significance of his multiple-split images serves to highlight the intricate
relationships between his work and his surrounding family, and to
reveal his identification with his Taiwan “homeland”. Yin Xin takes paintings
from the Western canon and subjects them to a process of
transformation through Chinese imagery. In the same period, Lin
Li-ling transforms the transnational spirit of Yin Xin into symbolic
codes with neutered female bodies and tattoos, thus creating images that
challenge the boundaries of both gender and nationality.
Abstract: General requirements for knowledge representation in
the form of logic rules, applicable to design and control of industrial
processes, are formulated. The characteristic behavior of decision trees
(DTs) and rough set theory (RST) in rule extraction from recorded
data is discussed and illustrated with simple examples. The
significance of the models' drawbacks was evaluated using
simulated and industrial data sets. It is concluded that the performance
of DTs may be considerably poorer than that of RST in several
important aspects, particularly when not only a characterization of a
problem is required but also detailed and precise rules are needed
for the actual, specific problems to be solved.
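As a small illustration of rule extraction with DTs (assuming scikit-learn; this shows the DT side only, not the RST comparison):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Each root-to-leaf path is one "if ... then class" rule; shallow trees
# give coarse characterizations, deeper trees more precise rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```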
Abstract: Concept maps can be generated manually or
automatically. It is important to recognize the differences between the two
types of concept maps. The automatically generated concept maps
are dynamic, interactive, and full of associations between the terms
on the maps and the underlying documents. Through a specific
concept mapping system, Visual Concept Explorer (VCE), this paper
discusses how automatically generated concept maps are different
from manually generated concept maps and how different
applications and learning opportunities might be created with the
automatically generated concept maps. The paper presents several
examples of learning strategies that take advantage of the
automatically generated concept maps for concept learning and
exploration.
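A minimal sketch of automatic concept map generation (illustrative only, not the VCE system): term co-occurrence across documents yields weighted associations, each linked back to its underlying documents:

```python
from collections import defaultdict
from itertools import combinations

docs = {
    "d1": ["neural network", "backpropagation", "gradient"],
    "d2": ["neural network", "gradient", "optimization"],
    "d3": ["concept map", "learning"],
}

edges = defaultdict(list)                 # (term, term) -> supporting documents
for doc_id, terms in docs.items():
    for a, b in combinations(sorted(set(terms)), 2):
        edges[(a, b)].append(doc_id)

# Each association stays linked to its source documents, which is what
# makes such maps interactive rather than static drawings.
for (a, b), support in sorted(edges.items(), key=lambda e: -len(e[1])):
    print(f"{a} -- {b}  weight={len(support)}  docs={support}")
```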
Abstract: Knowledge sharing in general, and the contextual
access to knowledge in particular, still represent a key challenge in
the knowledge management framework. Researchers in the semantic
web and human-machine interface fields study techniques to enhance
this access. For instance, in the semantic web, information retrieval
is based on domain ontologies. In human-machine interfaces, keeping
track of the user's activity provides some elements of the context that can
guide the access to information. We suggest an approach based on
these two key guidelines, whilst avoiding some of their weaknesses.
The approach permits a representation of both the context and the
design rationale of a project for an efficient access to knowledge. In
fact, the method consists of an information retrieval environment
that, on the one hand, can infer knowledge modeled as a semantic
network and, on the other hand, is based on the context and the
objectives of a specific activity (the design). The environment we
defined can also be used to gather similar project elements in order to
build classifications of tasks, problems, arguments, etc. produced in a
company. These classifications can show the evolution of design
strategies in the company.
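An illustrative sketch (all structures hypothetical) of context-guided retrieval over a semantic network, where the current activity's context filters which linked knowledge elements are proposed:

```python
# edges encode design-rationale links, each tagged with an activity context
network = {
    "problem:vibration": [("argument:stiffen-frame", "design"),
                          ("task:modal-analysis", "analysis")],
    "task:modal-analysis": [("document:FEM-guide", "analysis")],
}

def retrieve(start: str, context: str, depth: int = 2):
    """Walk the network, keeping only nodes tagged with the activity context."""
    found, frontier = [], [start]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for neighbor, tag in network.get(node, []):
                if tag == context:
                    found.append(neighbor)
                    nxt.append(neighbor)
        frontier = nxt
    return found

print(retrieve("problem:vibration", "analysis"))
# -> ['task:modal-analysis', 'document:FEM-guide']
```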
Abstract: In this article, the accumulated results on the effects
and duration of manufacturing and production projects at the
university and research level are combined with a usefulness
definition of the project management process, in order to arrive at
a proportional pattern for the “time and action” stages. Studies
show that many problems confronted by researchers in these
projects are connected to the lack of: 1) autonomous timing
for gathering the educational material, 2) autonomous timing for
planning and patterning, presented before construction, and 3)
autonomous timing for manufacturing and presenting a sample of the
output. The result of this study indicates that every manufacturing
and production project should be divided into three smaller autonomous
projects, each with its own kind, budget and autonomous expenditure,
and its own shape and order of stages, for the management of these kinds of projects.
In this case study, real results are compared with theoretical results.
Abstract: Existing literature on design reasoning seems to give
one-sided accounts of expert design behaviour based on
internal processing. In the same way, ecological theories seem to
focus one-sidedly on external elements, which results in the lack of a
unifying design cognition theory. Although current extended design cognition
studies acknowledge the intellectual interaction between internal and
external resources, there still seems to be insufficient understanding
of the complexities involved in such interactive processes. As
such, this paper proposes a novel multi-directional model for design
researchers to map the complex and dynamic conduct-controlling
behaviour in which both the computational and ecological
perspectives are integrated in a vertical manner. A clear distinction
between identified intentional and emerging physical drivers, and
relationships between them during the early phases of experts' design
process, is demonstrated by presenting a case study in which the
model was employed.
Abstract: This document details the process of developing a
wireless device that captures the basic movements of the foot (plantar
flexion, dorsal flexion, abduction, adduction) and the knee
movement (flexion). It implements a motion capture system using
hardware based on optical fiber sensors, chosen for their advantages in
terms of scope, noise immunity and speed of data transmission and
reception. The operating principle used by this system is the detection
and transmission of joint movement by mechanical elements and
their respective measurement by optical ones (in this case infrared).
Likewise, Visual Basic software is used for reception, analysis and
signal processing of data acquired by the device, generating a 3D
graphical representation in real time of each movement. The result is
a boot in charge of capturing the movement, a transmission module
(implementing XBee technology) and a receiver module that receives
the information and sends it to the PC for processing.
The main idea of this device is to support fields such as
bioengineering and medicine by helping to improve the quality of
life and movement analysis.
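A simplified sketch of the receiver side (assuming the pyserial package; the port name, baud rate, packet format, and calibration constant are invented for illustration and are not taken from the paper):

```python
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 9600          # hypothetical XBee link settings
COUNTS_PER_DEGREE = 4.0                    # hypothetical sensor calibration

def counts_to_angle(counts: int) -> float:
    """Map raw optical-sensor counts to a joint angle in degrees."""
    return counts / COUNTS_PER_DEGREE

with serial.Serial(PORT, BAUD, timeout=1) as link:
    for _ in range(100):                   # read 100 samples
        line = link.readline().decode(errors="ignore").strip()
        if not line:
            continue
        # assumed packet layout: "<joint_id>,<raw_counts>" per line
        joint, raw = line.split(",")
        print(joint, counts_to_angle(int(raw)), "deg")
```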
Abstract: In power quality analysis, the non-stationary nature
of voltage distortions requires precise and powerful analytical
techniques. The time-frequency representation (TFR) provides a
powerful method for identifying the non-stationarity of the
signals. This paper presents a comparative study of two
techniques for analysis and visualization of voltage distortions with
time-varying amplitudes. The techniques include the Discrete
Wavelet Transform (DWT) and the S-Transform. Several power
quality problems are analyzed using both transforms, clearly
showing the advantage of the S-Transform in detecting, localizing,
and classifying power quality problems.
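A short sketch of the DWT side of such an analysis (assuming numpy and PyWavelets; the signal parameters are illustrative, and the S-Transform is omitted): the detail coefficients spike at the boundaries of a voltage sag, localizing the disturbance in time.

```python
import numpy as np
import pywt

fs = 3200                                    # samples per second
t = np.arange(0, 0.2, 1 / fs)                # ten cycles of a 50 Hz supply
v = np.sin(2 * np.pi * 50 * t)
v[(t > 0.055) & (t < 0.135)] *= 0.5          # 50% voltage sag for four cycles

# Single-level DWT: approximation (cA) and detail (cD) coefficients.
cA, cD = pywt.dwt(v, "db4")

# Sharp amplitude changes at the sag boundaries concentrate energy in
# the detail coefficients, which flags and localizes the disturbance.
threshold = 5 * np.median(np.abs(cD))
edges = np.where(np.abs(cD) > threshold)[0]
print("disturbance near samples:", edges * 2)  # detail coeffs at half rate
```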
Abstract: One of the main research directions in the CAD/CAM
machining area is the reduction of machining time.
Feedrate scheduling is one of the advanced techniques that keeps
the uncut chip area constant and, as a consequence, keeps the main
cutting force constant. There are two main ways to optimize the
feedrate. The first consists of monitoring the cutting force, which
requires complex equipment for force measurement, and then setting
the feedrate according to the cutting force variation. The second is
to optimize the feedrate by keeping the material removal rate constant
with respect to the cutting conditions.
In this paper, a new approach is proposed that uses an extended
database replacing the system model.
The feedrate schedule is determined based on the identification
of the reconfigurable machine tool, with feed values determined
from the uncut chip section area, the contact length between tool
and blank, and the geometrical roughness.
The first stage consists of monitoring the blank and the tool to
determine their actual profiles. The next stage is the determination
of the programmed tool path that yields the target profile of the
piece.
The graphic representation environment models the tool and blank
regions; the tool model is then positioned relative to the blank
model according to the programmed tool path. For each of these
positions, the geometrical roughness, the uncut chip area, and the
tool-blank contact length are calculated. Each of these parameters is
compared with its admissible value and, according to the result, the
feed value is established.
This approach has the following advantages: in the case of complex
cutting processes, the prediction of the cutting force is possible;
the real cutting profile, which deviates from the theoretical profile,
is taken into account; the blank-tool contact length can be limited;
and the programmed tool path can be corrected so that the target
profile is obtained.
Applying this method yields data sets that allow feedrate scheduling
such that the uncut chip area, and as a result the cutting force,
remains constant, which allows the machine tool to be used more
efficiently and the machining time to be reduced.
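A schematic sketch of the per-position feed selection described above (all limits and values hypothetical): the feed is scaled down by the worst violation among the uncut chip area, contact length, and roughness constraints.

```python
A_ADM, L_ADM, R_ADM = 0.50, 3.0, 0.02     # admissible limits (mm^2, mm, mm)

def feed_for_position(f_nominal, chip_area, contact_len, roughness):
    """Scale the nominal feed down by the worst constraint violation;
    this assumes the parameters scale roughly in proportion to feed."""
    ratios = [chip_area / A_ADM, contact_len / L_ADM, roughness / R_ADM]
    worst = max(ratios)
    return f_nominal if worst <= 1.0 else f_nominal / worst

# geometry computed at three successive tool positions (hypothetical)
positions = [(0.45, 2.5, 0.015), (0.80, 3.5, 0.018), (0.30, 2.0, 0.010)]
feeds = [feed_for_position(0.2, *p) for p in positions]
print(feeds)   # the second position is throttled; the others keep 0.2 mm/rev
```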
Abstract: This article presents an integrated method for the
detection of steganographic content embedded by new, unknown
programs. The method is based on data mining and
aggregated hypothesis testing. The article contains the theoretical
basics used to deploy the proposed detection system and describes a
proposed improvement of the basic system idea. Further, the main
experimental results and implementation details are collected and
described. Finally, example test results are presented.
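A compact sketch of hypothesis testing for steganographic content (assuming numpy and scipy; this is a generic pairs-of-values chi-square LSB test, not the paper's aggregated detector):

```python
import numpy as np
from scipy.stats import chi2

def lsb_chi2_pvalue(values: np.ndarray) -> float:
    """Pairs-of-values chi-square p-value (Westfeld-Pfitzmann style).

    LSB embedding equalizes the counts of value pairs (2k, 2k+1), so a
    high p-value is consistent with embedding, a low one is not."""
    hist = np.bincount(values.ravel(), minlength=256)
    pairs = hist.reshape(128, 2)
    expected = pairs.mean(axis=1)
    mask = expected > 4                      # keep well-populated pairs only
    stat = np.sum((pairs[mask, 0] - expected[mask]) ** 2 / expected[mask])
    return float(chi2.sf(stat, df=int(mask.sum()) - 1))

rng = np.random.default_rng(0)
cover = np.minimum(rng.geometric(0.3, size=4096) - 1, 255)   # skewed histogram
stego = (cover & ~1) | rng.integers(0, 2, size=cover.shape)  # randomized LSBs
print("cover p =", lsb_chi2_pvalue(cover))   # low: histogram not equalized
print("stego p =", lsb_chi2_pvalue(stego))   # high: consistent with embedding
```

Aggregating such per-block p-values over a file is one way to realize the aggregated hypothesis testing the abstract refers to.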