Abstract: Over the past few years, XML (eXtensible Markup
Language) has emerged as the standard for information
representation and data exchange over the Internet. This paper
provides a kick-start for new researchers venturing into the field
of XML databases. We survey storage representations for XML
documents and review XML query processing and optimization
techniques with respect to each particular storage instance. Various
optimization technologies have been developed to solve query
retrieval and updating problems. In recent years, most researchers
have proposed hybrid optimization techniques; hybrid systems open
the possibility of covering each technology's weaknesses with the
strengths of another. This paper reviews the advantages and
limitations of these optimization techniques.
Abstract: Even though it has been recognized that Shape Memory
Alloys (SMA) have significant potential for deployment actuators,
the number of applications of SMA-based actuators to the present
day is still quite small, owing to the need for a deep understanding
of the thermo-mechanical behavior of SMA. This creates an important
need for a mathematical model able to describe all thermo-mechanical
properties of SMA with a relatively simple final set of constitutive
equations. SMAs offer attractive potentials such as reversible
strains of several percent, generation of high recovery stresses,
and high power/weight ratios. The paper provides an overview of the
shape memory functions and a presentation of the temperature control
system designed and developed for a gripper actuated by two pairs of
differential SMA active springs. An experimental setup was
established, using electrical energy for the heating process of the
actuator's springs. To hold the temperature of the SMA springs at a
certain level for a long time, a control system was developed in
order to avoid overheating of the active elements.
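The abstract does not state the control law used, so the fragment
below only sketches one plausible scheme in Python: an on-off
(hysteresis) controller that gates the heating current to keep an SMA
spring near a setpoint and to protect it from overheating. The
setpoint and band values are assumed for illustration.

    def heater_command(temp_c, heating, setpoint_c=70.0, band_c=2.0):
        """Return True to enable heating current, using a hysteresis band."""
        if temp_c < setpoint_c - band_c:
            return True        # too cold: switch heating on
        if temp_c > setpoint_c + band_c:
            return False       # too hot: switch off to avoid overheating
        return heating         # inside the band: keep the last state

    # usage inside a sampling loop (read_temp() is a hypothetical sensor read):
    # heating = heater_command(read_temp(), heating)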
Abstract: A major building block of most elliptic curve cryptosystems
is the computation of multi-scalar multiplications. This paper
proposes a novel algorithm for simultaneous multi-scalar
multiplication based on addition chains. Previously known methods
utilize the double-and-add algorithm with binary representations.
To accomplish our purpose, an efficient empirical method for finding
addition chains for multi-exponents is proposed.
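For orientation, the sketch below shows the baseline that
addition-chain methods compete with: simultaneous double-and-add
(Shamir's trick) for k1*P + k2*Q. To stay runnable without a curve
library, group elements are modeled as plain integers under addition;
a real implementation would substitute elliptic curve point addition
and doubling.

    def simultaneous_double_and_add(k1, k2, P, Q,
                                    add=lambda a, b: a + b, zero=0):
        """Compute k1*P + k2*Q by scanning both scalars' bits jointly."""
        pre = {  # precomputed sums indexed by the bit pair (b1, b2)
            (0, 1): Q, (1, 0): P, (1, 1): add(P, Q),
        }
        result = zero
        for i in range(max(k1.bit_length(), k2.bit_length()) - 1, -1, -1):
            result = add(result, result)          # "doubling" step
            bits = ((k1 >> i) & 1, (k2 >> i) & 1)
            if bits != (0, 0):
                result = add(result, pre[bits])   # single joint addition
        return result

    assert simultaneous_double_and_add(13, 7, 5, 3) == 13 * 5 + 7 * 3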
Abstract: A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor, and the machine environment. By correlating the noise components with the measured machining signal, the components of interest, which are less affected by the noise, can be extracted. Thus, the filtered signal is more reliable for analysis in terms of noise content than the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal. A larger scattering space and a higher value of Z∞ demonstrated that the signal was highly interrupted by noise. This method can be utilised as a proactive tool for evaluating the noise content in a signal. The evaluation of noise content is as important as its elimination, especially for machining operation fault diagnosis. The Z-notch filtering technique was reliable in extracting noise components from the measured machining signal with high efficiency. Even though the measured signal was exposed to high noise disruption, the signal generated by the interaction between the cutting tool and the workpiece could still be acquired. Therefore, noise interference that could change the original signal features and consequently deteriorate the useful sensory information can be eliminated.
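The paper's Z-notch technique is not reproduced here, but the general
notch-filtering step it builds on can be illustrated with a standard
IIR notch filter; the sampling rate and noise frequency below are
assumed for the example.

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 10_000.0                  # sampling rate in Hz (assumed)
    f_noise = 50.0                 # identified noise component (assumed)
    t = np.arange(0, 1.0, 1.0 / fs)
    # synthetic machining signal: useful 800 Hz tone plus noise tone
    signal = np.sin(2 * np.pi * 800 * t) + 0.8 * np.sin(2 * np.pi * f_noise * t)

    b, a = iirnotch(w0=f_noise, Q=30.0, fs=fs)  # narrow notch at f_noise
    filtered = filtfilt(b, a, signal)           # zero-phase filtering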
Abstract: The development of signal compression algorithms is making
impressive progress. These algorithms are continuously improved by
new tools and aim to reduce, on average, the number of bits necessary
for signal representation while minimizing the reconstruction error.
This article proposes the compression of the Arabic speech signal by
a hybrid method combining the wavelet transform and linear
prediction. The adopted approach rests, on the one hand, on the
decomposition of the original signal by analysis filters, followed by
the compression stage, and, on the other hand, on the application of
order-5 linear prediction to the compressed signal coefficients. The
aim of this approach is the estimation of the prediction error, which
is then coded and transmitted. The decoding operation is then used to
reconstitute the original signal. Thus, an adequate choice of the
filter bank used in the transform is necessary to increase the
compression rate while inducing a distortion that is imperceptible
from an auditory point of view.
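The order-5 linear prediction step can be sketched as follows: LPC
coefficients are estimated by the autocorrelation method, and the
prediction residual, the quantity to be coded and transmitted, is
computed on a synthetic frame. This is a generic illustration, not
the authors' exact pipeline.

    import numpy as np

    def lpc(frame, order=5):
        """Solve the normal equations R a = r for LPC coefficients."""
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        R = np.array([[r[abs(i - j)] for j in range(order)]
                      for i in range(order)])
        return np.linalg.solve(R, r[1:order + 1])

    frame = np.sin(0.3 * np.arange(240)) + 0.05 * np.random.randn(240)
    a = lpc(frame, order=5)
    # predicted sample n = sum_k a[k] * frame[n-1-k]
    pred = np.array([a @ frame[n - 1::-1][:5] for n in range(5, len(frame))])
    residual = frame[5:] - pred    # the error signal that would be coded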
Abstract: Being creative in an educational environment, such as the university, has many times been downplayed by bureaucracy, human inadequacy, and physical hindrance. These factors control, stifle, and subsequently condemn this natural phenomenon, which is normally exuded by the tertiary community. Taken in a positive light, creativity has always led to many new discoveries and inventions. These creations are then gradually developed, contributing to the university's reputation and achievements in all fields of study, from the sciences to the humanities. This paper attempts to explore, through more than twenty years of observation, issues that stifle the creativity of the university citizenry (academicians and students). It also scrutinizes how the enhancement of such creativity can be further supported by simplifying bureaucracy, encouraging and developing human potential, and constructing uncompromising physical infrastructure and administrative support. These ideals, all of which can help to promote creativity, increase the productivity of the university community in aspects of teaching, research, publication, innovation, and commercialization, in the national as well as the international arena, for the good of human and societal growth and development. This discursive presentation hopes to address another issue in promoting university community creativity through several deliverables which require cooperation from every quarter of the institution, so that being creative continues to be promoted for sustainable human capital growth and the development of the country, if not the global community.
Abstract: "Ontology" is a term used in artificial intelligence with
different meanings. Ontology research plays an important role in
computer science and practical applications, especially distributed
knowledge systems. In this paper we present an ontology called the
Computational Object Knowledge Base Ontology. It has been used in
designing knowledge base systems for solving problems, such as a
system that supports studying and solving analytic geometry
problems, a program for studying and solving problems in plane
geometry, and a knowledge system for linear algebra.
Abstract: Standards for learning objects focus primarily on content
presentation. They have already been extended to support automatic
evaluation, but only for exercises with a predefined set of answers.
The existing standards lack the metadata required by specialized
evaluators to handle types of exercises with an indefinite set of
solutions. To address this issue, existing learning object standards
were extended to meet the particular requirements of a specialized
domain. A definition of programming problems as learning objects,
compatible both with Learning Management Systems and with systems
performing automatic evaluation of programs, is presented in this
paper. The proposed definition includes metadata that cannot be
conveniently represented using existing standards, such as the type
of automatic evaluation, the requirements of the evaluation engine,
and the roles of different assets (test cases, program solutions,
etc.). The EduJudge project and its main services are also presented
as a case study on the use of the proposed definition of programming
problems as learning objects.
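To make the kind of metadata concrete, the fragment below sketches a
hypothetical record for one programming problem; every field name and
value is an assumption for illustration, not the actual extended
schema.

    # Hypothetical sketch of the extra metadata such a definition needs;
    # all names and values here are assumed, not the real schema.
    programming_problem = {
        "title": "Sum of two integers",
        "evaluation_type": "automatic/dynamic",  # type of automatic evaluation
        "evaluation_engine": {                   # requirements of the engine
            "language": "java",
            "time_limit_s": 2,
        },
        "assets": [                              # roles of different assets
            {"role": "test-case-input", "file": "t01.in"},
            {"role": "test-case-output", "file": "t01.out"},
            {"role": "program-solution", "file": "solution.java"},
        ],
    }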
Abstract: Interactive public displays are an innovative medium for
promoting enhanced communication between people and information.
However, digital public displays are subject to a few constraints,
such as content presentation. Content presentation needs to be made
more interesting in order to attract people's attention and motivate
them to interact with the display. In this paper, we propose an
approach to implementing content with interaction elements for
vision-based digital public displays. Vision-based techniques are
applied as a sensor to detect passers-by, and themed content is
suggested to attract their attention and encourage them to interact
with the announcement content. Virtual objects, gesture detection,
and projection installation are applied to attract the attention of
passers-by. A preliminary study showed positive feedback on the
interactive content design for the public display. This new trend
would be a valuable innovation, as the delivery of announcement
content and information communication through this medium is shown
to be more engaging.
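The detection step can be illustrated with a common vision-based
setup: background subtraction flags moving pixels as passers-by,
which then triggers the themed content. The abstract does not state
the authors' pipeline, so this OpenCV fragment and its threshold are
assumptions.

    import cv2

    cap = cv2.VideoCapture(0)      # camera facing the display area
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)   # moving pixels = passers-by
        if cv2.countNonZero(mask) > 0.02 * mask.size:  # assumed threshold
            pass  # someone is passing: switch display to themed content
        if cv2.waitKey(1) == 27:         # Esc to quit
            break
    cap.release()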
Abstract: Gradual patterns have been studied for many years, as they
contain precious information. They have been integrated in many
expert systems and rule-based systems, for instance to reason on
knowledge such as "the greater the number of turns, the greater the
number of car crashes". In many cases, this knowledge has been
considered as a rule: "the greater the number of turns → the greater
the number of car crashes". Historically, works have thus focused on
the representation of such rules, studying how implication could be
defined, especially fuzzy implication. These rules were defined by
experts who were in charge of describing the systems they were
working on in order to make them operate automatically. More
recently, approaches have been proposed to mine databases for
automatically discovering such knowledge. Several approaches have
been studied, the main scientific topics being: how to determine
what a relevant gradual pattern is, and how to discover such patterns
as efficiently as possible (in terms of both memory and CPU usage).
However, in some cases, end-users are not interested in such
raw-level knowledge and are rather interested in trends. Moreover,
it may be the case that no relevant pattern can be discovered at a
low level of granularity (e.g., city), whereas some can be discovered
at a higher level (e.g., county). In this paper, we thus extend
gradual pattern approaches in order to consider multiple-level
gradual patterns. For this purpose, we consider two aggregation
policies, namely horizontal and vertical.
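For readers new to the notion, the sketch below computes one common
support measure for a gradual pattern such as "the greater the number
of turns, the greater the number of car crashes": the fraction of
object pairs ordered concordantly on both attributes. The paper's
exact measure may differ.

    from itertools import combinations

    def gradual_support(rows, a, b):
        """Fraction of row pairs ordered the same way on attributes a and b."""
        pairs = list(combinations(rows, 2))
        concordant = sum(
            1 for x, y in pairs
            if (x[a] - y[a]) * (x[b] - y[b]) > 0  # same ordering on both
        )
        return concordant / len(pairs)

    data = [{"turns": 2, "crashes": 10}, {"turns": 5, "crashes": 30},
            {"turns": 8, "crashes": 45}, {"turns": 3, "crashes": 12}]
    print(gradual_support(data, "turns", "crashes"))  # 1.0 on this toy data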
Abstract: This paper examines two policy spaces, the ARC and the TVA, and their spatialized politics. The research observes that the regional concept informs public policy and can contribute to the formation of stable policy initiatives. Using the subsystem framework to understand the political viability of policy regimes, the authors conclude that policy geographies appealing to traditional definitions of regions are more stable over time. In contrast, geographies that fail to reflect pre-existing representations of space are engaged in more competitive subsystem politics. The paper demonstrates that the spatial practices of policy regions and their directional politics influence the political viability of programs. The paper concludes that policy spaces should institutionalize pre-existing geographies, not manufacture new ones.
Abstract: Both software applications and their development environments are becoming more and more distributed. This trend impacts not only the way software computes, but also how it looks. This article proposes a Human-Computer Interface (HCI) template derived from three representative applications we have developed. These applications include Multi-Agent System-based software, a 3D Internet computer game with distributed game-world logic, and a programming language environment used to construct distributed neural networks and their visualizations. HCI concepts that are common to these applications are described in abstract terms in the template. These include off-line presentation of global entities, entities inside a hierarchical namespace, communication and languages, reconfiguration of entity references in a graph, and impersonation and access rights. We believe the metaphor that underlies an HCI concept, as well as the relationships among a set of HCI concepts, is crucial to the design of software systems and vice versa.
Abstract: The cancelable palmprint authentication system proposed in
this paper is specifically designed to overcome the limitations of
contemporary biometric authentication systems. In the proposed
system, geometric and pseudo-Zernike moments are employed as feature
extractors to transform the palmprint image into a lower-dimensional,
compact feature representation. Before moment computation, the
wavelet transform is adopted to decompose the palmprint image into
lower-resolution, lower-dimensional frequency subbands. This
drastically reduces the computational load of the moment calculation.
The generated wavelet-moment-based feature representation is used
with a set of random data to generate a cancelable verification key.
This private binary key can be canceled and replaced. Besides that,
the key also possesses high tolerance to data capture offsets, with
highly correlated bit strings for the intra-class population. This
property allows a clear separation of the genuine and impostor
populations, as well as the achievement of zero Equal Error Rate,
which is hardly attained in conventional biometric-based
authentication systems.
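The cancelable-key idea can be sketched in the biohashing style: the
feature vector is projected onto user-specific random vectors and
binarized, so replacing the random set revokes and reissues the key.
This follows the general technique rather than the authors' exact
scheme; all values are synthetic.

    import numpy as np

    def cancelable_key(features, seed, n_bits=64):
        """Project features onto seeded random vectors and binarize."""
        rng = np.random.default_rng(seed)        # seed acts as the user token
        R = rng.standard_normal((n_bits, features.size))
        return (R @ features > 0).astype(np.uint8)

    moments = np.random.rand(32)                 # stand-in wavelet-moment features
    key = cancelable_key(moments, seed=1234)
    reissued = cancelable_key(moments, seed=9999)  # canceled and replaced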
Abstract: Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep understanding of each technique and to be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network to perform unsupervised clustering and to support the entire data mining process up to results visualization. A graphical representation helps the user find a strategy to optimize classification by adding, moving, or deleting a neuron in order to change the number of classes. The tool is also able to automatically suggest a strategy for optimizing the number of classes. The tool is used to classify macroeconomic data that report the most developed countries' imports and exports. It is possible to classify the countries based on their economic behaviour and to use an ad hoc tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation.
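A minimal Kohonen-style update, the core of such a tool, can be
sketched as follows; this is a simplified variant with no
neighborhood function or learning-rate decay, not the tool itself,
and the data is synthetic.

    import numpy as np

    def kohonen_step(weights, x, lr=0.1):
        """Move the best-matching neuron toward the sample x."""
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        weights[bmu] += lr * (x - weights[bmu])
        return bmu                      # index of the winning class

    rng = np.random.default_rng(0)
    weights = rng.random((4, 2))        # 4 neurons = 4 candidate classes
    for x in rng.random((100, 2)):      # stand-in macroeconomic features
        kohonen_step(weights, x)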
Abstract: In the recent past, Learning Classifier Systems have been
successfully used for data mining. A Learning Classifier System
(LCS) is basically a machine learning technique which combines
evolutionary computing, reinforcement learning, supervised or
unsupervised learning, and heuristics to produce adaptive systems.
An LCS learns by interacting with an environment from which it
receives feedback in the form of numerical reward. Learning is
achieved by trying to maximize the amount of reward received. All
LCS models, more or less, comprise four main components: a finite
population of condition-action rules, called classifiers; the
performance component, which governs the interaction with the
environment; the credit assignment component, which distributes the
reward received from the environment to the classifiers accountable
for the rewards obtained; and the discovery component, which is
responsible for discovering better rules and improving existing ones
through a genetic algorithm. When the concatenation of the
production rules in the LCS forms the genotype, the GA operates on a
population of classifier systems; this approach is known as the
'Pittsburgh' classifier system. LCSs that perform their GA at the
rule level within a single population are known as 'Michigan'
classifier systems. The most predominant representation of the
discovered knowledge is standard production rules (PRs) of the form
IF P THEN D. PRs, however, are unable to handle exceptions and do
not exhibit variable precision. Censored Production Rules (CPRs), an
extension of PRs proposed by Michalski and Winston, exhibit variable
precision and support an efficient mechanism for handling
exceptions. A CPR is an augmented production rule of the form IF P
THEN D UNLESS C, where the censor C is an exception to the rule.
Such rules are employed in situations in which the conditional
statement IF P THEN D holds frequently and the assertion C holds
rarely. By using a rule of this type, we are free to ignore the
exception conditions when the resources needed to establish their
presence are tight or when there is simply no information available
as to whether they hold or not. Thus, the IF P THEN D part of a CPR
expresses important information, while the UNLESS C part acts only
as a switch that changes the polarity of D to ~D. In this paper, the
Pittsburgh-style LCS approach is used for the automated discovery of
CPRs. An appropriate encoding scheme is suggested to represent a
chromosome consisting of a fixed-size set of CPRs. Suitable genetic
operators are designed for the set of CPRs and for individual CPRs,
and an appropriate fitness function incorporating basic constraints
on CPRs is proposed. Experimental results are presented to
demonstrate the performance of the proposed learning classifier
system.
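The semantics of a CPR can be made concrete with a small sketch: when
the censor is unknown or too costly to check, the rule fires on P
alone; when the censor is known to hold, the polarity of the decision
flips. The predicates themselves are assumed inputs.

    def evaluate_cpr(p_holds, c_holds=None):
        """IF P THEN D UNLESS C: True = D, False = ~D, None = rule silent."""
        if not p_holds:
            return None        # premise fails: rule does not apply
        if c_holds is True:
            return False       # censor holds: polarity flips to ~D
        return True            # censor false or unchecked: conclude D

    assert evaluate_cpr(True) is True              # resources tight: ignore C
    assert evaluate_cpr(True, c_holds=True) is False
    assert evaluate_cpr(False) is None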
Abstract: Information sharing and exchange, rather than information
processing, is what characterizes information technology in the 21st
century. Ontologies, as shared common understanding, are gaining
increasing attention, as they appear to be the most promising
solution for enabling information sharing both at a semantic level
and in a machine-processable way. Domain-ontology-based modeling has
been exploited to provide shareability and information exchange
among the diversified, heterogeneous applications of enterprises.
Contextual ontologies are "an explicit specification of contextual
conceptualization". That is, such an ontology is characterized by
concepts that have multiple representations and may exist in several
contexts. Hence, contextual ontologies are a set of concepts and
relationships that are seen from different perspectives.
Contextualization allows ontologies to be partitioned according to
their contexts. The need for contextual ontologies in enterprise
modeling has become crucial due to the nature of today's competitive
market. Information resources in an enterprise are distributed and
diversified, and they need to be shared and communicated locally
through the intranet and globally through the internet. This paper
discusses the roles that ontologies play in enterprise modeling, and
how ontologies assist in building a conceptual model in order to
provide communicative and interoperable information systems. The
issue of enterprise modeling based on contextual domain ontology is
also investigated, and a framework is proposed for an enterprise
model that consists of various applications.
Abstract: In this paper, we present a local image descriptor,
VQ-SIFT, for more effective and efficient image retrieval. Instead
of SIFT's weighted orientation histograms, we apply a vector
quantization (VQ) histogram as an alternative representation for
SIFT features. Experimental results show that SIFT features using
VQ-based local descriptors can achieve better image retrieval
accuracy than the conventional algorithm while the computational
cost is significantly reduced.
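The descriptor construction can be sketched generically: each local
feature vector around a keypoint is assigned to its nearest codeword,
and the normalized counts form the VQ histogram. The codebook size
and feature dimensions below are assumptions, not the paper's
settings.

    import numpy as np

    def vq_histogram(patch_vectors, codebook):
        """Assign each local vector to its nearest codeword and count."""
        hist = np.zeros(len(codebook))
        for v in patch_vectors:
            hist[np.argmin(np.linalg.norm(codebook - v, axis=1))] += 1
        return hist / hist.sum()     # normalized descriptor

    rng = np.random.default_rng(0)
    codebook = rng.random((16, 8))   # 16 codewords of 8-dim local features
    patches = rng.random((64, 8))    # 64 local vectors around a keypoint
    descriptor = vq_histogram(patches, codebook)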
Abstract: A time-domain numerical model within the framework of
transmission line modeling (TLM) is developed to simulate
electromagnetic pulse propagation inside multiple microcavities
forming photonic crystal (PhC) structures. The model developed is
quite general and is capable of simulating complex electromagnetic
problems accurately. The field quantities can be mapped onto an
equivalent passive electrical circuit, which ensures that TLM is
provably stable and conservative at a local level. Furthermore, the
circuit representation allows a high level of hybridization of TLM
with other techniques and with lumped circuit models of components
and devices. As an example, a photonic crystal structure formed by
rods (or blocks) of high-permittivity dielectric material embedded
in a low-permittivity background medium is simulated. The model
gives vital spatio-temporal information about the signal, and also
gives spectral information over a wide frequency range in a single
run. The model has wide applications in microwave communication
systems, optical waveguides, and electromagnetic materials
simulations.
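A toy one-dimensional analogue conveys the TLM mechanics: voltage
pulses travel along line segments and scatter at impedance
discontinuities with rho = (Z2 - Z1)/(Z2 + Z1) and tau = 1 + rho,
and a low-impedance region stands in for a high-permittivity rod.
All values are assumed, and this is far simpler than the paper's
full formulation.

    import numpy as np

    n, steps = 200, 300
    Z = np.ones(n)                     # background line impedance
    Z[80:100] = 0.5                    # "rod" of high permittivity (assumed)
    right, left = np.zeros(n), np.zeros(n)
    right[10] = 1.0                    # excitation pulse

    for _ in range(steps):
        nr, nl = np.zeros(n), np.zeros(n)
        for i in range(n - 1):         # right-moving pulses scatter
            rho = (Z[i + 1] - Z[i]) / (Z[i + 1] + Z[i])
            nr[i + 1] += (1 + rho) * right[i]   # transmitted
            nl[i] += rho * right[i]             # reflected
        for i in range(1, n):          # left-moving pulses scatter
            rho = (Z[i - 1] - Z[i]) / (Z[i - 1] + Z[i])
            nl[i - 1] += (1 + rho) * left[i]
            nr[i] += rho * left[i]
        right, left = nr, nl           # ends act as absorbing boundaries

    voltage = right + left             # spatial field snapshot in time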
Abstract: Different methods based on biometric algorithms are
presented for eigenface representation and detection, including face
recognition, identification, and verification. The theme of this
research is to manage the critical processing stages (accuracy,
speed, security, and monitoring) of face-related activities with the
flexibility of searching and editing the secure authorized database.
In this paper we implement different techniques, such as eigenface
vector reduction using texture and shape vectors to remove
complexity, while a density matching score with Face Boundary
Fixation (FBF) extracts the most likely characteristics in the
processed media content. We examine the development and performance
efficiency of the database by applying our algorithms to both
recognition and detection. Our results show encouraging gains in
accuracy and security, with better achievement than a number of
previous approaches in all the above processes.
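The underlying eigenface recipe can be sketched as standard PCA on
flattened face images followed by projection onto the leading
eigenvectors; the FBF and density-matching extensions described
above are not reproduced, and all data here is synthetic.

    import numpy as np

    rng = np.random.default_rng(0)
    faces = rng.random((50, 32 * 32))        # 50 stand-in flattened images
    mean = faces.mean(axis=0)
    centered = faces - mean
    # eigenvectors of the small 50x50 Gram matrix yield the eigenfaces
    eigvals, eigvecs = np.linalg.eigh(centered @ centered.T)
    order = np.argsort(eigvals)[::-1][:10]   # keep 10 leading components
    eigenfaces = centered.T @ eigvecs[:, order]
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)

    def project(image):
        """Compact feature vector: coordinates in eigenface space."""
        return eigenfaces.T @ (image - mean)

    probe = project(faces[0])
    gallery = np.array([project(f) for f in faces])
    match = np.argmin(np.linalg.norm(gallery - probe, axis=1))  # index 0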