Abstract: This paper proposes an Interactive Chinese Character
Learning System (ICCLS) based on pictorial evolution as an
edutainment concept in computer-based language learning. Because
Chinese is more complex than many other languages, the origin of
the written language itself is exploited as a learning platform.
Users, especially children, enjoy the system more because they can
memorize Chinese characters easily and better understand their
origins in a pleasurable learning environment, in contrast to the
traditional approach, in which children must learn characters by
rote in an unpleasant setting. Skeletonization is used to represent
Chinese characters and objects, with an animated pictograph
evolution to facilitate learning of the language. A shortest-
skeleton-path matching technique is employed for fast and accurate
matching in our implementation. The user either writes a word or
draws a simple 2D object in the input panel, and the matched word
and object are displayed together with the pictograph evolution to
instill learning. The system targets pre-school children between 4
and 6 years old, allowing them to learn Chinese characters in a
flexible and entertaining manner while employing visual and
mind-mapping strategies as the learning methodology.
Abstract: This paper looks into areas not covered by prominent
Agent-Oriented Software Engineering (AOSE) methodologies.
An extensive literature review identified two issues: first, most
of these methodologies largely neglect the semantic web and
ontologies; second, as expected, each has strengths and weaknesses
and may focus on some phases of the development lifecycle but not
all of them. The work presented here builds extensions to a highly
regarded AOSE methodology (MaSE) in order to cover the areas on
which this methodology does not concentrate. The extensions include
introducing an ontology stage for semantic representation and
integrating early requirement specification from a methodology that
focuses mainly on that phase. The integration involved developing
transformation rules (with the necessary handling of non-matching
notions) between the two sets of representations and
building the software which automates the transformation. The
application of this integration on a case study is also presented in the
paper. The main flow of MaSE stages was changed to smoothly
accommodate the new additions.
Abstract: Hybrid algorithms are a hot topic in Computational
Intelligence (CI) research. Starting from an in-depth discussion of
the Simulation-Mechanism-Based (SMB) classification method and
composite patterns, this paper presents the Mamdani-model-based
Adaptive Neuro-Fuzzy Inference System (M-ANFIS) and its
weight-updating formula, taking into account the qualitative
representation of inference consequents in fuzzy neural networks.
The M-ANFIS model adopts the Mamdani fuzzy inference system, which
has advantages in the consequent part. Experimental results from
applying M-ANFIS to evaluate traffic level of service show that
M-ANFIS, as a new hybrid algorithm in computational intelligence,
has clear advantages in non-linear modeling, membership functions
in consequent parts, the scale of training data, and the number of
adjusted parameters.
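For illustration, a minimal Mamdani-style inference step (clipping consequent fuzzy sets by rule firing strength, max aggregation, centroid defuzzification) might look like the following sketch; the rules and membership functions are invented and do not reproduce M-ANFIS itself:

```python
# Minimal Mamdani fuzzy inference sketch: one input, two rules.
# Rules and membership functions are illustrative, not from the paper.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani(x_in):
    y = np.linspace(0.0, 10.0, 1001)            # output universe
    w1 = tri(x_in, 0.0, 2.0, 5.0)               # Rule 1: IF x is LOW  THEN y is SMALL
    w2 = tri(x_in, 5.0, 8.0, 10.0)              # Rule 2: IF x is HIGH THEN y is LARGE
    # Mamdani implication: clip each consequent set at its rule strength
    clipped1 = np.minimum(w1, tri(y, 0.0, 2.0, 5.0))
    clipped2 = np.minimum(w2, tri(y, 5.0, 8.0, 10.0))
    agg = np.maximum(clipped1, clipped2)        # max aggregation
    return float((y * agg).sum() / agg.sum())   # centroid defuzzification
```

In an ANFIS-style hybrid, the membership-function parameters above would be the quantities adjusted during training.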
Abstract: The internet has become an attractive avenue for
global e-business, e-learning, knowledge sharing, etc. Due to
continuous increase in the volume of web content, it is not
practical for a user to extract information by browsing and
integrating data from the huge number of web sources retrieved by
existing search engines. Semantic web technology enables advances
in information extraction by providing a suite of tools to
integrate data from different sources. To take full advantage of
the semantic web, it is necessary to annotate existing web pages as
semantic web pages. This research develops a tool, named OWIE
(Ontology-based Web Information Extraction), for semantic web
annotation using domain-specific ontologies. The tool automatically
extracts information from HTML pages with the help of pre-defined
ontologies and gives it a semantic representation. Two case studies
have been conducted to analyze the accuracy of OWIE.
Abstract: This contribution analyzes identity styles in adolescents
(N=463) aged 16 to 19 (mean age 17.7 years). We used Berzonsky's
Identity Style Inventory, which distinguishes three basic,
measurable identity styles: informational, normative, and
diffuse-avoidant, as well as commitment. The informational identity
style, which influences personal adaptability, coping strategies,
and quality of life, and the normative identity style, in which an
individual adopts the models of authorities in self-definition,
were found to have the highest representation in the studied group
of adolescents, with higher scores in girls than in boys. The
normative identity style correlates positively with the
informational identity style. The diffuse-avoidant identity style,
in which the individual puts off defining his or her personality,
was found to be positively associated with maladaptive decisional
strategies, neuroticism, and depressive reactions; in our research
sample it had the lowest score and correlated negatively with
commitment, that is, with coping strategies and trust in oneself
and the surrounding world. Age did not significantly differentiate
the representation of identity styles. We sought a model in which
the informational and normative identity styles have a positive
relationship, and the informational and diffuse-avoidant styles a
negative relationship, both determined by commitment. At the same
time, commitment is also influenced by other outside factors.
Abstract: Facial features are frequently used to represent local
properties of a human face image in computer vision applications. In
this paper, we present a fast algorithm that can extract the facial
features online such that they can give a satisfying representation of a
face image. It includes one step for a coarse detection of each facial
feature by AdaBoost and another one to increase the accuracy of the
found points by Active Shape Models (ASM) in the regions of interest.
The resulting facial features are evaluated by matching them with
artificial face models in physiognomy applications. The distance
between the extracted features and those in the face models from
the database is measured by means of the Hausdorff distance. In the
experiments, the proposed method shows efficient performance in
facial feature extraction and in an online physiognomy system.
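As a sketch of the matching step, the symmetric Hausdorff distance between two 2-D point sets can be computed as follows (the point sets here are invented examples, not the paper's face models):

```python
# Symmetric Hausdorff distance between two 2-D point sets.
import numpy as np

def hausdorff(A, B):
    """Max of the two directed Hausdorff distances h(A, B) and h(B, A)."""
    # pairwise Euclidean distances, shape (len(A), len(B))
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # h(A, B): worst-case nearest-neighbor distance from A into B, and vice versa
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])   # e.g. extracted feature points
B = np.array([[0.0, 0.0], [3.0, 0.0]])   # e.g. model feature points
```

For this toy pair, `hausdorff(A, B)` is 2.0: the model point (3, 0) is 2 units from its nearest counterpart in A.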
Abstract: CIM is the standard formalism for modeling management
information developed by the Distributed Management Task
Force (DMTF) in the context of its WBEM proposal, designed to
provide a conceptual view of the managed environment. In this
paper, we propose the inclusion of formal knowledge representation
techniques, based on Description Logics (DLs) and the Web Ontology
Language (OWL), in CIM-based conceptual modeling, and then we
examine the benefits of such a decision. The proposal is specified as a
CIM metamodel level mapping to a highly expressive subset of DLs
capable of capturing all the semantics of the models. The paper shows
how the proposed mapping can be used for automatic reasoning
about the management information models, as a design aid, by means
of new-generation CASE tools, thanks to the use of state-of-the-art
automatic reasoning systems that support the proposed logic and use
algorithms that are sound and complete with respect to the semantics.
Such a CASE tool framework has been developed by the authors and
its architecture is also introduced. The proposed formalization is not
only useful at design time, but also at run time through the use of
rational autonomous agents, in response to a need recently recognized
by the DMTF.
Abstract: One of humankind's most ancient concerns is knowledge formalization, i.e. what a concept is. Concept Analysis, a branch of analytical philosophy, aims to decompose the elements, relations, and meanings of a concept. This paper presents a method for performing a concept analysis that yields a knowledge representation suitable for processing by a computer system using either object-oriented or ontology technologies. The notion of security is usually understood as a set of different concepts related to "some kind of protection". Our method concludes that a more general framework for the concept, although dynamic, is possible, and that any particular definition (instantiation) depends on the elements used in its construction rather than on the concept itself.
Abstract: Under-representation of women in leadership positions is still a general phenomenon in Germany despite the large number of measures implemented. The under-representation of female executives in the aviation sector is even worse. In this context, our research hypothesis is that the representation and acceptance of women in management positions is determined by corporate culture.
Abstract: In our current political climate of assessment and
accountability initiatives we are failing to prepare our children for a
participatory role in the creative economy. The field of education is
increasingly falling prey to didactic methodologies which train a
nation of competent test takers, foregoing the opportunity to educate
students to find problems and develop multiple solutions. Nowhere
is this more evident than in the area of art education. Due to a myriad of
issues including budgetary shortfalls, time constraints and a general
misconception that anyone who enjoys the arts is capable of teaching
the arts, our students are not developing the skills they require to
become fully literate in critical thinking and creative processing.
Although art integrated curriculum is increasingly being viewed as a
reform strategy for motivating students by offering alternative
presentation of concepts and representation of knowledge acquisition,
misinformed administrators are often excluding the art teacher from
the integration equation. The paper to follow addresses the problem
of the need for divergent thinking and conceptualization in our
schools. Furthermore, this paper explores the role of education, and
specifically, art education in the development of a creatively literate
citizenry.
Abstract: This research proposes an algorithm for simulating
time-periodic unsteady problems via the solution of the unsteady
Euler and Navier-Stokes equations. The algorithm, called the Time
Spectral method, uses a Fourier representation in time and hence
solves for the periodic state directly, without resolving
transients (which consume most of the resources in a time-accurate
scheme). The mathematical tools used here are discrete Fourier
transformations. By enforcing periodicity and using a Fourier
representation in time, which leads to spectral accuracy, the
method has shown tremendous potential for reducing computational
cost compared to conventional time-accurate methods. The accuracy
and efficiency of the technique are verified by Euler and
Navier-Stokes calculations for pitching airfoils. Because of the
turbulent nature of the flow, the Baldwin-Lomax turbulence model is
used in the viscous flow analysis. Results obtained with the Time
Spectral method are compared with experimental data and confirm
that only a small number of time intervals per pitching cycle is
required to capture the flow physics.
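As a hedged sketch of the idea (one common form of the time-spectral derivative operator; the paper's exact discretization may differ): with an odd number N of equally spaced time instances over the period T, the time derivative at instance n is replaced by a spectral sum coupling all instances,

```latex
\left.\frac{\partial u}{\partial t}\right|_{t_n}
\;\approx\; \frac{2\pi}{T}\sum_{j=0}^{N-1} d_{\,n-j}\, u^{j},
\qquad
d_m \;=\;
\begin{cases}
\dfrac{1}{2}(-1)^{m+1}\cot\!\left(\dfrac{\pi m}{N}\right), & m \neq 0,\\[6pt]
0, & m = 0,
\end{cases}
```

so the periodic state at all time instances is obtained directly from one coupled system, rather than by marching through transients.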
Abstract: In this paper, we propose a robust scheme for face alignment and recognition under various influences. For face representation, illumination and variable expressions are important factors, particularly for the accuracy of facial localization and face recognition. To address these factors, we propose a robust two-phase approach. In the first phase, face images are preprocessed with the proposed illumination normalization method, and the locations of facial features are fitted more efficiently and quickly using the proposed image blending. In addition, based on template matching, we further improve the Active Shape Models (called IASM) to locate the face shape more precisely, which raises the recognition rate in the next phase. The second phase performs feature extraction using principal component analysis and face recognition using support vector machine classifiers. The results show that the proposed method achieves good facial localization and face recognition under varied illumination and local distortion.
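As a sketch of the feature-extraction step in the second phase, a PCA projection via SVD can be written in a few lines; the data below is a random stand-in for face vectors, and the IASM alignment and SVM classifier are omitted:

```python
# PCA feature extraction via SVD (illustrative sketch only).
import numpy as np

def pca_fit_transform(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # scores in the k-dim subspace

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 100))               # 20 "face" vectors, 100 dims
features = pca_fit_transform(faces, 5)           # 20 x 5 reduced features
```

The reduced feature vectors would then be fed to a classifier such as an SVM.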
Abstract: We show that Chebyshev Polynomials are a practical representation of computable functions on the computable reals. The paper presents error estimates for common operations and demonstrates that Chebyshev Polynomial methods would be more efficient than Taylor Series methods for evaluation of transcendental functions.
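A small illustration of the approach (not the paper's error analysis): fitting exp on [-1, 1] with a degree-12 Chebyshev expansion using numpy's Chebyshev utilities:

```python
# Degree-12 least-squares Chebyshev fit of exp on [-1, 1].
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1.0, 1.0, 200)
coeffs = C.chebfit(x, np.exp(x), 12)   # Chebyshev coefficients (13 of them)
approx = C.chebval(x, coeffs)          # evaluate the fitted series
err = float(np.max(np.abs(approx - np.exp(x))))
```

For a smooth function like exp, the Chebyshev coefficients decay rapidly, so a modest degree already gives an error far below single precision.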
Abstract: The enthusiasm for gluten avoidance in a growing
market is met by improvements in sensitive detection methods for
analysing gluten content. Paradoxically, manufacturers employ no
such systems in the production process but continue to market their
product as gluten free, a significant risk posed to an undetermined
coeliac population. This paper connects an immunological response
that causes gastrointestinal scarring and villous atrophy with the
conventional description of personal injury. The thesis delves into
evaluating potential inadequacies of gluten labelling laws, which
not only present a diagnostic challenge for general practitioners
in the UK but also expose a less than adequate form of available
legal protection for those who suffer adverse reactions as a result
of gluten ingestion. Central to this discussion is whether a claim
brought in misrepresentation, negligence and/or under the Consumer
Protection Act 1987 could be sustained. An interesting comparison
is then made with the legal regimes of neighbouring jurisdictions,
furthering the theme of a legally un-catered-for gluten kingdom.
Abstract: Text categorization is the problem of classifying text
documents into a set of predefined classes. After a preprocessing
step, the documents are typically represented as large sparse vectors.
When training classifiers on large collections of documents, both the
time and memory restrictions can be quite prohibitive. This justifies
the application of feature selection methods to reduce the
dimensionality of the document-representation vector. In this paper,
three feature selection methods are evaluated: Random Selection,
Information Gain (IG) and Support Vector Machine feature selection
(called SVM_FS). We show that the best results were obtained with
the SVM_FS method for a relatively small dimension of the feature
vector. We also present a novel method to better correlate the SVM
kernel's parameters (polynomial or Gaussian kernel).
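As a sketch of one of the compared methods, Information Gain for a single binary term feature can be computed as follows (the toy documents and classes are invented, not the paper's corpus):

```python
# Information Gain of a binary term feature over class labels.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(C) of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(has_term, labels):
    """H(C) minus the expected class entropy after splitting on the term."""
    n = len(labels)
    gain = entropy(labels)
    for present in (True, False):
        subset = [y for x, y in zip(has_term, labels) if x == present]
        if subset:
            gain -= (len(subset) / n) * entropy(subset)
    return gain
```

Terms are then ranked by this score and only the top-k kept, shrinking the document-representation vector.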
Abstract: Stochastic models of biological networks are well established in systems biology, where the computational treatment of such models often focuses on solving the so-called chemical master equation via stochastic simulation algorithms. In contrast, the development of storage-efficient model representations that are directly suitable for computer implementation has received significantly less attention. Instead, a model is usually described in terms of a stochastic process or a "higher-level paradigm" with a graphical representation, such as a stochastic Petri net. A serious problem then arises from the exponential growth of the model's state space, which is in fact a main reason for the popularity of stochastic simulation, since simulation suffers less from state space explosion than non-simulative numerical solution techniques. In this paper we present transition class models for the representation of biological network models, a compact mathematical formalism that circumvents state space explosion. Transition class models can also serve as an interface between different higher-level modeling paradigms, stochastic processes, and the implementation coded in a programming language. Moreover, the compact model representation makes non-simulative solution techniques applicable while preserving the possible use of stochastic simulation. Illustrative examples of transition class representations are given for an enzyme-catalyzed substrate conversion and a part of the bacteriophage λ lysis/lysogeny pathway.
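A minimal sketch of the transition-class idea, written as one (propensity, state-change) pair per class and driven by a plain Gillespie-style simulation of the enzyme-catalyzed conversion E + S ⇌ ES → E + P; the rate constants are invented, and the paper's formalism is richer than this:

```python
# Gillespie-style simulation driven by transition classes:
# each class is a propensity function plus a state-change vector.
import random

classes = [
    (lambda s: 0.01 * s["E"] * s["S"], {"E": -1, "S": -1, "ES": +1}),  # binding
    (lambda s: 0.10 * s["ES"],         {"E": +1, "S": +1, "ES": -1}),  # unbinding
    (lambda s: 0.05 * s["ES"],         {"E": +1, "P": +1, "ES": -1}),  # catalysis
]

def ssa(state, t_end, rng=random.Random(1)):
    t = 0.0
    while t < t_end:
        props = [a(state) for a, _ in classes]
        total = sum(props)
        if total == 0:                          # no transition class enabled
            break
        t += rng.expovariate(total)             # time to the next event
        r, acc = rng.uniform(0.0, total), 0.0
        for p, (_, delta) in zip(props, classes):
            acc += p
            if r <= acc:                        # pick class proportional to propensity
                for species, d in delta.items():
                    state[species] += d
                break
    return state

final = ssa({"E": 10, "S": 100, "ES": 0, "P": 0}, t_end=200.0)
```

Note that the full state space is never enumerated: only the current state and the transition classes are stored.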
Abstract: The aim of this study is to describe the associations
between the temperamental traits and the narrative emotional
expression. The Temperament Questionnaire used was the FCB-TI of
Zawadzki & Strelau. A sample of 85 persons described three
emotional situations: love, hate, and anxiety. This study analyzes the
verbal form of expression by means of a written account of
emotions. The relationship between the narratives of love, hate and
anxiety and temperament characteristics were studied. Results
indicate that vigorousness (VI), perseverance (PE), sensory
sensitivity (SS), emotional reactivity (ER), endurance (EN) and
activeness (AC) have a significant impact on the emotional
expression in narratives. The temperamental traits are linked to the
form of emotional language. It means that temperament has an
impact on cognitive representations of emotions.
Abstract: Ontologies are widely used in many kinds of applications as a knowledge representation tool for domain knowledge. However, even when an ontology schema is well prepared by domain experts, adding instances to the ontology is tedious and cost-intensive. The most reliable and trustworthy way to add instances to an ontology is to gather them from tables in related Web pages. In automatic instance population, the primary task is to find the most appropriate concept within the ontology for a given table. This paper proposes a novel method for this problem that defines the similarity between a table and a concept using the overlap of their properties. In a series of experiments, the proposed method achieves 76.98% accuracy, which suggests that it is a plausible way to populate an ontology automatically from Web tables.
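A minimal sketch of the matching idea, using plain Jaccard overlap between a table's column headers and each concept's property set as a stand-in for the paper's similarity measure (the concepts below are invented):

```python
# Pick the ontology concept whose properties best overlap the table headers.
def jaccard(a, b):
    """Jaccard similarity between two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# invented toy ontology: concept name -> property set
concepts = {
    "Person": {"name", "birthdate", "nationality"},
    "Book":   {"title", "author", "isbn", "publisher"},
}

def best_concept(table_headers):
    """Return the concept with maximal property overlap with the headers."""
    return max(concepts, key=lambda c: jaccard(table_headers, concepts[c]))
```

A table with headers {"title", "author", "year"} would be assigned to "Book" under this measure.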
Abstract: Aspect-Oriented Programming promises many advantages at
the programming level by encapsulating crosscutting concerns into
separate units, called aspects. Join points are distinguishing
features of Aspect-Oriented Programming, as they define the points
where core requirements and crosscutting concerns are
(inter)connected. Currently, composing multiple aspects at the same
join point is a problem that raises issues such as the ordering and
control of these superimposed aspects. Dynamic strategies are
required to handle these issues as early as possible. State charts
are an effective modeling tool for capturing dynamic behavior at
the high-level design stage. This paper provides a methodology for
formulating strategies for multiple-aspect composition at a high
level, which helps to implement these strategies better at the
coding level. It also highlights the need to design shared join
points at a high level by providing solutions to these issues using
state chart diagrams in UML 2.0. A high-level design representation
of shared join points also helps to implement the designed strategy
in a systematic way.
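As an illustrative sketch of the shared-join-point ordering problem (in Python decorators rather than an AOP language such as AspectJ; the aspect names are invented), the composition order of two aspects superimposed on one join point is made explicit by their stacking order:

```python
# Two "aspects" superimposed on one join point; stacking order = composition order.
import functools
import time

trace = []  # records the order in which aspect advice runs

def logging_aspect(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        trace.append(f"log:before {fn.__name__}")
        result = fn(*args, **kwargs)
        trace.append(f"log:after {fn.__name__}")
        return result
    return wrapper

def timing_aspect(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        trace.append(f"time:{fn.__name__} {time.perf_counter() - start:.6f}s")
        return result
    return wrapper

@logging_aspect   # applied last -> runs outermost
@timing_aspect    # applied first -> runs innermost
def business_logic(x):
    return x * 2
```

Swapping the two decorator lines changes which aspect wraps the other, which is exactly the ordering decision the paper proposes to settle at the design level.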
Abstract: A phylogenetic tree is a graphical representation of the
evolutionary relationships among three or more genes or organisms.
Such trees show the relatedness of data sets, the divergence times
of species or genes, and the nature of their common ancestors. The
quality of a phylogenetic tree is judged by the parsimony
criterion, and various approaches have been proposed for
constructing the most parsimonious trees. This paper is concerned
with calculating and optimizing the number of state changes needed,
the task addressed by small parsimony algorithms. It proposes an
enhanced small parsimony algorithm that gives a better score, based
on the number of evolutionary changes needed to produce the
observed sequence changes in the tree, and also infers the
ancestors of the given input.
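As background for the scoring step, the classic Fitch small-parsimony algorithm on a single character of a toy rooted binary tree can be sketched as follows (this is the standard algorithm, not the paper's enhancement):

```python
# Fitch's small-parsimony algorithm for one character on a binary tree.
def fitch(tree, leaf_states):
    """Return (candidate state set at this node, parsimony score below it)."""
    if isinstance(tree, str):                       # leaf: its state is known
        return {leaf_states[tree]}, 0
    (ls, lc), (rs, rc) = (fitch(child, leaf_states) for child in tree)
    inter = ls & rs
    if inter:                                       # children agree: no change
        return inter, lc + rc
    return ls | rs, lc + rc + 1                     # children disagree: one change

# toy tree ((A,B),(C,D)) with one nucleotide per leaf
tree = (("A", "B"), ("C", "D"))
states = {"A": "G", "B": "G", "C": "T", "D": "G"}
root_set, score = fitch(tree, states)
```

Here a single substitution (T arising on the branch to C) explains the data, so the parsimony score is 1 and the root's candidate state set is {"G"}.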