Abstract: This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the pulping application problem. Three-layer feed-forward neural networks, trained with Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning improves convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We focused mainly on PCG-based training methods originating from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). In the simulations, the PCG methods proved robust against phenomena such as oscillations due to large step sizes.
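The preconditioned system M⁻¹Ax = M⁻¹b can be illustrated with a minimal sketch of the PCG iteration, here using a simple Jacobi (diagonal) preconditioner on a small symmetric positive-definite system; the preconditioner choice and the test matrix are illustrative assumptions, not the paper's training setup.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient for SPD A with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv_diag * r            # apply preconditioner M^-1
    p = z.copy()                  # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        beta = rz_new / rz        # conjugacy update of the search direction
        p = z + beta * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example
b = np.array([1.0, 2.0])
x = pcg(A, b, 1.0 / np.diag(A))
```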
Abstract: One of the main advantages of the LO paradigm is to
allow the availability of good quality, shareable learning material
through the Web. The effectiveness of the retrieval process requires a
formal description of the resources (metadata) that closely fits the
user's search criteria; in spite of the huge international efforts in this
field, educational metadata schemata often fail to fulfil this
requirement. This work aims to improve the situation, by the
definition of a metadata model capturing specific didactic features of
shareable learning resources. It classifies LOs into "teacher-oriented"
and "student-oriented" categories, in order to describe the role a LO
is to play when it is integrated into the educational process. This
article describes the model and a first experimental validation process
that has been carried out in a controlled environment.
Abstract: This paper explores the effectiveness of machine
learning techniques in detecting firms that issue fraudulent financial
statements (FFS) and deals with the identification of factors
associated with FFS. To this end, a number of experiments have been
conducted using representative learning algorithms, which were
trained using a data set of 164 fraud and non-fraud Greek firms in the
recent period 2001-2002. The decision of which particular method to
choose is a complicated problem. A good alternative to choosing
only one method is to create a hybrid forecasting system
incorporating a number of possible solution methods as components
(an ensemble of classifiers). For this purpose, we have implemented
a hybrid decision support system that combines the representative
algorithms using a stacking variant methodology and achieves better
performance than any examined simple and ensemble method. To
sum up, this study indicates that the investigation of financial
information can be used in the identification of FFS and underlines the
importance of financial ratios.
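As an illustration of the stacking idea described above, here is a minimal sketch using scikit-learn; the base learners, meta-learner, and synthetic data are assumptions for illustration, not the paper's actual algorithms or the 164-firm Greek data set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a small fraud / non-fraud data set
X, y = make_classification(n_samples=164, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stacking: out-of-fold predictions of the base classifiers
# become the features of a meta-level classifier
stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)   # hold-out accuracy of the ensemble
```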
Abstract: While many studies have examined the achievement
gap between groups of students in school districts, few studies have
utilized resilience research to investigate achievement gaps within
classrooms. This paper aims to summarize and discuss some recent
studies Waxman, Padrón, and their colleagues conducted, in which
they examined learning environment differences between resilient
and nonresilient students in reading and mathematics classrooms.
The classes consisted of predominantly Hispanic elementary school
students from low-income families. These studies all incorporated
learning environment questionnaires and systematic observation
methods. Significant differences were found between resilient and
nonresilient students on their classroom learning environments and
classroom behaviors. The observation results indicate that the amount
and quality of teacher and student academic interaction are two of the
most influential variables that promote student outcomes. This paper
concludes by suggesting the following teacher practices to promote
resiliency in schools: (a) using feedback from classroom observation
and learning environment measures, (b) employing explicit teaching
practices, and (c) understanding students on a social and personal
level.
Abstract: In a Knowledge Structure Graph, each course unit
represents a phase of learning activities. Both learning portfolios and
Knowledge Structure Graphs contain information about students'
learning and let teachers know which content students find difficult
or fail to master. This study proposes a "Dual Mode On-line Learning
Diagnosis System" that integrates two search methods: learning
portfolio and knowledge structure. Teachers can operate the proposed
system and obtain information on specific students without any
computer science background. Teachers can thus identify struggling
students in advance and provide remedial learning resources.
Abstract: Injection molding is a very complicated process to
monitor and control. With its high complexity and many process
parameters, the optimization of these systems is a very challenging
problem. To meet the requirements and costs demanded by the
market, intense development and research have been carried out
with the aim of keeping the process under control. This paper
outlines the latest advances in the algorithms needed for the plastic
injection process and its monitoring, together with a flexible data
acquisition system that allows rapid implementation of complex
algorithms, assessment of their performance, and integration into the
quality control process. Finally, to demonstrate the performance
achieved by this combination, a real use case is presented.
Abstract: The new framework in which Higher Education is
immersed involves a complete change in the way lecturers must
teach and students must learn. Whereas the lecturer was the main
character in traditional education, the essential goal now is to
increase the students' participation in the process. Thus, one of the
main tasks of lecturers in this new context is to design activities of
different nature in order to encourage such participation. Seminars
are one of the activities included in this environment. They are active
sessions that enable going in depth into specific topics as support of
other activities. They are characterized by some features such as
favoring interaction between students and lecturers or improving
their communication skills. Hence, planning and organizing strategic
seminars is indeed a great challenge for lecturers with the aim of
acquiring knowledge and abilities. This paper proposes a method
using Artificial Intelligence techniques to obtain student profiles
from their marks and preferences. The goal of building such profiles
is twofold. First, it facilitates the task of splitting the students into
different groups, each group with similar preferences and learning
difficulties. Second, it makes it easy to select adequate topics as
candidates for the seminars. The results obtained can either confirm
what the lecturers observed during the development of the course or
suggest reconsidering the methodological strategies used in certain
topics.
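The abstract does not name the Artificial Intelligence technique used to build the profiles; one simple way to group students by marks and preferences is clustering, sketched here with k-means on synthetic data (the feature choice, cluster count, and data are illustrative assumptions, not the paper's method).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Illustrative student profiles: columns = [mark, preference score]
marks = np.concatenate([rng.normal(8.0, 0.5, 20), rng.normal(4.0, 0.5, 20)])
prefs = np.concatenate([rng.normal(0.8, 0.1, 20), rng.normal(0.2, 0.1, 20)])
X = np.column_stack([marks, prefs])

# Split the students into two groups with similar marks/preferences
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
groups = km.labels_
```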
Abstract: Concept maps can be generated manually or
automatically. It is important to recognize differences of the two
types of concept maps. The automatically generated concept maps
are dynamic, interactive, and full of associations between the terms
on the maps and the underlying documents. Through a specific
concept mapping system, Visual Concept Explorer (VCE), this paper
discusses how automatically generated concept maps are different
from manually generated concept maps and how different
applications and learning opportunities might be created with the
automatically generated concept maps. The paper presents several
examples of learning strategies that take advantage of the
automatically generated concept maps for concept learning and
exploration.
Abstract: The main objective of this paper is to contribute to
the existing knowledge transfer and IT Outsourcing literature
specifically in the context of Malaysia by reviewing the current
practices of e-government IT outsourcing in Malaysia including the
issues and challenges faced by the public agencies in transferring the
knowledge during the engagement. This paper discusses various
factors and different theoretical models of knowledge transfer,
from the traditional model to the recent models suggested by
scholars. The present paper attempts to align organizational
knowledge from the knowledge-based view (KBV) and
organizational learning (OL) lens. This review could help shape the
direction of both future theoretical and empirical studies on inter-firm
knowledge transfer specifically on how KBV and OL perspectives
could play a significant role in explaining the complex relationships
between the client and vendor in inter-firm knowledge transfer and
the role of organizational management information system and
Transactive Memory System (TMS) to facilitate the organizational
knowledge transferring process. Conclusion is drawn and further
research is suggested.
Abstract: In the automotive industry test drives are being conducted
during the development of new vehicle models or as a part of
quality assurance of series-production vehicles. The communication
on the in-vehicle network, data from external sensors, and internal
data from the electronic control units are recorded by automotive
data loggers during the test drives. The recordings are used for fault
analysis. Since the resulting data volume is tremendous, manually
analysing each recording in great detail is not feasible.
This paper proposes to use machine learning to support domain
experts by sparing them from contemplating irrelevant data and
instead pointing them to the relevant parts of the recordings. The
underlying idea is to learn the normal behaviour from available
recordings, i.e. a training set, and then to autonomously detect
unexpected deviations and report them as anomalies.
The one-class support vector machine “support vector data description”
(SVDD) is utilised to calculate distances between feature
vectors. SVDDSUBSEQ is proposed as a novel approach that allows
subsequences in multivariate time series data to be classified. The
approach detects unexpected faults without any modelling effort, as
shown by experimental results on recordings from test drives.
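The subsequence idea can be sketched as follows, using scikit-learn's OneClassSVM (with an RBF kernel, closely related to SVDD) on sliding windows of a synthetic two-channel signal; the window length, parameters, and injected fault are illustrative assumptions, not the SVDDSUBSEQ implementation.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(200, 2))  # "normal" 2-channel recording
test = normal.copy()
test[100:110] += 3.0                          # injected fault in the test recording

def windows(ts, w):
    """Flatten sliding windows of a multivariate series into feature vectors."""
    return np.array([ts[i:i + w].ravel() for i in range(len(ts) - w + 1)])

w = 10
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.01)
clf.fit(windows(normal, w))                        # learn normal behaviour
scores = clf.decision_function(windows(test, w))   # negative = anomalous
anomalous = np.where(scores < 0)[0]                # flagged window start indices
```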
Abstract: One major issue that is regularly cited as a block to
the widespread use of online assessments in eLearning is that of the
authentication of the student and the level of confidence that an
assessor can have that the assessment was actually completed by that
student. Currently, this issue is either ignored, in which case
confidence in the assessment and any ensuing qualification is
damaged, or else assessments are conducted at central, controlled
locations at specified times, losing the benefits of the distributed
nature of the learning programme. Particularly as we move towards
constructivist models of learning, with intentions towards achieving
heutagogic learning environments, the benefits of a properly
managed online assessment system are clear. Here we discuss some
of the approaches that could be adopted to address these issues,
looking at the use of existing security and biometric techniques,
combined with some novel behavioural elements. These approaches
offer the opportunity to validate the student on accessing an
assessment, on submission, and also during the actual production of
the assessment. These techniques are currently under development in
the DECADE project, and future work will evaluate and report their
use.
Abstract: This questionnaire-based study aimed to measure and
compare the awareness of English reading strategies among EFL
learners at Bangkok University (BU) classified by their gender, field
of study, and English learning experience. Proportional stratified
random sampling was employed to formulate a sample of 380 BU
students. The data were statistically analyzed in terms of the mean
and standard deviation. t-Test analysis was used to find differences in
awareness of reading strategies between two groups (male vs. female;
science vs. social-science students). In addition, one-way
analysis of variance (ANOVA) was used to compare reading strategy
awareness among BU students with different lengths of English
learning experience. The results of this study indicated that the
overall awareness of reading strategies of EFL learners at BU was at
a high level (mean = 3.60) and that there was no statistically significant
difference between males and females, and among students who have
different lengths of English learning experience at the significance
level of 0.05. However, significant differences among students
coming from different fields of study were found at the same level of
significance.
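The statistical analyses mentioned above (independent-samples t-test and one-way ANOVA) can be sketched with SciPy on synthetic 5-point-scale scores; the data below are illustrative stand-ins, not the BU sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic 5-point-scale awareness scores (illustrative, not the BU data)
male = rng.normal(3.6, 0.5, 60).clip(1, 5)
female = rng.normal(3.6, 0.5, 60).clip(1, 5)
t_stat, t_p = stats.ttest_ind(male, female)   # two-group comparison

# One-way ANOVA across three English-learning-experience groups
g1 = rng.normal(3.5, 0.5, 40).clip(1, 5)
g2 = rng.normal(3.6, 0.5, 40).clip(1, 5)
g3 = rng.normal(3.7, 0.5, 40).clip(1, 5)
f_stat, f_p = stats.f_oneway(g1, g2, g3)      # reject at p < 0.05
```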
Abstract: Iterative learning control aims to achieve zero tracking
error of a specific command. This is accomplished by iteratively
adjusting the command given to a feedback control system, based on
the tracking error observed in the previous iteration. One would like
the iterations to converge to zero tracking error in spite of any error
present in the model used to design the learning law. First, this need
for stability robustness is discussed, followed by the need for
robustness of the property that the transients are well behaved. Methods of
producing the needed robustness to parameter variations and to
singular perturbations are presented. Then a method involving
reverse time runs is given that lets the world behavior produce the
ILC gains in such a way as to eliminate the need for a mathematical
model. Since the real world is producing the gains, there is no issue
of model error. Provided the world behaves linearly, the approach
gives an ILC law with both stability robustness and good transient
robustness, without the need to generate a model.
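The iterative command-update idea described above can be sketched on a toy first-order plant; the plant, learning gain, and desired trajectory are illustrative assumptions, not the paper's robust or reverse-time designs.

```python
import numpy as np

# Toy plant (an illustrative assumption): y[t+1] = a*y[t] + b*u[t]
a, b = 0.5, 1.0
T = 20
y_des = np.sin(np.linspace(0.0, np.pi, T + 1))  # desired trajectory

def run(u):
    """Simulate one trial of the plant from rest."""
    y = np.zeros(T + 1)
    for t in range(T):
        y[t + 1] = a * y[t] + b * u[t]
    return y

# P-type ILC: u_{k+1}(t) = u_k(t) + L * e_k(t+1)
L = 1.0       # learning gain; iteration converges when |1 - L*b| < 1
u = np.zeros(T)
for k in range(30):
    e = y_des[1:] - run(u)[1:]   # tracking error observed in iteration k
    u = u + L * e                # adjust the command for the next trial
final_err = np.abs(y_des[1:] - run(u)[1:]).max()
```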
Abstract: Text categorization (the assignment of texts in natural language to predefined categories) is an important and extensively studied problem in Machine Learning. Popular techniques developed to deal with this task involve many preprocessing steps and learning algorithms, many of which in turn require the tuning of nontrivial internal parameters. Although partial studies are available, many authors fail to report the parameter values they use in their experiments, or the reasons these values were chosen over others. The goal of this work, then, is to provide a more thorough comparison of preprocessing parameters and their mutual influence, and to report interesting observations and results.
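The kind of preprocessing-parameter comparison described above can be sketched with scikit-learn; the corpus, parameter settings, and classifier below are illustrative assumptions, not the paper's experimental setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Tiny illustrative two-category corpus
docs = ["the cat sat on the mat", "dogs and cats are pets",
        "stock prices rose sharply", "the market fell on bad earnings"]
labels = [0, 0, 1, 1]

# Two preprocessing configurations that differ only in their parameters
configs = [{"lowercase": True, "ngram_range": (1, 1)},
           {"lowercase": True, "ngram_range": (1, 2), "sublinear_tf": True}]
models = []
for params in configs:
    clf = Pipeline([("tfidf", TfidfVectorizer(**params)),
                    ("nb", MultinomialNB())])
    clf.fit(docs, labels)   # in a real study: cross-validate each configuration
    models.append(clf)
pred = models[0].predict(["cats sat on the mat"])
```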
Abstract: Since a one-to-one word translator cannot translate
the pragmatic aspects of Javanese, the parallel text alignment
model described here uses phrase pair combination. The algorithm
aligns the parallel text automatically from the beginning to the
end of each sentence. Even though the results of the phrase pair
combination outperform the previous algorithm, it is still
inefficient: recording all possible combinations consumes more
database space and is time-consuming. The original algorithm is
modified by applying an edit distance coefficient to improve
data-storage efficiency. As a result, data-storage consumption is
reduced by 90%, as is the learning period (42 s).
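The edit distance coefficient is not specified in detail in the abstract; a common formulation is the Levenshtein distance normalised by the longer string's length, sketched here as an assumption.

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

def edit_coefficient(a, b):
    """Distance normalised to [0, 1]; near-duplicate phrase pairs can be pruned."""
    return edit_distance(a, b) / max(len(a), len(b), 1)
```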
Abstract: Data mining is the process of sifting through large
volumes of data, analyzing data from different perspectives and
summarizing it into useful information. One of the widely used
desktop applications for data mining is the Weka tool, a collection
of machine learning algorithms implemented in Java and
open-sourced under the General Public License (GPL). A
web service is a software system designed to support interoperable
machine to machine interaction over a network using SOAP
messages. Unlike a desktop application, a web service is easy to
upgrade, deliver and access and does not occupy any memory on the
system. Keeping in mind the advantages of a web service over a
desktop application, in this paper we demonstrate how this Java-based
desktop data mining application can be implemented as a web
service to support data mining across the internet.
Abstract: Self-directed learning (SDL) was developed initially
for adult learning. Guglielmino constructed a scale to measure SDL.
Recent researchers have applied this concept to children. Although
there is sufficient theoretical evidence of the possibility of
applying this concept to children, empirical evidence has not been
provided. This study aimed to examine the quality of SDL and
construct a scale to measure SDL among young children. A modified
version of Guglielmino's scale was constructed and piloted with 183
nine-year-old subjects. Findings suggest that the qualities of SDL
at young ages are largely congruent with those of adults.
Abstract: The main purpose of this study was to determine the predictors of academic achievement of student Information and Communications Technologies (ICT) teachers with different learning styles. Participants were 148 student ICT teachers from Ankara University. Participants were asked to fill out a personal information sheet, the Turkish version of Kolb's Learning Style Inventory, Weinstein's Learning and Study Strategies Inventory, Schommer's Epistemological Beliefs Questionnaire, and Eysenck's Personality Questionnaire. Stepwise regression analyses showed that the statistically significant predictors of the academic achievement of the accommodators were attitudes and high school GPAs; of the divergers was anxiety; of the convergers were gender, epistemological beliefs, and motivation; and of the assimilators were gender, personality, and test strategies. Implications for ICT teaching-learning processes and teacher education are discussed.
Abstract: The Hopfield model of associative memory is studied in this work, in particular two main problems it possesses: the appearance of spurious patterns in the learning phase, implying the well-known effect of storing the opposite pattern, and its reduced capacity, meaning that it is not possible to store a large number of patterns without increasing the error probability in the retrieving phase. In this paper, a method to avoid spurious patterns is presented and studied, and an explanation of the previously mentioned effect is given. Another technique, based on the idea of using several reference points when storing patterns, is proposed here to increase the capacity of the network. It is studied in depth, and an explicit formula for the capacity of the network with this technique is provided.
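The opposite-pattern effect mentioned above can be reproduced with a minimal Hopfield sketch using Hebbian learning and synchronous updates; the pattern and network size are illustrative, and the paper's correction technique is not reproduced here.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: W = (1/n) * sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, steps=10):
    """Synchronous sign updates until (hopefully) a fixed point."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

p = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train(p[None, :])
noisy = p.copy()
noisy[0] = -noisy[0]           # flip one bit
recovered = recall(W, noisy)   # converges back to p
opposite = recall(W, -p)       # -p is also stable: the stored opposite pattern
```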
Abstract: This article concerns the presentation of an integrated
method for detection of steganographic content embedded by new
unknown programs. The method is based on data mining and
aggregated hypothesis testing. The article contains the theoretical
basics used to deploy the proposed detection system and describes
an improvement proposed for the basic system idea. The main results
of the experiments and the implementation details are then collected
and described. Finally, example results of the tests are presented.
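The aggregation scheme for the hypothesis tests is not specified in the abstract; one common scheme, Fisher's method for combining per-test p-values, can be sketched as follows (an illustrative assumption, not necessarily the paper's approach).

```python
import numpy as np
from scipy import stats

# Illustrative per-feature p-values from individual detection tests
pvals = np.array([0.04, 0.20, 0.01, 0.50])

# Fisher's method: X^2 = -2 * sum(ln p_i) follows a chi-square
# distribution with 2k degrees of freedom under the global null
chi2_stat = -2.0 * np.log(pvals).sum()
combined_p = stats.chi2.sf(chi2_stat, df=2 * len(pvals))
```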