Abstract: Information and Communication Technologies (ICTs)
and the World Wide Web (WWW) have fundamentally altered the
practice of teaching and learning worldwide. Many universities,
organizations, colleges, and schools are trying to harness the
benefits of emerging ICTs. In the early nineties, the term
learning object was introduced into the instructional technology
vernacular; the idea being that educational resources could be broken
into modular components for later combination by instructors,
learners, and eventually computers, into larger structures that
would support learning [1]. However, in many developing countries,
the use of ICT is still in its infancy and the concept of the
learning object is quite new. This paper outlines learning object
design considerations for developing countries depending on the
learning environment.
Abstract: With the deepening of software reuse, component-related
technologies have been widely applied in the development of
large-scale complex applications. Component identification (CI) is
one of the primary research problems in software reuse: it analyzes
domain business models to obtain a set of business components with
high reuse value and good reuse performance, so as to support
effective reuse. Based on the concept and classification of CI, its
technical stack is briefly discussed from four views, i.e., the form
of the input business models, identification goals, identification
strategies, and the identification process. The various CI methods
presented in the literature are then classified into four types,
i.e., domain analysis based methods, cohesion-coupling based
clustering methods, CRUD matrix based methods, and other methods,
and these methods are compared with respect to their advantages and
disadvantages. Additionally, some shortcomings of current research
on CI are discussed, and their causes are explained. Finally, the
paper concludes with some promising directions for research on this
problem.
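To make the cohesion-coupling based clustering idea concrete, here is a minimal illustrative sketch in Python: it greedily merges the most strongly coupled groups of business-model elements, so that coupling between candidate components shrinks while cohesion inside them grows. The element names, dependency weights, threshold, and greedy strategy are all assumptions for illustration, not the algorithm of any particular surveyed paper.

    from itertools import combinations

    def coupling(c1, c2, deps):
        # Sum of dependency weights crossing two candidate components.
        return sum(deps.get((a, b), 0) + deps.get((b, a), 0)
                   for a in c1 for b in c2)

    def identify_components(elements, deps, min_coupling=1):
        # Agglomerative merging: repeatedly fuse the most strongly
        # coupled pair of clusters; stop when every remaining pair is
        # only loosely coupled, leaving well-separated components.
        clusters = [{e} for e in elements]
        while len(clusters) > 1:
            (i, j), best = max(
                (((i, j), coupling(clusters[i], clusters[j], deps))
                 for i, j in combinations(range(len(clusters)), 2)),
                key=lambda t: t[1])
            if best < min_coupling:
                break
            clusters[i] |= clusters[j]
            del clusters[j]
        return clusters

    # Toy business model: elements and weighted dependencies (assumed).
    deps = {("Order", "OrderLine"): 5, ("Order", "Customer"): 1,
            ("Customer", "Address"): 4, ("OrderLine", "Product"): 2}
    print(identify_components(
        ["Order", "OrderLine", "Product", "Customer", "Address"],
        deps, min_coupling=2))
    # -> [{'Order', 'OrderLine', 'Product'}, {'Customer', 'Address'}]

On this toy input the two resulting clusters match the intuition behind such methods: an order-processing component and a customer-data component, with only one weak dependency left between them.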
Abstract: Gradual patterns have been studied for many years as
they convey valuable information. They have been integrated into
many expert systems and rule-based systems, for instance to reason
on knowledge such as “the greater the number of turns, the greater
the number of car crashes”. In many cases, this knowledge has been
treated as a rule: “the greater the number of turns → the greater
the number of car crashes”. Historically, work has thus focused on
the representation of such rules, studying how implication could be
defined, especially fuzzy implication. These rules were defined by
experts who were in charge of describing the systems they were
working on so that those systems could operate automatically. More
recently, approaches have been proposed to mine databases and
automatically discover such knowledge. The main scientific questions
are: how to determine what a relevant gradual pattern is, and how to
discover such patterns as efficiently as possible (in terms of both
memory and CPU usage). However, in some cases, end-users are not
interested in raw, low-level knowledge, but rather in trends.
Moreover, it may be that no relevant pattern can be discovered at a
low level of granularity (e.g., city), whereas some can be discovered
at a higher level (e.g., county). In this paper, we thus extend
gradual pattern approaches in order to handle multi-level gradual
patterns. For this purpose, we consider two aggregation policies,
namely horizontal and vertical aggregation.
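As an illustration of what mining such patterns involves, the sketch below computes a support value for a pattern such as {turns↑, crashes↑} over a small table. The pairwise-concordance definition of support used here is one of several found in the literature, and all names and data are assumptions for illustration; the abstract itself does not fix a particular definition.

    from itertools import combinations

    def respects(row_a, row_b, pattern):
        # True if the ordered pair (row_a, row_b) varies as every item
        # of the pattern requires ("+" increasing, "-" decreasing).
        for attr, direction in pattern:
            if direction == "+" and not row_a[attr] < row_b[attr]:
                return False
            if direction == "-" and not row_a[attr] > row_b[attr]:
                return False
        return True

    def support(rows, pattern):
        # Fraction of object pairs orderable consistently with the pattern.
        pairs = list(combinations(rows, 2))
        ok = sum(respects(a, b, pattern) or respects(b, a, pattern)
                 for a, b in pairs)
        return ok / len(pairs)

    roads = [{"turns": 2, "crashes": 5}, {"turns": 4, "crashes": 9},
             {"turns": 7, "crashes": 11}, {"turns": 9, "crashes": 8}]
    print(support(roads, [("turns", "+"), ("crashes", "+")]))  # 4/6 ≈ 0.67

A multi-level extension of the kind the paper proposes would first aggregate rows (e.g., cities into counties) before computing such a support, the horizontal and vertical policies differing in how that aggregation is performed.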
Abstract: In recent years, much research has been actively
conducted on mining the exploding Web, especially User Generated
Content (UGC) such as weblogs, for knowledge about various phenomena
and events in the physical world, and Web services based on such
Web-mined knowledge have begun to be offered to the public. However,
there are few detailed investigations of how accurately Web-mined
data reflect physical-world data, and it is problematic to utilize
Web-mined data uncritically in public Web services without
sufficiently ensuring their accuracy. Therefore, this paper
introduces the simplest Web Sensor and a spatiotemporally normalized
Web Sensor to extract spatiotemporal data about a target phenomenon
from weblogs retrieved by keyword(s) representing that phenomenon,
and tries to validate the potential and reliability of the
Web-sensed spatiotemporal data through four kinds of granularity
analyses of the correlation coefficient with the per-day, per-region
temperature, rainfall, snowfall, and earthquake statistics of the
Japan Meteorological Agency as physical-world data: spatial
granularity (a region's population density), temporal granularity
(time period, e.g., per day vs. per week), representation granularity
(e.g., “rain” vs. “heavy rain”), and media granularity (weblogs vs.
microblogs such as tweets).
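To make the idea concrete, here is a minimal sketch of such a sensor: it counts keyword-matching posts per day for one region, optionally normalizes that count by the total post volume of the same region and day (one plausible reading of “spatiotemporally normalized”; the abstract does not give the exact formula), and correlates the resulting series with an official statistic. All data structures, names, and numbers are illustrative assumptions.

    from collections import Counter
    from statistics import correlation  # Pearson's r, Python 3.10+

    def web_sensor(posts, keyword, days, region, normalized=False):
        # One value per day: matching-post count, or that count divided
        # by the region's total posts for the day when normalized=True.
        hits, totals = Counter(), Counter()
        for post in posts:
            if post["region"] != region:
                continue
            totals[post["day"]] += 1
            if keyword in post["text"]:
                hits[post["day"]] += 1
        if normalized:
            return [hits[d] / totals[d] if totals[d] else 0.0 for d in days]
        return [float(hits[d]) for d in days]

    days = ["06-01", "06-02", "06-03", "06-04"]
    posts = [
        # Note: naive keyword matching also counts negated mentions
        # ("no rain today"), which is exactly the kind of accuracy
        # issue the paper sets out to quantify.
        {"day": "06-01", "region": "Tokyo", "text": "no rain today"},
        {"day": "06-02", "region": "Tokyo", "text": "heavy rain again"},
        {"day": "06-02", "region": "Tokyo", "text": "rain and wind"},
        {"day": "06-03", "region": "Tokyo", "text": "sunny"},
        {"day": "06-04", "region": "Tokyo", "text": "rain starting"},
    ]
    rainfall_mm = [1.0, 12.0, 0.0, 4.0]  # toy stand-in for JMA data

    sensed = web_sensor(posts, "rain", days, "Tokyo")
    print(correlation(sensed, rainfall_mm))

Varying the keyword (“rain” vs. “heavy rain”), the time bucket, the region, or the post source in this sketch corresponds directly to the four granularity axes the paper analyzes.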