Abstract: Service-oriented systems have become popular and
offer many advantages in the development and maintenance
process. Coupling is one of the most important attributes of
services when they are integrated into a system. In this paper,
we propose a suite of metrics to evaluate a service's quality
according to its degree of coupling. We use the coupling metrics
to measure the maintainability, reliability, testability, and
reusability of services. Our proposed metrics are computed at
run time, which yields more accurate results.
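A run-time coupling measurement of the kind described above can be sketched as follows; this is a minimal illustration that assumes one simple metric, the number of distinct partner services observed in an execution trace, whereas the paper proposes a full suite of metrics.

```python
from collections import defaultdict

def runtime_coupling(call_log):
    """Compute a simple run-time coupling score per service: the number
    of distinct other services it invoked during execution.
    `call_log` is a list of (caller, callee) pairs observed at run time."""
    partners = defaultdict(set)
    for caller, callee in call_log:
        if caller != callee:  # self-calls do not count as coupling
            partners[caller].add(callee)
    return {svc: len(p) for svc, p in partners.items()}

# Example trace: service A calls B twice and C once; B calls C.
log = [("A", "B"), ("A", "B"), ("A", "C"), ("B", "C")]
scores = runtime_coupling(log)
```

Because the trace is gathered while the system runs, repeated calls to the same partner are collapsed and only actually exercised dependencies are counted, which is what makes a run-time metric more exact than a static one.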
Abstract: The paper presents the results of the European EIE
project “Realising the potential for small scale renewable energy
sources in the home – Kyotointhehome". The project's global aim is
to inform and educate teachers, students and their families so that
they can realise the need and can assess the potential for energy
efficiency (EE) measures and renewable energy sources (RES) in
their homes. The project resources were translated and trialled by 16
partners in 10 European countries.
A web-based methodology that enables families to assess how
RES can be incorporated into energy-efficient homes was
developed. The web application “KYOTOINHOME" will help
the citizens to identify what they can do to help their community
meet the Kyoto target for greenhouse gas reductions and prevent
global warming. This application provides useful information on how
the citizens can use renewable energy sources in their home to
provide space heating and cooling, hot water and electricity. A
methodology for assessing heat loss in a dwelling and application of
heat pump system was elaborated and will be implemented this year.
For schools, we developed a set of practical activities concerned with
preventing climate change through using renewable energy sources.
Complementary resources will also be developed in the Romanian
research project “Romania Contribution to the European Targets
Regarding the Development of Renewable Energy Sources" -
PROMES.
Abstract: e-Government is already in its second decade. A prerequisite for its further development and adaptation to new realities is the optimal management of administrative information and of the knowledge produced by those involved, i.e. the public sector, citizens and businesses. Nowadays, the amount of information displayed or distributed on the Internet has reached enormous dimensions, resulting in serious difficulties when extracting and managing knowledge. The semantic web and the technologies that support it are expected to play an important role in solving this problem. In this article, we address some relevant issues.
Abstract: This paper presents a semi-supervised learning algorithm called Iterative-Cross Training (ICT) to solve Web page classification problems. We apply Inductive Logic Programming (ILP) as the strong learner in ICT. The objective of this research is to evaluate the potential of the strong learner to boost the performance of the weak learner in ICT. We compare the results with supervised Naive Bayes, a well-known algorithm for text classification. The performance of our learning algorithm is also compared with that of other semi-supervised learning algorithms, namely Co-Training and EM. The experimental results show that the ICT algorithm outperforms those algorithms and that the performance of the weak learner can be enhanced by the ILP system.
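The cross-training loop described above can be sketched as follows. This is a toy skeleton under loud assumptions: trivial 1-D nearest-centroid learners stand in for the paper's ILP strong learner and weak learner, and the pseudo-labelling schedule is simplified to a fixed number of rounds.

```python
class Centroid:
    """Toy 1-D nearest-centroid learner; a stand-in for the paper's
    ILP (strong) and weak learners, which are far more sophisticated."""
    def fit(self, data):
        sums, counts = {}, {}
        for x, y in data:
            sums[y] = sums.get(y, 0.0) + x
            counts[y] = counts.get(y, 0) + 1
        self.centers = {y: sums[y] / counts[y] for y in sums}

    def predict(self, x):
        return min(self.centers, key=lambda y: abs(x - self.centers[y]))

def cross_train(strong, weak, labeled, unlabeled, rounds=3):
    """Skeleton of iterative cross-training: each round, both learners
    are fit, then each one pseudo-labels the unlabeled pool for its
    partner, so the strong learner's predictions boost the weak one."""
    pool_s, pool_w = list(labeled), list(labeled)
    for _ in range(rounds):
        strong.fit(pool_s)
        weak.fit(pool_w)
        # Each learner pseudo-labels the unlabeled data for the other.
        pool_w = list(labeled) + [(x, strong.predict(x)) for x in unlabeled]
        pool_s = list(labeled) + [(x, weak.predict(x)) for x in unlabeled]
    return strong, weak

labeled = [(1.0, "a"), (9.0, "b")]  # tiny labeled seed set
strong, weak = cross_train(Centroid(), Centroid(), labeled, [2.0, 8.0])
```

The key design point is that the pools are crossed: each learner trains on labels produced by its partner, never on its own pseudo-labels, which is what distinguishes cross-training from plain self-training.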
Abstract: We demonstrate through a sample application, Ebanking,
that the Web Service Modelling Language Ontology component
can be used as a very powerful object-oriented database design
language with logic capabilities. Its conceptual syntax allows the
definition of class hierarchies, and its logic syntax allows the
definition of constraints in the database. Relations, available for
modelling associations among three or more concepts, can be connected to
logical expressions, allowing the implicit specification of database
content. Using a reasoning tool, logic queries can also be made
against the database in simulation mode.
Abstract: When studying electronics, hands-on experience is considered very valuable for a better understanding of the concepts of electricity and electronics. Students lacking sufficient time in the lab are often put at a disadvantage. A way to overcome this is to use interactive multimedia in a virtual environment. Instead of proposing yet another ad-hoc simulator for e-learning, we propose in this paper an e-learning platform integrating the SPICE simulator as a web service. This makes it possible to use all the functions of SPICE, the de-facto standard simulator in electronics, when developing new simulations.
Abstract: The state of the art in instructional design for
computer-assisted learning has been strongly influenced by advances
in information technology, Internet and Web-based systems. The
emphasis of educational systems has shifted from training to
learning. The course delivered has also been changed from large
inflexible content to sequential small chunks of learning objects. The
concepts of learning objects together with the advanced technologies
of Web and communications support the reusability, interoperability,
and accessibility design criteria currently exploited by most learning
systems. These concepts enable just-in-time learning. We propose to
extend these design criteria further to include the learnability
concept, which will help adapt content to the needs of learners. The
learnability concept offers a better personalization leading to the
creation and delivery of course content more appropriate to
performance and interest of each learner. In this paper we present a
new framework of learning environments containing knowledge
discovery as a tool to automatically learn patterns of learning
behavior from learners' profiles and history.
Abstract: Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems and others. This paper reports on the development of a NER system for Bengali and Hindi using Support Vector Machines (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its application to Indian languages (ILs) is very new. The system makes use of the different contextual information of the words along with a variety of features that are helpful in predicting the four different named entity (NE) classes, namely Person name, Location name, Organization name and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi tagged with the twelve different NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm to generate lexical context patterns from a part of the unlabeled Bengali news corpus. These lexical patterns have been used as features of the SVM to improve system performance. The NER system has been tested with gold standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation results show recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. The results show an improvement of 5.13% in f-score with the use of context patterns.
Statistical analysis (ANOVA) is also performed to compare the performance of the proposed NER system with that of the existing HMM-based system for both languages.
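The contextual-feature idea underlying such an SVM-based NER system can be sketched as follows. This is an illustrative feature extractor only: the window size, suffix length, and word-shape cues are assumptions, and the paper's actual feature set for Bengali and Hindi (including the lexical context patterns) is considerably richer.

```python
def token_features(tokens, i, window=2):
    """Contextual features for token i: surrounding words in a fixed
    window plus simple word-shape cues, in the general style used for
    SVM-based NER. Each feature maps a name to a value; an SVM toolkit
    would binarize these into a sparse vector."""
    feats = {}
    for off in range(-window, window + 1):
        j = i + off
        feats[f"w[{off}]"] = tokens[j] if 0 <= j < len(tokens) else "<pad>"
    word = tokens[i]
    feats["suffix3"] = word[-3:]        # short suffixes often mark NE types
    feats["is_title"] = word.istitle()  # capitalization cue (Latin scripts)
    feats["is_digit"] = word.isdigit()
    return feats

feats = token_features(["John", "lives", "in", "Paris"], 3)
```

Features from every position in the window let the classifier use the preceding words (such as "in") as evidence for a Location tag even when the target word itself is unknown.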
Abstract: In Southeast Asia, during the dry season (August to
October) forest fires in Indonesia emit pollutants into the atmosphere.
For two years during this period, a total of 67 samples of
particulate matter smaller than 2.5 μm (PM2.5) were collected and
analyzed for total mass and elemental composition by ICP-MS after
microwave digestion. A study of 60 elements measured during these
periods suggests that the concentrations of most elements, even
those usually related to crustal sources, are extremely high and
unpredictable during the haze period. By contrast, trace element
concentrations in non-haze months are more stable and cover a
lower range. Other unexpected events and their effects on the
findings are discussed.
Abstract: With the rapid growth in business size, today's businesses are turning to electronic technologies. Amazon.com and eBay.com are some of the major stakeholders in this regard. Unfortunately, the enormous size and hugely unstructured nature of data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from seemingly impossible-to-search unstructured data on the Internet. The application of web content mining can be very encouraging in the areas of Customer Relations Modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we present a review of some very interesting, efficient yet implementable techniques from the field of web content mining and study their impact in areas specific to business user needs, focusing on both the customer and the producer. The techniques reviewed include: mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, using a graph-based approach for the development of a web crawler, and filtering information for personalized search using website captions. These techniques have been analyzed and compared on the basis of their execution time and the relevance of the results they produce for a particular search.
Abstract: Recent advancements in sensor technologies and
Wireless Body Area Networks (WBANs) have led to the
development of cost-effective healthcare devices which can be used
to monitor and analyse a person's physiological parameters from
remote locations. These advancements provide a unique opportunity
to overcome current healthcare challenges of low-quality service
provisioning, lack of easy access to a variety of services, high
costs of services and the globally increasing elderly population.
globally. This paper reports on a prototype implementation of an
architecture that seamlessly integrates Wireless Body Area Network
(WBAN) with Web services (WS) to proactively collect
physiological data of remote patients to recommend diagnostic
services. Technologies based upon WBAN and WS can provide
ubiquitous accessibility to a variety of services by allowing
distributed healthcare resources to be massively reused to provide
cost-effective services without individuals physically moving to the
locations of those resources. In addition, these technologies can
reduce costs of healthcare services by allowing individuals to access
services to support their healthcare. The prototype uses WBAN body
sensors, implemented on Arduino Fio platforms, worn by the
patient and an Android smartphone as a personal server. The
physiological data are collected and uploaded through GPRS/Internet
to the Medical Health Server (MHS) to be analysed. The prototype
monitors the activities, location and physiological parameters such as
SpO2 and Heart Rate of the elderly and patients in rehabilitation.
Medical practitioners would have real time access to the uploaded
information through a web application.
Abstract: E-travel refers to travel agency companies employing the Internet and websites in an e-commerce context. This study presents a number of initial key factors of an electronic travel model from the perspective of small travel agencies. A review of previous studies related to travel website activities was conducted. Five small travel agencies in Indonesia have been interviewed in depth as case studies. The findings of this research identify a number of characteristics and dimension factors of travel website operations, including owner-manager roles, business experience, business characteristics, and technological aspects. This study is preliminary research on travel website adoption in Indonesia. A further study will be conducted with questionnaires as quantitative research in the context of Indonesia as a developing country.
Abstract: High-quality requirements analysis is one of the most
crucial activities for ensuring the success of a software project, so
requirements verification for software systems is becoming more and more
important in Requirements Engineering (RE); it is one of the most
helpful strategies for improving the quality of a software system.
Related works show that requirement elicitation and analysis can be
facilitated by ontological approaches and semantic web technologies.
In this paper, we propose a hybrid method which aims to verify
requirements with structural and formal semantics to detect
interactions. The proposed method is twofold: one is for modeling
requirements with the semantic web language OWL, to construct a
semantic context; the other is a set of interaction detection rules which
are derived from scenario-based analysis and represented with
the Semantic Web Rule Language (SWRL). The SWRL-based rules work
with rule engines such as Jess to reason over the semantic context of
the requirements and thus detect interactions. The benefits of the proposed
method lie in three aspects: the method (i) provides systematic steps
for modeling requirements with an ontological approach, (ii) offers
synergy of requirements elicitation and domain engineering for
knowledge sharing, and (iii) the proposed rules can systematically assist
in requirements interaction detection.
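The rule-based detection step can be illustrated in miniature as follows. The paper encodes such rules in SWRL and evaluates them with Jess over an OWL model; here a single scenario-derived rule is rendered as plain Python over a hypothetical fact schema (requirement id, action, resource), with the conflict table invented for illustration.

```python
def detect_interactions(requirements):
    """Toy version of one scenario-derived detection rule: flag any two
    requirements that command conflicting actions on the same resource.
    `requirements` is a list of (req_id, action, resource) facts."""
    conflicts = {("open", "close"), ("enable", "disable")}
    symmetric = conflicts | {(b, a) for a, b in conflicts}
    found = []
    for i, (r1, act1, res1) in enumerate(requirements):
        for r2, act2, res2 in requirements[i + 1:]:
            if res1 == res2 and (act1, act2) in symmetric:
                found.append((r1, r2))
    return found

# R1 and R2 act on the same resource with opposing commands.
reqs = [("R1", "open", "valve"), ("R2", "close", "valve"),
        ("R3", "open", "door")]
pairs = detect_interactions(reqs)
```

In the actual method this matching is done by the rule engine over the OWL semantic context, so new interaction patterns can be added declaratively without changing the reasoning machinery.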
Abstract: Personal name matching is a core task in national
citizen databases, text and web mining, information retrieval,
online library systems, e-commerce and record linkage systems.
This has necessitated wide-ranging research on name matching.
Traditional name matching methods are suitable for English and
other Latin-based languages. Asian languages with no word
boundary, such as the Myanmar language, still require a
sounds-alike matching system in Unicode-based applications. Hence
we propose a matching algorithm that derives analogous
sounds-alike (phonetic) patterns suited to Myanmar character
spelling. Following the nature of Myanmar characters, we consider
word boundary fragmentation and character collation, and use a
pattern conversion algorithm that builds word patterns from the
fragmented and collated characters. We create Myanmar sounds-alike
phonetic groups to support the phonetic matching. The experimental
results show a fragmentation accuracy of 99.32% and a processing
time of 1.72 ms.
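The phonetic-group matching idea can be sketched as follows. The groups below are hypothetical and use Latin letters purely for illustration; the actual system defines such sounds-alike groups over Myanmar characters after fragmentation and collation.

```python
# Hypothetical sounds-alike groups (illustration only; the real system
# groups Myanmar characters that are pronounced alike).
PHONETIC_GROUPS = {c: g for g, chars in
                   {"1": "bp", "2": "dt", "3": "kg", "4": "sz"}.items()
                   for c in chars}

def phonetic_pattern(name):
    """Convert a name into its sounds-alike pattern: each character is
    replaced by its phonetic group code, so names that sound alike
    collapse to the same pattern string."""
    return "".join(PHONETIC_GROUPS.get(c, c) for c in name.lower())

def sounds_alike(a, b):
    """Two names match when their phonetic patterns are identical."""
    return phonetic_pattern(a) == phonetic_pattern(b)
```

For example, "bat" and "pad" both reduce to the pattern "1a2" because b/p and t/d fall in the same groups, so they are reported as sounds-alike matches.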
Abstract: Baseball is unique among other sports in Taiwan.
Baseball has become a “symbol of the Taiwanese spirit and Taiwan's
national sport". Taiwan's first professional sports league, the Chinese
Professional Baseball League (CPBL), was established in 1989.
Starters pitch many more innings over the course of a season and for
a century teams have made all their best pitchers starters. In this
study, we attempt to determine the on-field performance of these
pitchers and which of them won the most CPBL games in 2009. We utilize
the discriminant analysis approach to solve the problem, examining
winning pitchers and their statistics, to reliably find the best starting
pitcher. The data employed in this paper include innings pitched (IP),
earned run average (ERA) and walks plus hits per inning pitched
(WHIP) provided by the official website of the CPBL. The results
show that Aaron Rakers was the best starting pitcher of the CPBL.
The top 10 CPBL starting pitchers won 8 to 14 games in the 2009
season. Through Fisher discriminant analysis, the top 10 CPBL
starting pitchers were predicted to win 9 to 20 games, 1 to 7
games more than their actual counts in the 2009 season.
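The two-class Fisher discriminant used in such an analysis can be written out directly. This is a generic sketch, not the paper's computation: the feature pairs below are invented stand-ins for pitcher statistics such as (IP, WHIP), and the discriminant direction is computed as w = Sw⁻¹(mₐ − m_b) with an explicit 2x2 matrix inverse.

```python
def fisher_direction(class_a, class_b):
    """Two-class Fisher discriminant for 2-D feature vectors:
    w = Sw^-1 (m_a - m_b), where Sw is the pooled within-class
    scatter matrix. Projections x . w separate the two classes."""
    def mean(xs):
        n = len(xs)
        return [sum(v[0] for v in xs) / n, sum(v[1] for v in xs) / n]

    def scatter(xs, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x, y in xs:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
        return s

    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * d[0] + inv[0][1] * d[1],
            inv[1][0] * d[0] + inv[1][1] * d[1]]

# Invented (IP, WHIP)-style feature pairs for two groups of pitchers.
winners = [(200.0, 1.1), (190.0, 1.2), (210.0, 1.0)]
losers = [(120.0, 1.5), (130.0, 1.6), (110.0, 1.4)]
w = fisher_direction(winners, losers)
```

Projecting each pitcher's statistics onto w yields a single score on which the winning group lands clearly above the losing group, which is the basis for predicting which starters are most likely to win games.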
Abstract: In this contribution a newly developed e-learning environment is presented, which incorporates Intelligent Agents and Computational Intelligence techniques. The new e-learning environment consists of three parts: the E-learning Platform Front-End, the Student Questioner Reasoning and the Student Model Agent. These parts are distributed geographically across dispersed computer servers, with the main focus on the design and development of these subsystems through the use of new and emerging technologies. The parts are interconnected in an interoperable way, using web services for the integration of the subsystems, in order to enhance the user modelling procedure and achieve the goals of the learning process.
Abstract: Almost all universities include some form of assignment in their courses. Assignments are carried out either in groups or individually. To manage these submissions effectively, a well-designed assignment submission system is needed, hence the need for an online assignment submission system to facilitate the distribution and collection of assignments on due dates. The objective of such a system is to facilitate interaction between lecturers and students for assessment and grading purposes. The aim of this study was to create a web-based online assignment submission system for the University of Mauritius. The system was created to eliminate the traditional process of handing out an assignment and collecting the answers. Lecturers can also create automated assessments to assess students online. Moreover, the online submission system includes an automatic mailing system which acts as a reminder to students of the deadlines of posted assignments. The system was tested to measure its acceptance rate among both students and lecturers.
Abstract: The explosive growth of World Wide Web has posed
a challenging problem in extracting relevant data. Traditional web
crawlers focus only on the surface web while the deep web keeps
expanding behind the scene. Deep web pages are created
dynamically as a result of queries posed to specific web databases.
The structure of the deep web pages makes it impossible for
traditional web crawlers to access deep web contents. This paper
presents Deep iCrawl, a novel vision-based approach for extracting
data from the deep web. Deep iCrawl splits the process into two
phases. The first phase includes Query analysis and Query translation
and the second covers vision-based extraction of data from the
dynamically created deep web pages. There are several established
approaches for the extraction of deep web pages, but the proposed
method aims at overcoming their inherent limitations. This paper
also aims at comparing the data items and presenting them in the
required order.
Abstract: Recently, many web services providing information for public transport have been developed and released. They are optimized for mobile devices such as smartphones. We are also developing a better path planning system for route buses and trains called “Bus-Net"[1]. However, these systems only provide paths and related information before the user starts moving, so we propose context-aware navigation to change the way public transport users are supported. Travelling somewhere with several kinds of public transport requires knowing how to use each of them. In addition, public transport is a dynamic system whose characteristics differ by type, so information is needed in real time. We therefore propose a system that offers support based on the user's state, with a variety of ways to help public transport users in each state, such as turn-by-turn navigation. Context-aware navigation should be able to reduce the anxiety of using public transport.
Abstract: Searching for a tertiary substructure that geometrically
matches the 3D pattern of the binding site of a well-studied protein
provides a way to predict protein functions. In our previous work,
a web server was built to predict protein-ligand binding sites
based on automatically extracted templates. However, a drawback of
such templates is that the web server was prone to producing many
false positive matches. In this study, we present a sequence-order
constraint to reduce the false positive matches that arise when
automatically extracted templates are used to predict
protein-ligand binding sites. The binding site predictor comprises
(i) an automatically constructed template library and (ii) a local
structure alignment algorithm for querying the library. The
sequence-order constraint is employed to identify inconsistencies
between the local regions of the query protein and the templates.
Experimental results reveal that the sequence-order constraint
largely reduces the false positive matches and is effective for
template-based binding site prediction.