Abstract: Recommender systems are widely regarded as an
important marketing tool in e-commerce. They use information
about users, including context such as location, time, and
interests, to make accurate, personalized recommendations for
mobile users. Information about location and time can be collected
easily because mobile devices communicate with the service
provider's base stations. Information about user interests, however,
cannot be collected automatically without the user's approval. User
interest is usually represented as a need. In this study, we classify needs into two
types according to prior research. This study investigates the
usefulness of data mining techniques for classifying user need type for
recommendation systems. We employ several data mining techniques
including artificial neural networks, decision trees, case-based
reasoning, and multivariate discriminant analysis. Experimental
results show that the CHAID algorithm outperforms the other models
in classifying user need type. We also perform the McNemar test to
examine the statistical significance of the differences among the
classification results; it confirms that CHAID performs better than
the other models at a statistically significant level.
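The pairwise comparison above can be sketched with a generic continuity-corrected McNemar statistic; the disagreement counts below are hypothetical illustrations, not figures from the paper:

```python
# McNemar's test for comparing two classifiers on the same test set.
# b = cases model A got right and model B got wrong; c = the reverse.
# The counts used here are illustrative, not taken from the paper.

def mcnemar_statistic(b, c):
    """Chi-square statistic with continuity correction."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

b, c = 25, 10  # hypothetical disagreement counts
stat = mcnemar_statistic(b, c)
# Compare against the chi-square critical value with 1 degree of
# freedom at the 5% level (3.841): reject H0 if stat exceeds it.
significant = stat > 3.841
print(stat, significant)
```

Only the off-diagonal disagreement counts matter, which is why the test suits paired comparisons of classifiers evaluated on identical cases.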
Abstract: Generally, administrative systems in an academic
environment are disjoint and support independent queries. The
objective in this work is to semantically connect these independent
systems to provide support to queries run on the integrated platform.
The proposed framework, by enriching educational material in the
legacy systems, provides a value-added semantics layer where
activities such as annotation, query and reasoning can be carried out
to support management requirements. We discuss the development of
this ontology framework with a case study of UAE University
program administration to show how semantic web technologies can
be used by administrators to develop student profiles for better
academic program management.
Abstract: In two studies we tested the hypothesis that the
appropriate linguistic formulation of a deontic rule (i.e., the
formulation which clarifies the monadic nature of deontic operators)
should produce more correct responses than the conditional
formulation in the Wason selection task. We tested this assumption by
presenting a prescription rule and a prohibition rule in conditional
vs. proper deontic formulation. We contrasted this hypothesis with
two other hypotheses derived from social contract theory and
relevance theory. According to the first, a deontic rule
expressed in terms of costs and benefits should elicit a cheater-detection
module that is sensitive to mental-state attributions and thus able to
discriminate intentional rule violations from accidental ones.
We tested this prediction by distinguishing the two types
of violations. According to relevance theory, performance in the
selection task should improve as cognitive effect increases and
cognitive effort decreases. We tested this prediction by focusing the
experimental instructions on the rule vs. the action covered by the
rule. In study 1, in which 480 undergraduates participated, we
tested these predictions through a 2 x 2 x 2 x 2 (type of the rule x
rule formulation x type of violation x experimental instructions)
between-subjects design. In study 2, carried out by means of a 2 x
2 (rule formulation x type of violation) between-subjects design,
we retested the rule-formulation hypothesis against the
cheater-detection hypothesis through a new version of the selection
task in which intentional vs. accidental rule violations were better
discriminated; 240 undergraduates participated in this study.
Results corroborate our hypothesis and challenge the competing
assumptions. They also show, however, that the conditional
formulation of deontic rules produces lower performance than is
reported in the literature.
Abstract: Knowledge discovery from text and ontology learning
are relatively new fields; however, they are widely applied in areas
such as Information Retrieval (IR) and its related domains. IR systems
based on Human Plausible Reasoning (HPR), for example, need a
knowledge base as their underlying system, which is currently built
by hand. In this paper we propose an architecture based on ontology
learning methods to automatically generate the needed HPR
knowledge base.
Abstract: To realize the vision of ubiquitous computing, it is
important to develop a context-aware infrastructure which can help
ubiquitous agents, services, and devices become aware of their
contexts because such computational entities need to adapt themselves
to changing situations. A context-aware infrastructure manages the
context model representing contextual information and provides
appropriate information. In this paper, we introduce the Context-Aware
Middleware for URC System (CAMUS), a context-aware
infrastructure for a network-based intelligent robot system, and discuss
the ontology-based context modeling and reasoning approach used in
that infrastructure.
Abstract: Trust is essential for further and wider acceptance of
contemporary e-services. It was first addressed almost thirty years
ago in the Trusted Computer System Evaluation Criteria standard of
the US DoD, but this and other approaches of that period were
actually addressing security. Roughly ten years ago, methodologies
followed that addressed the trust phenomenon at its core; these were
based on Bayesian statistics and its derivatives, while some were
based on game theory. However, trust is a
manifestation of judgment and reasoning processes; it has to be dealt
with accordingly and adequately supported in the cyber environment.
On the basis of results in the field of psychology and our own
findings, a methodology called qualitative algebra has been developed,
which addresses so-far-overlooked elements of the trust phenomenon.
It complements existing methodologies and provides a basis for a
practical technical solution that supports the management of trust
in contemporary computing environments. Such a solution is presented
at the end of this paper.
Abstract: This work presents a new algorithm for capacitor allocation in distribution feeders, based on a combination of fuzzy reasoning, dynamic programming (DP), and a genetic algorithm (GA). The problem formulation considers two distinct objectives: the total cost of power loss and the total cost of capacitors, including purchase and installation costs. The resulting formulation is a multi-objective, non-differentiable optimization problem. The proposed method uses fuzzy reasoning for siting capacitors in radial distribution feeders, DP for sizing them, and a GA for finding the optimal shape of the membership functions used in the fuzzy reasoning stage. The method has been implemented in a software package, and its effectiveness has been verified on a 9-bus radial distribution feeder. A comparison with similar methods from other studies shows the effectiveness of the proposed method for solving the optimal capacitor planning problem.
Abstract: CIM is the standard formalism for modeling management
information, developed by the Distributed Management Task
Force (DMTF) in the context of its WBEM proposal and designed to
provide a conceptual view of the managed environment. In this
paper, we propose the inclusion of formal knowledge representation
techniques, based on Description Logics (DLs) and the Web Ontology
Language (OWL), in CIM-based conceptual modeling, and then we
examine the benefits of such a decision. The proposal is specified
as a CIM metamodel level mapping to a highly expressive subset
of DLs capable of capturing all the semantics of the models. The
paper shows how the proposed mapping provides CIM diagrams with
precise semantics and can be used for automatic reasoning about the
management information models, as a design aid, by means of
new-generation CASE tools, thanks to the use of state-of-the-art automatic
reasoning systems that support the proposed logic and use algorithms
that are sound and complete with respect to the semantics. Such a
CASE tool framework has been developed by the authors and its
architecture is also introduced. The proposed formalization is not
only useful at design time, but also at run time through the use of
rational autonomous agents, in response to a need recently recognized
by the DMTF.
Abstract: The challenge for software development houses in
Bangladesh is to find a path that uses a minimal process rather than
gigantic practice and process areas of the CMMI or ISO type. Small
and medium-sized organizations in Bangladesh want to ensure basic
Software Process Improvement (SPI) in day-to-day operational
activities, in the expectation that these basic practices will help them
realize their improvement goals. This paper focuses on the key issues
in basic software practices for small and medium-sized software
organizations that cannot afford CMMI, ISO, ITIL, or similar
compliance certifications. It also suggests a basic software process
practice model for Bangladesh and maps our suggestions to
international best practice. In today's competitive IT world, small and
medium-sized software companies require collaboration and
strengthening to integrate into the global IT scenario. This research
investigated and analyzed project life cycles, current good practices,
effective approaches, and the realities and pain points of practitioners.
We carried out reasoning, root cause analysis, and comparative
analysis of various approaches, methods, and practices, and weighed
CMMI requirements against real-life conditions. We avoided
reinventing the wheel; our focus is on a minimal set of practices
that ensures satisfaction for both organizations and software
customers.
Abstract: Meta-reasoning is essential for multi-agent communication. In this paper we propose a framework for multi-agent communication in which agents employ meta-reasoning about agent and ontology locations in order to communicate semantic information with other agents on the Semantic Web, and also to reason with multiple distributed ontologies. We argue that multi-agent communication of Semantic Web information cannot be realized without reasoning about agent and ontology locations. This is because for an agent to communicate with another agent, it must know where and how to send a message to that agent; similarly, for an agent to reason with an external Semantic Web ontology, it must know where and how to access that ontology. The agent framework and its communication mechanism are formulated entirely in meta-logic.
Abstract: Validation of an automation system is an important issue. The goal is to check that the system under investigation, modeled by a Petri net, never enters undesired states. Usually, tools dedicated to Petri nets, such as DESIGN/CPN, are used for reachability analysis. The biggest problem with this approach is that generating the full occurrence graph of the system is often impossible because it is too large. In this paper, we show how computational methods such as temporal logic model checking and Groebner bases can be used to verify the correctness of the design of an automation system. We report our experimental results with two automation systems: an Automated Guided Vehicle (AGV) system and a traffic light system. Validation of these two systems took from 10 to 30 seconds on a PC, depending on the optimization parameters.
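The core safety question above (can the model ever reach an undesired state?) can be illustrated with a minimal explicit-state sketch; the two-light transition system and the `reaches_bad` helper are hypothetical stand-ins for the paper's Petri-net models and tools:

```python
# Explicit-state reachability check: verify that a system modeled as a
# transition graph never reaches an undesired ("bad") state.
# The tiny traffic-light-style model below is illustrative only.
from collections import deque

def reaches_bad(initial, transitions, bad):
    """BFS over the state graph; True if any bad state is reachable."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if state in bad:
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Two lights: states are (light1, light2); both green is undesired.
transitions = {
    ("green", "red"): [("yellow", "red")],
    ("yellow", "red"): [("red", "red")],
    ("red", "red"): [("red", "green")],
    ("red", "green"): [("red", "yellow")],
    ("red", "yellow"): [("green", "red")],
}
bad = {("green", "green")}
print(reaches_bad(("green", "red"), transitions, bad))
```

The size of the explicit state graph is exactly what makes full occurrence-graph generation infeasible for real systems, which is why the paper turns to symbolic methods such as model checking and Groebner bases.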
Abstract: We demonstrate through a sample application, E-banking,
that the Web Service Modelling Language Ontology component
can be used as a very powerful object-oriented database design
language with logic capabilities. Its conceptual syntax allows the
definition of class hierarchies, and its logic syntax allows the
definition of constraints in the database. Relations, which are
available for modelling relationships among three or more concepts,
can be connected to logical expressions, allowing the implicit
specification of database content. Using a reasoning tool, logic
queries can also be made against the database in simulation mode.
Abstract: This article addresses feature selection for breast
cancer diagnosis. The proposed process is a wrapper approach
based on a Genetic Algorithm (GA) and case-based reasoning (CBR).
The GA searches the problem space for possible subsets
of features, and CBR is employed to estimate the quality
of each subset. Experimental results show that the
proposed model is comparable to other models on the Wisconsin
Diagnostic Breast Cancer (WDBC) dataset.
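A minimal sketch of such a GA-driven wrapper follows, assuming 1-NN retrieval as a stand-in for the CBR evaluation step and a tiny synthetic dataset; all data, GA settings, and helper names are hypothetical, not from the paper:

```python
# Sketch of a GA + nearest-neighbour wrapper for feature selection.
# 1-NN retrieval stands in for the paper's case-based reasoning step;
# the synthetic dataset and GA settings below are illustrative only.
import random

# (features, label): only feature 0 actually separates the classes.
DATA = [([0.1, 5.0, 0.2], 0), ([0.2, 4.8, 0.9], 0),
        ([0.9, 5.1, 0.1], 1), ([0.8, 5.2, 0.8], 1),
        ([0.15, 4.9, 0.5], 0), ([0.85, 5.0, 0.4], 1)]

def loo_accuracy(mask):
    """Leave-one-out 1-NN accuracy using only the selected features."""
    feats = [i for i, on in enumerate(mask) if on]
    if not feats:
        return 0.0
    correct = 0
    for i, (x, y) in enumerate(DATA):
        best = min((j for j in range(len(DATA)) if j != i),
                   key=lambda j: sum((x[f] - DATA[j][0][f]) ** 2
                                     for f in feats))
        correct += DATA[best][1] == y
    return correct / len(DATA)

def ga_select(n_feats=3, pop_size=8, generations=20, seed=0):
    """GA over bit-mask chromosomes; fitness is wrapper accuracy."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_feats)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loo_accuracy, reverse=True)
        survivors = pop[:pop_size // 2]      # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(n_feats)] ^= 1  # one-bit mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=loo_accuracy)

best = ga_select()
print(best, loo_accuracy(best))
```

Because the best chromosomes survive each generation unchanged, the wrapper's best fitness is non-decreasing, mirroring how the GA in the abstract converges toward informative feature subsets.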
Abstract: Recent theorizations on the cognitive process of moral
judgment have focused on the role of intuitions and emotions, marking
a departure from previous emphasis on conscious, step-by-step
reasoning. My study investigated how being in a disgusted mood state
affects moral judgment.
Participants were induced to enter a disgusted mood state through
listening to disgusting sounds and reading disgusting descriptions.
Results show that, compared to controls who had not been induced
to feel disgust, they are more likely to endorse actions that are
emotionally aversive but maximize utilitarian returns.
The result is analyzed using the 'emotion-as-information' approach
to decision making. The result is consistent with the view that
emotions play an important role in determining moral judgment.
Abstract: There have been various approaches to computing the
analytic instantaneous frequency, with different underlying reasoning,
practical applicability, and restrictions. This paper presents an
instantaneous frequency computation approach based on adaptive
Fourier decomposition and α-counting. The adaptive Fourier
decomposition is a recently proposed signal decomposition approach;
the instantaneous frequency can be computed through the so-called
mono-components it produces. Due to its fast energy convergence, the
adaptive Fourier decomposition discards the highest-frequency content
of the signal, which in most situations represents noise. A new
instantaneous frequency definition for a large class of so-called
simple waves is also proposed in this paper. Simple waves cover a
wide range of signals for which the concept of instantaneous
frequency has a clear physical sense. The α-counting instantaneous
frequency can be used to compute the highest frequency of a signal.
Combining the two approaches, one can obtain the IF of the whole
signal. An experiment demonstrates the computation procedure with
promising results.
Abstract: In this contribution, a newly developed e-learning environment is presented which incorporates intelligent agents and computational intelligence techniques. The environment consists of three parts: the E-learning Platform Front-End, the Student Questioner Reasoning, and the Student Model Agent. These parts are distributed across geographically dispersed computer servers, with the main focus on the design and development of these subsystems through the use of new and emerging technologies. The parts are interconnected in an interoperable way, using web services to integrate the subsystems, in order to enhance the user modelling procedure and achieve the goals of the learning process.
Abstract: A hybrid knowledge model is suggested as an underlying
framework for product development management. It can support
hybrid features such as ontologies and rules. Effective collaboration
in a product development environment depends on sharing and
reasoning over product information as well as engineering knowledge.
Many studies have considered product information and engineering
knowledge; however, most previous research has focused either on
building an ontology of product information or on rule-based systems
of engineering knowledge. This paper shows that an F-logic based
knowledge model can support both kinds of desirable features in a
hybrid way.
Abstract: Diabetes is one of the most prevalent diseases
worldwide, with an increasing number of complications, of which
retinopathy is among the most common. This paper describes how data
mining and case-based reasoning were integrated to predict
retinopathy prevalence among diabetes patients in Malaysia. The
knowledge base required was built after literature reviews and
interviews with medical experts. Data from a total of 140 diabetes
patients were used to train the prediction system. A voting mechanism selects
the best prediction result from the two techniques used. Both data
mining and case-based reasoning have been shown to work for
retinopathy prediction, with an improved accuracy of 85%.
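The voting mechanism described above might be sketched as follows; the rule thresholds, features, confidence formulas, and case base are hypothetical illustrations, not the paper's actual system:

```python
# Sketch of a voting mechanism that picks between the predictions of
# two techniques: a rule-based stand-in for the data-mining model and
# a nearest-neighbour stand-in for CBR. All thresholds, features, and
# cases below are hypothetical, not taken from the paper.

def rule_based_predict(patient):
    """Toy data-mining rule: long diabetes duration -> retinopathy."""
    label = 1 if patient["duration_years"] >= 10 else 0
    confidence = 0.9 if patient["duration_years"] >= 15 else 0.6
    return label, confidence

def cbr_predict(patient, case_base):
    """1-NN retrieval over past cases; confidence from distance."""
    def dist(c):
        return (abs(c["duration_years"] - patient["duration_years"])
                + abs(c["hba1c"] - patient["hba1c"]))
    nearest = min(case_base, key=dist)
    confidence = 1.0 / (1.0 + dist(nearest))
    return nearest["retinopathy"], confidence

def vote(patient, case_base):
    """Return the prediction of whichever technique is more confident."""
    dm_label, dm_conf = rule_based_predict(patient)
    cbr_label, cbr_conf = cbr_predict(patient, case_base)
    return dm_label if dm_conf >= cbr_conf else cbr_label

case_base = [
    {"duration_years": 12, "hba1c": 8.5, "retinopathy": 1},
    {"duration_years": 3, "hba1c": 6.2, "retinopathy": 0},
]
patient = {"duration_years": 11, "hba1c": 8.0}
print(vote(patient, case_base))
```

Confidence-based selection like this lets each technique dominate on the cases it handles best, which is one plausible way the combined system could exceed the accuracy of either technique alone.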
Abstract: Influence diagrams (IDs) are one of the most commonly used graphical decision models for reasoning under uncertainty. The quantification of IDs, which consists in defining conditional probabilities for chance nodes and utility functions for value nodes, is not always obvious. In fact, decision makers cannot always provide exact numerical values, and in some cases it is easier for them to specify qualitative preference orders. This work proposes an adaptation of standard IDs to the qualitative framework based on possibility theory.
Abstract: The approaches to making an agent generate intelligent actions in the AI field can be roughly divided into two: classical planning and situated action systems. It is well known that each has its own strengths and weaknesses, as well as its own application fields. In particular, most situated action systems do not directly deal with logical problems. This paper first briefly describes a novel action generator that situatedly extracts a set of actions likely to help achieve the goal in the current situation within a relaxed logical space. After performing the action set, the agent must recognize the new situation to decide the next action set. However, since the extracted actions are only approximations of the actions that achieve the goal, the agent can be caught in a deadlock. This paper proposes a newly developed hybrid architecture that solves this problem by combining the novel situated action generator with a conventional planner. Empirical results in several planning domains show that the quality of the resultant path to the goal is mostly acceptable while retaining fast response times, and suggest a correlation between the structure of problems and the organization of each action-generating system.