Abstract: As computer network technology becomes increasingly complex, greater demands must be placed on the validity of developing standards and of the resulting technology. Communication networks are based on large numbers of protocols, whose validity has to be proved either individually or in an integrated fashion. One strategy for achieving this is to apply the growing field of formal methods. Formal methods research defines systems in higher-order logic so that automated reasoning can be applied for verification. In this research we represent and implement a previously published multicast protocol in the Prolog language so that certain properties of the protocol can be verified. We show that this approach uncovered some minor faults in the protocol, which were then repaired. Describing the protocol as facts and rules has a further benefit: it yields processable knowledge. This knowledge can be transferred between systems as an ontology in KQML format. Since a Prolog program can extend its knowledge base at any time, the method can also be used to teach an intelligent network.
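The abstract does not show the actual Prolog encoding, but the idea of writing a protocol as facts and rules and then verifying a property can be sketched as follows. This is a minimal illustration, not the paper's code: the state names, messages, and the reachability property are all hypothetical, and Python stands in for Prolog.

```python
# Hypothetical sketch: a multicast-like protocol encoded as facts
# (transitions) plus a rule-style check that verifies a reachability
# property, in the spirit of a Prolog knowledge base.
from collections import deque

# Facts: transitions as ((state, message) -> next_state) entries.
TRANSITIONS = {
    ("idle", "join"): "joining",
    ("joining", "ack"): "member",
    ("member", "leave"): "idle",
}

def reachable(start, goal):
    """Rule: goal is reachable from start via zero or more transitions."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return True
        for (src, _msg), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return False
```

A property such as "a joining host can always become a member" then reduces to a query like `reachable("idle", "member")`, mirroring how a Prolog goal would be posed against the fact base.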
Abstract: Machine-understandable data, when strongly interlinked, constitutes the basis for the Semantic Web. Annotating web documents is one of the major techniques for creating metadata on the Web. Annotating websites defines the data they contain in a form suitable for interpretation by machines. In this paper, we present a new approach to annotating websites and documents that raises the abstraction level of the annotation process to a conceptual level. By this means, we hope to solve some of the problems of current annotation solutions.
Abstract: We introduce a logic-based framework for updating databases under constraints. In our framework, the constraints are represented as an instantiated extended logic program. When an update is performed, database consistency may be violated. We provide an approach to maintaining database consistency and study the conditions under which the maintenance process is deterministic. We show that the complexity of the computations and decision problems arising in our framework is in each case polynomial.
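The maintenance idea described above can be illustrated with a toy sketch. Everything here is hypothetical (the fact encoding, the constraint, and the repair policy of simply retracting conflicting facts); the paper's extended-logic-program machinery is far richer.

```python
# Toy sketch of database updating under constraints: each constraint is
# a function that, given the fact set, returns the facts to retract.
# After an update, a repair loop retracts facts until no constraint is
# violated, mimicking consistency maintenance.
def update(db, add=(), remove=(), constraints=()):
    db = (set(db) | set(add)) - set(remove)
    changed = True
    while changed:                      # repeat until a fixpoint
        changed = False
        for violates in constraints:
            bad = violates(db)          # facts to retract, or empty set
            if bad:
                db -= bad
                changed = True
    return db

# Example constraint: a person cannot be both employed and retired;
# this repair policy prefers dropping the "retired" fact.
def no_conflict(db):
    return {("retired", p) for (tag, p) in db
            if tag == "employed" and ("retired", p) in db}
```

With a fixed retraction preference like this one, the repair is deterministic, which hints at the kind of condition the paper studies.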
Abstract: We report in this paper the model adopted by our continuous speech recognition system for the Arabic language, SySRA, and the results obtained so far. The system uses the Arabdic-10 database, a word corpus for Arabic that was segmented manually. Phonetic decoding is carried out by an expert system whose knowledge base is expressed as production rules. This expert system transforms a vocal signal into a phonetic lattice. The higher level of the system recognizes the lattice thus obtained and renders it as written sentences (orthographic form). This level initially contains the lexical analyzer, which is none other than the recognition module. We tested this analyzer on a set of spectrograms obtained by dictating a score of sentences in Arabic. The recognition rate for these sentences is about 70%, which is, to our knowledge, the best result for recognition of the Arabic language. The test set consists of twenty sentences from four speakers who did not take part in the training.
Abstract: Knowledge modelling, a main activity in the development of knowledge-based systems, has no set standards and is mostly done in an ad hoc way. There is a lack of support for the transition from the abstract level to implementation. In this paper, a methodology for developing the knowledge model, inspired by both software and knowledge engineering, is proposed. The use of UML, the de facto modelling standard in the software engineering arena, is explored for knowledge modelling. The proposed methodology is used to develop the knowledge model of a knowledge-based system for recommending suitable hotels to tourists visiting Mauritius.
Abstract: In this paper we propose an NLP-based method for ontology population from texts and apply it to semi-automatically instantiate a generic knowledge base (generic domain ontology) in the risk management domain. The approach is semi-automatic and relies on a domain expert's intervention for validation. It is based on a set of instance recognition rules built on syntactic structures, and on the predicative power of verbs in the instantiation process. It is not domain dependent, since it relies heavily on linguistic knowledge.
A description is given of an experiment performed on part of the ontology of the PRIMA project (supported by the European Community). A first validation of the method is carried out by populating this ontology with Chemical Fact Sheets from the Environmental Protection Agency. The results of this experiment conclude the paper and support the hypothesis that relying on the predicative power of verbs in the instantiation process improves performance.
Abstract: In the last few years, the Semantic Web has gained scientific acceptance as a means of identifying relationships in knowledge bases, widely known as semantic associations. Querying complex relationships between entities is a strong requirement for many applications in analytical domains; in bioinformatics, for example, it is critical to extract the interactions between proteins. Currently, such queries typically return the paths between connected entities in the data graph. However, they do not always satisfy the user's need for the best association, or a limited set of best associations, because they consider all existing paths and ignore path evaluation. In this paper, we present an approach to supporting association discovery queries. Our proposal includes (i) a query language, PmSPRQL, which provides multiparadigm query expressions for association extraction, and (ii) quantification measures that ease the process of ranking associations. The originality of our proposal is demonstrated by a performance evaluation of the approach on real-world datasets.
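The contrast the abstract draws (returning all paths versus ranking them) can be shown with a small sketch. PmSPRQL and its measures are not specified here, so this is illustrative only: the graph, the entities, and the measure (plain path length, shorter association first) are invented stand-ins.

```python
# Illustrative sketch of ranking semantic associations: enumerate all
# simple paths between two entities in a small RDF-like graph, then
# rank them by a quantification measure (here, path length) instead of
# returning an unordered path set.
def all_paths(graph, src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        yield path
    for nxt in graph.get(src, []):
        if nxt not in path:              # keep paths simple (no cycles)
            yield from all_paths(graph, nxt, dst, path)

def ranked(graph, src, dst):
    # Shorter associations first: a crude stand-in for a real measure.
    return sorted(all_paths(graph, src, dst), key=len)
```

A real measure would weight edge semantics rather than mere length, but the pipeline shape (enumerate, score, rank, truncate to the top-k) is the same.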
Abstract: The spiral development model has been used successfully in many commercial systems and in a good number of defense systems. This is due to its cost-effective, incremental commitment of funds (by analogy between the spiral model and stud poker) and to the fact that it can be used to develop hardware or to integrate software, hardware, and systems. To support adaptive, semantic collaboration between domain experts and knowledge engineers, a new knowledge engineering process, called Spiral_OWL, is proposed. The model is based on the idea of iterative refinement, annotation and structuring of the knowledge base, and is derived from the spiral model and knowledge engineering methodology. A central paradigm of the Spiral_OWL model is its concentration on risk-driven determination of the knowledge engineering process. The collaboration aspect comes into play during the knowledge acquisition and knowledge validation phases. The design rationale for the Spiral_OWL model is an easy-to-implement, well-organized, iterative development cycle that grows as an expanding spiral.
Abstract: Intensive changes in the environment and strong market competition have raised the management of information and knowledge to the strategic level of companies. In a knowledge-based economy, only those organizations that possess up-to-date, specialized knowledge and are able to exploit and develop it are capable of surviving. Companies have to know what knowledge they have by taking a survey of their organizational knowledge, and they have to fix current and additional knowledge in the organizational memory. The question is how to identify, acquire, fix and use knowledge effectively. The paper shows that, over and above the information technology tools supporting the acquisition, storage and use of information, organizational learning, the knowledge that arises from it, and the fixing and storage of that knowledge in the memory of a company play an important role in the intelligence of organizations and the competitiveness of a company.
Abstract: This article presents an integrated method for detecting steganographic content embedded by new, unknown programs. The method is based on data mining and aggregated hypothesis testing. The article covers the theoretical basics used to build the proposed detection system and describes the improvements proposed to the basic system idea. The main results of the experiments and the implementation details are then collected and described. Finally, example test results are presented.
Abstract: In this paper, we present a novel technique called the Self-Learning Expert System (SLES). Unlike a conventional expert system, where an expert is needed to impart the experience and knowledge that create the knowledge base, this technique tries to acquire the experience and knowledge automatically. To demonstrate the technique at work, a simulation of a mobile robot navigating through an environment with obstacles was built in Visual Basic. The mobile robot moves through the area without colliding with any obstacle and saves the path it took. If the mobile robot later has to traverse a similar environment, it applies this experience to move through more quickly without having to check for collisions.
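The search-once, replay-later behaviour described in the abstract can be sketched compactly. The paper's simulation was in Visual Basic and its details are not given, so the grid size, obstacle encoding, and breadth-first search below are all assumptions made for illustration.

```python
# Hypothetical sketch of the SLES idea: find a collision-free path once,
# store it keyed by the environment layout, and replay it when a similar
# environment recurs instead of searching (and collision-checking) again.
from collections import deque

def find_path(obstacles, start, goal, size=5):
    """Breadth-first search on a size x size grid, avoiding obstacles."""
    seen, frontier = {start}, deque([(start, [start])])
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = cell
            if (0 <= nx < size and 0 <= ny < size
                    and cell not in obstacles and cell not in seen):
                seen.add(cell)
                frontier.append((cell, path + [cell]))
    return None

experience = {}                          # learned paths, keyed by layout

def navigate(obstacles, start, goal):
    key = (frozenset(obstacles), start, goal)
    if key not in experience:            # only search unseen layouts
        experience[key] = find_path(obstacles, start, goal)
    return experience[key]
```

A fuller model would match *similar* (not identical) layouts, which is where the interesting generalization in the paper's technique lies.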
Abstract: The concept of e-Learning is now emerging in Sub-Saharan African countries such as Tanzania. Due to the economic constraints and other social and cultural factors faced by these countries, the use of Information and Communication Technology (ICT) is increasing at a very slow pace. The threat of the digital divide propelled the Government of Tanzania to put in place the national ICT Policy in 2003, which defines the direction of all ICT activities nationally. Among its main focus areas is the use of ICT in education, since the development of any country requires the creation of a knowledge-based society. This paper discusses the initiatives made so far to introduce ICT tools to some secondary schools, using open-source software for e-content development to facilitate a self-learning environment.
Abstract: One of humankind's most ancient concerns is the formalization of knowledge, i.e. what a concept is. Concept analysis, a branch of analytical philosophy, aims to decompose the elements, relations and meanings of a concept. This paper presents a method for performing a concept analysis that yields a knowledge representation suitable for processing by a computer system using either object-oriented or ontology technologies. The notion of security is usually understood as a set of different concepts related to “some kind of protection”. Our method concludes that a more general framework for the concept, although it is dynamic, is possible, and that any particular definition (instantiation) depends on the elements used in its construction rather than on the concept itself.
Abstract: The paper reflects on how to select proper indicators for assessing the progress of regional contexts towards a knowledge-based society. Taking the first research methodologies elaborated at an international level (World Bank, OECD, etc.) as a reference point, this work identifies a set of knowledge economy indicators suitable for adequately understanding in what manner and to what extent territorial development dynamics are correlated with the knowledge base of the local society considered. After a critical survey of the variables used in other approaches adopted by international or national organizations, the paper elaborates a framework of variables, named Regional Knowledge Economy Indicators (ReKEI), necessary for describing the knowledge-based relations of subnational socio-economic contexts. The framework serves a double purpose: an analytical one, highlighting regional differences in the governance of knowledge-based processes, and an operative one, providing reference parameters that contribute to increasing the effectiveness of economic policies aimed at enlarging the knowledge bases of local societies.
Abstract: Thailand's health system is challenged by the rising number of patients and a decreasing ratio of medical practitioners to patients, especially in rural areas. This may tempt inexperienced GPs to rush through the anamnesis, with the risk of incorrect diagnosis. Patients have to travel far to the hospital and wait a long time to present their case, and many try to cure themselves with traditional Thai medicine. Many countries are making use of the Internet for gathering, distributing and storing medical information. Telemedicine applications are a relatively new field of study in Thailand, where the ICT infrastructure has hampered widespread use of the Internet for medical information. With recent improvements, health and technology professionals can devise novel applications and systems to advance telemedicine for the benefit of the people. Here we explore the use of telemedicine for people with health problems in rural areas of Thailand and present a Telemedicine Diagnosis System for Rural Thailand (TEDIST) for diagnosing certain conditions, which people with Internet access can use to establish contact with community health centers, e.g. by mobile phone. The system uses a Web-based input method for individual patients' symptoms, which are passed to an expert system for the analysis of conditions and likely diseases. The analysis harnesses a knowledge base and a backward-chaining component to find out which health professionals should be presented with the case. Doctors have the opportunity to exchange emails or chat with the patients they are responsible for, or with other specialists. Patients' data are then stored in a Personal Health Record.
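The backward-chaining step mentioned above works from a goal back to known symptoms. TEDIST's actual rule base is not published in the abstract, so the rules and symptom names below are invented; the sketch only shows the chaining mechanism itself.

```python
# Minimal backward-chaining sketch (rules and symptoms are hypothetical,
# not TEDIST's knowledge base): to prove a goal, find rules whose head
# matches it and recursively prove their premises, bottoming out in the
# patient's reported symptoms.
RULES = [
    ("skin_condition", ["rash", "itching"]),
    ("see_dermatologist", ["skin_condition"]),
]

def prove(goal, facts):
    if goal in facts:                    # a reported symptom proves itself
        return True
    return any(head == goal and all(prove(p, facts) for p in premises)
               for head, premises in RULES)
```

Here `prove("see_dermatologist", symptoms)` succeeds only when the symptom set supports the intermediate conclusion, which is how a goal like "which professional should see this case" is resolved.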
Abstract: The knowledge base for welding defect recognition is essentially incomplete. As a result, the recognition results do not reflect the actual situation, which in turn affects the classification of welding quality. This paper studies a rough-set-based method to reduce this influence and improve classification accuracy. First, a rough set model of intelligent welding quality classification is built, and both condition and decision attributes are specified. Then, groups of representative multiple compound defects are chosen from the defect library and classified correctly to form the decision table. Finally, the redundant information in the decision table is reduced and the optimal decision rules are derived. With this method, we are able to reclassify misclassified defects to the right quality level. Compared with ordinary methods, this method has higher accuracy and better robustness.
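The decision-table machinery the abstract relies on can be sketched in a few lines. The defect attributes and quality levels below are invented for illustration; the paper's actual condition and decision attributes are not given in the abstract.

```python
# Toy rough-set sketch: objects (defects) are grouped into
# indiscernibility classes on the condition attributes; the decision
# table is consistent when every class maps to a single quality level,
# which is the precondition for extracting clean decision rules.
def indiscernibility(table, attrs):
    classes = {}
    for obj, row in table.items():
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, set()).add(obj)
    return list(classes.values())

def consistent(table, attrs, decision):
    return all(len({table[o][decision] for o in cls}) == 1
               for cls in indiscernibility(table, attrs))
```

Attribute reduction then amounts to finding a smaller attribute subset under which `consistent` still holds: the redundant attributes are exactly those that can be dropped without merging classes that disagree on the decision.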
Abstract: Knowledge discovery from text and ontology learning are relatively new fields; however, their use is spreading to many areas, such as Information Retrieval (IR) and its related domains. IR systems based on Human Plausible Reasoning (HPR), for example, need an underlying knowledge base, which is currently built by hand. In this paper we propose an architecture based on ontology learning methods to generate the needed HPR knowledge base automatically.
Abstract: This paper proposes a declarative language for knowledge representation, Ibn Rochd, and its exploitation environment, DeGSE. The DeGSE system was designed and developed to facilitate writing Ibn Rochd applications. The system was tested on several knowledge bases of increasing complexity, culminating in a system for recognizing a plant or a tree, and in advisors for purchasing a car, for pedagogical and academic guidance, and for bank savings and credit. Finally, the limits of the language and research perspectives are stated.
Abstract: This paper presents methodologies for developing an intelligent CAD system that assists in the analysis and design of reconfigurable special machines. It describes a procedure for determining the feasibility of using these machines for a given part and presents a model for developing such an intelligent CAD system. The system analyzes geometrical and topological information about the given part to determine, from a technical point of view, whether the part can be produced by reconfigurable special machines. The feasibility of the process from an economic point of view is also analyzed. The system then determines the proper positioning of the part, considering the details of the machining features and operations needed. This involves determining the operation types, cutting tools and number of working stations required. Upon completion of this stage, the overall layout of the machine and the machining equipment required are determined.
Abstract: Machine-understandable data, when strongly interlinked, constitutes the basis for the Semantic Web. Annotating web documents is one of the major techniques for creating metadata on the Web. Annotating websites defines the data they contain in a form suitable for interpretation by machines. In this paper, we present an approach that improves on previous work [1] for annotating the texts of websites based on a knowledge base.