Abstract: An ontology is a data model that represents a set of
concepts in a given field and the relationships among those concepts.
As the emphasis on achieving a semantic web continues to escalate,
ontologies for all types of domains will increasingly be developed.
These ontologies may become large and complex, and as their size
and complexity grows, so will the need for multi-user interfaces for
ontology curation. Herein a functionally comprehensive, generic
approach to maintaining an ontology as a relational database is
presented. Unlike many other ontology editors that utilize a database,
this approach is entirely domain-generic and fully supports Web-based,
collaborative editing, including the designation of different
levels of authorization for users.
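The core idea of the first abstract, holding an ontology's concepts and relationships in relational tables, can be sketched in a few lines of SQLite. This is a minimal illustration only; the table and column names are assumptions made for the sketch, not the schema used in the paper.

```python
import sqlite3

# Two illustrative tables: one for concepts, one for typed
# relationships between them (names are assumptions, not the
# paper's actual schema).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE concept (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE
);
CREATE TABLE relationship (
    subject_id INTEGER REFERENCES concept(id),
    predicate  TEXT NOT NULL,          -- e.g. 'is_a', 'part_of'
    object_id  INTEGER REFERENCES concept(id)
);
""")
cur.executemany("INSERT INTO concept (id, name) VALUES (?, ?)",
                [(1, "Enzyme"), (2, "Protein")])
cur.execute("INSERT INTO relationship VALUES (1, 'is_a', 2)")

# Query the ontology: what is 'Enzyme' a kind of?
cur.execute("""
    SELECT o.name FROM relationship r
    JOIN concept s ON r.subject_id = s.id
    JOIN concept o ON r.object_id = o.id
    WHERE s.name = 'Enzyme' AND r.predicate = 'is_a'
""")
parent = cur.fetchone()[0]
print(parent)
```

Multi-user curation with authorization levels would then reduce to ordinary database access control over these tables.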
Abstract: E-services have significantly changed the way of
doing business in recent years. We can, however, observe poor use of
these services. There is a large gap between supply and actual e-services
usage. This is why we started a project to provide an
environment that will encourage the use of e-services. We believe
that merely providing e-services does not automatically mean consumers
will use them. This paper shows the origins of our project and its
current position. We discuss our decision to use semantic web
technologies and their potential to improve e-services usage. We also
present the current knowledge base and its real-world classification, and
discuss further work to be done in the project. The current
state of the project is promising.
Abstract: In the world of Peer-to-Peer (P2P) networking
different protocols have been developed to make the resource sharing
or information retrieval more efficient. The SemPeer protocol is a
new layer on Gnutella that transforms the connections of the nodes
based on semantic information to make information retrieval more
efficient. However, this transformation causes high clustering in the
network, which decreases the number of nodes reached and therefore the
probability of finding a document. In this paper we
describe a mathematical model for the Gnutella and SemPeer
protocols that captures clustering-related issues, followed by a
proposition to modify the SemPeer protocol to achieve moderate
clustering. This modification is a sort of link management for the
individual nodes that allows the SemPeer protocol to be more
efficient, because the probability of a successful query in the P2P
network is appreciably increased. To validate the models, we
ran a series of simulations that support our results.
Abstract: One major difficulty that faces developers of
concurrent and distributed software is the analysis of concurrency-based
faults such as deadlocks. Petri nets are used extensively in the
verification of correctness of concurrent programs. ECATNets [2] are
a category of algebraic Petri nets based on a sound combination of
algebraic abstract types and high-level Petri nets. ECATNets have
'sound' and 'complete' semantics because of their integration in
rewriting logic [12] and its programming language Maude [13].
Rewriting logic is considered one of the most powerful logics for the
description, verification, and programming of concurrent systems.
We proposed in [4] a method for translating Ada-95 tasking
programs to ECATNets formalism (Ada-ECATNet). In this paper,
we show that ECATNets formalism provides a more compact
translation for Ada programs compared to the other approaches based
on simple Petri nets or Colored Petri nets (CPNs). Such a translation
reduces not only the size of the program but also the number
of program states. We also show how this compact Ada-ECATNet
may be reduced further by applying reduction rules to it. This double
reduction of the Ada-ECATNet considerably reduces
the memory space and run time of the corresponding Maude program.
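The Petri-net view of tasking behaviour that underlies this line of work can be illustrated with a toy place/transition net and its firing rule. This is a plain low-level net sketched for illustration, not an ECATNet (which adds algebraic terms and rewriting-logic semantics); the place and transition names are invented.

```python
# A marking assigns a token count to each place. Here a single
# task competes for a lock before entering its critical section.
marking = {"task_ready": 1, "lock_free": 1, "in_cs": 0}

# A transition is a pair (inputs, outputs) of place -> token-count maps.
acquire = ({"task_ready": 1, "lock_free": 1}, {"in_cs": 1})
release = ({"in_cs": 1}, {"task_ready": 1, "lock_free": 1})

def enabled(marking, transition):
    """A transition is enabled if every input place holds enough tokens."""
    inputs, _ = transition
    return all(marking[p] >= n for p, n in inputs.items())

def fire(marking, transition):
    """Firing consumes input tokens and produces output tokens."""
    inputs, outputs = transition
    m = dict(marking)
    for p, n in inputs.items():
        m[p] -= n
    for p, n in outputs.items():
        m[p] += n
    return m

m1 = fire(marking, acquire)  # the task enters its critical section
m2 = fire(m1, release)       # ...and leaves, restoring the initial marking
```

State-space analysis for faults such as deadlock then amounts to exploring all markings reachable by firing enabled transitions; compact translations matter because this space grows quickly.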
Abstract: The paper presents the method developed to assess
rating points of objects with qualitative indexes. The novelty of the
method lies in the use of linguistic scales that allow the values of the
indexes to be formalized with the help of fuzzy sets. As
a result it is possible to operate correctly with dissimilar indexes on
the unified basis and to get stable final results. The obtained rating
points are used in decision making based on fuzzy expert opinions.
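A linguistic scale of the kind described can be sketched as a mapping from qualitative labels to triangular fuzzy numbers on a unified [0, 1] basis. The labels, triangle parameters, and the centroid aggregation below are illustrative assumptions, not the authors' actual scales.

```python
# Each label maps to a triangular fuzzy number (a, b, c):
# membership rises from a to the peak b, then falls to c.
scale = {
    "poor":      (0.0, 0.0, 0.4),
    "average":   (0.2, 0.5, 0.8),
    "excellent": (0.6, 1.0, 1.0),
}

def membership(x, tri):
    """Degree to which crisp value x belongs to the triangular set."""
    a, b, c = tri
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def centroid(tri):
    """Defuzzify a triangular number to a single crisp value."""
    a, b, c = tri
    return (a + b + c) / 3

# Dissimilar qualitative indexes, rated on the same unified basis,
# aggregated here by a simple mean (an assumption for the sketch).
ratings = ["excellent", "average"]
score = sum(centroid(scale[r]) for r in ratings) / len(ratings)
```

Because every index is first mapped onto the same [0, 1] scale, indexes of different kinds can be combined into one rating point without unit mismatches.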
Abstract: It has been recognized that, due to the autonomy and
heterogeneity of Web services and the Web itself, new approaches
should be developed to describe and advertise Web services. The
most notable approaches rely on the description of Web services
using semantics. This new breed of Web services, termed semantic
Web services, will enable the automatic annotation, advertisement,
discovery, selection, composition, and execution of inter-organizational
business logic, making the Internet a common
global platform where organizations and individuals communicate
with each other to carry out various commercial activities and to
provide value-added services. This paper deals with two of the
hottest R&D and technology areas currently associated with the Web
– Web services and the semantic Web. It describes how semantic
Web services extend Web services as the semantic Web improves the
current Web, and presents three different conceptual approaches to
deploying semantic Web services, namely, WSDL-S, OWL-S, and
WSMO.
Abstract: This paper is concerned with the production of an Arabic word semantic similarity benchmark dataset. It is the first of its kind for Arabic, developed specifically to assess the accuracy of word semantic similarity measurements. Semantic similarity is an essential component of numerous applications in fields such as natural language processing, artificial intelligence, linguistics, and psychology. Most of the reported work has been done for English. To the best of our knowledge, there is no word similarity measure developed specifically for Arabic. In this paper, an Arabic benchmark dataset of 70 word pairs is presented. New methods and the best available techniques have been used in this study to produce the Arabic dataset, including selecting and creating materials, collecting human ratings from a representative sample of participants, and calculating the overall ratings. This dataset will make a substantial contribution to future work in the field of Arabic word semantic similarity (WSS), and it will hopefully be considered a reference basis from which to evaluate and compare different methodologies in the field.
Abstract: Imitation learning is considered an effective way of teaching humanoid robots, and action recognition is the key step in imitation learning. In this paper an online algorithm to recognize
parametric actions with object context is presented. Objects are key instruments in understanding an action when there is uncertainty.
Ambiguities arising in similar actions can be resolved with object context. We classify actions according to the changes they make to
the object space. Actions that produce the same state change in the object movement space are classified as belonging to the same class. This allows us to define several classes of actions where members of
each class are connected with a semantic interpretation.
Abstract: This scientific article discusses the modern usage of adopted
words and their vocabularies, the fields in which Turkisms are used, and
the phonetic, grammatical, and lexical-semantic assimilation of the
typological-morphological structures entering different Hindi languages,
from a comparative-typological perspective. The lexical vocabulary is
rich and its area of prevalence is wide; the study examines the process
by which Turkic elements entered major languages, judged by the numbers
of their speakers. The research work is based on Hindi vocabulary.
Abstract: The growing interest in national heritage
preservation has led to intensive efforts on the digital documentation of
cultural heritage knowledge. Encapsulated within this effort is the
focus on ontology development that will help facilitate the
organization and retrieval of the knowledge. Ontologies surrounding
cultural heritage domain are related to archives, museum and library
information such as archaeology, artifacts, paintings, etc. The growth
in the number and size of ontologies indicates the wide acceptance of
their semantic enrichment in many emerging applications. Nowadays,
many heritage information systems are available for access.
Among them are community-based e-museums designed to support
digital cultural heritage preservation. This work extends previous
effort of developing the Traditional Malay Textile (TMT) Knowledge
Model where the model is designed with the intention of auxiliary
mapping with CIDOC CRM. Due to its internal constraints, the
model needs to be transformed in advance. This paper addresses the
issue by reviewing the previous harmonization works with CIDOC
CRM as exemplars in refining the facets in the model particularly
involving the TMT-Artifact class. The result is an extensible model
which could lead to a common view for automated mapping with
CIDOC CRM. Hence, it promotes the integration and exchange of
textile information, especially batik-related information, between
communities in e-museum applications.
Abstract: Semantic query optimization consists in restricting the
search space in order to reduce the set of objects of interest for a
query. This paper presents an indexing method based on UB-trees
and a static analysis of the constraints associated with the views of the
database and with any constraints expressed on attributes. The result of
the static analysis is a partitioning of the object space into disjoint
blocks. Through Space Filling Curve (SFC) techniques, each
fragment (block) of the partition is assigned a unique identifier,
enabling the efficient indexing of fragments by UB-trees. The search
space corresponding to a range query is restricted to a subset of the
blocks of the partition. This approach has been developed in the
context of a KB-DBMS but it can be applied to any relational
system.
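The space-filling-curve step above can be illustrated with a Morton (Z-order) key, a common SFC choice for UB-tree indexing: interleaving the bits of a block's coordinates yields the single integer identifier that a one-dimensional index such as a UB-tree can store. A 2-D, 8-bit sketch (dimensions and bit width are arbitrary choices for illustration):

```python
def morton_key(x, y, bits=8):
    """Interleave the bits of x and y into one Z-order key.

    Bit i of x lands at position 2*i, bit i of y at position 2*i + 1,
    so points that are close in 2-D tend to get numerically close keys.
    """
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# A range query over a rectangle then translates into a set of
# key intervals, restricting the search to a subset of blocks.
k = morton_key(3, 5)   # x = 0b011, y = 0b101 interleave to 0b100111
```

The UB-tree orders blocks by exactly such keys, which is what makes restricting a range query to a subset of the partition's fragments efficient.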
Abstract: Content-Based Image Retrieval (CBIR) has been
one of the most active research areas in the field of computer vision
over the last 10 years. Many programs and tools have been
developed to formulate and execute queries based on the visual or
audio content and to help browsing large multimedia repositories.
Still, no general breakthrough has been achieved with respect to
large, varied databases with documents of differing sorts and
varying characteristics. Many questions with respect to
speed, semantic descriptors, or objective image interpretation
remain unanswered. In the medical field, images, and especially
digital images, are produced in ever increasing quantities and used
for diagnostics and therapy. In several articles, content based
access to medical images for supporting clinical decision making
has been proposed that would ease the management of clinical data
and scenarios for the integration of content-based access methods
into Picture Archiving and Communication Systems (PACS) have
been created. This paper gives an overview of soft computing
techniques. New research directions are being defined that can
prove to be useful. Still, there are very few systems that seem to be
used in clinical practice. It should also be stated that the goal
is not, in general, to replace text-based retrieval methods as they
exist at the moment.
Abstract: Given the importance of conferences and their
constructive role in scholarly discussion, strong organization is
needed to exploit those discussions in opening
new horizons. The vast amount of information scattered across the
web makes it difficult to find experts who can play a prominent role
in organizing conferences. In this paper we propose a new approach
for extracting researchers' information from various Web resources
and correlating it in order to confirm its correctness. As a
validator of this approach, we propose a service that will be useful to
set up a conference. Its main objective is to find appropriate experts,
as well as the social events for a conference. For this application we
use Semantic Web technologies such as RDF and ontologies to represent
the confirmed information, which is linked to another ontology
(a skills ontology) used to represent and compute expertise.
Abstract: Ontologies play an important role in semantic web applications; they are often developed by different groups and continue to evolve over time. The knowledge in ontologies changes very rapidly, which makes applications outdated if they continue to use old versions, or unstable if they jump to new versions. Temporal frames using frame versioning and slot versioning are employed to take care of the dynamic nature of ontologies. The paper proposes new tags and a restructured OWL format enabling applications to work with either the old or the new version of an ontology. Gene Ontology, a very dynamic ontology, has been used as a case study to explain the OWL Ontology with Temporal Tags.
Abstract: For collaboration, asynchronous tools, and particularly discussion forums, are the most used thanks to their flexibility in terms of time. To convey only the messages that belong to a theme of interest to the tutor, and thus help him during his tutoring work, a tool for classifying these messages is indispensable. To this end we have proposed a semantic classification tool for the messages of a discussion forum based on LSA (Latent Semantic Analysis), which includes a thesaurus to organize the vocabulary. The benefits offered by a formal ontology can overcome the insufficiencies that a thesaurus exhibits in use, which encouraged us to employ one in our semantic classifier. In this work we propose the use of some of the facilities that an OWL ontology offers. We then explain how constructs such as "ObjectProperty", "SubClassOf", and "Datatype" properties make our classification more intelligent by integrating new terms. The new terms found are generated from the initial terms introduced by the tutor and the semantic relations described by the OWL formalism.
Abstract: Word sense disambiguation is one of the most important open problems in natural language processing applications such as information retrieval and machine translation. Several strategies can be employed to resolve word ambiguity with a reasonable degree of accuracy: knowledge-based, corpus-based, and hybrid approaches. This paper focuses on the corpus-based strategy, which employs an unsupervised learning method for disambiguation. We report our investigation of Latent Semantic Indexing (LSI), an information retrieval technique and unsupervised learning method, applied to the task of Thai noun and verb word sense disambiguation. Latent Semantic Indexing has been shown to be efficient and effective for information retrieval. For the purposes of this research, we report experiments on two Thai polysemous words, namely /hua4/ and /kep1/, used as representatives of Thai nouns and verbs respectively. The results of these experiments demonstrate the effectiveness and indicate the potential of applying vector-based distributional information measures to semantic disambiguation.
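The unsupervised, vector-based idea can be sketched (with invented English sentences, for readability) by representing each occurrence of an ambiguous word by its bag-of-words context vector and comparing contexts with cosine similarity. Full LSI would additionally factor the term-context matrix with a truncated SVD before comparing; that dimensionality-reduction step is omitted here.

```python
import math
from collections import Counter

# Three occurrences of the ambiguous word 'head': two in the
# 'leader' sense, one in the 'body part' sense (invented examples).
contexts = [
    "the head of the department resigned",
    "the chief head of the company spoke",
    "he hurt his head in the fall",
]

def vec(text):
    """Bag-of-words context vector, excluding the target word itself."""
    return Counter(w for w in text.split() if w != "head")

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv)

vs = [vec(c) for c in contexts]
# Occurrences sharing a sense should have more similar contexts:
# vs[0] and vs[1] (leader) versus vs[0] and vs[2] (body part).
```

Grouping occurrences whose context vectors are mutually similar then yields the senses, without any labeled training data.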
Abstract: Source code retrieval is of immense importance in the software engineering field. The complex tasks of retrieving and extracting information from source code documents are vital in the development cycle of large software systems. The two main subtasks which result from these activities are code duplication prevention and plagiarism detection. In this paper, we propose a source code retrieval system based on a two-level fingerprint representation capturing, respectively, the structural and the semantic information within a source code. A sequence alignment technique is applied to these fingerprints in order to quantify the similarity between source code portions. The specific purpose of the system is to detect plagiarism and duplicated code between programs written in different programming languages belonging to the same class, such as C, C++, Java, and C#. These four languages are supported by the current version of the system, which is designed so that it may be easily adapted to any programming language.
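The alignment step can be sketched with a classic global alignment (Needleman-Wunsch) over token sequences standing in for the paper's fingerprints. The token streams and the match/mismatch/gap scores below are illustrative assumptions:

```python
def align_score(a, b, match=2, mismatch=-1, gap=-1):
    """Global alignment score between two token sequences."""
    n, m = len(a), len(b)
    # dp[i][j]: best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # pair the tokens
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]

# A loop, the same loop with renamed variables (typical plagiarism),
# and an unrelated fragment (token streams invented for the sketch).
orig  = ["for", "id", "=", "0", "id", "<", "n", "id", "++"]
plag  = ["for", "x", "=", "0", "x", "<", "n", "x", "++"]
other = ["if", "y", ">", "z", "return", "y"]

sim_plag  = align_score(orig, plag)
sim_other = align_score(orig, other)
```

A high alignment score between fingerprints flags the pair as a duplication or plagiarism candidate; working on language-neutral fingerprints rather than raw text is what lets the comparison cross programming languages.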
Abstract: The objective of the paper was to understand the use
of an important element of design, namely color in a Semiotic
system. Semiotics is the study of signs and sign processes; it is often
divided into three branches, namely (i) Semantics, which deals with the
relation between signs and the things to which they refer (their meaning),
(ii) Syntactics, which addresses the relations among signs in formal
structures, and (iii) Pragmatics, which relates signs to the
effects they have on the people who use them to create a plan for
an object or a system, referred to as design. Cubism, with its versatility,
was the key design tool prevalent across the 20th century. In order to
analyze the user's understanding of interaction and appreciation of
color through the movement of Cubism, an exercise was undertaken
in Dept. of Design, IIT Guwahati. This included tasks to design a
composition using color and sign process to the theme 'Between the
Lines' on a given tessellation where the users relate their work to the
world they live in, which in this case was the college campus of IIT
Guwahati. The findings demonstrate the impact of the key design
element, color, on the principles of visual perception, based on image
analysis of specific compositions.
Abstract: Service discovery is a very important component of Service Oriented Architectures (SOA). This paper presents two alternative approaches to customising the query results of a private service registry such as Universal Description, Discovery and Integration (UDDI). The customisation is performed based on some pre-defined and/or real-time changing parameters. This work identifies the requirements, designs, and additional mechanisms that must be applied to UDDI in order to support this customisation capability. We also detail the implementations of the approaches and examine their performance and scalability. Based on our experimental results, we conclude that both approaches can be used to customise registry query results, but storing personalisation parameters in an external resource yields better performance while being less scalable as the size of the query results increases. We believe that these approaches, when combined with a semantics-enabled service registry, will enhance service discovery methods within a private UDDI registry environment.
Abstract: Ontology-based modelling of multi-formatted
software application content is a challenging area in content
management. When the number of software content units is huge and
in a continuous process of change, content change management is
important. The management of content in this context requires
targeted access and manipulation methods. We present a novel
approach to deal with model-driven content-centric information
systems and access to their content. At the core of our approach is an
ontology-based semantic annotation technique for diversely
formatted content that can improve the accuracy of access and
systems evolution. Domain ontologies represent domain-specific
concepts and conform to metamodels. Different ontologies - from
application domain ontologies to software ontologies - capture and
model the different properties and perspectives on a software content
unit. Interdependencies between domain ontologies, the artifacts and
the content are captured through a trace model. The annotation traces
are formalised and a graph-based system is selected for the
representation of the annotation traces.