Abstract: We present the results of a case study assessing how the tourism community is reflected on the Web and how that information can be used to propose new ways of communicating visually. The wealth of information on the Web, together with the ease of publishing personal points of view, makes the social web a new space for exploration: it allows communities with similar interests to share information. However, the tourism community remains largely unexplored, as is the case for the information contained in travel stories. Across the Web we find many sites that let users communicate their experiences and personal impressions of particular places in the world. This cultural heritage is scattered over many documents, usually sparsely illustrated with photos, which makes them difficult to explore due to the lack of visual information. This paper explores the possibility of analyzing travel stories in order to display them visually on maps and to derive new knowledge, such as patterns of travel routes. In this way, travel narratives published in electronic form can become very valuable, especially to the tourism community, because of the great amount of knowledge that can be extracted from them. Our approach uses a Geoparsing Web Service to extract geographic coordinates from travel narratives, draw the geo-positions, and link the documents onto a map image.
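As a minimal sketch of the geoparsing-and-plotting pipeline (the gazetteer below is a hypothetical stand-in for the Geoparsing Web Service, which would be queried over the network; the place names and coordinates are illustrative):

```python
import re

# Hypothetical gazetteer standing in for the Geoparsing Web Service:
# place name -> (latitude, longitude). A real service returns richer records.
GAZETTEER = {
    "paris": (48.8566, 2.3522),
    "lyon": (45.7640, 4.8357),
    "marseille": (43.2965, 5.3698),
}

def geoparse(story: str):
    """Return the ordered list of (place, lat, lon) found in a travel story."""
    positions = []
    for token in re.findall(r"[A-Za-z]+", story):
        coords = GAZETTEER.get(token.lower())
        if coords:
            positions.append((token, coords[0], coords[1]))
    return positions

route = geoparse("We left Paris by train, stopped in Lyon, and reached Marseille.")
```

The resulting ordered coordinate list is exactly what is needed to draw the route and anchor a link back to the source document at each position.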
Abstract: This paper presents an information retrieval model for
XML documents based on tree matching. Queries and documents are
represented by extended trees. An extended tree is built from the
original tree by adding weighted virtual links between each node and
its indirect descendants, so that every descendant can be reached
directly: only one level then separates a node from any of its
descendants. This allows the user query and the document to be
compared flexibly while respecting the structural constraints of the
query. Since the content of each node is essential in deciding
whether a document element is relevant, content must also be taken
into account in the retrieval process. We therefore separate the
structure-based and content-based retrieval processes. The
content-based score of each node is commonly based on the
well-known Tf × Idf criterion. In this paper, we compare this
criterion with another that we call Tf × Ief. The comparison is
based on experiments on a dataset provided by INEX, showing the
effectiveness of our approach on the one hand and of both weighting
functions on the other.
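As a rough illustration of the two content weightings (the Ief formula below is an assumption inferred from the name "inverse element frequency", not the paper's exact definition; the toy collection is purely illustrative):

```python
import math

# Toy collection: each document is a list of elements, each element a list of terms.
docs = [
    [["xml", "tree"], ["query", "tree"]],
    [["query"], ["retrieval", "xml"]],
]

def tf(term, element):
    """Raw term frequency within a single XML element."""
    return element.count(term)

def idf(term):
    """Document-level inverse frequency: counts documents containing the term."""
    n = sum(1 for d in docs if any(term in e for e in d))
    return math.log(len(docs) / n) if n else 0.0

def ief(term):
    """Element-level inverse frequency (the assumed finer-grained variant):
    counts XML elements containing the term instead of whole documents."""
    elements = [e for d in docs for e in d]
    n = sum(1 for e in elements if term in e)
    return math.log(len(elements) / n) if n else 0.0
```

The contrast is visible on a term like "xml", which occurs in every document (Idf = 0, so Tf × Idf discards it) but in only some elements (Ief > 0, so Tf × Ief can still discriminate between elements).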
Abstract: This paper describes the Clinical Document Architecture Release Two (CDA R2) standard and a client application for messaging with the SAĞLIK-NET project developed by the Ministry of Health of Turkey. CDA R2 was developed by the Health Level 7 (HL7) organization and approved by the American National Standards Institute (ANSI) in 2004 to standardize medical information so that it can be shared semantically and syntactically. In this study, a client application compatible with HL7 V3 was developed for SAĞLIK-NET, a project aimed at building a National Health Information System for Turkey. The CDA conformance of this application is also evaluated.
Abstract: Sustainable development is a concept that originated
with the Brundtland Commission. Although the concept was born out
of environmental concerns, it rapidly penetrated all areas of
planning and became its dominant view. Concern for future
generations, especially where heritage is involved, has a long
history. Each approach, with all its characteristics, produces
differences in planning, since planning always reflects the dominant
ideas of its age. This paper studies sustainable development in
planning for historical cities, with the aim of finding ways to deal
with heritage in planning for historical cities in Iran. It
illustrates how the tensions between the sustainability concept and
heritage can be resolved in planning.
Consequently, the paper emphasizes:
Sustainable development in city planning
Trends regarding heritage
Challenges of planning for historical cities in Iran
For the first two issues, a documentary method covering the
sustainable development and heritage literature is used. As a next
step, focusing on Iranian historical cities requires examining the
urban planning and management structure and identifying the main
challenges related to heritage, so these challenges are analyzed.
The analysis shows that the key issue in such planning is active
conservation: improving and using the potential of heritage while
guaranteeing its continued conservation. An examination of the
planning system in Iran makes it clear that reforms are needed in
this system and in the way it relates to heritage. The main weakness
in planning for historical cities in Iran is the lack of independent
city management; without it, active conservation, the main factor of
sustainable development, cannot be achieved.
Abstract: Research papers are usually evaluated via peer
review. However, peer review has limitations in evaluating research
papers. In this paper, Scienstein and the new idea of 'collaborative
document evaluation' are presented. Scienstein is a project to
evaluate scientific papers collaboratively based on ratings, links,
annotations and classifications by the scientific community using the
internet. In this paper, critical success factors of collaborative
document evaluation are analyzed: the scientists' motivation
to participate as reviewers, the reviewers' competence and the
reviewers' trustworthiness. It is shown that if these factors are
ensured, collaborative document evaluation may prove to be a more
objective, faster and less resource intensive approach to scientific
document evaluation in comparison to the classical peer review
process. It is shown that additional advantages exist as collaborative
document evaluation supports interdisciplinary work, allows
continuous post-publishing quality assessments and enables the
implementation of academic recommendation engines. In the long
term, it seems possible that collaborative document evaluation will
successively substitute peer review and decrease the need for
journals.
Abstract: Frequent machine breakdowns, low plant availability and increased overtime are a great threat to a manufacturing plant, as they increase the operating costs of an industry. The main aim of this study was to improve Overall Equipment Effectiveness (OEE) at a manufacturing company through the implementation of innovative maintenance strategies. A case study approach was used. The paper focuses on improving maintenance in a manufacturing set-up using an innovative mix of maintenance regimes to improve overall equipment effectiveness. Interviews, reviews of documentation and historical records, and direct and participatory observation were used as data collection methods during the research. Production is usually measured by the total kilowatts of motors produced per day; the target at 91% availability is 75 kilowatts a day. Reduced demand and a lack of raw materials, particularly imported items, are adversely affecting the manufacturing operations. The company had to reset its target from the usual figure of 250 kilowatts per day to a mere 75 per day due to the lower availability of machines as a result of breakdowns as well as the lack of raw materials. Price reductions and uncertainties, together with general machine breakdowns, lowered production further. Several recommendations were made. For instance, employee empowerment will enhance the responsibility and authority needed to reduce and ultimately eliminate the six big losses. If the maintenance department is to fulfil its proper function in a progressive, innovative industrial society, its personnel must be continuously trained to meet current needs as well as future requirements. To make the maintenance planning system effective, it is essential to keep track of all corrective maintenance jobs and preventive maintenance inspections; for large processing plants this cannot be handled manually. It was therefore recommended that the company implement a Computerised Maintenance Management System (CMMS).
Abstract: Lately there has been a significant surge of interest in
music digital libraries, which constitute an attractive area of
research and development due to their inherently interesting issues
and challenging technical problems, whose solutions will be highly
appreciated by enthusiastic end-users. We present here a DL that we
have developed to support users in their quest for classical music
pieces within a particular collection of 18,000+ audio recordings.
To cope with the limitations of the early DL model, we have used a
refined socio-semantic and contextual model that allows rich
bibliographic content description, along with semantic annotations,
reviewing, rating, knowledge sharing, etc. The multi-layered service
model allows the incorporation of local and distributed information
and the construction of rich hypermedia documents; it expresses the
complex relationships between various objects and multi-dimensional
spaces, agents, actors, services, communities, scenarios, etc., and
facilitates collaborative activities in order to offer individual
users the collections and services they need.
Abstract: Despite many success stories of manufacturing safety, many organizations remain reluctant, perceiving it as cost-increasing and time-consuming. A clear contributor may be the use of lagging indicators rather than leading indicator measures. The study therefore proposes a combinatorial model for determining the best safety strategy. Combination theory and cost-benefit analysis were employed to develop a monetary saving/loss function in terms of the value of prevention and the cost of the prevention strategy. Documentation, interviews and a structured questionnaire were employed to collect before-and-after safety programme records from a tobacco company covering 1993-2001 (pre-safety period) and 2002-2008 (safety period) for the model application. Three combinatorial alternatives, A, B and C, were obtained, yielding 4, 6 and 4 strategies respectively, with PPE and training being predominant. A total of 728 accidents were recorded over the 9-year pre-safety period and 163 accidents over the 7-year safety period. Six prevention activities (alternative B) yielded the best results, and this held for all years of operation except 2004. The study provides a leading resource for planning a successful safety programme.
Abstract: Web applications have become very complex and
crucial, especially when combined with areas such as CRM
(Customer Relationship Management) and BPR (Business Process
Reengineering), so the scientific community has focused attention
on Web application design, development, analysis and testing,
studying and proposing methodologies and tools. This paper
proposes an approach to automatic multi-dimensional concern
mining for Web applications, based on concept analysis, impact
analysis, and token-based concern identification. The approach lets
the user analyse and traverse Web software relevant to a particular
concern (concept, goal, purpose, etc.) via multi-dimensional
separation of concerns, in order to document, understand and test
Web applications. The technique was developed in the context of the
WAAT (Web Applications Analysis and Testing) project. A
semi-automatic tool to support it is currently under development.
Abstract: The noise level has a critical effect on the diagnostic
performance of the signal-averaged electrocardiogram (SAECG),
because the true start and end points of the QRS complex can be
masked by residual noise and are sensitive to the noise level.
Several studies and commercial machines use a fixed number of heart
beats (typically between 200 and 600) or a predefined noise level
(typically between 0.3 and 1.0 μV) in each of the X, Y and Z leads
to perform SAECG analysis. However, using different criteria or
methods to perform SAECG causes discrepancies in the noise levels
among study subjects. According to the recommendations of the 1991
ESC, AHA and ACC Task Force Consensus Document for the use of
SAECG, the determination of onset and offset is closely related to
the mean and standard deviation of the noise sample. Hence, this
study performs SAECG using consistent root-mean-square (RMS) noise
levels among study subjects and analyzes the effects of the noise
level on SAECG. The study also evaluates the differences between
normal subjects and chronic renal failure (CRF) patients in the
time-domain SAECG parameters.
The study subjects comprised 50 normal Taiwanese subjects and 20
CRF patients. During signal-averaged processing, different RMS
noise levels were applied to evaluate their effects on three
time-domain parameters: (1) filtered total QRS duration (fQRSD),
(2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3)
duration of the low-amplitude signals below 40 μV (LAS40). The
results demonstrated that reducing the RMS noise level increases
fQRSD and LAS40 and decreases RMS40, and further increases the
differences in fQRSD and RMS40 between normal subjects and CRF
patients. The SAECG may also become abnormal as the RMS noise level
is reduced. In conclusion, it is essential to establish diagnostic
criteria for SAECG using consistent RMS noise levels in order to
reduce noise level effects.
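Under the usual definition (an assumption here; the abstract does not spell out the exact noise-measurement procedure), the RMS noise level is the root mean square of the samples in a noise window, and averaging N aligned beats reduces uncorrelated noise roughly by a factor of √N:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a noise window (same units as input, e.g. uV)."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

# Beats are accumulated until rms(noise_window) falls below the target level,
# which is how a consistent RMS noise level across subjects can be enforced.
noise_window = [3.0, -3.0, 4.0, -4.0]   # residual noise samples after averaging
level = rms(noise_window)
```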
Abstract: The rapid development of information technology, the
expansion of communications and the internet, city managers' need
for new ideas to run the city, and higher citizen participation all
encourage us to complete the electronic city as soon as possible.
The foundations of this electronic city lie in information
technology. People's participation in metropolitan management is a
crucial topic, and information technology does not impede it; on
the contrary, it can improve public participation and the
interactions between citizens and city managers. Through digital
mass media based on the internet and computer networks, citizens
can offer their ideas, beliefs and votes on topical matters and
receive appropriate replies and services, and they can participate
in urban projects by becoming aware of the city's plans. The most
significant challenges are information and communication
management, changing citizens' views, and legal and administrative
documents.
This research identifies the obstacles to the electronic city. The
required data were gathered through questionnaires in order to
identify the barriers from a statistical community comprising
specialists and practitioners of the ministry of information
technology and communication and the municipality information
technology organization.
The conclusions demonstrate that the prioritized barriers to
electronic city adoption in Iran are: support problems
(non-financial), behavioral, cultural and educational difficulties,
security, legal and licensing problems, hardware, terminological
and infrastructural constraints, and software and financial
problems.
Abstract: The research focused on the design, development and
evaluation of a sustainable web-based network system to be used as
an interoperable environment for University process workflows and
document management, so that most University process workflows can
be carried out entirely electronically, promoting an integrated
University. Defining the most frequently used University process
workflows made it possible to create electronic workflows and
execute them on standard workflow execution engines. Defining and
reengineering the workflows increased work efficiency and helped
standardize processes across different faculties. The concept, the
process definitions and the solution applied in a case study are
evaluated, and the findings are reported.
Abstract: Discovering new biological knowledge from high-throughput biological data is a major challenge for bioinformatics today. To address this challenge, we developed a new approach to protein classification. Proteins that are evolutionarily, and thereby functionally, related are said to belong to the same classification. Identifying protein classifications is of fundamental importance for documenting the diversity of the known protein universe; it also provides a means of determining the functional roles of newly discovered protein sequences. Our goal is to predict the functional classification of novel protein sequences based on a set of features extracted from each sequence. The proposed technique uses datasets extracted from the Structural Classification of Proteins (SCOP) database and a set of spectral-domain features based on the Fast Fourier Transform (FFT). The classifier uses a multilayer back-propagation (MLBP) neural network for protein classification. The maximum classification accuracy is about 91% when the classifier is applied to the full four levels of the SCOP database, and reaches 96% when the classification is limited to the family level. The results reveal that the spectral domain contains information that can be used for classification with high accuracy, and they emphasize that sequence similarity measures are of great importance, especially at the family level.
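A hedged sketch of the spectral feature-extraction step (the amino-acid encoding values and the binning scheme below are illustrative assumptions, and a naive DFT stands in for the FFT; the paper's exact mapping is not given in the abstract):

```python
import cmath

# Hypothetical numeric encoding of a few amino acids (EIIP-style values);
# unknown residues fall back to 0.0.
AA_VALUES = {"A": 0.0373, "C": 0.0829, "G": 0.0050, "L": 0.0000, "K": 0.0371}

def spectral_features(sequence, n_bins=4):
    """Map a protein sequence to numbers, take the DFT magnitude spectrum,
    and pool it into a fixed-length feature vector for a neural classifier."""
    signal = [AA_VALUES.get(aa, 0.0) for aa in sequence]
    n = len(signal)
    spectrum = [
        abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n) for i, x in enumerate(signal)))
        for k in range(n)
    ]
    # Average the magnitudes into n_bins equal bins -> fixed-size input vector,
    # so sequences of different lengths map to the same feature dimension.
    step = max(1, n // n_bins)
    return [sum(spectrum[i:i + step]) / step for i in range(0, step * n_bins, step)]

features = spectral_features("ACKLGACKLG")
```

The fixed-length vector is what makes sequences of varying length usable as inputs to an MLBP network.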
Abstract: In text categorization, the most widely used method for
document representation is based on word frequency vectors, known
as the VSM (Vector Space Model). This representation relies only on
the words in a document and therefore loses any "word context"
information found in the document. In this article we compare the
classical method of document representation with a method called
the Suffix Tree Document Model (STDM), which represents documents
in suffix tree form. For the STDM we propose a new approach to
document representation and a new formula for computing the
similarity between two documents: we build the suffix tree only for
two documents at a time. This approach is faster, has lower memory
consumption and uses the entire document representation without
requiring methods for discarding nodes. The proposed similarity
formula substantially improves clustering quality. The
representation method was validated using HAC (Hierarchical
Agglomerative Clustering). In this context we also examine the
influence of stemming in the document preprocessing step and
highlight the difference between similarity and dissimilarity
measures in finding "closer" documents.
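As a loose sketch of the pairwise idea (shared word n-grams up to a fixed length approximate the shared paths of a suffix tree built for just two documents; the Jaccard score below is an illustrative stand-in, not the paper's proposed formula):

```python
def phrases(words, max_len=3):
    """All word n-grams up to max_len -- the paths two documents' suffix
    tree would share when built for just that pair."""
    return {tuple(words[i:i + k])
            for k in range(1, max_len + 1)
            for i in range(len(words) - k + 1)}

def pairwise_similarity(doc_a, doc_b, max_len=3):
    """Jaccard overlap of shared phrases, computed per document pair,
    mirroring the two-documents-at-a-time construction."""
    pa, pb = phrases(doc_a.split(), max_len), phrases(doc_b.split(), max_len)
    return len(pa & pb) / len(pa | pb)

sim = pairwise_similarity("the cat sat on the mat", "the cat sat on the rug")
```

Building the structure per pair keeps memory bounded by the two documents being compared rather than by the whole collection, which is the efficiency argument the abstract makes.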
Abstract: The self-organizing map (SOM) is a well-known neural network model with a wide range of applications. Its main characteristics are two-fold: dimension reduction and topology preservation. Using a SOM, a high-dimensional data space is mapped to a low-dimensional space while the topological relations among the data are preserved. With these characteristics, the SOM is usually applied to data clustering and visualization tasks. However, the SOM has the main disadvantage that the number and structure of its neurons, which are difficult to determine, must be known prior to training. Several schemes have been proposed to tackle this deficiency, for example growing/expandable SOMs, hierarchical SOMs, and growing hierarchical SOMs. These schemes can dynamically expand the map, and even generate hierarchical maps, during training, and encouraging results have been reported. Essentially, they adapt the size and structure of the map according to the distribution of the training data; that is, they are data-driven, or data-oriented, SOM schemes. In this work, a topic-oriented SOM scheme suitable for document clustering and organization is developed. The proposed SOM automatically adapts both the number and the structure of the map according to the identified topics. Unlike other data-oriented SOMs, our approach expands the map and generates the hierarchies according to the topics and the characteristics of the neurons. Preliminary experiments give promising results and demonstrate the plausibility of the method.
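The base SOM update that all these schemes build on can be sketched as follows (a minimal fixed-size SOM in plain Python, not the proposed topic-oriented scheme; map size, learning rate and neighbourhood radius are illustrative choices):

```python
import math
import random

def train_som(data, rows, cols, dim, epochs=20, lr=0.5, radius=1.0, seed=0):
    """Minimal fixed-size SOM: find the best-matching unit (BMU) for each
    sample and pull it, and its grid neighbours, toward the sample."""
    rng = random.Random(seed)
    weights = [[[rng.random() for _ in range(dim)] for _ in range(cols)]
               for _ in range(rows)]
    for _ in range(epochs):
        for x in data:
            # BMU by Euclidean distance in data space.
            br, bc = min(((r, c) for r in range(rows) for c in range(cols)),
                         key=lambda rc: sum((w - xi) ** 2
                                            for w, xi in zip(weights[rc[0]][rc[1]], x)))
            for r in range(rows):
                for c in range(cols):
                    # Neighbourhood strength decays with grid distance from the BMU.
                    grid_d2 = (r - br) ** 2 + (c - bc) ** 2
                    h = lr * math.exp(-grid_d2 / (2 * radius ** 2))
                    weights[r][c] = [w + h * (xi - w)
                                     for w, xi in zip(weights[r][c], x)]
    return weights

data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]]
weights = train_som(data, rows=1, cols=2, dim=2, radius=0.5)
```

Note that `rows`, `cols` and the neighbourhood radius are fixed up front, which is exactly the limitation that the growing and topic-oriented variants address.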
Abstract: This article describes Uruk, the virtual museum of
Iraq that we developed for the visual exploration and retrieval of
image collections. The system exploits the loosely structured
hierarchy of XML documents, which provides a useful representation
for storing semi-structured or unstructured data that does not
easily fit into existing databases. The system offers users the
capability to mine and manage the XML-based image collections
through a web-based Graphical User Interface (GUI). Typically, in
an interactive session with the system, the user browses a visual
structural summary of the XML database in order to select
interesting elements. Using this intermediate result, queries
combining structural and textual references can be composed and
submitted to the system. After query evaluation, the full set of
answers is presented in a visual and structured way.
Abstract: This paper presents the design and implementation of
WebGD, a CORBA-based document classification and retrieval system
on the Internet. WebGD makes use of techniques such as the Web,
CORBA, Java, NLP, fuzzy techniques, knowledge-based processing and
database technology. A unified classification and retrieval model,
classification and retrieval with a single reasoning engine, and
flexible working-mode configuration are some of its main features.
The architecture of WebGD, the unified classification and retrieval
model, the components of the WebGD server and the fuzzy inference
engine are discussed in detail.
Abstract: In this paper, we propose a new model for an English-
Vietnamese bilingual Information Retrieval system. Although many
CLIR systems have been researched and built, the accuracy of search
results in the different languages a CLIR system supports still
needs to improve, especially when finding bilingual documents. The
problems identified in this paper are the limitations of machine
translation results and the extremely large collections of
documents to be searched, so we establish a different model to
overcome these problems.
Abstract: Efficient preprocessing is essential for the automatic
recognition of handwritten documents. In this paper, techniques for
segmenting words in handwritten Arabic text are presented. First,
connected components (CCs) are extracted and the distances among
the different components are analyzed. The statistical distribution
of these distances is then obtained to determine an optimal
threshold for word segmentation. Meanwhile, an improved
projection-based method is employed for baseline detection. The
proposed method has been successfully tested on the IFN/ENIT
database, consisting of 26,459 Arabic words handwritten by 411
different writers, and the results were promising and very
encouraging, yielding more accurate detection of the baseline and
segmentation of words for further recognition.
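A minimal sketch of the distance-thresholding idea (one-dimensional gaps and a largest-jump threshold rule are simplifying assumptions; the paper derives the threshold from the actual statistical distribution of inter-component distances):

```python
def gap_threshold(gaps):
    """Pick a threshold separating intra-word from inter-word gaps: the
    midpoint of the largest jump in the sorted gap distribution (a simple
    stand-in for analyzing the full statistical distribution)."""
    s = sorted(gaps)
    jump, i = max((s[j + 1] - s[j], j) for j in range(len(s) - 1))
    return (s[i] + s[i + 1]) / 2

def segment(gaps, threshold):
    """Split a line of connected components into words: a gap above the
    threshold starts a new word. Returns word sizes in components."""
    words, count = [], 1
    for g in gaps:
        if g > threshold:
            words.append(count)
            count = 1
        else:
            count += 1
    words.append(count)
    return words

gaps = [2, 3, 12, 2, 2, 14, 3]   # distances between consecutive components
t = gap_threshold(gaps)
words = segment(gaps, t)
```

Here the small gaps (2-3) cluster well below the two large inter-word gaps (12, 14), so the threshold lands between the clusters and yields three words.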
Abstract: The primary purpose of this article is to examine the
implications of globalization for education. Globalization plays an
important role, as a process, in the economic, political, cultural
and technological dimensions of contemporary human life. Education
affects this process, shaping it by educating global citizens with
universal human features and characteristics, and is influenced by
it in turn. Nowadays, the role of education is not just to develop
in students the knowledge and skills necessary for new kinds of
jobs. If education is to help students prepare for the new global
society, it has to make them engaged, productive and critical
citizens of the global era, able to reflect on their roles as key
actors in a dynamic, often uneven matrix of economic and cultural
exchanges. If education is to reinforce and strengthen national
identity and the value system of children and teenagers, it should
make them ready for living in the global era of this century. The
method used in this research is documentary analysis. Studies in
this field show that globalization influences the processes of
production, distribution and consumption of knowledge. In the
information era this has not only provided the necessary
opportunities for worldwide educational exchange, but also offers
advantages to developing countries, enabling them to strengthen the
educational foundations of their societies and take an important
step toward their future.