Abstract: This analysis focuses on productivity trends in the
knowledge management literature, examining publications indexed under
the subject "knowledge management" in the SSCI database. Its purpose
is to summarize trend information for knowledge management
researchers, since core knowledge tends to be concentrated in core
categories. The results indicate that literature productivity on the
topic of "knowledge management" is still increasing sharply; the
analysis demonstrates the trend across several categories, including
author, country/territory, institution name, document type, language,
publication year, and subject area. By focusing on the right
categories, researchers can locate the core research information.
This implies that the phenomenon of "success breeds success" is more
common in higher-quality publications.
Abstract: This research documents a qualitative study of
selected Native Americans who have successfully graduated from
mainstream higher education institutions. The research framework
explored the Bicultural Identity Formation Model as a means of
understanding the expressions of the students' adaptations to
mainstream education. This approach led to an awareness of how
the participants in the study used specific cultural and social
strategies to enhance their educational success and also to an
awareness of how they coped with cultural dissonance to achieve a
new academic identity. The research implications extend to a larger
audience of bicultural, foreign, or international students
experiencing cultural dissonance.
Abstract: Demand for web services is growing with the increasing number of Web users. Web services are delivered through Web applications, and the size of a Web application is driven by its users' requirements and interests. Differences in requirements and interests lead to growth in Web application size. An efficient way to save storage space for more data and information is to apply algorithms that compress the contents of Web application documents. This paper introduces an algorithm that reduces Web application size by reducing the contents of HTML files. It removes unimportant content regardless of the HTML file size, without discarding any character that is required in the HTML building process.
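As a hedged sketch of this kind of content reduction (the paper's actual rules for deciding what is unimportant are not given in the abstract; the `reduce_html` name and the regex rules below are assumptions), one minimal approach removes comments and collapses whitespace that does not affect how the page is built:

```python
import re

def reduce_html(html: str) -> str:
    """Shrink an HTML document by dropping content that does not
    affect the built page: comments and redundant whitespace."""
    # Remove HTML comments; they carry no renderable content.
    html = re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)
    # Drop whitespace runs between tags entirely.
    html = re.sub(r">\s+<", "><", html)
    # Collapse remaining whitespace runs (e.g. inside text) to one space.
    html = re.sub(r"\s{2,}", " ", html)
    return html.strip()

page = ("<html>  <!-- header -->\n  <body>\n"
        "    <p>Hello   world</p>\n  </body>\n</html>")
small = reduce_html(page)
```

Every character kept is one the browser needs to build the page; the size reduction here comes purely from deletion, not from encoding.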
Abstract: Text categorization - the assignment of natural language documents to one or more predefined categories based on their semantic content - is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses the use of a decision tree classifier to initialize a multilayer neural network for improving text categorization accuracy. An adaptation of the algorithm is proposed in which a decision tree, from the root node to a final leaf, is used to initialize the multilayer neural network. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with that of a multilayer neural network initialized by the traditional random method and with decision tree classifiers.
Abstract: Processing data by computers and performing reasoning
tasks is an important aim in Computer Science. The Semantic Web is
one step towards it. Using ontologies to enrich information
semantically is the current trend. Huge amounts of domain-specific,
unstructured on-line data need to be expressed in a
machine-understandable and semantically searchable format. Currently,
users are often forced to search manually through the results
returned by keyword-based search services. They also want to use
their native languages to express what they are searching for. In
this paper, an ontology-based automated question answering system for
the software test document domain is presented. The system allows
users to enter a question about the domain in natural language and
returns the exact answer to the question. Converting the natural
language question into an ontology-based query is the challenging
part of the system. To achieve this, a new algorithm for converting
free text into ontology-based search engine queries is proposed. The
algorithm identifies the appropriate question type and parses the
words of the question sentence.
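The question-type investigation and word-parsing steps can be sketched as follows. This is illustrative only: the mapping table, the stop-word list, and the function name are assumptions, not the paper's algorithm.

```python
# Illustrative question-type table and stop-word list (assumptions,
# not the paper's actual resources).
QUESTION_TYPES = {
    "what": "definition", "who": "agent", "when": "time",
    "where": "location", "how": "procedure", "why": "reason",
}
STOP_WORDS = {"is", "are", "the", "a", "an", "of", "do", "does", "in"}

def analyse_question(question: str):
    """Classify a question by its interrogative word and keep the
    remaining content words as candidate ontology terms."""
    words = question.lower().rstrip("?").split()
    qtype = QUESTION_TYPES.get(words[0], "unknown")
    terms = [w for w in words[1:] if w not in STOP_WORDS]
    return qtype, terms

qtype, terms = analyse_question("What is the purpose of regression testing?")
```

The question type would select a query template, and the surviving content words would be matched against ontology concepts.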
Abstract: Reverse engineering is a very important process in
software engineering. It can be performed backwards through the
system development life cycle (SDLC) in order to recover the source
data or representations of a system through analysis of its
structure, function, and operation. We use reverse engineering to
introduce an automatic tool that generates system requirements from
program source code. The tool accepts Cµ source code, scans it line
by line, and passes the code to a parser. The engine of the tool then
generates the system requirements for that specific program to
facilitate its reuse and enhancement. The purpose of the tool is to
help recover the system requirements of a system whose system
requirements document (SRD) does not exist because the system is
undocumented.
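The Cµ grammar and the tool's parser are not reproduced in the abstract; as a loose illustration of turning source code into candidate requirement statements, the sketch below scans C-like code line by line and rewrites each function definition as a requirement sentence (the regex, names, and sentence template are all assumptions):

```python
import re

# Illustrative only: matches simple C-like function definitions such
# as "int add(int a, int b)".
FUNC_DEF = re.compile(r"^\s*(?:int|void|float|char)\s+(\w+)\s*\(([^)]*)\)")

def extract_requirements(source: str):
    """Scan source code line by line and emit one candidate
    requirement sentence per function definition found."""
    requirements = []
    for line in source.splitlines():
        match = FUNC_DEF.match(line)
        if match:
            name, params = match.groups()
            requirements.append(
                f"The system shall provide an operation '{name}'"
                f" taking ({params.strip() or 'no parameters'})."
            )
    return requirements

code = "int add(int a, int b) {\n    return a + b;\n}\nvoid reset() {\n}\n"
reqs = extract_requirements(code)
```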
Abstract: One of the common problems encountered in software
engineering is addressing and responding to the changing nature of
requirements. Several approaches have been devised to address this
issue, ranging from instilling resistance to changing requirements in
order to protect project schedules, to developing an agile mindset
towards requirements. The approach discussed in this paper is to
conceptualize the delta in requirements and model it, in order to
plan a response to it. To provide context, change is first formally
identified and categorized as either formal or informal. While agile
methodology facilitates informal change, the approach discussed in
this paper seeks to develop the idea of facilitating formal change.
Collecting and documenting meta-requirements that represent the
phenomenon of change would be a proactive measure towards building a
realistic cognition of the requirements entity, which can further be
harnessed in the software engineering process.
Abstract: Information sharing and gathering are important in this era of rapid technological advancement. The existence of the WWW has caused an explosion of information. Readers are overloaded with lengthy text documents when they would prefer shorter versions. The oil and gas industry cannot escape this predicament. In this paper, we develop an automated text summarization system, AutoTextSumm, to extract the salient points of oil and gas drilling articles by incorporating a statistical approach, keyword identification, synonyms, and sentence position. In this study, we conducted interviews with petroleum engineering experts and English language experts to identify the most commonly used keywords in the oil and gas drilling domain. The performance of AutoTextSumm is evaluated using the precision, recall, and F-score formulae. Based on the experimental results, AutoTextSumm produced satisfactory performance, with an F-score of 0.81.
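The precision, recall, and F-score formulae used for the evaluation are the standard ones; a minimal sketch of the computation over extracted versus reference summary sentences follows (the sentence identifiers are toy data, not the study's):

```python
def evaluate_summary(extracted, reference):
    """Standard summary-evaluation metrics over sentence sets:
    precision = hits / |extracted|,
    recall    = hits / |reference|,
    F-score   = 2PR / (P + R)."""
    hits = len(set(extracted) & set(reference))
    precision = hits / len(extracted)
    recall = hits / len(reference)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

# Toy example: 4 extracted sentences, 5 reference sentences, 3 shared.
p, r, f = evaluate_summary(["s1", "s2", "s3", "s4"],
                           ["s1", "s2", "s3", "s5", "s6"])
```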
Abstract: This study explores the perceptions of English as a
Foreign Language (EFL) learners on using computer-mediated
communication (CMC) technology in their learning of English. The
data consist of observations of both synchronous and asynchronous
communication that participants engaged in over a period of four
months, including online and offline communication protocols,
open-ended interviews, and reflection papers composed by the
participants. Content analysis of the interview data and the written
documents listed above, as well as member-checking and triangulation
techniques, are the major data analysis strategies. The findings
suggest that participants generally do not benefit from
computer-mediated communication in terms of its effect on learning a
foreign language. Participants regarded the nature of CMC as
artificial, or pseudo-communication, which did not aid their
authentic communication skills in English. The results of this study
shed light on the insufficient and inconclusive findings that most
quantitative CMC studies have previously generated.
Abstract: Typhoon Morakot hit Taiwan in 2009 and caused severe
damage. The government employed a compulsory relocation strategy for
post-disaster reconstruction. This study analyzes the impact of this
strategy on community solidarity. It employs multiple approaches to
data collection, including semi-structured interviews, secondary
data, and documentation. The results indicate that the government's
strategy for distributing housing has led to conflicts within the
communities. In addition, the relocation process has stimulated
tensions between victims of the disaster and those residents whose
lands were chosen as new sites for relocation. The government's
strategy of "collective relocation" also worsened community
integration. Moreover, the fact that a permanent housing community
may accommodate people from different places poses a challenge for
the development of new interpersonal relations in the communities.
This study concludes by emphasizing the importance of bringing
social, economic, and cultural aspects into consideration in
post-disaster relocation.
Abstract: In this document, we propose a robust conceptual strategy
to improve robustness against manufacturing defects and thus the
reliability of logic CMOS circuits. To enable the use of future CMOS
technology nodes, this strategy combines several types of design
technique: Design for Reliability (DFR); hardware redundancy in the
form of Triple Modular Redundancy (TMR) for hard-error tolerance; and
Design for Testability (DFT). Results on the largest ISCAS and ITC
benchmark circuits show that our approach considerably improves
reliability while controlling the key cost factors, area overhead and
fault probability.
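TMR masks a hard error by triplicating a module and majority-voting its outputs. A minimal behavioural sketch follows (the voted gate and the stuck-at fault model are illustrative, not the paper's benchmark circuits):

```python
def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 majority voter."""
    return (a & b) | (a & c) | (b & c)

def tmr_and(x: int, y: int, faulty: int = -1) -> int:
    """An AND gate triplicated under TMR; copy `faulty` (0..2), if
    given, suffers a hard error and is stuck at 0."""
    outputs = [x & y] * 3
    if 0 <= faulty <= 2:
        outputs[faulty] = 0  # inject a hard error in one copy
    return majority(*outputs)

ok = tmr_and(1, 1)                # fault-free operation
masked = tmr_and(1, 1, faulty=1)  # one copy fails; the voter masks it
```

The voter tolerates any single faulty copy; the cost is the area overhead of two extra module copies plus the voter, which is why area is a key factor in the trade-off.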
Abstract: One of the ubiquitous routines in medical practice is searching through voluminous piles of clinical documents. In this paper we introduce a distributed system to search and exchange clinical documents. Clinical documents are distributed peer-to-peer. Relevant information is found in multiple iterations of cross-searches between the clinical text and its domain encyclopedia.
Abstract: Prospective readers can quickly determine whether a document is relevant to their information need if the significant phrases (or keyphrases) in this document are provided. Although keyphrases are useful, not many documents have keyphrases assigned to them, and manually assigning keyphrases to existing documents is costly. Therefore, there is a need for automatic keyphrase extraction. This paper introduces a new domain independent keyphrase extraction algorithm. The algorithm approaches the problem of keyphrase extraction as a classification task, and uses a combination of statistical and computational linguistics techniques, a new set of attributes, and a new machine learning method to distinguish keyphrases from non-keyphrases. The experiments indicate that this algorithm performs better than other keyphrase extraction tools and that it significantly outperforms Microsoft Word 2000's AutoSummarize feature. The domain independence of this algorithm has also been confirmed in our experiments.
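The algorithm's attribute set and learning method are not detailed in the abstract; as a hedged stand-in, the sketch below shows the typical first stage of such systems, generating candidate phrases and ranking them, with a simple frequency score in place of the learned classifier (stop-word list and names are assumptions):

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "to", "and", "is", "in", "for", "on"}

def candidate_phrases(text: str, max_len: int = 2):
    """Candidate keyphrases: word n-grams containing no stop word."""
    words = re.findall(r"[a-z]+", text.lower())
    phrases = []
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            gram = words[i:i + n]
            if not any(w in STOP for w in gram):
                phrases.append(" ".join(gram))
    return phrases

def top_keyphrases(text: str, k: int = 3):
    """Rank candidates by frequency; a real system would score each
    candidate with a trained classifier instead."""
    counts = Counter(candidate_phrases(text))
    return [p for p, _ in counts.most_common(k)]

phrases = top_keyphrases(
    "keyphrase extraction helps readers; "
    "keyphrase extraction is a classification task"
)
```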
Abstract: Machine Translation (hereafter referred to in this
document as "MT") has faced many complex problems since its
origination. Extracting multiword expressions is one of these
problems. Finding multiword expressions while translating a sentence
from English into Urdu with existing solutions takes a lot of time
and occupies system resources. We have designed a simple relational
data approach, in which we simply set a bit in the dictionary
(database) for each multiword entry, to find and handle multiword
expressions. This approach handles multiword expressions efficiently.
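The relational schema is described only at a high level; the sketch below illustrates the idea with an in-memory dictionary whose rows carry a multiword flag bit, consulted by a longest-match lookup (the entries and glosses are invented examples, not the actual English-Urdu dictionary):

```python
# Illustrative rows: headword -> (multiword bit, Urdu gloss).
DICTIONARY = {
    "kick": (0, "laat maarna"),
    "kick the bucket": (1, "mar jaana"),
    "bucket": (0, "balti"),
}

def lookup(tokens, start):
    """Longest-match lookup so that flagged multiword entries win
    over word-by-word translation."""
    for end in range(min(len(tokens), start + 3), start, -1):
        phrase = " ".join(tokens[start:end])
        if phrase in DICTIONARY:
            bit, gloss = DICTIONARY[phrase]
            return phrase, bit, gloss, end
    return tokens[start], 0, None, start + 1

tokens = "he will kick the bucket".split()
phrase, bit, gloss, _ = lookup(tokens, 2)
```

Because the lookup tries the longest span first, the entry whose multiword bit is set is matched as a unit instead of translating "kick", "the", and "bucket" separately.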
Abstract: Web applications have become very complex and crucial, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering), so the scientific community has focused its attention on Web application design, development, analysis, and testing, studying and proposing methodologies and tools. This paper proposes an approach to automatic multi-dimensional concern mining for Web applications, based on concept analysis, impact analysis, and token-based concern identification. This approach lets the user analyse and traverse the Web software relevant to a particular concern (concept, goal, purpose, etc.) via multi-dimensional separation of concerns, in order to document, understand, and test Web applications. This technique was developed in the context of the WAAT (Web Applications Analysis and Testing) project. A semi-automatic tool to support this technique is currently under development.
Abstract: Problem solving has traditionally been one of the principal research areas for artificial intelligence. Yet, although artificial intelligence reasoning techniques have been employed in several product support systems, the benefit of integrating product support, knowledge engineering, and problem solving is still unclear. This paper studies the synergy of these areas and proposes a knowledge engineering framework that integrates product support systems and artificial intelligence techniques. The framework includes four spaces: data, problem, hypothesis, and solution. The data space incorporates the knowledge needed for structured reasoning to take place, the problem space contains representations of problems, and the hypothesis space applies a multimodal reasoning approach to produce appropriate solutions in the form of virtual documents. The solution space is used as the gateway between the system and the user. The proposed framework enables the development of product support systems in smaller, more manageable steps, while the combination of different reasoning techniques provides a way to overcome the lack of documentation resources.
Abstract: Knowledge of an organization does not merely reside
in structured form of information and data; it is also embedded in
unstructured form. The discovery of such knowledge is particularly
difficult as it is dynamic, scattered, massive, and multiplying at
high speed. Conventional methods of managing
unstructured information are considered too resource demanding and
time consuming to cope with the rapid information growth.
In this paper, a Multi-faceted and Automatic Knowledge
Elicitation System (MAKES) is introduced for the purpose of
discovery and capture of organizational knowledge. A trial
implementation has been conducted in a public organization to
achieve the objective of decision capture and navigation from a
number of meeting minutes, which are autonomously organized,
classified, and presented in a multi-faceted taxonomy map at both
document and content level. Key concepts such as critical decisions
made, key knowledge workers, knowledge flows, and the relationships
among them are elicited and displayed in predefined knowledge models
and maps. Hence, the structured knowledge can be retained,
shared and reused.
Conducting Knowledge Management with MAKES reduces work
in searching and retrieving the target decision, saves a great deal of
time and manpower, and also enables an organization to keep pace
with the knowledge life cycle. This is particularly important when
the amount of unstructured information and data grows extremely
quickly. This system approach of knowledge management can
accelerate value extraction and creation cycles of organizations.
Abstract: Campus sustainability is the goal of a university striving for sustainable development. This study found that, of 17 popular approaches, two comprehensive campus sustainability assessment frameworks have been developed in the context of Sustainability in Higher Education (SHE) and used by many university campuses around the world: the Sustainability Tracking, Assessment and Rating System (STARS) and the Campus Sustainability Assessment Framework (CSAF), which are more comprehensive than the others. Therefore, the researchers examined the aspects and elements used by CSAF and STARS in order to develop a campus sustainability assessment framework for Universiti Kebangsaan Malaysia (UKM). Document analysis found that CSAF and STARS do not focus on physical development, especially the construction industry, as a key element of campus sustainability assessment. This finding is in accordance with the Sustainable UKM Programme, which consists of three main components: sustainable community, ecosystem, and physical development.
Abstract: Walking, as a type of non-motorized transportation, has
various social, economic, and environmental advantages. Today,
different aspects of sustainable development are emphasized, and the
promotion of sustainable transportation modes is pursued in line with
this approach. Therefore, the objective of this research is to
explore the relationship between walking and sustainable urban
transportation. For this article, the most important resources on the
characteristics of walking were surveyed via a documentary method.
After explaining the concept of sustainable transportation and its
indicators, and drawing on the viewpoints of transportation experts
in Tehran, the capital and largest city of Iran, different modes of
urban transportation were compared against each criterion and against
each other, and were analyzed using the AHP method. The results of
this study indicate that walking is the most sustainable mode of
inner-city transportation.
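The AHP step can be illustrated with the row geometric-mean approximation of the priority vector; the pairwise comparison values below are invented for illustration and are not the study's expert judgments:

```python
from math import prod

# Illustrative 3x3 pairwise comparison matrix on one sustainability
# criterion (walking vs. bus vs. private car); values are invented.
matrix = [
    [1.0, 3.0, 5.0],      # walking
    [1 / 3, 1.0, 3.0],    # bus
    [1 / 5, 1 / 3, 1.0],  # private car
]

def ahp_weights(m):
    """Priority weights via the row geometric mean, normalized to 1."""
    geo = [prod(row) ** (1 / len(row)) for row in m]
    total = sum(geo)
    return [g / total for g in geo]

weights = ahp_weights(matrix)  # walking receives the largest weight
```

In a full AHP study, such weights are computed per criterion and aggregated by the criteria weights; here a single matrix suffices to show the mechanics.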
Abstract: Indigenous Knowledge (IK) has many social and
economic benefits. However, IK is at risk of extinction due to the
difficulty of preserving it, as most IK remains undocumented. This
study aims to design a model of the factors
affecting the adoption of Information and Communication
Technologies (ICTs) for the preservation of IK. The proposed model
is based on theoretical frameworks on ICT adoption. It was designed
following a literature review of ICT adoption theories for households,
and of the factors affecting ICT adoption for IK. The theory that
best fitted all the factors was then chosen as the basis for the
proposed model. This study found that the Model of Adoption of
Technology in Households (MATH) is the most suitable theoretical
framework for modeling ICT adoption factors for the preservation of
IK.