Abstract: This analysis concentrates on the productivity trend of the knowledge management literature indexed under the subject "knowledge management" in the SSCI database. Its purpose is to summarize trend information for knowledge management researchers, since core knowledge tends to be concentrated in core categories. The results indicate that the productivity of literature on "knowledge management" is still increasing rapidly, and the trend is demonstrated across different categories, including author, country/territory, institution name, document type, language, publication year, and subject area. By focusing on the right categories, researchers can locate the core research information. This implies that the phenomenon of "success breeds success" is more common in higher-quality publications.
Abstract: LabVIEW and SIMULINK are two of the most widely used graphical programming environments for designing digital signal processing and control systems. Unlike conventional text-based programming languages such as C, C++, and MATLAB, graphical programming involves block-based code development, allowing a more efficient mechanism for building and analyzing control systems. In this paper, a LabVIEW environment has been employed as a graphical user interface for monitoring the operation of a controlled distillation column, visualizing both the closed-loop performance and the user-selected control conditions, while the column dynamics have been modeled in the SIMULINK environment. This tool has been applied to the PID-based decoupled control of a binary distillation column. By means of such an integrated environment, the control designer is able to monitor and control the plant behavior and optimize the response when both the quality improvement of the distillation products and the efficiency of operation are considered.
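The paper's controllers live in the LabVIEW/SIMULINK environment, which the abstract does not reproduce. As a language-neutral illustration of the kind of loop each decoupled channel runs, here is a discrete PID controller in Python; the gains, sampling time, and first-order plant below are invented for illustration and are not taken from the column model.

```python
class PID:
    """Textbook discrete PID: u = kp*e + ki*∫e dt + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order plant standing in for one loop of the column:
# x' = (u - x) / tau, simulated with forward Euler.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x, tau = 0.0, 0.5
for _ in range(2000):          # 20 s of simulated time
    u = pid.step(1.0, x)       # drive x to the setpoint 1.0
    x += 0.01 * (u - x) / tau
```

The integral term removes the steady-state error, so after the simulated 20 s the plant output sits at the setpoint; in the paper's setting the same loop structure is monitored through the LabVIEW front panel instead of a script.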
Abstract: The set covering problem is a classical problem in computer science and complexity theory. It has many applications, such as airline crew scheduling, facility location, vehicle routing, and assignment problems. In this paper, three different techniques are applied to solve the set covering problem. First, a mathematical model of the set covering problem is introduced and solved using the optimization solver LINGO. Second, the Genetic Algorithm Toolbox available in MATLAB is used to solve the problem. Lastly, an ant colony optimization method is programmed in the MATLAB language. Results obtained from these methods are presented in tables. The benchmark problems available in the open literature are used to assess the performance of the three techniques.
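The abstract names LINGO, a GA toolbox, and ant colony optimization as solvers without showing code. For readers new to the problem, the classic greedy heuristic (a simpler technique than the three used in the paper, with a well-known ln(n) approximation guarantee) can be sketched as follows; the instance is a toy example.

```python
def greedy_set_cover(universe, subsets):
    """Repeatedly pick the subset covering the most still-uncovered
    elements; returns the indices of the chosen subsets."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)), key=lambda i: len(uncovered & subsets[i]))
        if not uncovered & subsets[best]:
            raise ValueError("instance is infeasible")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
cover = greedy_set_cover({1, 2, 3, 4, 5}, subsets)  # picks {1,2,3} then {4,5}
```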
Abstract: This research uses computational linguistics, an area of study that employs computers to process natural language, and aims at discerning the patterns that exist in the declarative sentences used in technical texts. The approach is mathematical, and the focus is on instructional texts found on web pages. The technique developed by the author, named the MAYA Semantic Technique, is organized into four stages. In the first stage, the parts of speech in each sentence are identified. In the second stage, the subject of the sentence is determined. In the third stage, MAYA performs a frequency analysis on the remaining words to determine the verb and its object. In the fourth stage, MAYA performs a statistical analysis to determine the content of the web page. The advantage of the MAYA Semantic Technique lies in its use of mathematical principles to represent grammatical operations, which improves processing speed and accuracy when applied to unambiguous text. The MAYA Semantic Technique is part of a proposed architecture for an entire web-based intelligent tutoring system. On a sample set of sentences, the partial semantics derived using the MAYA Semantic Technique were approximately 80% accurate. The system currently processes technical text in one domain, namely C++ programming, in which all the keywords and programming concepts are known and understood.
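The MAYA technique itself is the author's and is not specified in the abstract; as a toy stand-in for the stage that ranks remaining words by frequency, a minimal word-frequency pass over a page might look like this (the stopword list and sample sentences are assumptions for illustration):

```python
from collections import Counter

def frequency_analysis(sentences, stopwords=frozenset({"the", "a", "an", "to", "is", "of"})):
    """Rank content words across a page by raw frequency; the most
    frequent survivors are candidates for the key verb/object terms."""
    counts = Counter(
        w for s in sentences for w in s.lower().rstrip(".").split()
        if w not in stopwords
    )
    return counts.most_common()

page = ["Declare the variable before use.",
        "Initialize the variable to zero.",
        "Print the variable."]
ranked = frequency_analysis(page)   # "variable" dominates this toy page
```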
Abstract: Text categorization, the assignment of natural language documents to one or more predefined categories based on their semantic content, is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses the use of a decision tree classifier to initialize a multilayer neural network so as to improve text categorization accuracy. An adaptation of the algorithm is proposed in which a decision tree, from the root node down to its final leaves, is used to initialize the multilayer neural network. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with that of a multilayer neural network initialized by the traditional random method and with decision tree classifiers.
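The paper's exact tree-to-network mapping is not given in the abstract. The general idea behind such initializations can be illustrated on the smallest case: a depth-1 decision tree (a stump) maps onto one sigmoid unit whose large weight and bias approximate the threshold test. Everything below (gain, sample points) is an illustrative assumption, not the paper's algorithm.

```python
import math

def stump_predict(x, feature, threshold):
    """A depth-1 decision tree: class 1 iff x[feature] > threshold."""
    return 1 if x[feature] > threshold else 0

def init_from_stump(n_features, feature, threshold, gain=50.0):
    """Initialize a sigmoid unit to imitate the stump's split: a large
    weight on the split feature and bias -gain*threshold make the
    sigmoid a steep approximation of the threshold test."""
    w = [0.0] * n_features
    w[feature] = gain
    return w, -gain * threshold

def unit_predict(x, w, b):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

w, b = init_from_stump(n_features=2, feature=0, threshold=0.5)
points = [[0.2, 0.9], [0.7, 0.1], [0.49, 0.5], [0.51, 0.5]]
agree = all(unit_predict(p, w, b) == stump_predict(p, 0, 0.5) for p in points)
```

Starting training from weights that already encode the tree's decisions, rather than from random values, is what the paper credits for the accuracy gain; gradient descent then refines them.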
Abstract: Gröbner basis calculation forms a key part of computational commutative algebra and many other areas. One important ramification of the theory of Gröbner bases is that it provides a means to solve systems of non-linear equations. This is why it has become very important in areas where the solution of non-linear equations is needed, for instance in algebraic cryptanalysis and coding theory. This paper explores a parallel-distributed implementation of Gröbner basis calculation over GF(2). For this purpose, the Buchberger algorithm is used. OpenMP and MPI constructs in the C language have been used to implement the scheme. Relevant results are furnished to compare the performance of the standalone and hybrid (parallel-distributed) implementations.
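The paper's OpenMP/MPI code is not shown; to make the underlying computation concrete, here is a sequential toy Buchberger over GF(2) in Python. A polynomial is a frozenset of monomials (coefficient arithmetic mod 2 is just symmetric difference), a monomial is a tuple of exponents, and the order is graded lexicographic; top-reduction of S-polynomials is used, a standard correct variant.

```python
from itertools import combinations

def lead(p):                       # graded-lex leading monomial
    return max(p, key=lambda m: (sum(m), m))

def divides(m, n):
    return all(a <= b for a, b in zip(m, n))

def shift(m, p):                   # multiply every term of p by monomial m
    return frozenset(tuple(a + b for a, b in zip(m, t)) for t in p)

def quot(m, n):
    return tuple(a - b for a, b in zip(m, n))

def top_reduce(p, G):
    """Reduce the leading term of p by G until it is irreducible or p = 0."""
    p, reduced = frozenset(p), True
    while p and reduced:
        reduced, lp = False, lead(p)
        for g in G:
            if divides(lead(g), lp):
                p = p ^ shift(quot(lp, lead(g)), g)  # GF(2) subtraction = xor
                reduced = True
                break
    return p

def s_poly(f, g):
    lf, lg = lead(f), lead(g)
    l = tuple(max(a, b) for a, b in zip(lf, lg))     # lcm of lead monomials
    return shift(quot(l, lf), f) ^ shift(quot(l, lg), g)

def buchberger(polys):
    G = [frozenset(p) for p in polys if p]
    pairs = list(combinations(range(len(G)), 2))
    while pairs:
        i, j = pairs.pop()
        r = top_reduce(s_poly(G[i], G[j]), G)
        if r:                                        # nonzero remainder: new basis element
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G

# x^2 + y and x*y + x in GF(2)[x, y]; exponent tuples are (deg_x, deg_y).
f1 = {(2, 0), (0, 1)}
f2 = {(1, 1), (1, 0)}
G = buchberger([f1, f2])   # the basis also acquires y^2 + y
```

The pair queue in `buchberger` is exactly where the paper's parallelism applies: S-polynomial reductions of distinct pairs are independent and can be distributed across OpenMP threads or MPI ranks.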
Abstract: Processing data by computer and performing reasoning tasks over them is an important aim in computer science, and the Semantic Web is one step towards it. The current trend is to use ontologies to enrich information semantically. Huge amounts of domain-specific, unstructured online data need to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services, and they also want to use their native languages to express what they are searching for. In this paper, an ontology-based automated question answering system for the software test documentation domain is presented. The system allows users to enter a question about the domain in natural language and returns the exact answer to the question. Converting the natural language question into an ontology-based query is the challenging part of the system. To achieve this, a new algorithm for converting free text into an ontology-based search engine query is proposed. The algorithm is based on identifying the appropriate question type and parsing the words of the question sentence.
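The conversion algorithm itself is not detailed in the abstract. A hypothetical sketch of its first step, classifying the question type from the interrogative word and keeping the content words for the ontology query, might look like this (the type table and stopword list are assumptions, not the paper's):

```python
# Hypothetical mapping from interrogative word to the kind of
# ontology entity the generated query should return.
QUESTION_TYPES = {
    "what": "definition", "which": "instance", "who": "agent",
    "when": "time", "where": "location", "how": "procedure",
}

STOPWORDS = {"is", "are", "the", "a", "an", "of", "in", "do", "does"}

def analyze_question(question):
    """Return (question type, content keywords) for a free-text question."""
    words = question.lower().rstrip("?").split()
    qtype = QUESTION_TYPES.get(words[0], "unknown")
    keywords = [w for w in words[1:] if w not in STOPWORDS]
    return qtype, keywords

qtype, keywords = analyze_question("What is the purpose of regression testing?")
```

In a full system, `qtype` would select the query template and `keywords` would be matched against ontology classes and properties.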
Abstract: Location-aware computing is a type of pervasive computing that utilizes the user's location as a dominant factor in providing urban services and application-related usages. One important urban service is navigation instruction for wayfinders in a city, especially when the user is a tourist. The services presented to tourists should provide adapted, location-aware instructions. In order to achieve this goal, the main challenge is to find spatially relevant objects and location-dependent information. The aim of this paper is the development of a reusable location-aware model to handle spatial relevancy parameters in urban location-aware systems. To this end, we utilized ontology as an approach that can manage spatial relevancy through a generic model. Our contribution is the introduction of an ontological model based on the principles of directed interval algebra. It is assumed that the basic elements of our ontology are the spatial intervals for the user and his or her related contexts; the relationships between them model the spatial relevancy parameters. The implementation language for the model is OWL, a web ontology language. The achieved results show that our proposed location-aware model and the application adaptation strategies provide appropriate services for the user.
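Directed interval algebras classify how two intervals on an axis relate qualitatively. The paper's full relation set is richer than shown here; as a small Allen-style subset for illustration, the relation between two 1-D intervals can be computed as:

```python
def interval_relation(a, b):
    """Qualitative relation between intervals a = (start, end) and
    b = (start, end) with start < end; a reduced Allen-style set."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a == b:   return "equals"
    if a1 >= b1 and a2 <= b2: return "during"
    if b1 >= a1 and b2 <= a2: return "contains"
    return "overlaps"

rel = interval_relation((0, 5), (3, 9))   # partial overlap
```

In an ontological encoding such as the paper's, each such relation would become an object property between interval individuals, letting a reasoner infer spatial relevancy between the user's interval and a context's interval.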
Abstract: Information sharing and gathering are important in this era of rapid technological advancement. The existence of the WWW has caused a rapid explosion of information. Readers are overloaded with too many lengthy text documents when they would prefer shorter versions, and the oil and gas industry cannot escape this predicament. In this paper, we develop an automated text summarization system, known as AutoTextSumm, to extract the salient points of oil and gas drilling articles by incorporating a statistical approach, keyword identification, synonyms, and sentence position. In this study, we conducted interviews with petroleum engineering experts and English language experts to identify the most commonly used keywords in the oil and gas drilling domain. The performance of AutoTextSumm is evaluated using the formulae for precision, recall, and F-score. Based on the experimental results, AutoTextSumm has produced satisfactory performance, with an F-score of 0.81.
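The abstract names the cues AutoTextSumm combines (keywords and sentence position) and the F-score metric, without giving formulas for the combination. A minimal sketch of both, with illustrative weights and sentences that are not the system's own, could be:

```python
def score_sentences(sentences, keywords):
    """Score each sentence by domain-keyword hits plus a position bonus
    (earlier sentences score higher); weights here are illustrative."""
    scores, n = [], len(sentences)
    for i, s in enumerate(sentences):
        hits = sum(1 for w in s.lower().split() if w.strip(".,") in keywords)
        position_bonus = (n - i) / n          # first sentence gets 1.0
        scores.append(hits + position_bonus)
    return scores

def f_score(precision, recall):
    """Harmonic mean of precision and recall, the evaluation metric used."""
    return 2 * precision * recall / (precision + recall)

sents = ["Drilling mud controls well pressure.",
         "The weather was pleasant.",
         "Mud weight is adjusted during drilling."]
scores = score_sentences(sents, {"drilling", "mud", "pressure"})
```

A summarizer then keeps the top-scoring sentences; precision and recall compare the kept set against expert-selected salient sentences.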
Abstract: This study explores the perceptions of English as a Foreign Language (EFL) learners on using computer-mediated communication (CMC) technology in their learning of English. The data consist of observations of both the synchronous and asynchronous communication that participants engaged in over a period of four months, covering online and offline communication protocols, together with open-ended interviews and reflection papers composed by the participants. Content analysis of the interview data and the written documents listed above, along with member checking and triangulation techniques, are the major data analysis strategies. The findings suggest that the participants generally did not benefit from computer-mediated communication in terms of its effect on learning a foreign language. Participants regarded the nature of CMC as artificial, a pseudo-communication that did not aid their authentic communication skills in English. The results of this study shed light on the insufficient and inconclusive findings that most quantitative CMC studies have previously generated.
Abstract: This article discusses superordinate national identity as a means for immigrants' integration into democratic polities. It is suggested that a superordinate national identity perceived as inclusive, both by immigrants and by the native population, would be conducive to such integration. Command of the dominant language of society is seen as the most important of the inclusive criteria; other such criteria are respect for the country's political institutions and a feeling of belonging to the country where one lives. The argument is supported by data from a recent study among 1000 secondary school students of 'Swedish' and non-'Swedish' backgrounds, showing a majority in favour of inclusive criteria for 'Swedishness'.
Abstract: Machine Translation (hereafter referred to as "MT") has faced many complex problems since its origination, and extracting multiword expressions is one of them. Finding multiword expressions while translating a sentence from English into Urdu with existing solutions takes a lot of time and occupies considerable system resources. We have designed a simple relational data approach, in which we set a bit in the dictionary (database) for each multiword entry, to find and handle multiword expressions. This approach handles multiwords efficiently.
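The abstract's idea is a flag bit per headword that tells the lookup whether it is worth attempting longer matches at all. A toy version of that relational scheme (the schema, the English example phrases, and the match lengths are all illustrative assumptions; the paper's dictionary pairs English with Urdu):

```python
# Each headword row carries a flag bit marking whether it can start a
# multiword expression, so lookup decides in O(1) whether to try phrases.
DICTIONARY = {
    # headword: (is_multiword_start, known multiword expressions)
    "kick":  (True,  {"kick the bucket", "kick off"}),
    "table": (False, set()),
}

def translate_lookup(words, i):
    """Return the longest dictionary match starting at position i."""
    entry = DICTIONARY.get(words[i])
    if entry and entry[0]:                  # bit set: try multiwords first
        for length in (3, 2):               # longest match wins
            phrase = " ".join(words[i:i + length])
            if phrase in entry[1]:
                return phrase
    return words[i]

match = translate_lookup(["he", "kick", "the", "bucket"], 1)
```

Words whose bit is clear skip the phrase search entirely, which is where the claimed time saving comes from.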
Abstract: Quality costs are the costs associated with preventing, finding, and correcting defective work. Since the main language of corporate management is money, quality-related costs act as a means of communication between the staff of quality engineering departments and company managers. The objective of quality engineering is to minimize the total quality cost across the life of a product. Quality costs provide a benchmark against which improvement can be measured over time, giving a rupee-based report on quality improvement efforts, and they are an effective tool to identify, prioritize, and select quality improvement projects. A review of the literature showed that a simplified methodology was required for collecting quality cost data in the manufacturing industry. A quantified standard methodology is therefore proposed for collecting data on the various elements of the quality cost categories in manufacturing. In the light of the research carried out so far, it is also felt necessary to standardize the cost elements in each of the prevention, appraisal, internal failure, and external failure categories. Here an attempt is made to standardize the various cost elements applicable to the manufacturing industry, and data are collected using the proposed quantified methodology. The paper discusses a case study carried out in the luggage manufacturing industry.
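The four categories named above roll up into a single total cost of quality. As a sketch of the kind of aggregation the proposed data-collection methodology feeds, with hypothetical monthly rupee figures that are not from the case study:

```python
# Illustrative (hypothetical) monthly quality-cost data, in rupees,
# grouped into the four standard categories discussed in the paper.
quality_costs = {
    "prevention":       {"training": 40_000, "process planning": 25_000},
    "appraisal":        {"incoming inspection": 30_000, "final testing": 20_000},
    "internal failure": {"scrap": 55_000, "rework": 35_000},
    "external failure": {"warranty claims": 60_000, "returns handling": 15_000},
}

category_totals = {c: sum(e.values()) for c, e in quality_costs.items()}
total_cost_of_quality = sum(category_totals.values())
```

Tracking these totals period over period gives the rupee-based improvement benchmark the abstract describes.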
Abstract: This paper describes the evolution of language politics and the part played by political leaders, with reference to the Dravidian parties in Tamil Nadu. It explores the interesting evolution from separatism to coalition in sustaining the values of parliamentary democracy and federalism. The appropriation of language politics can be fully ascribed to the DMK leadership under Annadurai and Karunanidhi. For them, the Tamil language is a self-determining power, a terrain of nationhood, and a perennial source of social and political power. The DMK remains a symbol of Tamil nationalism, playing language politics in the interest of the Tamils. Though electoral alliances largely determine electoral success, language politics still occupies significant space in the politics of Tamil Nadu. Ironically, the DMK has moved from the periphery to the centre, both to gain national recognition for the Tamils and to maximize its own power. The evolution can be seen in two major phases, language politics for party building and language politics for state building, with three successive political processes: language politics in the processes of separatism, representative politics, and coalition. The much-pronounced Dravidian Movement has been radical enough to democratize its party ideology so as to survive within the spirit of parliamentary democracy. This has secured its own rewards in terms of political power, and political power provides the means to achieve the social and political goals of the party. Language politics and the pattern of leadership have actualized this trend, even as the movement has shifted from separatism to coalition.
Abstract: Recently, the usefulness of Concept Abduction, a novel non-monotonic inference service for Description Logics (DLs), has been argued in the context of ontology-based applications such as semantic matchmaking and resource retrieval. Based on the tableau calculus, a method has been proposed to realize this reasoning task in ALN, a description logic that supports simple cardinality restrictions as well as other basic constructors. However, in many ontology-based systems, representing the ontology requires expressive formalisms for capturing domain-specific constraints, and this language is not sufficient. In order to increase the applicability of the abductive reasoning method in such contexts, this paper presents an extension of the tableaux-based algorithm for dealing with concepts represented in ALCQ, the description logic that extends ALN with full concept negation and qualified number restrictions.
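For readers unfamiliar with the inference service, the standard formulation of Concept Abduction from the DL literature, which the abstract presupposes, is:

```latex
% Concept Abduction Problem: given a (satisfiable) resource concept C and a
% request concept D in a DL L, with respect to a TBox T, find a hypothesis H
% that can be conjoined to C without contradiction so that the request is met:
\text{find } H \in \mathcal{L} \text{ such that }
C \sqcap H \not\sqsubseteq_{\mathcal{T}} \bot
\quad\text{and}\quad
C \sqcap H \sqsubseteq_{\mathcal{T}} D .
```

In matchmaking, $H$ reads as "what the resource would still have to promise" for the request $D$ to be satisfied; the paper's contribution is computing such $H$ when concepts are written in ALCQ rather than ALN.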
Abstract: Hand gestures are one of the typical means of non-verbal communication in sign language. They are most commonly used by people with hearing or speech impairments to communicate among themselves or with hearing people. Various sign language systems have been developed by manufacturers around the globe, but they are neither flexible nor cost-effective for end users. This paper presents a system prototype that is able to automatically recognize sign language, helping hearing people communicate more effectively with the hearing- or speech-impaired. The Sign-to-Voice system prototype, S2V, was developed using a feedforward neural network for two-sequence sign detection. Different sets of universal hand gestures were captured with a video camera and used to train the neural network for classification. The experimental results show that the neural network achieved satisfactory results for sign-to-voice translation.
Abstract: Automated operations based on voice commands will become more and more important in many applications, including robotics, maintenance operations, and others. However, voice command recognition rates drop considerably in non-stationary and chaotic noise environments. In this paper, we set out to significantly improve speech recognition rates in non-stationary noise environments. First, 298 Navy acronyms were selected for automatic speech recognition. Data sets were collected under four types of noise: factory, Buccaneer jet, babble noise in a canteen, and destroyer noise. Within each noise environment, four Signal-to-Noise Ratio (SNR) levels (5 dB, 15 dB, 25 dB, and clean) were introduced to corrupt the speech. Second, a new algorithm for estimating speech and non-speech regions was developed, implemented, and evaluated. Third, extensive simulations were carried out. It was found that the combination of the new algorithm, proper selection of the language model, and customized training of the speech recognizer on clean speech yielded very high recognition rates, between 80% and 90% across the four noisy conditions. Fourth, extensive comparative studies were also carried out.
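Corrupting speech at a target SNR, as in the 5/15/25 dB conditions above, means scaling the noise so the power ratio hits the requested level before mixing. A sketch of that standard step, with a synthetic tone and Gaussian noise standing in for real speech and the recorded noise types:

```python
import math
import random

def add_noise_at_snr(signal, noise, snr_db):
    """Scale `noise` so the mixture signal+noise has the requested
    signal-to-noise ratio in dB, then add it sample by sample."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    target_p_noise = p_signal / (10 ** (snr_db / 10))
    scale = math.sqrt(target_p_noise / p_noise)
    return [s + scale * n for s, n in zip(signal, noise)]

random.seed(0)
signal = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
noise = [random.gauss(0.0, 1.0) for _ in range(8000)]
noisy = add_noise_at_snr(signal, noise, snr_db=15)
```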
Abstract: Business Process Modeling (BPM) is the first and most important step in the business process management lifecycle. Graph-based formalisms and rule-based formalisms are the two predominant families on which process modeling languages are developed. BPM technology continues to face challenges in coping with dynamic business environments, where requirements and goals change constantly at execution time, and graph-based formalisms have difficulty reacting to dynamic changes in a Business Process (BP) at the level of runtime instances. In this research, an adaptive and flexible framework based on the integration of an object-oriented diagramming technique with the Petri net modeling language is proposed, in order to support change management techniques for BPM and to increase the capability of object-oriented modeling to represent dynamic changes in runtime instances. The proposed framework is applied in a higher education environment to achieve flexible, updatable, and dynamic BPs.
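Petri nets give BPs an executable token-game semantics: places hold tokens, and a transition fires when every input place is marked, consuming and producing tokens atomically. A minimal sketch of that semantics (the two-step approval net below is a toy, not the paper's higher-education model):

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds a token."""
    return all(marking.get(p, 0) >= 1 for p in transition["inputs"])

def fire(marking, transition):
    """Fire an enabled transition, returning the successor marking."""
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p in transition["inputs"]:
        m[p] -= 1
    for p in transition["outputs"]:
        m[p] = m.get(p, 0) + 1
    return m

submit  = {"inputs": ["draft"],     "outputs": ["submitted"]}
approve = {"inputs": ["submitted"], "outputs": ["approved"]}

m0 = {"draft": 1}
m1 = fire(m0, submit)
m2 = fire(m1, approve)   # token has moved draft -> submitted -> approved
```

Because the net is just data, transitions can be added or rewired between firings, which is the kind of runtime adaptability the framework targets.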
Abstract: As computer network technology becomes increasingly complex, it becomes necessary to place greater requirements on the validity of developing standards and of the resulting technology. Communication networks are based on a large number of protocols, whose validity has to be proved either individually or in an integral fashion. One strategy for achieving this is to apply the growing field of formal methods. Formal methods research defines systems in higher-order logic so that automated reasoning can be applied for verification. In this research we represent and implement a previously announced multicast protocol in the Prolog language so that certain properties of the protocol can be verified. It is shown that by using this approach some minor faults in the protocol were found and repaired. Describing the protocol as facts and rules has a further benefit: it leads to processable knowledge, which can be transferred as an ontology between systems in the KQML format. Since a Prolog program can extend its knowledge base at run time, this method can also be used to build a learning, intelligent network.
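The paper's Prolog encoding is not reproduced in the abstract. To convey the facts-and-rules flavor without Prolog, here is a naive forward-chaining sketch in Python; the membership and link facts and the single forwarding rule are invented for illustration, not taken from the multicast protocol.

```python
# Facts are tuples; rule variables are strings starting with "?".
facts = {("member", "a"), ("member", "b"), ("link", "a", "b")}

rules = [
    # member(X), member(Y), link(X, Y)  =>  can_forward(X, Y)
    ([("member", "?x"), ("member", "?y"), ("link", "?x", "?y")],
     ("can_forward", "?x", "?y")),
]

def matches(pattern, fact, bindings):
    """Unify a flat pattern against a fact; return extended bindings or None."""
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return b

def match_all(premises, facts, bindings):
    """Yield every binding satisfying all premises against the fact set."""
    if not premises:
        yield bindings
        return
    for fact in facts:
        b = matches(premises[0], fact, bindings)
        if b is not None:
            yield from match_all(premises[1:], facts, b)

def forward_chain(facts, rules):
    """Apply rules until no new facts are derivable (naive fixpoint)."""
    derived, changed = set(facts), True
    while changed:
        changed = False
        snapshot = frozenset(derived)
        for premises, conclusion in rules:
            for b in match_all(premises, snapshot, {}):
                new = tuple(b.get(t, t) for t in conclusion)
                if new not in derived:
                    derived.add(new)
                    changed = True
    return derived

derived = forward_chain(facts, rules)
```

A query then amounts to a membership test on `derived`; in Prolog the same check is a goal resolved against the clause database, and asserting new facts at run time gives the growing knowledge base the abstract mentions.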
Abstract: This preliminary study examines whether the learning environment influences the instructor's teaching strategies and the learners' in-class activities in a foreign language class at a university in Japan. The class under study was conducted in a computer room, while the majority of classes in the same course were offered in traditional classrooms without computers. The study also examines whether the unplanned blended learning environment enhanced, or worked against, the achievement of the course goals, by paying close attention to in-class artefacts such as computers. In the macro-level analysis, the course syllabus and the weekly itinerary of the course were examined; in the micro-level analysis, nonhuman actors in the environment were identified and analyzed to see how they influenced the learners' task processes. The results indicated that the students were heavily influenced by the presence of the computers, which led them to disregard some aspects of the intended learning objectives.