Abstract: The importance of formal specification in the software
life cycle is hardly lost on anyone. Formal specifications use
mathematical notation to describe the properties of an information
system precisely, without unduly constraining the way in which these
properties are achieved. Producing a correct, high-quality software
specification is not an easy task. This study concerns how a group of
specifiers can communicate with each other and work together to
prepare and produce a correct formal software specification. WBCS
has been implemented based mainly on the proposed computer-supported
cooperative work model and a survey of existing Web-based
collaborative writing tools. This paper aims to assess the
feasibility of executing the web-based collaboration process using
WBCS. The purpose of the conducted test is to evaluate the system as
a whole for functionality and fitness for use, based on the
evaluation test plan.
Abstract: Sudoku is a logic-based combinatorial puzzle game
that people of all ages enjoy playing. The challenging and
addictive nature of this game has made it ubiquitous. Most
magazines, newspapers, puzzle books, etc. publish lots of Sudoku
puzzles every day. These puzzles often come in different levels of
difficulty so that all people, from beginner to expert, can play the
game and enjoy it. Generating puzzles with different levels of
difficulty is a major concern of Sudoku designers. There are several
works in the literature which propose ways of generating puzzles
having a desirable level of difficulty. In this paper, we propose a
method based on constraint satisfaction problems to evaluate the
difficulty of the Sudoku puzzles. Then we propose a hill climbing
method to generate puzzles with different levels of difficulty.
Whereas other methods are usually capable of generating puzzles
with only a few difficulty levels, our method can be used to
generate puzzles with an arbitrary number of difficulty levels.
We test our method by generating puzzles with different levels of
difficulty and having a group of 15 people solve all the puzzles
while recording the time they spend on each puzzle.
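To make the search concrete, here is a minimal hill-climbing sketch in
Python. It assumes a complete `solution` grid and uses the fraction of
empty cells as a toy difficulty proxy; the paper's CSP-based evaluator
would replace that proxy, and a real generator would also reject
candidates that lose solution uniqueness.

```python
import random

def difficulty(puzzle):
    # Toy proxy: fraction of empty cells (0 marks empty). The paper's
    # CSP-based evaluator would supply a solver-derived score instead.
    return sum(cell == 0 for row in puzzle for cell in row) / 81.0

def neighbor(puzzle, solution):
    # Reveal or hide one randomly chosen cell.
    p = [row[:] for row in puzzle]
    r, c = random.randrange(9), random.randrange(9)
    p[r][c] = 0 if p[r][c] else solution[r][c]
    return p

def hill_climb(solution, target, iters=2000):
    best = [row[:] for row in solution]      # start fully revealed
    best_err = abs(difficulty(best) - target)
    for _ in range(iters):
        cand = neighbor(best, solution)
        # A real generator would also check uniqueness of the solution.
        err = abs(difficulty(cand) - target)
        if err <= best_err:                  # keep moves toward the target
            best, best_err = cand, err
    return best
```

Because the target is a continuous score, any number of distinct
difficulty levels can be requested, which mirrors the abstract's claim.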
Abstract: One of the major goals of Spoken Dialog Systems
(SDS) is to understand what the user utters.
In the SDS domain, the Spoken Language Understanding (SLU)
module classifies user utterances by means of pre-defined
conceptual knowledge. The SLU module is able to recognize only the
meanings previously included in its knowledge base. Due to the
vastness of that knowledge, storing the information is a very
expensive process.
Updating and managing the knowledge base are time-consuming
and error-prone processes because of the rapidly growing number of
entities like proper nouns and domain-specific nouns. This paper
proposes a solution to the problem of Named Entity Recognition
(NER) applied to an SDS domain. The proposed solution attempts to
automatically recognize the meaning associated with an utterance by
using the PANKOW (Pattern-based Annotation through Knowledge
On the Web) method at runtime.
The proposed method extracts information from the Web to
extend the SLU module's knowledge base and reduce the development
effort. In particular, the Google Search Engine is used to extract
information from the Facebook social network.
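As a rough illustration of the PANKOW idea, the sketch below scores
candidate concepts for an unknown entity by counting web occurrences of
instantiated patterns. The pattern set and the `web_hit_count` stub are
illustrative stand-ins; a real system would issue search-engine queries
(the paper uses the Google Search Engine).

```python
# PANKOW-style pattern scoring (sketch). Patterns and the hit-count
# stub are illustrative; replace web_hit_count with a real search query.

PATTERNS = [
    "{instance} is a {concept}",
    "{concept}s such as {instance}",
    "{instance} and other {concept}s",
]

MOCK_COUNTS = {"Paris is a city": 900, "Paris is a person": 40}

def web_hit_count(phrase):
    # Stand-in for a search-engine hit count.
    return MOCK_COUNTS.get(phrase, 0)

def rank_concept(instance, candidate_concepts):
    scores = {
        concept: sum(web_hit_count(p.format(instance=instance,
                                            concept=concept))
                     for p in PATTERNS)
        for concept in candidate_concepts
    }
    # The concept with the highest aggregate pattern count wins.
    return max(scores, key=scores.get)

print(rank_concept("Paris", ["city", "person"]))  # -> city
```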
Abstract: Web search engines are designed to retrieve and
extract information from web databases and to return dynamic
web pages. The Semantic Web is an extension of the current web
that includes semantic content in web pages. The main goal of the
Semantic Web is to improve the quality of the current web by
changing its contents into a machine-understandable form. Therefore,
the milestone of the Semantic Web is to have semantic-level
information in the web. Nowadays, people use different keyword-based search
engines to find the relevant information they need from the web.
But many of the words are polysemous. When these words are
used to query a search engine, it displays the Search Result Records
(SRRs) with different meanings. The SRRs with similar meanings are
grouped together based on Word Sense Disambiguation (WSD). In
addition, semantic annotation is performed to improve the quality of
the search result records. Semantic annotation is the process of
adding semantic metadata to web resources. Thus the grouped SRRs are
annotated, and a summary is generated that describes the information
in the SRRs. But automatic semantic
annotation is a significant challenge in the Semantic Web. Here,
ontology and knowledge-based representation are used to annotate
the web pages.
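As a hedged sketch of the WSD-based grouping step, the snippet below
buckets SRR snippets by the WordNet sense a polysemous query word takes
in each snippet, using NLTK's simplified Lesk as one possible
disambiguation method (the abstract does not name a specific WSD
algorithm):

```python
# Group search result records (SRRs) by the disambiguated sense of the
# query word. Requires the NLTK 'wordnet' and 'punkt' data packages.
from collections import defaultdict
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

def group_srrs_by_sense(query_word, srr_snippets):
    groups = defaultdict(list)
    for snippet in srr_snippets:
        sense = lesk(word_tokenize(snippet.lower()), query_word)
        groups[sense.name() if sense else "unknown"].append(snippet)
    return groups

srrs = ["The river bank was flooded after heavy rain.",
        "The bank approved the loan application."]
for sense, items in group_srrs_by_sense("bank", srrs).items():
    print(sense, items)
```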
Abstract: As various portable devices have been launched,
conducting smart business with them has become common. Since smart
business can use company-internal resources from an external remote
place, user authentication that can identify authentic users is an
important factor. The commonly used form of user authentication is a
method based on a user ID and password. In user authentication using
an ID and password, the user must see and enter the authentication
information himself or herself. In such an authentication system,
which depends on the user's vision, there is a threat of password
leaks through snooping while the user enters his or her
authentication information.
This study designed and produced a user authentication module
using an actuator to respond to the snooping threat.
Abstract: The growth of wireless devices affects the availability
of the limited frequencies or spectrum bands, since spectrum bands
are a natural resource that cannot be expanded.
Meanwhile, the licensed frequencies are idle most of the time.
Cognitive radio is one of the solutions to solve those problems.
Cognitive radio is a promising technology that allows the unlicensed
users known as secondary users (SUs) to access licensed bands
without causing interference to licensed users or primary users (PUs).
As cloud computing has become popular in recent years, cognitive
radio networks (CRNs) can be integrated with a cloud platform. One of
the important issues in CRNs is security. This becomes a problem
since CRNs use radio frequencies as a transmission medium and thus
share the same issues as wireless communication systems. Another
critical issue in CRNs is performance. Security has an adverse effect
on performance, and there are trade-offs between the two. The goal of
this paper is to investigate the security-performance trade-off in
CRNs supported by cloud platforms. Furthermore, Queuing
Network Models with preemptive-resume and preemptive-repeat-identical
priority are applied in this project to measure the impact of
security on performance in CRNs with or without a cloud platform. The
generalized exponential (GE) type distribution is used to reflect the
bursty inter-arrival and service times at the servers. The results show
that the best performance is obtained when security is disabled and
the cloud platform is enabled.
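For reference, the GE distribution used above is commonly
parameterised in the queueing literature as follows (a standard
textbook form, not quoted from the paper):

F(t) = P(W \le t) = 1 - \tau e^{-\tau \nu t}, \quad t \ge 0,
\qquad \tau = \frac{2}{C^2 + 1},

where \nu is the mean arrival or service rate and C^2 is the squared
coefficient of variation of the inter-arrival or service times;
C^2 > 1 yields the bursty traffic referred to in the abstract.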
Abstract: In this paper, an edge-strength-guided multiscale
retinex (EGMSR) approach is proposed for color image contrast
enhancement. In EGMSR, the pixel-dependent weight associated with
each pixel in the single-scale retinex output image is computed
according to the edge strength around that pixel, in order to avoid
over-enhancing the noise contained in smooth dark/bright
regions. Further, by fusing together the enhanced results of EGMSR
and adaptive multiscale retinex (AMSR), we can get a natural fused
image having high contrast and proper tonal rendition. Experimental
results on several low-contrast images have shown that our proposed
approach can produce natural and appealing enhanced images.
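For orientation, here is a minimal single-scale retinex sketch with a
Sobel-based edge weight. The weighting function is an illustrative
stand-in for the paper's edge-strength weight, and the channel is
assumed to be a float array in [0, 1]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def ssr(channel, sigma=80.0, eps=1e-6):
    # Classic single-scale retinex: log(image) - log(blurred image).
    return (np.log(channel + eps)
            - np.log(gaussian_filter(channel, sigma) + eps))

def edge_weight(channel):
    # Gradient magnitude normalised to [0, 1]: strong edges ~1,
    # smooth regions ~0, so noise there is not amplified.
    g = np.hypot(sobel(channel, axis=0), sobel(channel, axis=1))
    return g / (g.max() + 1e-6)

def egmsr_like(channel, sigma=80.0):
    r = ssr(channel, sigma)
    r = (r - r.min()) / (r.max() - r.min() + 1e-6)  # rescale to [0, 1]
    w = edge_weight(channel)
    # Blend: retinex output on edges, original values in smooth areas.
    return w * r + (1.0 - w) * channel
```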
Abstract: Consumer-to-Consumer (C2C) E-commerce has been
growing at a very high speed in recent years. Since identical or
nearly identical kinds of products compete with one another by
relying on keyword search in C2C E-commerce, some sellers describe their
products with spam keywords that are popular but are not related to
their products. Though such products get more chances to be retrieved
and selected by consumers than those without spam keywords,
the spam keywords mislead the consumers and waste their time.
This problem has been reported in many commercial services like
eBay and Taobao, but there has been little research on solving it.
As a solution, this paper proposes a method to classify whether the
keywords of a product are spam or not. The
proposed method assumes that a keyword for a given product is
more reliable if the keyword is observed commonly in specifications
of products which are the same or the same kind as the given
product. This is because the hierarchical category of a product
is in general determined precisely by the seller of the product, and
so is the specification of the product. Since higher layers of the
hierarchical category represent more general kinds of products, a
reliability degree is determined differently according to the layer.
Hence, reliability degrees from different layers of a hierarchical
category become features for keywords, and they are used together
with features derived only from specifications for classification of
the keywords. Support Vector Machines are adopted as the basic
classifier using the features, since they are powerful and widely
used in many classification tasks. In
the experiments, the proposed method is evaluated with a gold-standard
dataset from Yi-han-wang, a Chinese C2C E-commerce site,
and is compared with a baseline method that does not consider
the hierarchical category. The experimental results show that the
proposed method outperforms the baseline in F1-measure, which
proves that spam keywords are effectively identified by a hierarchical
category in C2C E-commerce.
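The layered reliability feature can be sketched as follows; the data
layout (`category` as a path list, `spec` as keyword text) and the
layer depth are illustrative assumptions, with scikit-learn's SVC
standing in for the SVM:

```python
from sklearn.svm import SVC

def layer_reliability(keyword, product, products, layer):
    # Products sharing the category path with `product` up to `layer`.
    peers = [p for p in products
             if p["category"][:layer] == product["category"][:layer]]
    if not peers:
        return 0.0
    return sum(keyword in p["spec"] for p in peers) / len(peers)

def keyword_features(keyword, product, products, depth=3):
    # One reliability degree per category layer, shallow to deep.
    return [layer_reliability(keyword, product, products, layer)
            for layer in range(1, depth + 1)]

# With labelled examples (X = feature vectors, y = 1 for spam,
# 0 for genuine), an SVM is trained as the classifier:
# clf = SVC(kernel="rbf").fit(X, y)
```

A keyword that rarely appears among same-category peers gets low
reliability at every layer, which is the signal the classifier uses to
flag spam.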
Abstract: In this paper the issue of dimensionality reduction is
investigated in finger vein recognition systems using kernel Principal
Component Analysis (KPCA). One aspect of KPCA is finding the
most appropriate kernel function for finger vein recognition, as
there are several kernel functions that can be used within PCA-based
algorithms. In this paper, however, another aspect of PCA-based
algorithms, particularly KPCA, is investigated: the dimension of the
feature vector, which is of importance especially when it comes to
real-world applications and usage of such algorithms. This means
that a fixed feature-vector dimension has to be set to reduce the
dimension of the input and output data and to extract the features
from them. Then a classifier is applied to classify the data and
make the final decision. We analyze KPCA (with Polynomial, Gaussian,
and Laplacian kernels) in detail in
this paper and investigate the optimal feature extraction dimension in
finger vein recognition using KPCA.
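A minimal sketch of such a dimension sweep, using scikit-learn's
KernelPCA as one possible implementation (the random matrix stands in
for finger-vein feature vectors):

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import laplacian_kernel

X = np.random.rand(100, 256)  # stand-in for finger-vein features

# The Laplacian kernel is supplied as a precomputed Gram matrix since
# it is not one of KernelPCA's built-in kernel strings.
configs = [
    ("polynomial", "poly", X),
    ("gaussian", "rbf", X),
    ("laplacian", "precomputed", laplacian_kernel(X)),
]
for name, kernel, data in configs:
    for n_components in (16, 32, 64):
        Z = KernelPCA(n_components=n_components,
                      kernel=kernel).fit_transform(data)
        # Z is then fed to a classifier; recognition accuracy is
        # compared across kernels and feature-vector dimensions.
        print(name, n_components, Z.shape)
```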
Abstract: The use of Semantic Web technology increases
day by day due to the rapid growth of web pages.
Many standard formats are available to store the semantic web data.
The most popular format is the Resource Description Framework
(RDF). Querying large RDF graphs becomes a tedious procedure
as the amount of data vastly increases, and query optimization
becomes an issue for large RDF graphs.
Choosing the best query plan reduces the amount of query execution
time. To address this problem, nature inspired algorithms can be used
as an alternative to the traditional query optimization techniques. In
this research, the optimal query plan is generated by the proposed
SAPSO algorithm which is a hybrid of Simulated Annealing (SA)
and Particle Swarm Optimization (PSO) algorithms. The proposed
SAPSO algorithm has the ability to find locally optimal results
while avoiding the problem of local minima. Experiments were
performed on different datasets by changing the number of predicates
and the amount of data. The proposed algorithm gives improved
results compared to existing algorithms in terms of query execution
time.
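A hedged sketch of how such a hybrid can be wired together is shown
below: a standard PSO velocity update plus a simulated-annealing
acceptance test on each particle's personal best. The cost function is
a stand-in for an RDF query-plan cost model, which the abstract does
not detail:

```python
import math, random

def sapso(cost, dim, n_particles=20, iters=200, t0=1.0, cooling=0.97):
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    temp = t0
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            delta = cost(pos[i]) - cost(pbest[i])
            # SA acceptance: keep improvements, and occasionally accept
            # worse positions with probability exp(-delta / temp).
            if delta < 0 or random.random() < math.exp(-delta / temp):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
        temp *= cooling  # cool down
    return gbest

# Example: minimise a simple quadratic standing in for plan cost.
print(sapso(lambda x: sum(v * v for v in x), dim=3))
```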
Abstract: In this paper, GSM signal strength was measured in
order to detect the type of signal fading phenomenon, using a
one-dimensional multilevel wavelet residual method and neural network
clustering to determine the average GSM signal strength received in
the study area. The wavelet residual method predicted that the GSM
signal experienced slow fading and attenuated with an MSE of 3.875 dB.
The neural network clustering revealed that mostly -75 dB, -85 dB and
-95 dB were received. This means that the signal strength received in
the study area is weak.
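One way to realise a one-dimensional multilevel wavelet residual is
sketched below with PyWavelets; the wavelet family, decomposition
level, and synthetic RSSI trace are illustrative assumptions:

```python
import numpy as np
import pywt

def wavelet_residual(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Zero the detail coefficients to keep only the slow-fading trend.
    trend_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    trend = pywt.waverec(trend_coeffs, wavelet)[: len(signal)]
    return trend, signal - trend

rssi = -85 + 5 * np.random.randn(1024)  # stand-in for measured dB values
trend, residual = wavelet_residual(rssi)
print(float(np.mean(residual ** 2)))    # MSE of the residual
```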
Abstract: The bureaucracy reform program drives the Indonesian
government to change its management in order to enhance
organizational performance. Information technology has become one of
the strategic areas that organizations try to improve. A knowledge
management system is an information system supporting knowledge
management implementation in government; it is categorized under the
people perspective because it depends heavily on human interaction
and participation. A strategic plan for developing a knowledge
management system can be determined using several information system
strategy methods. This research was conducted to identify the types
of strategic methods for information systems, the stages of activity
in each method, and their strengths and weaknesses. A literature
review was used to identify and classify the strategic methods,
differentiate method types, and categorize common activities,
strengths, and weaknesses. The result of this research is a
determination and comparison of six strategic information system
methods; Balanced Scorecard and Risk Analysis are believed to be the
most commonly used strategic methods with the greatest strengths.
Abstract: ‘Steganalysis’ is one of the challenging and attractive interests for researchers, given the development of information hiding techniques. It is the procedure of detecting hidden information in a stego-object created by a known steganographic algorithm. In this paper, a novel feature-based image steganalysis technique is proposed. Various statistical moments are used along with some similarity metrics. The proposed steganalysis technique is designed based on transformation in four wavelet domains: Haar, Daubechies, Symlets and Biorthogonal. Each domain is subjected to various classifiers, namely K-nearest-neighbor, K* classifier, locally weighted learning, naive Bayes, neural networks, decision trees and support vector machines. The experiments are performed on a large set of pictures freely available in an image database. The system also predicts different hidden-message lengths.
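The feature-extraction side can be sketched as below: statistical
moments of detail-subband coefficients across the four wavelet
families named in the abstract. The specific moments and the
decomposition level are illustrative assumptions:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

# Haar, Daubechies, Symlets, Biorthogonal (representative members).
WAVELETS = ["haar", "db4", "sym4", "bior2.2"]

def moment_features(image, level=2):
    feats = []
    for w in WAVELETS:
        coeffs = pywt.wavedec2(image, w, level=level)
        for band in coeffs[1:]:          # detail subbands (cH, cV, cD)
            for sub in band:
                flat = np.ravel(sub)
                feats += [flat.mean(), flat.var(),
                          skew(flat), kurtosis(flat)]
    # The vector is then fed to k-NN, SVM, etc. for classification.
    return np.array(feats)

print(moment_features(np.random.rand(64, 64)).shape)
```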
Abstract: Recently, GPS data has been used in many studies to
automatically reconstruct travel patterns for trip surveys. The aim
is to minimize the use of questionnaire surveys and travel diaries so
as to reduce their negative effects. In this paper, data acquired
from the GPS and accelerometer embedded in smartphones are utilized
to predict the mode of transportation used by the phone carrier. For
prediction,
Support Vector Machine (SVM) and Adaptive boosting (AdaBoost)
are employed. Moreover, a unique method to improve the prediction
results from these algorithms is also proposed. Results suggest that
the prediction accuracy of AdaBoost after the improvement is better
than that of the other approaches.
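A minimal comparison of the two classifiers can be sketched as
follows; the feature names and the random data are illustrative
stand-ins for the GPS/accelerometer features the paper derives:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in features per trip segment:
# [mean speed, max speed, mean |accel|, var |accel|]
X = rng.random((300, 4))
y = rng.integers(0, 3, 300)  # 0=walk, 1=bicycle, 2=car (stand-ins)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=100))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```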
Abstract: In remote sensing, shadows cause problems in many
applications such as change detection and classification. Shadows
are cast by elevated objects and can thus directly affect the
accuracy of extracted information. For these reasons, it is very
important to detect shadows, particularly in urban
high-spatial-resolution imagery, where they create a significant
problem. This paper focuses on automatic shadow detection based on a
new spectral index for multispectral imagery, known as the Shadow
Detection Index (SDI). The new spectral index was tested on different
areas of WorldView-2 images, and the results demonstrated that it can
extract shadows effectively and automatically with an accuracy of 94%.
Furthermore, the new shadow detection index improved road
extraction from 82% to 93%.
Abstract: The need to extract R&D keywords from issues and use
them to retrieve R&D information is increasing rapidly. However, it is
difficult to identify related issues or distinguish between them.
Even when the similarity between issues cannot be identified
directly, issues that share the same R&D keywords can be determined
with an R&D lexicon. In detail, the R&D keywords associated with a
particular issue imply the key technology elements needed to solve
that issue.
Furthermore, the relationship among issues that share the same
R&D keywords can be shown in a more systematic way by clustering
them according to keywords. Thus, sharing R&D results and reusing
R&D technology can be facilitated. Indirectly, redundant investment
in R&D can be reduced as the relevant R&D information can be shared
among corresponding issues and the reusability of related R&D can be
improved. Therefore, a methodology to cluster issues from the
perspective of common R&D keywords is proposed to satisfy these
demands.
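One simple way to realise the keyword-based clustering is sketched
below, using Jaccard similarity over keyword sets as an illustrative
grouping rule (the abstract does not specify the clustering
algorithm):

```python
# Cluster issues by shared R&D keywords (hedged sketch).
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_issues(issue_keywords, threshold=0.3):
    # issue_keywords: dict mapping issue id -> set of R&D keywords.
    clusters = []
    for issue, kws in issue_keywords.items():
        for cluster in clusters:
            if jaccard(kws, cluster["keywords"]) >= threshold:
                cluster["issues"].append(issue)
                cluster["keywords"] |= kws
                break
        else:
            clusters.append({"issues": [issue], "keywords": set(kws)})
    return clusters

issues = {"i1": {"battery", "anode"}, "i2": {"battery", "cathode"},
          "i3": {"lidar", "slam"}}
print(cluster_issues(issues))  # i1 and i2 group; i3 stays separate
```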
Abstract: An extensive amount of work has been done in data
clustering research under the unsupervised learning technique in Data
Mining during the past two decades. Moreover, several approaches
and methods have emerged focusing on clustering diverse data
types, features of cluster models, and similarity rates of clusters.
However, no single clustering algorithm consistently extracts
efficient clusters. Consequently, in order to address this issue, a
new technique called the Cluster Ensemble method has emerged. This
new approach tends to be the alternative method for the cluster
analysis problem. The main objective of the Cluster Ensemble is to
aggregate diverse clustering solutions in such a way as to attain
accuracy and to improve on the quality of the individual clustering
algorithms. Given the massive and rapid development of new methods in
data mining, a critical analysis of existing techniques and future
directions is essential. This paper presents a comparative analysis
of different cluster ensemble methods along with their methodologies
and salient features. This analysis will be useful to the community
of clustering experts and will help in deciding the most appropriate
method for the problem at hand.
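To make the ensemble idea concrete, here is a sketch of one classic
construction, the co-association (evidence accumulation) matrix, cut
with average-linkage hierarchical clustering; this illustrates the
general idea rather than any specific method surveyed:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def consensus(labelings, k):
    n = len(labelings[0])
    co = np.zeros((n, n))
    for labels in labelings:  # count how often each pair co-clusters
        labels = np.asarray(labels)
        co += (labels[:, None] == labels[None, :]).astype(float)
    co /= len(labelings)
    dist = 1.0 - co           # co-association -> distance
    np.fill_diagonal(dist, 0.0)
    return fcluster(linkage(squareform(dist), method="average"),
                    t=k, criterion="maxclust")

# Three base clusterings of six points, aggregated into a consensus.
base = [[0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 2]]
print(consensus(base, k=3))
```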
Abstract: Control of a semi-batch polymerization reactor using
an adaptive radial basis function (RBF) neural network method is
investigated in this paper. A neural network inverse model is used to
estimate the valve position of the reactor; this method can identify the
controlled system with the RBF neural network identifier. The
weights of the adaptive PID controller are adjusted online based on
the identification of the plant and the self-learning capability of
the RBFNN. A PID controller is used in the feedback loop to regulate
the actual temperature by compensating the neural network inverse
model output. Simulation results show that the proposed control has
strong adaptability, robustness, and satisfactory control
performance, and that effective control of the nonlinear system is
achieved.
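A hedged sketch of an RBF-identifier-based adaptive PID loop follows:
the RBF network identifies the plant and supplies a Jacobian estimate
used to adapt the PID gains by gradient descent. The toy plant, gain
values, and learning rates are illustrative stand-ins, not the paper's
reactor model:

```python
import math

centers = [-2, -1, 0, 1, 2]    # RBF centers over the control input u
w = [0.0] * len(centers)       # RBF output weights
kp, ki, kd = 1.0, 0.1, 0.05    # initial PID gains
eta_w, eta_k = 0.3, 0.01       # learning rates

def rbf(u):
    return [math.exp(-(u - c) ** 2) for c in centers]

def plant(y, u):
    return 0.8 * y + 0.5 * math.tanh(u)   # toy nonlinear plant

y, e_prev, e_int = 0.0, 0.0, 0.0
for step in range(200):
    e = 1.0 - y                            # track a unit setpoint
    e_int += e
    u = kp * e + ki * e_int + kd * (e - e_prev)

    h = rbf(u)
    y_hat = sum(wi * hi for wi, hi in zip(w, h))   # identifier output
    y = plant(y, u)

    id_err = y - y_hat                     # train the identifier
    w = [wi + eta_w * id_err * hi for wi, hi in zip(w, h)]
    # Jacobian estimate dy/du from the identifier.
    dydu = sum(wi * hi * (-2) * (u - c)
               for wi, hi, c in zip(w, h, centers))
    # Gradient-descent update of the PID gains.
    kp += eta_k * e * dydu * e
    ki += eta_k * e * dydu * e_int
    kd += eta_k * e * dydu * (e - e_prev)
    e_prev = e

print(round(y, 3), [round(g, 3) for g in (kp, ki, kd)])
```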
Abstract: Developing reliable and sustainable software products is today a big challenge among up-and-coming software developers in Nigeria. The ability to develop a comprehensive problem statement, needed to execute a proper requirements engineering process, is often missing. Describing the 'what' of a system in one document, written in a natural language, is a major step in the overall process of Software Engineering. Requirements Engineering is a process used to discover, analyze and validate system requirements. This process is needed to reduce software errors at the early stages of software development. The importance of each of the steps in Requirements Engineering is clearly explained in the context of using a detailed problem statement from the client/customer to get an overview of an existing system along with expectations of the new system. This paper identifies inadequate application of Requirements Engineering principles as the major cause of poor software development in developing nations, using a case study of final-year computer science students at a tertiary institution in Nigeria.
Abstract: The 3rd Generation Partnership Project (3GPP) has completed developing Long Term Evolution-Advanced (LTE-Advanced) as a standard 4G cellular system. This generation aims to provide a new radio-access technology geared to higher data rates, lower latency, and better spectral efficiency. LTE-Advanced is an evolutionary step in the continuing development of LTE; the description in this article is based on LTE Release 10. This paper provides a model of the traffic links of a 4G system, represented by an LTE-Advanced system, under the Transmission Control Protocol (TCP) and the Stream Control Transmission Protocol (SCTP), in terms of throughput and packet loss. Furthermore, the article presents an investigation and analysis of the behavior of SCTP and TCP variants over 4G cellular systems. The traffic model and the simulation scenario were developed using the network simulator NS-2 with different TCP source variants.