Abstract: Developing reliable and sustainable software products is today a big challenge among up-and-coming software developers in Nigeria. The ability to develop the comprehensive problem statement needed to execute a proper requirements engineering process is often missing. Describing the ‘what’ of a system in one document, written in a natural language, is a major step in the overall process of Software Engineering. Requirements Engineering is a process used to discover, analyze and validate system requirements, and it is needed to reduce software errors at an early stage of software development. The importance of each step in Requirements Engineering is explained in the context of using a detailed problem statement from the client/customer to get an overview of an existing system along with expectations of the new system. This paper identifies inadequate application of Requirements Engineering principles as a major cause of poor software development in developing nations, using a case study of final-year computer science students at a tertiary institution in Nigeria.
Abstract: An extensive amount of work has been done in data clustering research under the unsupervised learning technique in Data Mining during the past two decades. Several approaches and methods have emerged that focus on clustering diverse data types, features of cluster models and similarity rates of clusters. However, no single clustering algorithm performs best at extracting efficient clusters in all settings. To address this issue, a new technique called the Cluster Ensemble method has emerged as an alternative approach to the cluster analysis problem. The main objective of a Cluster Ensemble is to aggregate diverse clustering solutions in such a way as to attain accuracy and to improve upon the quality of the individual clustering algorithms. Given the massive and rapid development of new methods in data mining, a critical analysis of existing techniques and future directions is essential. This paper presents a comparative analysis of different cluster ensemble methods along with their methodologies and salient features. This analysis should be useful to the community of clustering researchers and should help in choosing the most appropriate method for the problem at hand.
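One widely used consensus function, sketched here as an assumed illustration rather than a method from any specific surveyed paper, is the co-association (evidence accumulation) approach: count how often each pair of points falls in the same cluster across the base clusterings, then link pairs whose co-association exceeds a threshold and read off connected components as the consensus clusters.

```python
# Minimal co-association cluster ensemble (illustrative sketch).
from itertools import combinations

def co_association(labelings):
    """Fraction of base clusterings placing each pair of points together."""
    n = len(labelings[0])
    m = len(labelings)
    co = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i, j in combinations(range(n), 2):
            if labels[i] == labels[j]:
                co[i][j] += 1.0 / m
                co[j][i] += 1.0 / m
    return co

def consensus(labelings, threshold=0.5):
    """Union-find over pairs whose co-association exceeds the threshold."""
    co = co_association(labelings)
    n = len(co)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in combinations(range(n), 2):
        if co[i][j] > threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Three base clusterings of six points; the third uses permuted labels,
# which the co-association matrix is insensitive to by construction.
base = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [1, 1, 1, 0, 0, 0],
]
labels = consensus(base)
```

Note that the third base clustering agrees with the first two despite its renamed labels; co-association only asks whether two points are together, which is what makes it a natural aggregation device for diverse solutions.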
Abstract: The need to extract R&D keywords from issues and use them to retrieve R&D information is increasing rapidly. However, it is difficult to identify related issues or distinguish between them. Although the similarity between issues cannot be measured directly, an R&D lexicon makes it possible to determine which issues share the same R&D keywords. In detail, the R&D keywords associated with a particular issue indicate the key technology elements needed to solve that issue.
Furthermore, the relationship among issues that share the same R&D keywords can be shown more systematically by clustering them according to keywords. Thus, sharing R&D results and reusing R&D technology can be facilitated. Indirectly, redundant investment in R&D can be reduced, as relevant R&D information can be shared among corresponding issues and the reusability of related R&D improved. Therefore, a methodology to cluster issues from the perspective of common R&D keywords is proposed to satisfy these demands.
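The core grouping idea can be sketched as an inverted index from keywords to issues; the issue identifiers and keywords below are hypothetical placeholders, not data from the paper.

```python
# Hedged sketch: cluster issues by the R&D keywords they share.
from collections import defaultdict

def cluster_by_keywords(issues):
    """issues: {issue_id: set of R&D keywords}. Returns keyword -> issue ids."""
    clusters = defaultdict(set)
    for issue, keywords in issues.items():
        for kw in keywords:
            clusters[kw].add(issue)
    return dict(clusters)

# Hypothetical issues sharing technology elements:
issues = {
    'i1': {'lidar', 'slam'},
    'i2': {'slam', 'imu'},
    'i3': {'battery'},
}
clusters = cluster_by_keywords(issues)
```

Issues i1 and i2 land in the same 'slam' cluster, which is exactly the relationship the methodology wants to surface so their R&D results can be shared.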
Abstract: This paper presents a circular polar coordinate transformation of periodic fuzzy membership functions. The purpose is to identify the domain of periodic membership functions in the consequent part of IF-THEN rules. The proposed methods remove the complications that the domain of a periodic membership function causes for defuzzification in fuzzy approximate reasoning. Defuzzification on circular polar coordinates is also proposed.
Abstract: We present our approach to using the Continuous Delivery pattern for release management. One of the key practices of agile and lean teams is the continuous delivery of new features to stakeholders. The main benefit of this approach lies in the ability to release new applications rapidly, which has a real strategic impact on the competitive advantage of an organization. Organizations that successfully implement Continuous Delivery are able to evolve rapidly to support innovation, provide stable and reliable software in more efficient ways, decrease the amount of resources needed for maintenance, and lower software delivery time and costs. One objective of this paper is to elaborate a case study in which the IT division of the Central Securities Depository Institution (MKK) of Turkey applies the Continuous Delivery pattern to improve its release management process.
Abstract: Recently, GPS data has been used in many studies to automatically reconstruct travel patterns for trip surveys. The aim is to minimize the use of questionnaire surveys and travel diaries so as to reduce their negative effects. In this paper, data acquired from the GPS receiver and accelerometer embedded in smartphones are used to predict the mode of transportation of the phone's carrier. For prediction, Support Vector Machines (SVM) and Adaptive Boosting (AdaBoost) are employed. Moreover, a method to improve the predictions of these algorithms is also proposed. The results suggest that, after this improvement, AdaBoost achieves the best prediction accuracy.
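Before SVM or AdaBoost can be applied, raw GPS fixes must be turned into features. A hedged sketch of that preprocessing step follows; the feature choices (mean speed, maximum acceleration magnitude) and the sample coordinates are illustrative assumptions, not the paper's exact feature set.

```python
# Illustrative feature extraction from timestamped GPS fixes.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def trip_features(fixes):
    """fixes: list of (t_seconds, lat, lon). Returns (mean speed, max |accel|)."""
    speeds = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        speeds.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    accels = [abs(v1 - v0) for v0, v1 in zip(speeds, speeds[1:])]
    return sum(speeds) / len(speeds), max(accels) if accels else 0.0

# A slow trip (~1.3 m/s, consistent with walking); coordinates are made up.
walk = [(0, 35.00000, 139.0), (10, 35.00012, 139.0), (20, 35.00024, 139.0)]
mean_v, max_a = trip_features(walk)
```

Feature vectors of this kind (per trip segment, possibly augmented with accelerometer statistics) would then be fed to the SVM or AdaBoost classifier.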
Abstract: ‘Steganalysis’ is one of the challenging and attractive interests of researchers, following the development of information hiding techniques. It is the procedure of detecting hidden information in a stego object created by a known steganographic algorithm. In this paper, a novel feature-based image steganalysis technique is proposed. Various statistical moments are used along with similarity metrics. The proposed steganalysis technique is based on transformation in four wavelet domains: Haar, Daubechies, Symlets and Biorthogonal. Features from each domain are fed to various classifiers, namely K-nearest-neighbor, K* classifier, locally weighted learning, naive Bayes, neural networks, decision trees and Support Vector Machines. The experiments are performed on a large set of freely available images from a standard image database. The system also predicts different embedded message lengths.
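One feature family of the kind the paper uses can be sketched as statistical moments of a one-level Haar wavelet decomposition; this is shown in 1-D for brevity (the paper works on 2-D images across four wavelet families), so treat it as an assumed simplification.

```python
# One-level Haar decomposition and simple statistical moments (1-D sketch).
def haar_1d(signal):
    """Split a signal into pairwise averages (approximation) and
    pairwise half-differences (detail)."""
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def moments(xs):
    """Mean and (population) variance, two of the moments used as features."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

sig = [4.0, 2.0, 6.0, 6.0]
approx, detail = haar_1d(sig)
```

Embedding tends to perturb the statistics of detail coefficients, which is why such moments, computed per wavelet subband, make useful inputs to the classifiers listed in the abstract.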
Abstract: Semantic Web technology is becoming more important day by day due to the rapid growth in the number of web pages. Many standard formats are available for storing semantic web data; the most popular is the Resource Description Framework (RDF). Querying large RDF graphs becomes a tedious procedure as the amount of data increases, and query optimization becomes an issue. Choosing the best query plan reduces query execution time. To address this problem, nature-inspired algorithms can be used as an alternative to traditional query optimization techniques. In this research, the optimal query plan is generated by the proposed SAPSO algorithm, a hybrid of the Simulated Annealing (SA) and Particle Swarm Optimization (PSO) algorithms. The proposed SAPSO algorithm is able to find good solutions quickly while avoiding the problem of local minima. Experiments were performed on different datasets by varying the number of predicates and the amount of data. The proposed algorithm gives improved results compared to existing algorithms in terms of query execution time.
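One common way to hybridize SA and PSO, sketched below as an assumed illustration, is to run standard PSO velocity/position updates but apply a simulated-annealing acceptance test to personal-best updates, so occasional uphill moves help the swarm leave local minima. The paper's SAPSO operates on RDF query plans; here a simple numeric function stands in for the query-cost objective, and all coefficients are illustrative.

```python
# Toy SA+PSO hybrid minimizing a numeric objective (illustrative only).
import math
import random

def sapso(f, dim=2, n=20, iters=200, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    temp = 1.0
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Standard PSO update: inertia + cognitive + social terms.
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * rng.random() * (pbest[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            delta = f(xs[i]) - f(pbest[i])
            # SA acceptance: keep improvements; sometimes accept worse.
            if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
                pbest[i] = xs[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]
        temp *= 0.95  # cooling schedule
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = sapso(sphere)
```

For query optimization, `f` would instead score a candidate join order by its estimated execution cost, with positions encoding plans.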
Abstract: The purposes of this study were to design a decision support system for tourism in the northern part of Thailand and to measure users’ satisfaction after using it. The system provides tourists with travel information and helps them plan personal trips; this information can be retrieved systematically based on personal budget and provinces. The samples of this study were five experts and thirty white-collar users in Bangkok. The decision support system was built with ASP.NET, and its database was developed using MySQL so that administrators are able to manage it effectively. The application outcome revealed that the system works properly, as sought in the objectives. The experts and the white-collar users in Bangkok evaluated the decision support system, and the result was satisfactorily positive.
Abstract: In this paper, the issue of dimensionality reduction in finger vein recognition systems is investigated using kernel Principal Component Analysis (KPCA). One aspect of KPCA is the choice of the most appropriate kernel function for finger vein recognition, as several kernel functions can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector, which is especially important in real-world applications of such algorithms. A fixed feature-vector dimension has to be set to reduce the dimension of the input and output data and extract features from them; a classifier is then applied to classify the data and make the final decision. We analyze KPCA with Polynomial, Gaussian, and Laplacian kernels in detail and investigate the optimal feature-extraction dimension for finger vein recognition using KPCA.
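The three kernels compared in the paper can be written out directly; the parameter values below (degree, offset, bandwidth) are illustrative assumptions. In KPCA, such a kernel replaces the inner product of standard PCA: one builds the n-by-n Gram matrix over the training samples, centers it, and keeps the top d eigenvectors, where d is the feature-vector dimension under investigation.

```python
# The Polynomial, Gaussian and Laplacian kernels used within KPCA.
import math

def polynomial_kernel(x, y, degree=2, c=1.0):
    """(<x, y> + c)^degree"""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def gaussian_kernel(x, y, sigma=1.0):
    """exp(-||x - y||^2 / (2 sigma^2))"""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * sigma ** 2))

def laplacian_kernel(x, y, sigma=1.0):
    """exp(-||x - y||_1 / sigma)"""
    l1 = sum(abs(a - b) for a, b in zip(x, y))
    return math.exp(-l1 / sigma)
```

The Laplacian kernel decays with the L1 distance rather than the squared L2 distance, which makes it less sensitive to a few large coordinate differences than the Gaussian.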
Abstract: Consumer-to-Consumer (C2C) E-commerce has been growing at a very high speed in recent years. Since identical or nearly identical kinds of products compete with one another through keyword search in C2C E-commerce, some sellers describe their products with spam keywords that are popular but not related to their products. Though such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead consumers and waste their time. This problem has been reported in many commercial services like eBay and Taobao, but there has been little research on solving it. As a solution, this paper proposes a method to classify whether the keywords of a product are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is observed commonly in the specifications of products that are the same as, or of the same kind as, the given product. This is because the hierarchical category of a product is, in general, determined precisely by the seller of the product, and so is the specification of the product. Since higher layers of the hierarchical category represent more general kinds of products, a reliability degree is determined separately for each layer. Hence, reliability degrees from the different layers of a hierarchical category become features for keywords, and they are used together with features from specifications alone for classification of the keywords. Support Vector Machines are adopted as the basic classifier using these features, since they are powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated on a gold-standard dataset from Yi-han-wang, a Chinese C2C E-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which demonstrates that spam keywords are effectively identified using the hierarchical category in C2C E-commerce.
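The per-layer reliability features can be sketched as follows: for each level of a product's category path, score a keyword by how often it appears in the specifications of products sharing that category prefix. This is a hedged reading of the abstract; the catalog data, field names, and scoring details below are hypothetical.

```python
# Hedged sketch of layered keyword-reliability features.
def layer_reliabilities(keyword, product, catalog):
    """product and catalog entries: {'path': category path, 'spec': set of terms}.
    Returns one reliability score per category layer, shallow to deep."""
    scores = []
    for depth in range(1, len(product['path']) + 1):
        prefix = product['path'][:depth]
        peers = [p for p in catalog if p['path'][:depth] == prefix]
        hits = sum(1 for p in peers if keyword in p['spec'])
        scores.append(hits / len(peers))
    return scores

# Hypothetical catalog with a two-layer category hierarchy:
catalog = [
    {'path': ['electronics', 'phone'], 'spec': {'android', 'dualsim'}},
    {'path': ['electronics', 'phone'], 'spec': {'android', '5inch'}},
    {'path': ['electronics', 'laptop'], 'spec': {'ssd', 'windows'}},
]
item = catalog[0]
scores = layer_reliabilities('android', item, catalog)
```

Deeper layers restrict the peer set to more similar products, so a keyword like 'android' scores higher at the 'phone' layer than across all of 'electronics'; these per-layer scores would then join the specification features as SVM inputs.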
Abstract: In this paper, an edge-strength guided multiscale retinex (EGMSR) approach is proposed for color image contrast enhancement. In EGMSR, the weight associated with each pixel in the single-scale retinex output image is computed according to the edge strength around that pixel, in order to prevent over-enhancement of the noise contained in smooth dark/bright regions. Further, by fusing the enhanced results of EGMSR and adaptive multiscale retinex (AMSR), we obtain a natural fused image with high contrast and proper tonal rendition. Experimental results on several low-contrast images show that the proposed approach can produce natural and appealing enhanced images.
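The single-scale retinex step that EGMSR builds on can be sketched in 1-D (an assumed simplification; real EGMSR works on 2-D color images and adds the edge-strength weighting): the retinex output is the log of the signal minus the log of its Gaussian-smoothed version, which is near zero in smooth regions and large near edges.

```python
# 1-D single-scale retinex sketch: log(signal) - log(blurred signal).
import math

def gaussian_blur_1d(signal, sigma=2.0, radius=4):
    """Normalized Gaussian smoothing with clamped borders."""
    kernel = [math.exp(-(k * k) / (2 * sigma ** 2)) for k in range(-radius, radius + 1)]
    s = sum(kernel)
    kernel = [k / s for k in kernel]
    out = []
    n = len(signal)
    for i in range(n):
        acc = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), n - 1)  # clamp at the borders
            acc += kernel[k + radius] * signal[j]
        out.append(acc)
    return out

def ssr(signal):
    """Single-scale retinex; +1 avoids log(0) for dark pixels."""
    blurred = gaussian_blur_1d(signal)
    return [math.log(v + 1.0) - math.log(b + 1.0) for v, b in zip(signal, blurred)]

# A step edge: output is ~0 in the flat regions, large at the edge.
edge = [10.0] * 10 + [200.0] * 10
out = ssr(edge)
```

The near-zero response in flat regions is exactly where plain retinex can amplify noise; EGMSR's pixel-dependent weight suppresses the output there while keeping the strong response at genuine edges.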
Abstract: With the recent launch of various portable devices, conducting smart business with them has become common. Since smart business allows company-internal resources to be used from an external remote location, user authentication that can identify legitimate users is an important factor. The most commonly used form of user authentication is the combination of user ID and password. With ID-and-password authentication, the user must see and enter the authentication information him- or herself. Because this authentication depends on the user’s vision, there is a threat of password leaks through snooping while the user enters his or her authentication information. This study designed and built a user authentication module that uses an actuator to respond to this snooping threat.
Abstract: Web search engines are designed to retrieve and extract information from web databases and to return dynamic web pages. The Semantic Web is an extension of the current web that embeds semantic content in web pages. The main goal of the Semantic Web is to improve the quality of the current web by converting its contents into a machine-understandable form; its milestone is therefore to have semantic-level information on the web. Nowadays, people use different keyword-based search engines to find the relevant information they need from the web. But many words are polysemous: when such words are used to query a search engine, it displays Search Result Records (SRRs) with different meanings. The SRRs with similar meanings are grouped together based on Word Sense Disambiguation (WSD). In addition, semantic annotation is performed to improve the usefulness of the search result records. Semantic annotation is the process of adding semantic metadata to web resources. The grouped SRRs are thus annotated, and a summary is generated that describes the information in the SRRs. However, automatic semantic annotation remains a significant challenge in the Semantic Web. Here, ontology- and knowledge-based representations are used to annotate the web pages.
Abstract: This paper presents a new meta-heuristic bio-inspired optimization algorithm called the Cuttlefish Algorithm (CFA). The algorithm mimics the color-changing mechanism of the cuttlefish to solve numerical global optimization problems. The colors and patterns of the cuttlefish are produced by light reflected from three different layers of cells. The proposed algorithm considers two main processes: reflection and visibility. The reflection process simulates the light reflection mechanism used by these layers, while the visibility process simulates the visibility of matching patterns of the cuttlefish. To show the effectiveness of the algorithm, it is compared with other popular bio-inspired optimization algorithms previously proposed in the literature, such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and the Bees Algorithm (BA). The simulation results indicate that the proposed CFA is superior to these algorithms.
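A simplified numeric reading of the two operators can be sketched as follows; this is an assumed interpretation of "reflection" and "visibility" with illustrative coefficient ranges, not the paper's exact update rules: a candidate solution combines a scaled reflection of the current solution with a scaled view of its difference from the best pattern found so far.

```python
# Toy sketch of CFA-style reflection/visibility updates (illustrative).
import random

def cfa_step(point, best, rng):
    """candidate = reflection of the current cells + visibility of the
    difference from the best pattern (coefficient ranges assumed)."""
    new = []
    for p, b in zip(point, best):
        reflection = rng.uniform(-1.0, 1.0) * p        # light reflected by a layer
        visibility = rng.uniform(-0.5, 0.5) * (b - p)  # match to the best pattern
        new.append(reflection + visibility)
    return new

def cfa(f, dim=2, n=30, iters=300, seed=7):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            cand = cfa_step(pop[i], best, rng)
            if f(cand) < f(pop[i]):  # greedy acceptance
                pop[i] = cand
            if f(pop[i]) < f(best):
                best = pop[i][:]
    return best

best = cfa(lambda x: sum(v * v for v in x))
```

On a simple sphere objective the reflection term contracts solutions toward the origin while visibility pulls them toward the incumbent best, so the population converges quickly; the real CFA partitions the population into groups with different coefficient ranges per cell layer.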
Abstract: A method is proposed for stable detection of seismoacoustic sources in C-OTDR systems that guarantees given upper bounds on the probabilities of type I and type II errors. Properties of the proposed method are rigorously proved. The results of practical applications of the proposed method in a real C-OTDR system are presented.
Abstract: Sudoku is a logic-based combinatorial puzzle game that people of all ages enjoy playing. The challenging and addictive nature of this game has made it ubiquitous: most magazines, newspapers, puzzle books, etc. publish many Sudoku puzzles every day. These puzzles often come in different levels of difficulty so that everyone, from beginner to expert, can play and enjoy the game. Generating puzzles with different levels of difficulty is a major concern of Sudoku designers, and several works in the literature propose ways of generating puzzles with a desired level of difficulty. In this paper, we propose a method based on constraint satisfaction problems to evaluate the difficulty of Sudoku puzzles, and then a hill-climbing method to generate puzzles with different levels of difficulty. Whereas other methods are usually capable of generating puzzles with only a few difficulty levels, our method can generate puzzles with an arbitrary number of difficulty levels. We test our method by generating puzzles of varying difficulty, having a group of 15 people solve all the puzzles, and recording the time they spend on each puzzle.
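One hedged way to turn the CSP view into a numeric difficulty score, shown purely as an illustration and not as the paper's exact measure, is to count the candidate values each empty cell still admits under the row, column, and box constraints, and average over the empty cells.

```python
# Illustrative CSP-based difficulty proxy for a 9x9 Sudoku grid
# (0 marks an empty cell).
def candidates(grid, r, c):
    """Values allowed at (r, c) by the row, column and 3x3-box constraints."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [v for v in range(1, 10) if v not in used]

def difficulty(grid):
    """Average number of remaining candidates per empty cell."""
    counts = [len(candidates(grid, r, c))
              for r in range(9) for c in range(9) if grid[r][c] == 0]
    return sum(counts) / len(counts) if counts else 0.0

empty = [[0] * 9 for _ in range(9)]
base_difficulty = difficulty(empty)  # every cell admits all 9 values

one_clue = [[0] * 9 for _ in range(9)]
one_clue[0][0] = 5
n_cands = len(candidates(one_clue, 0, 1))  # 5 is ruled out by the row
```

A hill-climbing generator could then add or remove clues, re-scoring after each move, until this proxy lands in the band chosen for the target difficulty level.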
Abstract: This work proposes a decision support methodology for minimizing credit risk in the selection of investment projects. The methodology evaluates projects in two stages. A preliminary selection of projects with minor credit risks is made using the Expertons Method. The second stage ranks the chosen projects using the Possibilistic Discrimination Analysis Method, a new modification of the well-known Method of Fuzzy Discrimination Analysis.
Abstract: This work proposes a fuzzy methodology to support investment decisions. When choosing among competing investment projects, the methodology ranks the projects using a new OWA aggregation operator, AsPOWA, presented in the setting of possibilistic uncertainty. A mathematical programming problem is constructed for the numerical evaluation of the weighting vector associated with the AsPOWA operator. On the basis of the AsPOWA operator, a maximum criterion for group ranking of the projects is constructed. The methodology also allows the most profitable investment to be spread over several of the projects, using a method developed by the authors for discrete possibilistic bicriteria problems. The article provides an example of investment decision-making that illustrates the working of the proposed methodology.
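The base OWA operator that AsPOWA extends is simple to state: sort the arguments in descending order, then take a weighted sum with a fixed weighting vector. The sketch below shows only this plain operator (the paper's AsPOWA adds possibilistic weighting on top, which is not reproduced here); the sample scores are hypothetical.

```python
# Plain OWA aggregation: weights attach to ranks, not to arguments.
def owa(values, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# weights [1, 0, 0] recover the max; uniform weights recover the mean.
scores = [0.2, 0.9, 0.5]           # hypothetical project evaluations
score_max = owa(scores, [1.0, 0.0, 0.0])
score_avg = owa(scores, [1 / 3, 1 / 3, 1 / 3])
```

Because the weights attach to sorted positions rather than to particular criteria, the choice of weighting vector smoothly interpolates between optimistic (max-like) and pessimistic (min-like) aggregation, which is exactly what the constructed mathematical programming problem tunes.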
Abstract: The importance of formal specification in the software life cycle is hardly lost on anyone. Formal specifications use mathematical notation to describe the properties of an information system precisely, without unduly constraining the way in which these properties are achieved. Producing a correct, high-quality software specification is not an easy task. This study concerns how a group of rectifiers can communicate with each other and work together to prepare and produce a correct formal software specification. WBCS was implemented based mainly on the proposed cooperative work model and on a survey of existing Web-based collaborative writing tools. This paper aims to assess the feasibility of executing the Web-based collaboration process using WBCS. The purpose of the test is to evaluate the system as a whole for functionality and fitness for use, based on the evaluation test plan.