Abstract: Prior literature in the field of adaptive and
personalized learning sequencing in e-learning has proposed and
implemented various mechanisms to improve the learning process,
such as individualization and personalization, but these are complex
to implement because of expensive algorithmic programming and the
need for extensive prior data. The main objective of personalizing a
learning sequence is to maximize learning by dynamically selecting
the closest teaching operation to achieve the learner's competency.
In this paper, a revolutionary technique is proposed and tested that
performs individualization and personalization using a modified
reversed roulette wheel selection algorithm that runs in O(n). The
technique is simpler to implement and algorithmically less expensive
than other evolutionary algorithms, since it collects dynamic
real-time performance metrics such as examinations, reviews, and
study activity to form the RWSA single numerical fitness value.
Results show that the implemented system can recommend new
learning sequences that shorten study time based on the student's
prior knowledge and real performance metrics.
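The abstract does not detail the modified algorithm itself; purely as an illustration of the underlying idea, a minimal linear-time sketch of a reversed roulette wheel selection, in which lower fitness (e.g. a topic the learner performs poorly on) receives a wider slice of the wheel, might look like this. The function name and weighting scheme are assumptions, not the authors' implementation:

```python
import random

def reversed_roulette_select(fitness):
    """Select an index with probability inversely related to fitness.

    In a reversed roulette wheel, lower-fitness items are more likely
    to be selected. Runs in O(n): one pass to invert the weights and
    one pass to locate the spun slot.
    """
    total = sum(fitness)
    # An item's slice is the total minus its own fitness, so the
    # weakest item gets the widest slice of the wheel.
    inverted = [total - f for f in fitness]
    spin = random.uniform(0, sum(inverted))
    cumulative = 0.0
    for i, w in enumerate(inverted):
        cumulative += w
        if spin <= cumulative:
            return i
    return len(fitness) - 1

# Example: scores on three topics; topic 1 (score 20) is picked most often.
scores = [90, 20, 70]
picked = reversed_roulette_select(scores)
```

Repeated selection over such scores would steer the sequence toward the topics where the learner is weakest.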
Abstract: The paper focuses on the benefits of business process
modeling. Although this discipline has been developing for many
years, there is still a need to create new capabilities to meet
ever-increasing user needs. Because one of these needs is the
conversion of business process models from one standard to another,
the authors have developed a converter between the BPMN and EPC
standards using workflow patterns as an intermediate tool. Nowadays
there are many systems for business process modeling, and the
variety of output formats is almost as great as that of the systems
themselves. This diversity further hampers the conversion of models.
The presented study discusses problems caused by differences in the
output formats of various modeling environments.
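The converter itself is not specified in the abstract. Conceptually, using workflow patterns as an intermediate tool means each BPMN construct is first mapped to a pattern, which is then rendered in EPC terms, so a converter needs two lookup tables instead of a direct mapping per pair of standards. A hypothetical table-driven sketch (the element and pattern names are illustrative, not the authors' mapping):

```python
# Each standard maps its constructs onto shared workflow patterns, so
# adding a new standard only requires one new pair of lookup tables.
BPMN_TO_PATTERN = {
    "parallelGateway": "parallel_split",
    "exclusiveGateway": "exclusive_choice",
    "task": "activity",
}
PATTERN_TO_EPC = {
    "parallel_split": "AND connector",
    "exclusive_choice": "XOR connector",
    "activity": "Function",
}

def bpmn_to_epc(element):
    """Convert one BPMN element name to its EPC counterpart via the
    intermediate workflow pattern; returns None if no pattern matches."""
    pattern = BPMN_TO_PATTERN.get(element)
    return PATTERN_TO_EPC.get(pattern)
```

Elements with no corresponding pattern fall through as None, which is exactly where the format-diversity problems discussed in the study would surface.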
Abstract: The Common Platform for Automated Programming
(CPAP) is defined in detail. Two versions of CPAP are described: a
cloud-based version (including a set of components for classic
programming and a set of components for combined programming),
and a Knowledge Based Automated Software Engineering (KBASE)
based version (including a set of components for automated
programming and a set of components for ontology programming).
Four KBASE products (Module for Automated Programming of
Robots, Intelligent Product Manual, Intelligent Document Display,
and Intelligent Form Generator) are analyzed, and CPAP's
contributions to automated programming are presented.
Abstract: Software quality issues require special attention,
especially in view of the demand for quality software products that
meet customer satisfaction. Software development projects in most
organisations need a proper defect management process in order to
produce high-quality software and reduce the number of defects. The
research question of this study is how to produce high-quality
software while reducing the number of defects. The objective of this
paper is therefore to provide a framework for managing software
defects by following defined life cycle processes. The methodology
starts by reviewing defects, defect models, best practices, and
standards; a framework for the defect management life cycle is then
proposed. The major contribution of this study is a defect
management roadmap for software development. The adoption of an
effective defect management process helps to achieve the ultimate
goal of producing high-quality software products and contributes
towards continuous software process improvement.
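The abstract leaves the life cycle stages unspecified. As an illustration only, a defect management life cycle is commonly modeled as a state machine with permitted transitions; the states below follow a widespread industry convention and are not the framework proposed in the paper:

```python
# Permitted transitions in a typical defect life cycle (illustrative).
TRANSITIONS = {
    "New": {"Assigned", "Rejected"},
    "Assigned": {"Fixed"},
    "Fixed": {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
    "Rejected": set(),
    "Closed": set(),
}

def advance(state, target):
    """Move a defect to `target` only if the life cycle permits it."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```

Encoding the roadmap this way makes illegal shortcuts (e.g. closing an unverified defect) detectable automatically.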
Abstract: Feature selection is one of the global combinatorial
optimization problems in machine learning. It is concerned with
removing irrelevant, noisy, and redundant data while preserving the
meaning of the original data. Attribute reduction in rough set theory
is an important feature selection method. Since attribute reduction is
an NP-hard problem, it is necessary to investigate fast and effective
approximate algorithms. In this paper, we propose two feature
selection mechanisms based on memetic algorithms (MAs), which
combine the genetic algorithm with a fuzzy record-to-record travel
algorithm and a fuzzy-controlled great deluge algorithm, to identify a
good balance between local search and genetic search. To verify the
proposed approaches, numerical experiments are carried out on
thirteen datasets. The results show that the MA approaches are
efficient in solving attribute reduction problems when compared with
other meta-heuristic approaches.
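The memetic hybridization is not detailed in the abstract. For context, the acceptance rule of a plain (non-fuzzy) record-to-record travel local search, which the paper combines in fuzzy form with a genetic algorithm, can be sketched as follows for a minimization problem; the parameter names and loop structure are assumptions:

```python
import random

def record_to_record_travel(initial, neighbor, cost, deviation, iterations=2000):
    """Record-to-record travel local search (minimization).

    A candidate is accepted whenever its cost is below the best cost
    found so far (the RECORD) plus an allowed `deviation`.
    """
    current = initial
    record, record_cost = initial, cost(initial)
    for _ in range(iterations):
        candidate = neighbor(current)
        c = cost(candidate)
        if c < record_cost + deviation:      # the RRT acceptance rule
            current = candidate
            if c < record_cost:              # new record found
                record, record_cost = candidate, c
    return record, record_cost
```

In the fuzzy variant described in the paper, the `deviation` would itself be adapted by a fuzzy controller rather than held fixed.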
Abstract: Sentiment analysis classifies a given review
document as having positive or negative polarity. Sentiment analysis
research has increased tremendously in recent times due to its large
number of applications in industry and academia. Sentiment analysis
models can be used to determine the opinion of a user towards any
entity or product; e-commerce companies, for example, can use such
models to improve their products on the basis of users' opinions. In
this paper, we propose a new One-class Support Vector Machine
(One-class SVM) based sentiment analysis model for movie review
documents. In the proposed approach, we first extract features from
documents of a single class and then use the one-class SVM model
to test whether a given new document lies within the model or is an
outlier. Experimental results show the effectiveness of the proposed
sentiment analysis model.
Abstract: We present a probabilistic multinomial Dirichlet
classification model for multidimensional data with Gaussian process
priors. We consider an efficient computational method that can be
used to obtain the approximate posteriors for the latent variables and
parameters needed to define the multiclass Gaussian process
classification model. We first investigate the process of inducing a
posterior distribution over the parameters and latent function by
using variational Bayesian approximations and an importance
sampling method, and then derive the predictive distribution of the
latent function needed to classify new samples. The proposed model
is applied to a synthetic multivariate dataset in order to verify its
performance. Experimental results show that our model is more
accurate than the other approximation methods.
Abstract: Password authentication is one of the most widely
used methods of authenticating legal users of computers and
defending against attackers. There are many different ways to
authenticate the users of a system, and many password cracking
methods have been developed as well. This paper investigates how
password cracking can best be performed on a CPU-GPGPU based
system. The main objective of this work is to demonstrate how
quickly a password can be cracked, given some knowledge of
computer security and password cracking, if sufficient security is
not incorporated into the system.
Abstract: Liver segmentation from medical images poses more
challenges than the analogous segmentation of other organs. This
contribution introduces a liver segmentation method for a series of
computed tomography images. Overall, we present a novel method
for segmenting the liver by coupling density matching with shape
priors. Density matching is a tracking method that operates by
maximizing the Bhattacharyya similarity measure between the
photometric distribution of an estimated image region and a model
photometric distribution. Density matching controls the direction of
the evolution process and slows down the evolving contour in regions
with weak edges. The shape prior improves the robustness of density
matching and discourages the evolving contour from exceeding the
liver's boundaries in regions where those boundaries are weak. The
model is implemented using a modified distance regularized level set
(DRLS) model. The experimental results show that the method
achieves satisfactory results. Compared with the original DRLS
model, the proposed model is more effective in addressing the
over-segmentation problem. Finally, we gauge the performance of
our model with metrics comprising accuracy, sensitivity, and
specificity.
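For reference, the Bhattacharyya similarity between two discrete distributions p and q is the sum over bins of sqrt(p_i * q_i). A minimal sketch over histograms follows; the binning and normalization details are assumptions, since the paper applies the measure to photometric distributions of image regions:

```python
import math

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya similarity between two discrete distributions.

    p and q are non-empty histograms over the same bins; each is
    normalized to sum to 1 before comparison. The coefficient is 1 for
    identical distributions and 0 for non-overlapping ones.
    """
    sp, sq = sum(p), sum(q)
    return sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))
```

Maximizing this quantity drives the evolving contour toward regions whose photometric distribution matches the model distribution.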
Abstract: Machine visualization is an area of interest undergoing
fast and progressive development. We present a method of machine
visualization that is applicable in real industrial conditions
according to current needs and demands. Real factory data were
obtained in a newly built research plant, and the methods described
in this paper were validated in a case study. Input data were
processed and a virtual environment was created; the environment
contains information about dimensions, structure, disposition, and
function. The hardware was enhanced with modular machines,
prototypes, and accessories, and we added functionalities and
machines to the virtual environment. The user is able to interact
with objects such as testing and cutting machines, and can operate
and move them. The proposed design consists of an environment
with two degrees of freedom of movement, in which users are in
touch with items of the virtual world embedded into the real
surroundings. This paper describes the development of the virtual
environment. We compared and tested various options for factory
layout virtualization and visualization. We analyzed the possibilities
of using a 3D scanner in the layout acquisition process, and we also
analyzed various virtual reality hardware visualization methods,
such as stereoscopic (CAVE) projection, Head Mounted Displays
(HMD), and augmented reality (AR) projection provided by
see-through glasses.
Abstract: A practical and efficient approach is suggested for
estimating the energy of seismoacoustic sources in C-OTDR
monitoring systems. This approach represents a sequential plan for
confidence estimation of both the seismoacoustic source energy and
the absorption coefficient of the soil. The sequential plan delivers
non-asymptotic guaranteed accuracy of the obtained estimates in the
form of non-asymptotic confidence regions with prescribed sizes.
These confidence regions are valid for a finite sample size even
when the distributions of the observations are unknown. Thus, the
suggested estimates are non-asymptotic and nonparametric, and they
guarantee the prescribed estimation accuracy in the form of a
prescribed size of the confidence regions and a prescribed
confidence coefficient value.
Abstract: Magnetic Resonance Imaging (MRI) is one of the
most important medical imaging modalities. Subjective assessment
of image quality is regarded as the gold standard for evaluating MR
images. In this study, a database of 210 MR images, containing ten
reference images and 200 distorted images, is presented. The
reference images were distorted with four types of distortion: Rician
noise, Gaussian white noise, Gaussian blur, and DCT compression.
The 210 images were assessed by ten subjects, and the subjective
scores were expressed as Difference Mean Opinion Scores (DMOS).
The DMOS values were compared with four full-reference image
quality assessment (FR-IQA) metrics, using the Pearson Linear
Correlation Coefficient (PLCC) and the Spearman Rank Order
Correlation Coefficient (SROCC) to validate the DMOS values. The
high PLCC and SROCC values show that the DMOS values are
close to the objective FR-IQA metrics.
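The two validation statistics can be computed without special tooling. A pure-Python sketch is given below; note that this simplified SROCC assigns arbitrary consecutive ranks to ties, whereas the standard definition averages tied ranks:

```python
import math

def plcc(x, y):
    """Pearson linear correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def srocc(x, y):
    """Spearman rank-order correlation: the Pearson correlation of the
    ranks (ties are not averaged in this sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return plcc(ranks(x), ranks(y))
```

PLCC measures linear agreement between DMOS and metric scores, while SROCC measures monotonic agreement, which is why both are reported.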
Abstract: A method for the effective planning and control of
industrial facility energy consumption is offered. The method allows
the management and full control of complex production facilities to
be arranged optimally, according to the criterion of minimal
technical and economic losses under forecasting control. The
method is based on the optimal construction of power efficiency
characteristics with prescribed accuracy. The problem of optimally
designing the forecasting model is solved on the basis of three
criteria: maximizing the weighted sum of forecasting points with the
prescribed accuracy; solving the problem by standard principles
with incomplete statistical data on the basis of minimizing a
regularized function; and minimizing the technical and economic
losses due to forecasting errors.
Abstract: Recently, many users have begun to frequently share
their opinions on diverse issues through various social media.
Consequently, numerous governments have attempted to establish or
improve national policies according to the public opinion captured
from social media. In this paper, we indicate several limitations of
traditional approaches to analyzing public opinion on science and
technology and provide an alternative methodology to overcome
these limitations. First, we distinguish between the science and
technology analysis phase and the social issue analysis phase, to
reflect the fact that public opinion can be formed only when a
certain science and technology is applied to a specific social issue.
Next, we successively apply a start list and a stop list to obtain
clear and interesting results. Finally, to identify the documents that
best fit a given subject, we develop a new logical filter concept that
consists not only of mere keywords but also of a logical relationship
among the keywords. This study then analyzes the possibilities for
practical use of the proposed methodology through its application to
discover core issues and public opinions from 1,700,886 documents
comprising SNS posts, blogs, news, and discussions.
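The logical filter is described at the concept level only. As an illustration of combining keywords through a logical relationship rather than by mere keyword matching, one might sketch it as follows; the parameter names `must`, `any_of`, and `none_of` are hypothetical, not the paper's notation:

```python
def logical_filter(document, must=(), any_of=(), none_of=()):
    """Keep a document only if it contains every `must` keyword,
    at least one `any_of` keyword, and no `none_of` keyword.
    Matching is on whitespace-separated lowercase words (a simplification).
    """
    words = set(document.lower().split())
    if not all(k in words for k in must):
        return False
    if any_of and not any(k in words for k in any_of):
        return False
    if any(k in words for k in none_of):
        return False
    return True
```

Such a conjunction of required, alternative, and excluded terms is what lets a filter select documents that genuinely fit a subject instead of any document mentioning a single keyword.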