Abstract: Radio Frequency Identification (RFID) has become a
key technology in the emerging concept of Internet of Things (IoT).
Naturally, business applications would require the deployment of
various RFID systems developed by different vendors that use
different data formats and structures. This heterogeneity poses a
challenge in developing real-life IoT systems with RFID, as
integration is becoming very complex and challenging. Semantic
integration is a key approach to dealing with this challenge. To do so, an ontology for RFID systems needs to be developed in order to semantically annotate RFID systems and hence facilitate their integration. Accordingly, in this paper, we propose an ontology for RFID systems. The proposed ontology can be used to semantically enrich RFID systems and hence improve their usage and reasoning.
Abstract: Wireless Sensor Network (WSN) routing is complex due to its dynamic nature, computational overhead, limited battery life, non-conventional addressing scheme, self-organization, and the limited transmission range of sensor nodes. An energy-efficient routing protocol is therefore a major concern in WSNs. LEACH is a hierarchical WSN routing protocol designed to increase network lifetime; it performs self-organizing and re-clustering functions in each round. This study proposes an improved cluster head selection method for sensor networks to achieve efficient data aggregation. The algorithm is based on Tabu search.
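The abstract does not spell out the Tabu search itself; as an illustration only, the following is a minimal, hypothetical sketch of Tabu search applied to cluster head selection. The cost function (total distance from each node to its nearest head), the neighbourhood (single head swaps), and the tabu tenure are assumptions for this sketch, not the authors' design:

```python
import math
import random

def total_cost(nodes, heads):
    """Sum of each node's distance to its nearest cluster head."""
    return sum(min(math.dist(n, nodes[h]) for h in heads) for n in nodes)

def tabu_select_heads(nodes, k, iters=200, tenure=5, seed=0):
    """Tabu search over k-subsets of node indices used as cluster heads."""
    rng = random.Random(seed)
    current = rng.sample(range(len(nodes)), k)
    best, best_cost = list(current), total_cost(nodes, current)
    tabu = []  # indices recently swapped in, temporarily forbidden
    for _ in range(iters):
        # Neighbourhood: swap one head for one non-head, non-tabu node.
        candidates = []
        for out in current:
            for into in range(len(nodes)):
                if into in current or into in tabu:
                    continue
                trial = [h for h in current if h != out] + [into]
                candidates.append((total_cost(nodes, trial), trial, into))
        if not candidates:
            break
        cost, trial, moved = min(candidates, key=lambda c: c[0])
        current = trial
        tabu.append(moved)
        if len(tabu) > tenure:
            tabu.pop(0)
        if cost < best_cost:
            best, best_cost = list(trial), cost
    return best, best_cost
```

In a real WSN setting the cost would also weigh residual node energy, which this toy geometric objective omits.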
Abstract: Recently, many users have begun to frequently share
their opinions on diverse issues using various social media. Therefore,
numerous governments have attempted to establish or improve
national policies according to the public opinions captured from
various social media. In this paper, we indicate several limitations of
the traditional approaches to analyze public opinion on science and
technology and provide an alternative methodology to overcome these
limitations. First, we distinguish between the science and technology
analysis phase and the social issue analysis phase to reflect the fact that
public opinion can be formed only when a certain science and
technology is applied to a specific social issue. Next, we successively
apply a start list and a stop list to acquire clarified and interesting
results. Finally, to identify the most appropriate documents that fit
with a given subject, we develop a new logical filter concept that
consists of not only mere keywords but also a logical relationship
among the keywords. This study then analyzes the possibilities for the
practical use of the proposed methodology through its application to
discover core issues and public opinions from 1,700,886 documents
comprising SNS, blogs, news, and discussions.
Abstract: A method for the effective planning and control of industrial facility energy consumption is proposed. The method makes it possible to optimally arrange the management and full control of complex production facilities in accordance with the criteria of minimal technical and economic losses under forecasting control. The method is based on the optimal construction of power efficiency characteristics with the prescribed accuracy. The problem of optimally designing the forecasting model is solved on the basis of three criteria: maximizing the weighted sum of forecasting points with the prescribed accuracy; solving the problem by standard principles under incomplete statistical data on the basis of minimizing a regularized function; and minimizing the technical and economic losses due to forecasting errors.
Abstract: Magnetic Resonance Imaging (MRI) is one of the most important medical imaging modalities. Subjective assessment of image quality is regarded as the gold standard for evaluating MR images. In this study, a database of 210 MR images, containing ten reference images and 200 distorted images, is presented. The
reference images were distorted with four types of distortions: Rician
Noise, Gaussian White Noise, Gaussian Blur and DCT compression.
The 210 images were assessed by ten subjects. The subjective scores
were presented in Difference Mean Opinion Score (DMOS). The
DMOS values were compared with four full-reference image quality assessment (FR-IQA) metrics. We have used the Pearson Linear Correlation Coefficient (PLCC) and the Spearman Rank Order Correlation Coefficient (SROCC) to validate the DMOS values. The high PLCC and SROCC values show that the DMOS values agree closely with the objective FR-IQA metrics.
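The validation step described above, correlating subjective DMOS values with objective metric scores, can be sketched in plain Python. This is a generic implementation of the two coefficients, not the authors' evaluation code, and the Spearman variant below ignores tied ranks for simplicity:

```python
import math

def plcc(x, y):
    """Pearson linear correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def srocc(x, y):
    """Spearman rank-order correlation: the PLCC of the ranks.
    Note: ties are not averaged here, unlike a full implementation."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return plcc(ranks(x), ranks(y))
```

SROCC is insensitive to any monotonic nonlinearity between DMOS and the metric, which is why both coefficients are usually reported together.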
Abstract: A practical and efficient approach is suggested for estimating the energy of seismoacoustic sources in C-OTDR monitoring systems. This approach represents a sequential plan for the confidence estimation of both the seismoacoustic source energy and the absorption coefficient of the soil. The sequential plan delivers non-asymptotic guaranteed accuracy of the obtained estimates in the form of non-asymptotic confidence regions with prescribed sizes. These confidence regions are valid for a finite sample size when the distributions of the observations are unknown. Thus, the suggested estimates are non-asymptotic and nonparametric, and they guarantee the prescribed estimation accuracy in the form of an a priori prescribed confidence region size and confidence coefficient value.
Abstract: With the increase in the number of e-commerce sites, competition has become very intense. This means that companies have to make appropriate decisions in order to meet the expectations of their customers and satisfy their needs. In this paper,
we present a case study of applying LRFM (length, recency,
frequency and monetary) model and clustering techniques in the
sector of electronic commerce with a view to evaluating customers’
values of the Moroccan e-commerce websites and then developing
effective marketing strategies. To achieve these objectives, we adopt
LRFM model by applying a two-stage clustering method. In the first
stage, the self-organizing maps method is used to determine the best
number of clusters and the initial centroids. In the second stage, the k-means method is applied to segment 730 customers into nine clusters according to their L, R, F and M values. The results show that cluster 6 is the most important cluster because its average values of L, R, F and M are higher than the overall average values. In addition,
this study has considered another variable that describes the mode of
payment used by customers to improve and strengthen clusters’
analysis. The clusters’ analysis demonstrates that the payment method is one of the key indicators of a new index that can be used to assess the level of customers’ confidence in the company's website.
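The second clustering stage mentioned above can be illustrated with a minimal Lloyd's k-means sketch. This is a generic toy implementation on 2-D points; the SOM first stage, the real LRFM feature vectors, and the nine-cluster setting from the paper are not reproduced here:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's k-means on tuples of floats; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        labels = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        # Update step: centroid becomes the mean of its members
        # (an empty cluster keeps its old centroid).
        new = []
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                new.append(tuple(sum(c) / len(members) for c in zip(*members)))
            else:
                new.append(centroids[j])
        if new == centroids:
            break
        centroids = new
    return centroids, labels
```

In the LRFM setting each point would be a customer's (L, R, F, M) vector, typically standardized first so no single dimension dominates the distance.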
Abstract: Routing in ad hoc networks is a challenge as nodes are mobile, and links are constantly created and broken. Present on-demand ad hoc routing algorithms initiate route discovery only after a path breaks, incurring a significant cost to detect the disconnection and establish a new route. This study takes a proactive approach: when a path is about to break, the source is warned of the likelihood of a disconnection. The source then initiates path discovery early, avoiding disconnection entirely. A path is considered about to break when its link availability decreases. This study modifies Ad hoc On-demand Multipath Distance Vector routing (AOMDV) so that route handoff occurs through link availability estimation.
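The handoff decision described above can be reduced to a small rule: since AOMDV maintains multiple paths, switch away from the active path as soon as its weakest link's estimated availability falls below a threshold. The threshold value and the per-link availability numbers below are illustrative assumptions, not values from the paper:

```python
def min_availability(path):
    """A path is only as available as its weakest link."""
    return min(path)

def handoff(paths, active, threshold=0.3):
    """paths: per-link availability lists, one list per AOMDV alternate path.
    Returns the index of the path to use: keep the active path while it is
    healthy; otherwise hand off early to the alternate whose weakest link
    is strongest, before the active path actually breaks."""
    if min_availability(paths[active]) >= threshold:
        return active  # active path still above the warning threshold
    return max(range(len(paths)), key=lambda i: min_availability(paths[i]))
```

In the real protocol the availability estimates would come from received signal strength or mobility prediction at each hop rather than being given directly.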
Abstract: Machine visualization is an area of interest with fast
and progressive development. We present a method of machine
visualization which will be applicable in real industrial conditions
according to current needs and demands. Real factory data were
obtained in a newly built research plant. Methods described in this
paper were validated on a case study. Input data were processed and
the virtual environment was created. The environment contains
information about dimensions, structure, disposition, and function.
Hardware was enhanced by modular machines, prototypes, and
accessories. We added functionalities and machines into the virtual
environment. The user is able to interact with objects such as testing and cutting machines, and can operate and move them. The proposed design consists of an environment with two degrees of freedom of movement. Users interact with items in the virtual world that are embedded into the real surroundings. This paper describes the development of the virtual environment. We
compared and tested various options of factory layout virtualization
and visualization. We analyzed possibilities of using a 3D scanner in
the layout obtaining process and we also analyzed various virtual
reality hardware visualization methods such as: Stereoscopic (CAVE)
projection, Head Mounted Display (HMD) and augmented reality
(AR) projection provided by see-through glasses.
Abstract: Liver segmentation from medical images poses more
challenges than analogous segmentations of other organs. This
contribution introduces a liver segmentation method from a series of
computed tomography images. Overall, we present a novel method for
segmenting liver by coupling density matching with shape priors.
Density matching signifies a tracking method which operates via
maximizing the Bhattacharyya similarity measure between the
photometric distribution from an estimated image region and a model
photometric distribution. Density matching controls the direction of
the evolution process and slows down the evolving contour in regions
with weak edges. The shape prior improves the robustness of density
matching and discourages the evolving contour from exceeding the liver’s boundaries in regions where they are weak. The model is
implemented using a modified distance regularized level set (DRLS)
model. The experimental results show that the method achieves a
satisfactory result. A comparison with the original DRLS model shows that the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model using metrics comprising accuracy, sensitivity, and specificity.
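The Bhattacharyya similarity measure that drives the density matching above has a compact form: for two normalized histograms p and q it is the sum of the square roots of the binwise products. This is a generic sketch of the coefficient itself, not the authors' level-set evolution:

```python
import math

def normalize(hist):
    """Turn raw bin counts into a discrete probability distribution."""
    s = sum(hist)
    return [h / s for h in hist]

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms.
    Returns 1.0 for identical distributions and 0.0 for disjoint support."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))
```

In the segmentation model, p would be the photometric (intensity) histogram inside the current contour and q the model histogram; the contour evolves to increase their overlap.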
Abstract: The multiprocessor task scheduling problem for dependent and independent tasks is computationally complex. Many methods have been proposed to achieve optimal running time. As multiprocessor task scheduling is NP-hard in nature, many heuristics have been proposed that improve the makespan of the problem. However, due to its problem-specific nature, a heuristic method that provides the best results for one problem might not provide good results for another. Therefore, Simulated Annealing, a meta-heuristic approach, is considered, as it can be applied to all types of problems. However, due to its many runs, a meta-heuristic approach takes a large amount of computation time. Hence, a hybrid approach is proposed by combining the Duplication Scheduling Heuristic with Simulated Annealing (SA), and the makespan results of simple Simulated Annealing and the hybrid approach are analyzed.
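As an illustration of the simple SA half of the comparison, the sketch below anneals a task-to-processor assignment to minimize makespan. It handles independent tasks only; task dependencies and the Duplication Scheduling Heuristic hybrid are not shown, and the cooling schedule is an assumption for this sketch:

```python
import math
import random

def makespan(assignment, durations, m):
    """Finish time of the busiest of the m processors."""
    loads = [0.0] * m
    for task, proc in enumerate(assignment):
        loads[proc] += durations[task]
    return max(loads)

def sa_schedule(durations, m, t0=10.0, cooling=0.95, steps=2000, seed=0):
    """Simulated annealing over assignments of independent tasks."""
    rng = random.Random(seed)
    current = [rng.randrange(m) for _ in durations]
    cur_cost = makespan(current, durations, m)
    best, best_cost = list(current), cur_cost
    t = t0
    for _ in range(steps):
        # Neighbour: move one random task to a random processor.
        trial = list(current)
        trial[rng.randrange(len(durations))] = rng.randrange(m)
        cost = makespan(trial, durations, m)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / temperature).
        if cost <= cur_cost or rng.random() < math.exp((cur_cost - cost) / t):
            current, cur_cost = trial, cost
            if cost < best_cost:
                best, best_cost = list(trial), cost
        t *= cooling
    return best, best_cost
```

With dependent tasks the neighbour move and cost function would instead operate on precedence-feasible orderings, which is where the duplication heuristic contributes in the hybrid.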
Abstract: Over the last decade, researchers have focused their interest on the Multicast Group Key Management Framework. The
central research challenge is secure and efficient group key
distribution. The present paper is based on the Bit model based
Secure Multicast Group key distribution scheme using the most
popular absolute encoder output type code named Gray Code. The
focus is twofold. The first fold deals with the reduction of
computation complexity which is achieved in our scheme by
performing fewer multiplication operations during the key updating
process. To optimize the number of multiplication operations, an
O(1) time algorithm to multiply two N-bit binary numbers which
could be used in an N x N bit-model of reconfigurable mesh is used
in this proposed work. The second fold aims at reducing the amount
of information stored in the Group Center and group members while
performing the update operation in the key content. Comparative
analysis to illustrate the performance of various key distribution
schemes is shown in this paper and it has been observed that this
proposed algorithm reduces the computation and storage complexity
significantly. Our proposed algorithm is suitable for high
performance computing environment.
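For reference, the binary-reflected Gray code used in the scheme above can be computed with a couple of bit operations. This sketch shows only the encoding and decoding of the code itself, not the authors' key distribution protocol:

```python
def to_gray(n):
    """Binary-reflected Gray code of n: consecutive integers map to
    codewords that differ in exactly one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse mapping: fold the bits back down with a cumulative XOR."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

The single-bit-change property between successive codewords is what makes the code attractive for cheap key updates: only the changed positions need reprocessing.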
Abstract: In this paper, we propose the variational EM inference
algorithm for the multi-class Gaussian process classification model
that can be used in the field of human behavior recognition. This
algorithm can simultaneously derive both the posterior distribution of a latent function and estimates of the hyper-parameters in a multi-class Gaussian process classification model. Our algorithm is based on the Laplace approximation (LA) technique and the variational EM framework. It is performed in two steps, called the expectation and maximization steps. First, in the expectation step, using the Bayesian
maximization steps. First, in the expectation step, using the Bayesian
formula and LA technique, we derive approximately the posterior
distribution of the latent function indicating the possibility that each
observation belongs to a certain class in the Gaussian process
classification model. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimates of the hyper-parameters of the covariance matrix needed to define the prior distribution of the latent function. These two steps are repeated iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely the KTH human action data set. Experimental results reveal that the proposed algorithm performs well on this data set.
Abstract: Wireless networks are built upon an open shared medium, which makes it easy for attackers to conduct malicious activities. Jamming is one of the most serious security threats to the information economy and must be dealt with efficiently. A jammer prevents legitimate data from reaching the receiver and seriously degrades network performance. The objective of this paper is to provide a general overview of jamming in wireless networks. It covers relevant works, different jamming techniques, various types of jammers, and typical prevention techniques. Challenges associated with comparing several anti-jamming techniques are also highlighted.
Abstract: The aim of this work is to detect geometrically shaped objects in an image. In this paper, the object is considered to be a circle. The identification requires finding three characteristics: the number, size, and location of the objects. To achieve the goal of this work, this paper presents an algorithm that combines several statistical approaches and image analysis techniques. This algorithm has been implemented to achieve the major objectives of this paper. The algorithm has been evaluated using simulated data, where it yields good results, and has then been applied to real data.
Abstract: We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. Here, we consider an efficient computational method that can be used to obtain the approximate posteriors for the latent variables and parameters needed to define the multi-class Gaussian process classification model. We first investigate the process of inducing a posterior distribution for the various parameters and the latent function by using variational Bayesian approximations and the importance sampling method, and next we derive a predictive distribution of the latent function needed to classify new samples. The proposed model is applied to classify a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.
Abstract: Sentiment analysis classifies a given review document as a positive or negative polar document. Sentiment analysis research has increased tremendously in recent times due to its large number of applications in industry and academia. Sentiment analysis models can be used to determine the opinion of a user towards any entity or product. E-commerce companies can use sentiment analysis models to improve their products on the basis of users’ opinions. In this paper, we propose a new One-class Support Vector Machine (One-class SVM) based sentiment analysis model for movie review documents. In the proposed approach, we initially extract features from one class of documents, and then use the one-class SVM model to test whether a given new document lies within the model or is an outlier. Experimental results show the effectiveness of the proposed sentiment analysis model.
Abstract: The aim of this paper is to propose a general
framework for storing, analyzing, and extracting knowledge from
two-dimensional echocardiographic images, color Doppler images,
non-medical images, and general data sets. A number of high
performance data mining algorithms have been used to carry out this
task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and the user level.
Techniques such as active contour model to identify the cardiac
chambers, pixel classification to segment the color Doppler echo
image, universal model for image retrieval, Bayesian method for
classification, parallel algorithms for image segmentation, etc., were
employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
Abstract: One of the global combinatorial optimization
problems in machine learning is feature selection. It is concerned with removing irrelevant, noisy, and redundant data while keeping the original meaning of the data. Attribute reduction
in rough set theory is an important feature selection method. Since
attribute reduction is an NP-hard problem, it is necessary to
investigate fast and effective approximate algorithms. In this paper, we propose two feature selection mechanisms based on memetic algorithms (MAs), which combine the genetic algorithm with a fuzzy record-to-record travel algorithm and a fuzzy-controlled great deluge algorithm, to identify a good balance between local search and
genetic search. In order to verify the proposed approaches, numerical
experiments are carried out on thirteen datasets. The results show that
the MAs approaches are efficient in solving attribute reduction
problems when compared with other meta-heuristic approaches.
Abstract: Software quality issues require special attention, especially in view of the demand for quality software products that meet customer satisfaction. Software development projects in most organisations need a proper defect management process in order to produce high-quality software products and reduce the number of defects. The research question of this study is how to produce high-quality software and reduce the number of defects. Therefore, the
objective of this paper is to provide a framework for managing
software defects by following defined life cycle processes. The
methodology starts by reviewing defects, defect models, best
practices, and standards. A framework for defect management life
cycle is proposed. The major contribution of this study is to define a
defect management roadmap in software development. The adoption
of an effective defect management process helps to achieve the
ultimate goal of producing high quality software products and
contributes towards continuous software process improvement.