Abstract: Background modeling and subtraction in video
analysis is widely used as an effective method for moving
object detection in many computer vision applications. Recently, a
large number of approaches have been developed to tackle different
types of challenges in this field; however, dynamic backgrounds
and illumination variations are the problems most frequently
encountered in practice. This paper presents a two-layer
model based on the codebook algorithm combined with the local binary
pattern (LBP) texture measure, targeted at handling dynamic
background and illumination variation problems. More specifically,
the first layer is a block-based codebook that combines the
LBP histogram with the mean value of each RGB color channel. Because
LBP features are invariant to monotonic gray-scale changes, this
layer produces block-wise detection results with considerable
tolerance to illumination variations. A pixel-based codebook is
then employed to refine the output of the first layer and further
eliminate false positives. As a result, the proposed approach
greatly improves accuracy under dynamic background and illumination
changes. Experimental results on several popular background
subtraction datasets demonstrate very competitive performance
compared with previous models.
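The LBP texture measure that underpins the first layer can be illustrated with a minimal sketch (a plain 8-neighbor LBP operator on a 3x3 grayscale patch; the function name and patch values are illustrative, not from the paper):

```python
def lbp_code(patch):
    """8-neighbor LBP code of the center pixel of a 3x3 patch:
    neighbors >= center contribute a 1-bit, packed clockwise."""
    center = patch[1][1]
    # Clockwise neighbor coordinates starting at the top-left.
    neighbors = [(0, 0), (0, 1), (0, 2), (1, 2),
                 (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(neighbors):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_code(patch)   # 241
```

Because a monotonic gray-scale change shifts every pixel the same way, the comparisons against the center pixel, and hence the code, are unchanged; this is the invariance the abstract relies on for illumination tolerance.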
Abstract: Web usage mining is the application of data mining
techniques to find usage patterns in web log data, so as to extract
the required patterns and serve the needs of web-based
applications. The user's experience on the internet can be improved
by minimizing web access latency, which may be achieved by
predicting the next page a user will request and prefetching and
caching it. Therefore, to enhance the quality of web services, it is
necessary to study user web navigation behavior. This analysis is
achieved by modeling web navigation history. We propose a technique
that clusters user sessions based on the K-medoids algorithm.
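A minimal sketch of the K-medoids clustering step, assuming sessions are represented as fixed-length visit-count vectors (the session encoding, distance measure, and data are illustrative, not from the paper):

```python
import random

def k_medoids(points, k, dist, iters=100, seed=0):
    """Plain K-medoids: alternate assignment and medoid update
    until the medoid set stops changing."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    clusters = {}
    for _ in range(iters):
        # Assign every point to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(p, points[m]))
            clusters[nearest].append(i)
        # Each cluster's new medoid minimizes total distance to members.
        new_medoids = [
            min(members,
                key=lambda c: sum(dist(points[c], points[j]) for j in members))
            for members in clusters.values()
        ]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, clusters

# Toy "sessions": visit counts over three page categories.
sessions = [(5, 0, 0), (4, 1, 0), (0, 5, 1), (0, 4, 0)]
manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
medoids, clusters = k_medoids(sessions, k=2, dist=manhattan)
```

Unlike K-means, each cluster center is an actual session, which keeps the representative interpretable as a real navigation pattern.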
Abstract: A reconfigurable Wilkinson power divider is
proposed in this paper. In the existing design, only a limited set of
bands is available at the output ports; in the proposed Wilkinson
power divider, different frequency bands are obtained by using a
PIN diode. By tuning the PIN diode, different frequencies are
achieved. The size of the power divider is reduced for the operating
frequency, and the fractional bandwidth is increased.
Abstract: Small, low-power sensors with sensing, signal
processing, and wireless communication capabilities are suitable for
wireless sensor networks. Due to the limited resources and battery
constraints, complex routing algorithms used in ad hoc networks
cannot be employed in sensor networks. In this paper, we propose
node-disjoint multi-path hexagon-based routing algorithms in wireless
sensor networks. We describe the algorithm in detail and compare
it with other works. Simulation results show that the proposed scheme
achieves better performance in terms of efficiency and message
delivery ratio.
Abstract: This article discusses the migration of a relational
database (RDB) to XML documents (schema and data) based on metadata
and semantic enrichment, which takes the RDB in its flattened form
and enriches it with object concepts. The integration and
exploitation of object concepts in XML uses a syntax that allows the
conformity of the XML document to be verified at creation time. The
information extracted from the RDB is therefore analyzed and
filtered in order to fit the structure of the XML files and the
associated object model. The XML documents are built dynamically
through SQL queries. A prototype was implemented to perform
automatic migration, demonstrating the effectiveness of this
approach.
Abstract: Social networking sites such as Twitter and Facebook
attract over 500 million users across the world. For those users,
social life and even practical life have become interrelated with
these platforms, and their interaction with social networks has
changed their lives. Accordingly, social networking sites have
become among the main channels responsible for the vast
dissemination of different kinds of information during real-time
events. This popularity has also led to problems, including the
possibility of exposing users to incorrect information through fake
accounts, which results in the spread of malicious content during
live events. This situation can cause huge real-world damage to
society in general, including citizens, business entities, and
others. In this paper, we present a classification method for detecting
fake accounts on Twitter. The study determines a minimized set of
the main factors that influence the detection of fake accounts on
Twitter, and these factors are then applied using different
classification techniques. The results of these techniques are
compared, and the most accurate algorithm is selected. The study has
also been compared with recent research in the same area, and this
comparison confirms the accuracy of the proposed approach. We argue
that this study can be applied continuously on Twitter to
automatically detect fake accounts; moreover, it can be applied to
other social network sites such as Facebook with minor changes,
according to the nature of the social network, as discussed in this
paper.
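The comparison step, scoring several detectors on labeled accounts and keeping the most accurate one, can be sketched as follows; the features, rules, and toy data are purely illustrative, since the paper's factor set and classifiers are not reproduced here:

```python
def accuracy(rule, accounts, labels):
    """Fraction of accounts the rule labels correctly."""
    return sum(rule(a) == y for a, y in zip(accounts, labels)) / len(labels)

# Toy accounts: (followers, following, has_profile_image); True = fake.
accounts = [(10, 2000, False), (500, 300, False),
            (3, 5000, False), (800, 400, True)]
labels = [True, False, True, False]

# Two illustrative single-feature detectors.
rules = {
    "ratio": lambda a: a[1] > 3 * max(a[0], 1),  # follows far more than followed
    "image": lambda a: not a[2],                 # no profile image -> fake
}
scores = {name: accuracy(rule, accounts, labels) for name, rule in rules.items()}
best = max(scores, key=scores.get)               # keep the most accurate rule
```

In practice each "rule" would be a trained classifier over the minimized factor set, but the selection-by-accuracy logic is the same.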
Abstract: The Internet of Things (IoT) has been applied in
industry for different purposes. Sensing Enterprise (SE) is an
attribute of an enterprise or a network that allows it to react to
business stimuli originating on the Internet. These fields have
recently come into focus for enterprises, and there is some evidence
of their use and implications in supply chain management, making
them an interesting area to work on. This paper presents a review
and proposals of IoT applications in supply chain management.
Abstract: Recently, job recommender systems have gained
much attention in industry, since they address the problem of
information overload on recruiting websites. We therefore propose an
Extended Personalized Job System that is capable of providing
appropriate jobs for job seekers and recommending suitable
information to them using data mining techniques and a dynamic user
profile. Companies can also interact with the system to publish and
update job information. The system supports various platforms,
including a web application and an Android mobile application. In
this paper, user profiles, implicit user actions, user feedback, and
clustering techniques from the WEKA libraries were applied and
implemented. In addition, open-source tools such as the Yii web
application framework, the Bootstrap front-end framework, and
Android mobile technology were also used.
Abstract: Prior literature in the field of adaptive and
personalized learning sequences in e-learning has proposed and
implemented various mechanisms to improve the learning process, such
as individualization and personalization, but these are complex to
implement due to expensive algorithmic programming and the need for
extensive prior data. The main objective of personalizing a learning
sequence is to maximize learning by dynamically selecting the most
suitable teaching operation in order to achieve the learner's target
competency. In this paper, a technique is proposed and tested that
performs individualization and personalization using a modified
reversed roulette wheel selection algorithm that runs in O(n). The
technique is simpler to implement and algorithmically less expensive
than other evolutionary algorithms, since it collects dynamic
real-time performance metrics, such as examinations, reviews, and
study activity, to form a single numerical RWSA fitness value.
Results show that the implemented system is capable of recommending
new learning sequences that lessen study time based on the student's
prior knowledge and actual performance metrics.
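The paper's modified reversed variant is not specified in the abstract; the standard O(n) fitness-proportional roulette wheel selection it builds on can be sketched as follows (the teaching operations and fitness values are illustrative):

```python
import random

def roulette_select(items, fitness, rng=random.random):
    """Fitness-proportional (roulette wheel) selection in O(n):
    draw r in [0, total fitness) and walk the cumulative sum."""
    total = sum(fitness)
    r = rng() * total
    cum = 0.0
    for item, f in zip(items, fitness):
        cum += f
        if r < cum:
            return item
    return items[-1]  # guard against floating-point round-off

# Toy teaching operations weighted by a fitness value.
ops = ["review", "example", "quiz"]
fit = [1.0, 3.0, 6.0]
picked = roulette_select(ops, fit, rng=lambda: 0.2)   # r = 2.0 -> "example"
```

One linear pass over the fitness values suffices, which is where the O(n) cost quoted in the abstract comes from.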
Abstract: Since advances in digital imaging technologies have led to
the development of high-quality digital devices, many illegal copies of
copyrighted video content circulate on the Internet, and unauthorized
editing also occurs frequently. We therefore propose an editing
prevention technique for high-quality (HQ) video that can prevent such
illegally edited copies from spreading. The proposed technique applies
spatial and temporal gradient methods to improve fidelity and detection
performance. The scheme also duplicates the embedded signal temporally
to alleviate the signal reduction caused by geometric and
signal-processing distortions. Experimental results show that the
proposed scheme achieves better performance than previously proposed
schemes while retaining high fidelity. The proposed scheme can be used
in unauthorized-access prevention for visual communication or in
traitor-tracking applications that need a fast detection process to
prevent illegally edited video content from spreading.
Abstract: This study aims to establish a function point process
based on stochastic distributions. To demonstrate the effectiveness
of the study, we present a case study that applies the suggested
method to automotive electrical and electronics system software
development using Monte Carlo simulation. The results of this paper
are expected to serve as guidance for establishing a function point
process in organizations and as a tool to help project managers make
correct decisions.
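A minimal sketch of the Monte Carlo step, assuming function point counts are drawn from a triangular distribution and converted to effort at a fixed productivity rate (the distribution parameters and rate are illustrative assumptions, not the paper's data):

```python
import random

def simulate_effort(trials=10_000, seed=42):
    """Monte Carlo sketch: sample a function point count from a
    triangular(low, high, mode) distribution, convert to effort
    hours at an assumed productivity rate, and report percentiles."""
    rng = random.Random(seed)
    hours_per_fp = 8.0                       # assumed productivity rate
    efforts = sorted(rng.triangular(80, 160, 120) * hours_per_fp
                     for _ in range(trials))
    median = efforts[trials // 2]
    p80 = efforts[int(trials * 0.8)]         # 80th-percentile planning figure
    return median, p80

median, p80 = simulate_effort()
```

Reporting a percentile rather than a single point estimate is what lets a project manager attach a confidence level to the schedule decision.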
Abstract: This paper presents the development of a mobile
application for students at the Faculty of Information Technology,
Rangsit University (RSU), Thailand. RSU is upgrading its enrollment
process by improving its information systems. Students can easily
download the RSU APP in order to access essential RSU information.
The purpose of a mobile application is to help students access the
system regardless of time and place. The objectives of this paper are: 1. To develop an application
on the iOS platform for students at the Faculty of Information
Technology, Rangsit University, Thailand. 2. To obtain the students'
perception of the new mobile app. The target group is students from
the freshman year to the senior year of the Faculty of Information
Technology, Rangsit University. The new mobile application, called RSU APP, is developed by the
Department of Information Technology, Rangsit University. It
contains useful features and various functionalities, particularly
those that support students. The core contents of the app consist of
RSU announcements, calendar, events, activities, and e-books. The
app is developed on the iOS platform. User satisfaction was analyzed
from interview data from 81 interviewees as well as a Google Form
completed by 122 respondents. The results show that users are
satisfied with the application, with an overall satisfaction level
of 4.67 (SD 0.52). The score for whether users can learn and use the
application quickly is high, at 4.82 (SD 0.71). On the other hand,
the lowest satisfaction rating concerns the app's lists, with a
satisfaction level of 4.01 (SD 0.45).
Abstract: Opportunistic routing (OR) increases transmission
reliability and network throughput. Traditional routing protocols
preselect one or more predetermined nodes before transmission starts
and use a predetermined neighbor to forward a packet at each hop.
Opportunistic routing overcomes the drawback of unreliable wireless
transmission by broadcasting, so that one transmission can be
overheard by multiple neighbors. The first cooperation-optimal
protocol for multirate OR (COMO) is used to achieve social
efficiency and prevent selfish behavior of nodes. The novel
link-correlation-aware OR improves performance by exploiting
diverse, low-correlation forwarding links. Context-aware adaptive OR
(CAOR) uses an active suppression mechanism to reduce packet
duplication. Context-aware OR (COR) can provide efficient routing in
mobile networks. Cooperative Opportunistic Routing in Mobile Ad hoc
Networks (CORMAN) tackles the problem of opportunistic data
transfer. Comparing all these protocols, COMO performs best, as it
achieves social efficiency and prevents selfish behavior of nodes.
Abstract: This paper focuses on the benefits of business process
modeling. Although this discipline has been developing for many
years, there is still a need to create new capabilities to meet
ever-increasing user needs. Because one of these needs is the
conversion of business process models from one standard to another,
the authors have developed a converter between the BPMN and EPC
standards using workflow patterns as an intermediate representation.
Nowadays there are many systems for business process modeling, and
the variety of output formats is almost as large as the number of
systems themselves. This diversity further hampers the conversion of
models. The presented study discusses problems arising from
differences in the output formats of various modeling environments.
Abstract: Common Platform for Automated Programming
(CPAP) is defined in detail. Two versions of CPAP are described:
cloud based (including a set of components for classic programming
and a set of components for combined programming); and Knowledge
Based Automated Software Engineering (KBASE) based (including a set
of components for automated programming and a set of components for
ontology programming). Four KBASE products (Module for Automated
Programming of Robots, Intelligent Product Manual, Intelligent
Document Display, and Intelligent Form Generator) are analyzed, and
CPAP contributions to automated programming are presented.
Abstract: Software quality issues require special attention,
especially in view of the demand for quality software products that
meet customer satisfaction. Software development projects in most
organisations need a proper defect management process in order to
produce high-quality software products and reduce the number of
defects. The research question of this study is how to produce
high-quality software and reduce the number of defects. The
objective of this paper is therefore to provide a framework for
managing software defects by following defined life cycle processes.
The methodology starts by reviewing defects, defect models, best
practices, and standards. A framework for the defect management life
cycle is then proposed. The major contribution of this study is to
define a defect management roadmap for software development. The
adoption of an effective defect management process helps to achieve
the ultimate goal of producing high-quality software products and
contributes towards continuous software process improvement.
Abstract: One of the global combinatorial optimization
problems in machine learning is feature selection, which is
concerned with removing irrelevant, noisy, and redundant data while
preserving the meaning of the original data. Attribute reduction in
rough set theory is an important feature selection method. Since
attribute reduction is an NP-hard problem, it is necessary to
investigate fast and effective approximate algorithms. In this
paper, we propose two feature selection mechanisms based on memetic
algorithms (MAs), which combine the genetic algorithm with a fuzzy
record-to-record travel algorithm and a fuzzy-controlled great
deluge algorithm, to find a good balance between local search and
genetic search. To verify the proposed approaches, numerical
experiments are carried out on thirteen datasets. The results show
that the MA approaches are efficient in solving attribute reduction
problems when compared with other meta-heuristic approaches.
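The record-to-record travel local search used inside the memetic algorithm can be sketched in its generic (non-fuzzy) minimization form; the bit-mask cost function below is a toy stand-in for rough-set attribute reduction, not the paper's fitness:

```python
import random

def rr_travel(start, neighbor, cost, deviation, iters=200, seed=1):
    """Record-to-record travel: accept a candidate move whenever its
    cost is within `deviation` of the best (RECORD) cost seen so far."""
    rng = random.Random(seed)
    current, best = start, start
    best_cost = cost(best)
    for _ in range(iters):
        cand = neighbor(current, rng)
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
        if c <= best_cost + deviation:   # RRT acceptance rule
            current = cand
    return best, best_cost

# Toy attribute-reduction flavor: minimize the number of selected
# attributes (bits) while keeping the "core" attributes 0 and 2.
def cost(mask):
    penalty = 0 if (mask[0] and mask[2]) else 10
    return sum(mask) + penalty

def neighbor(mask, rng):
    flipped = list(mask)
    i = rng.randrange(len(flipped))
    flipped[i] = 1 - flipped[i]       # flip one attribute in or out
    return flipped

best, best_cost = rr_travel([1, 1, 1, 1, 1], neighbor, cost, deviation=1.0)
```

In the paper's fuzzy variant the `deviation` parameter is adapted during the search rather than held fixed; here it stays constant for simplicity.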
Abstract: The aim of this paper is to propose a general
framework for storing, analyzing, and extracting knowledge from
two-dimensional echocardiographic images, color Doppler images,
non-medical images, and general data sets. A number of
high-performance data mining algorithms have been used to carry out
this task. Our framework encompasses four layers, namely physical
storage, object identification, knowledge discovery, and the user
level. Techniques such as an active contour model to identify the
cardiac chambers, pixel classification to segment the color Doppler
echo image, a universal model for image retrieval, a Bayesian method
for classification, and parallel algorithms for image segmentation
were employed. Using the efficiently constructed feature vector
database, one can perform various data mining tasks such as
clustering and classification with efficient algorithms, along with
image mining given a query image. All these facilities are included
in the framework, which is supported by a state-of-the-art user
interface (UI). The algorithms were tested with actual patient data
and the Corel image database, and the results show that their
performance is better than previously reported results.
Abstract: Sentiment analysis classifies a given review
document as a positive or negative polarity document. Sentiment
analysis research has increased tremendously in recent times due to
its large number of applications in industry and academia. Sentiment
analysis models can be used to determine the opinion of a user
towards any entity or product; e-commerce companies, for example,
can use a sentiment analysis model to improve their products on the
basis of users' opinions. In this paper, we propose a new one-class
Support Vector Machine (one-class SVM) based sentiment analysis
model for movie review documents. In the proposed approach, we first
extract features from one class of documents and then use the
one-class SVM model to test whether a given new document lies within
the model or is an outlier. Experimental results show the
effectiveness of the proposed sentiment analysis model.
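The one-class idea, fitting on positive documents only and then flagging outliers, can be illustrated with a toy centroid-and-radius novelty detector (a deliberately simplified stand-in for the SVM, with illustrative reviews and bag-of-words features):

```python
from collections import Counter
import math

def train_one_class(docs):
    """Fit on positive reviews only: store the centroid of their
    bag-of-words vectors and the largest training distance (radius)."""
    vocab = sorted({w for d in docs for w in d.split()})
    def vec(d):
        counts = Counter(d.split())
        return [counts[w] for w in vocab]
    vectors = [vec(d) for d in docs]
    centroid = [sum(col) / len(vectors) for col in zip(*vectors)]
    radius = max(math.dist(v, centroid) for v in vectors)
    # A document is "in-class" if it falls inside the training radius.
    return lambda d: math.dist(vec(d), centroid) <= radius

positives = ["great movie loved it",
             "great acting loved the plot",
             "loved it great fun"]
is_positive = train_one_class(positives)
```

A real one-class SVM instead learns a maximum-margin boundary around the positive class in a kernel feature space, but the train-on-one-class, test-for-outlier workflow is the same.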
Abstract: We present a probabilistic multinomial Dirichlet
classification model for multidimensional data with Gaussian process
priors. We consider an efficient computational method that can be
used to obtain the approximate posteriors over the latent variables
and parameters needed to define the multiclass Gaussian process
classification model. We first investigate the process of inducing a
posterior distribution over the various parameters and latent
functions by using variational Bayesian approximations and
importance sampling, and then derive the predictive distribution of
the latent function needed to classify new samples. The proposed
model is applied to classify a synthetic multivariate dataset in
order to verify its performance. Experimental results show that our
model is more accurate than the other approximation methods.
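The Dirichlet-multinomial component can be illustrated in isolation (the Gaussian process priors and variational inference are beyond a short sketch): with a symmetric Dirichlet(alpha) prior over class proportions and observed class counts, the posterior predictive probability of class k is (n_k + alpha) / (n + K * alpha):

```python
def dirichlet_predictive(counts, alpha=1.0):
    """Posterior predictive class probabilities under a multinomial
    likelihood with a symmetric Dirichlet(alpha) prior."""
    n = sum(counts)
    k = len(counts)
    return [(c + alpha) / (n + k * alpha) for c in counts]

# Three classes observed 3, 1, and 0 times:
probs = dirichlet_predictive([3, 1, 0])   # [4/7, 2/7, 1/7]
```

The prior smooths the estimates, so an unobserved class still receives non-zero probability; in the full model these proportions are additionally modulated by the latent Gaussian process functions.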