Abstract: Seeking and sharing knowledge on online forums has made them popular in recent years. Although online forums are valuable sources of information, the variety of message sources makes retrieving reliable threads with high-quality content a challenge. The majority of existing information retrieval systems ignore the quality of retrieved documents, particularly in the field of thread retrieval. In this research, we present an approach that employs various quality features to assess the quality of retrieved threads. These features capture different aspects of content quality, including completeness, comprehensiveness, and politeness, and lead to threads that are not only textually but also conceptually relevant to a user query within a forum. To analyse the influence of the features, we used an adapted version of the voting model for thread search as the retrieval system, equipping it with each feature on its own and with various combinations of features across multiple runs. The results show that incorporating the quality features significantly enhances the effectiveness of the retrieval system.
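The voting-model aggregation described above can be illustrated with a small sketch. This is a hedged reconstruction, not the authors' implementation: the CombSUM-style summation, the message scores, and the per-thread quality weights below are all illustrative assumptions.

```python
# Illustrative sketch: messages retrieved individually "vote" for their
# parent threads, and a quality feature in [0, 1] reweights each thread.
from collections import defaultdict

def thread_scores(message_scores, thread_of, quality):
    """CombSUM-style voting: sum message scores per thread, then scale
    by a per-thread quality feature (1.0 if no feature is available)."""
    votes = defaultdict(float)
    for msg, score in message_scores.items():
        votes[thread_of[msg]] += score
    return {t: v * quality.get(t, 1.0) for t, v in votes.items()}

message_scores = {"m1": 0.9, "m2": 0.4, "m3": 0.7}   # retrieval scores
thread_of = {"m1": "t1", "m2": "t1", "m3": "t2"}     # thread membership
quality = {"t1": 0.5, "t2": 1.0}                     # quality features
scores = thread_scores(message_scores, thread_of, quality)
print(round(scores["t1"], 2))  # 0.65: two strong votes, halved by quality
```

A low quality score can thus demote a thread that would win on textual votes alone, which is the effect the abstract attributes to the quality features.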
Abstract: Patient-specific models are instance-based learning
algorithms that take advantage of the particular features of the patient
case at hand to predict an outcome. We introduce two patient-specific algorithms based on the decision-tree paradigm that use the area under the ROC curve (AUC) as the metric for selecting an attribute. We apply the patient-specific algorithms to predict outcomes on several datasets, including medical datasets. Compared to the entropy-based patient-specific decision path (PSDP) and CART methods, the AUC-based patient-specific decision path models performed equivalently in terms of AUC.
Our results provide support for patient-specific methods being a
promising approach for making clinical predictions.
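The AUC-based attribute selection at the heart of the method above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's PSDP code; the toy data and the tie-breaking rule are assumptions.

```python
# Hypothetical sketch of AUC-based attribute selection for a decision path.

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic (ties counted as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_attribute(rows, labels):
    """Pick the column whose values best rank the outcome by AUC.
    An AUC near 0 is also informative (an inverted ranking), so we
    rank attributes by distance from the uninformative value 0.5."""
    n_attrs = len(rows[0])
    scores = {j: auc([r[j] for r in rows], labels) for j in range(n_attrs)}
    return max(scores, key=lambda j: abs(scores[j] - 0.5))

rows = [(0.1, 5.0), (0.9, 5.0), (0.2, 5.1), (0.8, 4.9)]  # toy attributes
labels = [0, 1, 0, 1]
print(best_attribute(rows, labels))  # 0: the first attribute ranks perfectly
```

Unlike entropy, AUC rewards an attribute for ranking the outcome correctly rather than for purity of a split, which is the distinction the abstract draws against the entropy-based PSDP.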
Abstract: Recently, traffic monitoring has attracted the attention
of computer vision researchers. Many algorithms have been
developed to detect and track moving vehicles. However, vehicle tracking in daytime and in nighttime cannot be approached with the same techniques because of the extremely different illumination conditions. Consequently, traffic-monitoring systems need a component that differentiates between daytime and nighttime scenes. In this paper, an HSV-based day/night detector is proposed for traffic-monitoring scenes. The detector employs the hue histogram and the value histogram of the top half of the image frame. Experimental results show that extracting the brightness features along with the color features within the top region of the image is effective for classifying traffic scenes. In addition, the detector achieves high precision and recall rates and is feasible for real-time applications.
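The top-half hue/value histogram idea can be sketched as follows. This is a minimal illustration, not the authors' exact classifier: the bin count, the brightness threshold, and the synthetic frames are assumptions.

```python
# Illustrative sketch: hue and value histograms over the top half of a
# frame (usually sky), with a simple brightness threshold for day/night.
import colorsys
import numpy as np

def top_half_hv_hists(rgb, bins=16):
    """Hue and value histograms of the top half of an RGB frame in [0, 1]."""
    top = rgb[: rgb.shape[0] // 2]
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in top.reshape(-1, 3)])
    h_hist, _ = np.histogram(hsv[:, 0], bins=bins, range=(0, 1), density=True)
    v_hist, _ = np.histogram(hsv[:, 2], bins=bins, range=(0, 1), density=True)
    return h_hist, v_hist

def is_daytime(rgb, v_threshold=0.5):
    """Day if most of the value-histogram mass sits in the bright bins."""
    _, v_hist = top_half_hv_hists(rgb)
    bright_mass = v_hist[len(v_hist) // 2 :].sum() / v_hist.sum()
    return bool(bright_mass > v_threshold)

day = np.full((8, 8, 3), 0.9)    # bright synthetic frame
night = np.full((8, 8, 3), 0.1)  # dark synthetic frame
print(is_daytime(day), is_daytime(night))  # True False
```

Restricting the histograms to the top region keeps headlights and road glare in the lower half of night scenes from masquerading as daylight.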
Abstract: Data fusion technology can be the best way to extract
useful information from multiple sources of data. It has been widely
applied in various applications. This paper presents a data fusion
approach over multimedia data for event detection on Twitter using the Dempster-Shafer theory of evidence. The methodology applies a mining algorithm to detect the event. Two types of data are fused. The first is textual features extracted with the bag-of-words method and weighted by term frequency-inverse document frequency (TF-IDF). The second is visual features extracted with the scale-invariant feature transform (SIFT). Dempster-Shafer evidence theory is applied to fuse the information from these two sources. Our experiments indicate that, compared with approaches using an individual data source, the proposed data fusion approach increases the prediction accuracy of event detection. The proposed method achieved a high accuracy of 0.97, compared with 0.93 using texts only and 0.86 using images only.
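Dempster's rule of combination, the fusion step named above, can be shown in a compact sketch. The two-element frame and the mass values below are illustrative assumptions, not numbers from the paper.

```python
# Hedged sketch of Dempster's rule for two sources (text and image)
# over the frame {event, no_event}; all masses are illustrative.

def combine(m1, m2):
    """Dempster's rule: fuse two mass functions keyed by frozensets."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass assigned to disjoint hypotheses
    norm = 1.0 - conflict            # renormalise by the non-conflict mass
    return {k: v / norm for k, v in combined.items()}

E, N = frozenset({"event"}), frozenset({"no_event"})
theta = E | N                            # full frame: total ignorance
m_text = {E: 0.7, N: 0.2, theta: 0.1}    # evidence from TF-IDF features
m_image = {E: 0.6, N: 0.3, theta: 0.1}   # evidence from SIFT features
fused = combine(m_text, m_image)
print(round(fused[E], 3))  # 0.821: fused belief exceeds either source alone
```

Because both sources lean toward the event, the fused mass on it (about 0.82) is higher than either individual mass, which mirrors the accuracy gain the abstract reports for fusion over single sources.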
Abstract: In the deep south of Thailand, checkpoints for people
verification are necessary for the security management of risk zones,
such as official buildings in the conflict area. In this paper, we
propose an automatic checkpoint system that verifies persons using
information from ID cards and facial features. Methods for extracting and verifying a person's information are introduced, based on useful information such as the ID number and name extracted from official cards and facial images from videos. The proposed
system shows promising results and has a real impact on the local
society.
Abstract: Torrefaction of biomass pellets is considered a useful pretreatment technology for converting them into a high-quality solid biofuel that is more suitable for pyrolysis, gasification, combustion, and co-firing applications. During torrefaction, the temperature varies across the pellet, and chemical reactions therefore proceed unevenly within it; nevertheless, a uniform thermal distribution along the pellet is generally assumed. The torrefaction of a single cylindrical pellet is modeled here, accounting for heat transfer coupled with chemical kinetics; a drying sub-model is also introduced. The non-stationary process of wood pellet decomposition is described by a system of non-linear partial differential equations for temperature and mass. The model captures well the main features of
the experimental data.
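The non-uniform heating that motivates the model above can be illustrated with a simplified 1-D conduction sketch. This is a hedged analogue only: the cylindrical geometry, the chemical kinetics, and the drying sub-model are omitted, and the material parameters below are illustrative, not the paper's.

```python
# Simplified slab analogue of the heat-transfer sub-model: explicit
# finite differences for the 1-D heat equation dT/dt = alpha d2T/dx2.
import numpy as np

alpha = 1e-7                 # thermal diffusivity, m^2/s (illustrative)
L, nx = 0.01, 51             # pellet thickness (m) and grid points
dx = L / (nx - 1)
dt = 0.4 * dx * dx / alpha   # stable explicit step (needs <= 0.5 dx^2/alpha)

T = np.full(nx, 300.0)       # initial temperature, K
T_surface = 550.0            # torrefaction temperature at the boundary

for _ in range(2000):
    T[0] = T[-1] = T_surface  # Dirichlet boundaries: heated surface
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(bool(T.min() < T_surface))  # True: the core lags behind the surface
```

Even this stripped-down model shows the core temperature lagging the surface, which is why reactions proceed unevenly across the pellet despite the uniformity commonly assumed.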
Abstract: With the increasing number of people reviewing products online in recent years, opinion-sharing websites have become the most important source of customers' opinions. Unfortunately, spammers generate and post fake reviews to promote or demote brands and mislead potential customers. Such reviews are notably destructive not only for potential customers but also for business owners and manufacturers. However, research in this area is not yet adequate, and many critical problems related to spam detection remain unsolved. To aid newcomers to the domain, in this paper we have attempted to create a high-quality framework that gives a clear view of review spam-detection methods. In addition, this report contains a comprehensive collection of detection metrics used in proposed spam-detection approaches. These metrics are highly applicable to developing novel detection methods.
Abstract: Ambient Computing, or Ambient Intelligence (AmI), is an emerging area in computer science that aims to create intelligently connected environments and the Internet of Things. In this paper, we propose a communication middleware architecture for AmI. This architecture addresses the problems of communication, networking, and application abstraction, although other aspects (e.g. HCI and security) exist within the general AmI framework. Within this middleware architecture, application developers can address HCI and security issues through the platform's extensibility features.
Abstract: In order to retrieve images efficiently from a large
database, a unique method integrating color and texture features
using genetic programming has been proposed. The opponent color histogram, which is invariant to shadow, shade, and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A current concern in image retrieval is reducing the semantic gap between the user's preference and low-level features. To address this concern, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve images matching the user's preference. Extensive comparative experiments have been conducted to evaluate the proposed framework for content-based image retrieval on two databases, COIL-100 and Corel-1000. The experimental results clearly show that the proposed system surpasses existing systems in terms of precision and recall, achieving an average precision of 88.2% on COIL-100 and 76.3% on Corel, and an average recall of 69.9% on COIL-100 and 76.3% on Corel. Thus, the experimental results confirm that the proposed content-based image retrieval architecture attains a better solution for image retrieval.
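The shading-invariance claimed for the opponent color histogram can be demonstrated with a small sketch. This is a minimal illustration under standard opponent-space definitions; the paper's exact binning and the curvelet texture stage are not reproduced, and the bin count is an assumption.

```python
# Minimal sketch of opponent color features: keep the two chromatic
# channels O1, O2 and drop the intensity channel O3.
import numpy as np

def opponent_histograms(rgb, bins=8):
    """Histograms of the chromatic opponent channels O1 (red-green)
    and O2 (yellow-blue); omitting intensity gives shading invariance."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)
    h1, _ = np.histogram(o1, bins=bins, range=(-1, 1))
    h2, _ = np.histogram(o2, bins=bins, range=(-1, 1))
    return np.concatenate([h1, h2]).astype(float)

img = np.random.default_rng(0).random((16, 16, 3))
bright = img + 0.2  # additive intensity shift, as under changed lighting
f1, f2 = opponent_histograms(img), opponent_histograms(bright)
print(f1.shape)  # (16,)
```

An additive intensity shift cancels out of both O1 and O2, so the two feature vectors coincide up to floating-point rounding, which is the invariance property the abstract attributes to this color space.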
Abstract: In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of the target liver region as either tumor or non-tumor. Fractional differentiation (FD) is applied to enhance the liver CT images, with the aim of sharpening texture and edge features. A fusion method then merges the various enhanced images to produce an improved feature set, which increases the classification accuracy. Each image is divided into NxN non-overlapping blocks from which the desired features are extracted. A support vector machine (SVM) classifier is trained on a supplied dataset different from the one being tested. Finally, each block is classified as tumor or non-tumor. Our approach is validated on a group of patients' CT liver tumor datasets, and the experimental results demonstrate the effectiveness of the proposed detection technique.
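The block decomposition step of the pipeline above can be sketched with numpy. This is an illustrative stand-in: the FD enhancement, the fusion, and the actual SVM training are out of scope, and the mean/standard-deviation features are assumptions, not the paper's feature set.

```python
# Illustrative block decomposition for the classification stage:
# split the image into NxN non-overlapping blocks and extract
# a simple feature vector per block.
import numpy as np

def blocks(image, n=8):
    """Split a 2-D image into non-overlapping n x n blocks."""
    h, w = image.shape
    image = image[: h - h % n, : w - w % n]  # drop ragged borders
    return (image
            .reshape(image.shape[0] // n, n, image.shape[1] // n, n)
            .swapaxes(1, 2)
            .reshape(-1, n, n))

def block_features(image, n=8):
    """Simple per-block texture features: mean and standard deviation."""
    bs = blocks(image, n)
    return np.stack([bs.mean(axis=(1, 2)), bs.std(axis=(1, 2))], axis=1)

ct = np.random.default_rng(1).random((64, 64))  # stand-in for a CT slice
feats = block_features(ct)
print(feats.shape)  # (64, 2): an 8x8 grid of blocks, 2 features each
```

Each row of the feature matrix would then be fed to the trained SVM, which labels the corresponding block as tumor or non-tumor.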
Abstract: Advances in spatial and spectral resolution of satellite
images have led to tremendous growth in large image databases. The
data we acquire through satellites, radars, and sensors consists of
important geographical information that can be used for remote
sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications; however, classifying and identifying objects manually from images is difficult. Object recognition is often cast as a classification problem, which can be addressed with machine-learning techniques. Among the many machine-learning algorithms available, supervised classifiers such as Support Vector Machines (SVM) are used here because the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset was created for training and testing; we generated the attributes from pixel intensity values and mean reflectance values. We demonstrate the benefits of applying knowledge discovery and data-mining techniques to image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
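The neighborhood-based feature idea above can be sketched simply: augment each pixel's own intensity with a statistic of its surroundings. This is a hedged illustration; the 3x3 window, the edge padding, and the two-feature vector are assumptions, not the authors' exact attribute set.

```python
# Illustrative neighborhood features: pair each pixel's intensity with
# the mean of its 3x3 neighborhood before classification.
import numpy as np

def neighborhood_mean(img):
    """Mean over each pixel's 3x3 neighborhood (edge-padded)."""
    p = np.pad(img, 1, mode="edge")
    acc = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += p[1 + dy : 1 + dy + img.shape[0],
                     1 + dx : 1 + dx + img.shape[1]]
    return acc / 9.0

def pixel_features(img):
    """Per-pixel feature vectors: [own intensity, neighborhood mean]."""
    return np.stack([img.ravel(), neighborhood_mean(img).ravel()], axis=1)

band = np.arange(16.0).reshape(4, 4)  # toy reflectance band
print(pixel_features(band).shape)  # (16, 2)
```

Feeding both columns to a supervised classifier lets the decision for each pixel reflect its spatial context rather than its intensity alone.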
Abstract: The acceptance of sustainable products by the final consumer is still one of the challenges of the industry, which constantly seeks alternative approaches to gain acceptance in the global market. A large set of methods and approaches has been discussed and analysed throughout the literature. Considering the current need for sustainable development and the current pace of consumption, the need for a combined solution in the development of new products has become clear, forcing researchers in product development to propose alternatives to the previous standard product development models. This paper presents, through a systematic analysis of the literature
on product development, eco-design and consumer involvement, a set
of alternatives regarding consumer involvement towards the
development of sustainable products and how these approaches could
help improve the sustainable industry’s establishment in the general
market. As part of the author's ongoing PhD research, the initial findings show that understanding the benefits of sustainable behaviour leads to more conscious acquisition and, eventually, to the implementation of sustainable change in the consumer. Thus, this paper is the initial approach towards the development of new sustainable products, using the fashion industry as an example of practical implementation and acceptance by consumers. By comparing the existing literature and analysing it critically, this paper concludes that consumer involvement is strategic for improving the general understanding of sustainability and its features. The involvement of consumers and communities has been studied since the early 1990s as a way to exemplify uses and to guarantee fast comprehension. The analysis also covers the importance of this approach for increasing innovation and groundbreaking developments, which calls for further research and practical implementation to better understand the implications and limitations of this methodology.
Abstract: Segmentation of left ventricle (LV) from cardiac
ultrasound images provides a quantitative functional analysis of the
heart to diagnose disease. Active Shape Model (ASM) is widely used
for LV segmentation, but it suffers from the drawback that
initialization of the shape model is not sufficiently close to the target,
especially when dealing with abnormal shapes caused by disease. In this work, a two-step framework is developed to achieve fast and efficient LV segmentation. First, a robust and efficient detector based on a Hough
forest localizes cardiac feature points. Such feature points are used to
predict the initial fitting of the LV shape model. Second, ASM is
applied to further fit the LV shape model to the cardiac ultrasound
image. With the robust initialization, ASM is able to achieve more
accurate segmentation. The performance of the proposed method is
evaluated on a dataset of 810 cardiac ultrasound images, most of which show abnormal shapes, and compared with several combinations of ASM and existing initialization methods. Our experimental results demonstrate that the feature point detection accuracy of the proposed initialization was 40% higher than that of the existing methods. Moreover, the proposed method significantly reduces the number of ASM fitting iterations required and thus speeds up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation and is applicable to unusual heart shapes associated with cardiac diseases, such as left atrial enlargement.
Abstract: Game-based learning can enhance the learning
motivation of students and provide a means for them to learn through
playing games. This study used augmented reality technology to
develop an interactive card game as a game-based teaching aid for
delivering elementary school science course content with the aim of
enhancing student learning processes and outcomes. Through playing the proposed card game, students can familiarize themselves with the appearance, features, and foraging behaviors of insects. The system
records the actions of students, enabling teachers to determine their
students’ learning progress. In this study, 37 students participated in an
assessment experiment and provided feedback through questionnaires.
Their responses indicated that they were significantly more motivated
to learn after playing the game, and their feedback was mostly
positive.
Abstract: Background modeling and subtraction in video analysis have been widely used as an effective method for moving object detection in many computer vision applications. Recently, a large number of approaches have been developed to tackle different types of challenges in this field; however, dynamic backgrounds and illumination variations are the problems most frequently encountered in practice. This paper presents a two-layer model based on the codebook algorithm combined with the local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is a block-based codebook that combines the LBP histogram with the mean value of each RGB color channel. Because LBP features are invariant to monotonic gray-scale changes, this layer produces block-wise detection results with considerable tolerance to illumination variations. A pixel-based codebook is then employed to refine the output of the first layer and further eliminate false positives. As a result, the proposed approach greatly improves accuracy under dynamic backgrounds and illumination changes.
Experimental results on several popular background subtraction
datasets demonstrate very competitive performance compared to
previous models.
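The LBP texture measure and its illumination tolerance can be shown in a short sketch. This is a minimal illustration of the basic 8-neighbour operator only; the paper's block-based codebook and histogram machinery are not reproduced, and the toy image is an assumption.

```python
# Minimal 8-neighbour local binary pattern: each neighbour contributes
# one bit, set when the neighbour is at least as bright as the centre.
import numpy as np

def lbp(img):
    """LBP codes for the interior pixels of a 2-D grayscale image."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy : 1 + dy + c.shape[0], 1 + dx : 1 + dx + c.shape[1]]
        code |= ((nb >= c).astype(int) << bit)
    return code

img = np.array([[10, 10, 10],
                [10,  5, 10],
                [10, 10, 10]], dtype=float)
print(lbp(img))  # [[255]]: every neighbour >= the dark centre pixel
```

Adding a constant to the whole image leaves every comparison, and hence every code, unchanged; this monotonic gray-scale invariance is exactly why the first layer tolerates illumination variation.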
Abstract: Social networking sites such as Twitter and Facebook attract over 500 million users across the world, and for those users social life, and even practical life, has become intertwined with these platforms. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity has led to various problems, including the possibility of exposing users to incorrect information through fake accounts, which results in the spread of malicious content during live events. This can cause enormous damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting
fake accounts on Twitter. The study determines a minimized set of the main factors that influence the detection of fake accounts on Twitter; these factors are then applied using different classification techniques. The results of these techniques are compared, and the most accurate algorithm is selected. The study has also been compared with several recent works in the same area, and this comparison confirms the accuracy of the proposed approach. We argue that this study can be applied continuously on Twitter to detect fake accounts automatically; moreover, it can be applied to other social networks such as Facebook with minor changes, according to the nature of the social network, as discussed in this paper.
Abstract: In this paper, a de Laval rotor system is characterized by a hinge model and its transient response is treated numerically for a dynamic solution. The effects of the ensuing non-linear disturbances, namely rub and a breathing crack, are numerically simulated. Subsequently, three analysis methods, Orbit Analysis, the Fast Fourier Transform (FFT), and the Wavelet Transform (WT), are employed to extract features from the vibration signal of the faulty system. An analysis of the system response orbits clearly indicates the perturbations due to rotor-to-stator contact. The sensitivity of the WT to variations in system speed is investigated using the Continuous Wavelet Transform (CWT). The analysis reveals that the signatures of crack, rub, and unbalance in the vibration response are useful for condition monitoring. The WT demonstrates its ability to detect non-linear signals, and the obtained results provide a useful tool for detecting machinery faults.
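The FFT step of the analysis can be illustrated on a synthetic signal. This is a hedged sketch only: the rub/crack simulation is not reproduced, and the rotation frequency, sampling rate, and harmonic amplitude below are assumptions chosen to mimic the 2x harmonic a breathing crack typically introduces.

```python
# Illustrative FFT feature extraction on a synthetic vibration signal:
# a 1x rotation component at 50 Hz plus a weaker 2x harmonic at 100 Hz.
import numpy as np

fs = 1000.0                        # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)    # one second of signal
signal = np.sin(2 * np.pi * 50 * t) + 0.4 * np.sin(2 * np.pi * 100 * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]  # two strongest spectral lines
print(sorted(float(p) for p in peaks))  # [50.0, 100.0]
```

Recovering the 1x and 2x lines from the spectrum is the kind of fault signature the abstract refers to; the WT extends this by localizing such components in time when the speed varies.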
Abstract: This paper presents the development of a mobile
application for students at the Faculty of Information Technology,
Rangsit University (RSU), Thailand. RSU has upgraded its enrollment process by improving its information systems, and students can easily download the RSU APP to access substantial RSU information. The reason for having a mobile application is to help students access the system regardless of time and place. The objectives of this paper are: 1. To develop an application on the iOS platform for students at the Faculty of Information Technology, Rangsit University, Thailand. 2. To obtain the students' perception of the new mobile app. The target group is students from the freshman year to the senior year of the Faculty of Information Technology, Rangsit University. The new mobile application, called RSU APP, was developed by the Department of Information Technology, Rangsit University. It contains useful features and various functionalities, particularly ones that support students. The core contents of the app consist of RSU's announcements, calendar, events, activities, and e-book. The app is developed on the iOS platform. User satisfaction was analyzed from interview data from 81 interviewees as well as from a Google Form involving 122 respondents. The results show that users are satisfied with the application, giving an overall satisfaction level of 4.67 (SD 0.52). The score for whether users can learn and use the application quickly is high, at 4.82 (SD 0.71). On the other hand, the lowest satisfaction rating concerns the app's lists, at 4.01 (SD 0.45).
Abstract: This study focuses on the stress analysis of Mandibular Advancement Devices (MADs), which are considered a standard treatment for snoring promoted by the American Academy of Sleep Medicine (AASM). Snoring is the most significant feature of sleep-disordered breathing (SDB), which can lead to serious health problems. Oral appliances, especially MADs, offer assured therapeutic effect and compliance. This paper proposes a new MAD design and introduces finite element analysis (FEA) to perform the stress simulation for this MAD.
Abstract: Feature selection is one of the global combinatorial optimization problems in machine learning. It is concerned with removing irrelevant, noisy, and redundant data while preserving the meaning of the original data. Attribute reduction in rough set theory is an important feature selection method, and since attribute reduction is an NP-hard problem, it is necessary to investigate fast and effective approximate algorithms. In this paper, we propose two feature selection mechanisms based on memetic algorithms (MAs), which combine the genetic algorithm with a fuzzy record-to-record travel algorithm and a fuzzy-controlled great deluge algorithm, to identify a good balance between local search and genetic search. To verify the proposed approaches, numerical experiments are carried out on thirteen datasets. The results show that the MA approaches are efficient in solving attribute reduction problems when compared with other meta-heuristic approaches.
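The memetic structure described above, a genetic loop whose offspring are refined by local search, can be sketched on a toy problem. This is an illustrative stand-in only: the fuzzy record-to-record travel and great deluge moves are replaced by a plain hill-climb, and the rough-set reduct evaluation is replaced by a made-up fitness with assumed "relevant" features {1, 3, 7}.

```python
# Toy memetic algorithm for feature-subset selection: genetic search
# plus a local-search refinement applied to every offspring.
import random

N_FEATURES = 10
RELEVANT = {1, 3, 7}  # hypothetical informative features

def fitness(mask):
    """Reward covering the relevant features, penalise subset size."""
    chosen = {i for i, b in enumerate(mask) if b}
    return 2 * len(chosen & RELEVANT) - 0.1 * len(chosen)

def local_search(mask):
    """Hill-climb by single bit flips (stand-in for the fuzzy moves)."""
    best = list(mask)
    for i in range(N_FEATURES):
        trial = list(best)
        trial[i] ^= 1
        if fitness(trial) > fitness(best):
            best = trial
    return best

def memetic(pop_size=20, gens=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # mutation
                child[rng.randrange(N_FEATURES)] ^= 1
            children.append(local_search(child))  # memetic refinement
        pop = parents + children
    return max(pop, key=fitness)

best = memetic()
print({i for i, b in enumerate(best) if b})  # {1, 3, 7}
```

The interplay shown here, genetic operators for exploration and a cheap local refinement for exploitation, is the balance the abstract says the fuzzy-controlled variants are designed to strike.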