Abstract: Segmentation of left ventricle (LV) from cardiac
ultrasound images provides a quantitative functional analysis of the
heart to diagnose disease. Active Shape Model (ASM) is widely used
for LV segmentation, but it suffers from the drawback that
initialization of the shape model is not sufficiently close to the target,
especially when dealing with the abnormal shapes found in disease. In this
work, a two-step framework is proposed to achieve fast and efficient LV
segmentation. First, a robust and efficient Hough-forest-based detector
localizes cardiac feature points. These feature points are used to
predict the initial fitting of the LV shape model. Second, ASM is
applied to further fit the LV shape model to the cardiac ultrasound
image. With the robust initialization, ASM is able to achieve more
accurate segmentation. The performance of the proposed method is
evaluated on a dataset of 810 cardiac ultrasound images that are mostly
abnormal shapes. The proposed method is compared with several
combinations of ASM and existing initialization methods. Our
experimental results demonstrate that the feature-point detection
accuracy of the proposed initialization is 40% higher than that of
existing methods. Moreover, the proposed method significantly
reduces the number of necessary ASM fitting loops and thus speeds up
the whole segmentation process. Therefore, the proposed method is
able to achieve more accurate and efficient segmentation results and is
applicable to unusual heart shapes associated with cardiac diseases,
such as left atrial enlargement.
Abstract: Advances in spatial and spectral resolution of satellite
images have led to tremendous growth in large image databases. The
data we acquire through satellites, radars, and sensors consists of
important geographical information that can be used for remote
sensing applications such as region planning and disaster management.
Spatial data classification and object recognition are important tasks
for many applications. However, classifying objects and identifying
them manually from images is a difficult task. Object recognition is
often treated as a classification problem, a task that can be
performed using machine-learning techniques. Among the many
machine-learning algorithms available, classification is typically
done with supervised classifiers such as Support Vector Machines
(SVMs), since the area of interest is known. We propose a
classification method that considers neighboring pixels in a region
for feature extraction and evaluates classifications according to
neighboring classes for the semantic interpretation of a region of
interest (ROI). A
dataset has been created for training and testing purposes; we
generated the attributes from pixel intensity values and mean
reflectance values. We demonstrate the benefits of using knowledge
discovery and data-mining techniques, which can be applied to image
data for accurate information extraction and classification from
high-spatial-resolution remote sensing imagery.
Abstract: We investigate large-scale networks in the context of
network survivability under attack. We use appropriate techniques to
evaluate both attacker-based and defender-based network survivability.
The attacker is unaware of which links the defender operates. Each
attacked link has some pre-specified probability of being
disconnected. The defender chooses links so as to maximize the chance
of successfully sending the flow to the
destination node. The attacker, however, selects the cut-set with
the highest chance to be disabled in order to partition the network.
Moreover, we extend the problem to the case of selecting the best p
paths to operate by the defender and the best k cut-sets to target by
the attacker, for arbitrary integers p, k > 1. We investigate some
variations of the problem and suggest polynomial-time solutions.
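Under the natural assumption that per-link failures are independent (an assumption for illustration, not stated in the abstract), the attacker's cut-set choice described above can be sketched as follows; all link names and probabilities are hypothetical:

```python
# Sketch: attacker's cut-set choice under independent link failures.
# Link names and failure probabilities are illustrative assumptions.

def cutset_disable_prob(cutset, p_fail):
    """Probability that every link in the cut-set is disconnected,
    assuming independent per-link failure probabilities."""
    prob = 1.0
    for link in cutset:
        prob *= p_fail[link]
    return prob

def best_cutset(cutsets, p_fail):
    """The attacker selects the cut-set most likely to be fully disabled."""
    return max(cutsets, key=lambda c: cutset_disable_prob(c, p_fail))

p_fail = {"e1": 0.9, "e2": 0.8, "e3": 0.5, "e4": 0.6}
cutsets = [["e1", "e2"], ["e3", "e4"]]
print(best_cutset(cutsets, p_fail))  # ['e1', 'e2'] (prob 0.72 vs 0.30)
```

Disabling every link of a cut-set partitions the network, so maximizing this product is one natural reading of the attacker's objective in the abstract.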
Abstract: In today's corporate world, Web services technology has
grown rapidly, and its significance for the development of web-based
applications has gradually risen over time. The success of
Business-to-Business integration relies on finding novel partners and
their services in a global business environment. However, the
selection of the most suitable Web service from the list of services
with identical functionality is vital. The satisfaction level of
the customer and the reputation of the Web service provider depend
primarily on the extent to which the service meets the customer's
requirements. In many cases, the customer feels that he is paying for
a service that is not delivered, because he believes the advertised
functionality of the web service is never fully realized. This leads
to frequent changes of service. In this paper, a framework is proposed
to evaluate the
Quality of Service (QoS) and its cost so as to establish an optimal
correlation between the two. In addition, this work proposes
management decisions to address deviation from the functionality
guaranteed at the time of selection.
Abstract: In this paper, an approach for liver tumor detection
in computed tomography (CT) images is presented. The detection
process is based on classifying the features of target liver cells as
either tumor or non-tumor. Fractional differential (FD) is applied to
enhance the liver CT images, with the aim of enhancing texture
and edge features. A fusion method is then applied to merge the
various enhanced images, producing a variety of feature improvements
that increase classification accuracy. Each image is divided into
N×N non-overlapping blocks, from which the desired features are
extracted. A support vector machine (SVM) classifier is then trained
on a supplied dataset separate from the test set. Finally, each block
is labeled as tumor or non-tumor. Our approach is validated on a group
of patients' CT liver tumor datasets. The experimental results
demonstrate the detection efficiency of the proposed technique.
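The block-wise pipeline above can be sketched minimally: split the image into N×N non-overlapping blocks and label each one from a feature. The threshold rule below is a toy stand-in for the paper's trained SVM, and all sizes and values are illustrative assumptions:

```python
# Sketch of block-wise classification: N x N non-overlapping blocks,
# each labeled from a feature. The threshold rule is a toy stand-in
# for the trained SVM; sizes and values are illustrative.

def blocks(image, n):
    """Yield n x n non-overlapping blocks of a 2-D list-of-lists image."""
    rows, cols = len(image), len(image[0])
    for r in range(0, rows - n + 1, n):
        for c in range(0, cols - n + 1, n):
            yield [row[c:c + n] for row in image[r:r + n]]

def mean_intensity(block):
    flat = [v for row in block for v in row]
    return sum(flat) / len(flat)

def classify(block, threshold=128):
    """Toy stand-in for the SVM: flag bright blocks as 'tumor'."""
    return "tumor" if mean_intensity(block) > threshold else "non-tumor"

image = [[200] * 4 + [10] * 4 for _ in range(4)]  # left half bright
labels = [classify(b) for b in blocks(image, 4)]
print(labels)  # ['tumor', 'non-tumor']
```

In the paper the feature vector per block would be richer (texture and edge features from the FD-enhanced, fused images) and the decision would come from the trained SVM rather than a threshold.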
Abstract: The current web has become a modern encyclopedia,
where people share their thoughts and ideas on various topics around
them. This kind of encyclopedia is very useful for other people who
are looking for answers to their questions. However, with the
growing popularity of social networking and blogging and
ever-expanding network services, there has also been a growing diversity
of technologies along with a different structure of individual web
sites. It is therefore difficult to directly find a relevant answer for a
common Internet user. This paper presents a web application for the
real-time end-to-end analysis of selected Internet trends where the
trend can be whatever people post online. The application
integrates fully configurable tools for data collection and analysis
using selected webometric algorithms, and for their chronological
visualization to the user. It can be assumed that the application
helps users evaluate the quality of various products mentioned
online.
Abstract: In this research, we propose to conduct diagnostic and
predictive analysis of the key factors and consequences of urban
population relocation. To achieve this goal, urban simulation models
extract the urban development trends as land use change patterns from
a variety of data sources. The results are treated as part of urban big
data with other information such as population change and economic
conditions. Multiple data mining methods are deployed on this data to
analyze nonlinear relationships between parameters. The result
determines the driving force of population relocation with respect to
urban sprawl and urban sustainability and their related parameters.
This work sets the stage for developing a comprehensive urban
simulation model that caters to specific questions from targeted users. It
contributes towards achieving sustainability as a whole.
Abstract: This paper proposes an APPLE scheme that aims at providing absolute and proportional throughput guarantees, and at maximizing system throughput, simultaneously for wireless LANs with homogeneous and heterogeneous traffic. We formulate our objectives as an optimization problem, present its exact and approximate solutions, and prove the existence and uniqueness of the approximate solution. Simulations validate that the APPLE scheme is accurate and that the approximate solution achieves the desired objectives well.
Abstract: Scripts are one of the basic text resources for understanding
broadcasting contents. Topic modeling is a method for obtaining a
summary of broadcasting contents from their scripts. Generally,
scripts represent contents descriptively with directions and speeches,
and provide scene segments that can be seen as semantic units.
Therefore, a script can be topic modeled by treating a scene segment
as a document. However, because scene segments consist mainly of
speeches, relatively few word co-occurrences are observed within
them. This inevitably degrades the quality of the topics learned by
statistical methods. To tackle this problem, we
propose a method to improve topic quality with additional word
co-occurrence information obtained using scene similarities. The
main idea is that knowing that two or more texts are topically
related is useful for learning high-quality topics. In turn, more
accurate topical representations yield more accurate information
about whether two texts are related. In this paper, we regard two
scene segments as related if their topical similarity is high enough.
We also consider words to co-occur if they appear together in
topically related scene segments. By iteratively inferring topics and
determining semantically neighboring scene segments, we derive a
topic space that represents broadcasting contents well. In the
experiments, we show that the proposed method generates
higher-quality topics from Korean drama scripts than the baselines.
Abstract: The introduction of a multitude of new and interactive
e-commerce information technology (IT) artifacts has impacted
adoption research. Rather than solely functioning as productivity
tools, new IT artifacts assume the roles of interaction mediators and
social actors. This paper describes the varying roles assumed by IT
artifacts, and proposes and distinguishes between four distinct foci
for evaluating them. It further proposes a theoretical
model that maps the different views of IT artifacts to four distinct
types of evaluations.
Abstract: In order to retrieve images efficiently from a large
database, a unique method integrating color and texture features
using genetic programming has been proposed. An opponent color
histogram, which is invariant to shadow, shade, and light intensity,
is employed in the proposed framework for extracting color features.
For texture feature extraction, the fast discrete curvelet transform,
which captures more orientation information at different scales, is
incorporated to represent curve-like edges. A central issue in image
retrieval is reducing the semantic gap between the user's preference
and low-level features. To address this
concern, a genetic algorithm combined with relevance feedback is
embedded to reduce the semantic gap and retrieve images matching the
user's preference. Extensive comparative experiments have been
conducted to evaluate the proposed framework for content-based image
retrieval on two databases, COIL-100 and Corel-1000. The experimental
results clearly show that the proposed system surpasses other
existing systems in terms of precision and recall, achieving its
highest performance with an average precision of 88.2% on COIL-100
and 76.3% on Corel-1000, and an average recall of 69.9% on COIL-100
and 76.3% on Corel-1000. Thus, the experimental results confirm that
the proposed content-based image retrieval architecture attains a
better solution for image retrieval.
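The abstract does not give the exact opponent color formulation used; a common textbook version of the transform the histogram would be built over is sketched below, with the channel formulas and values being assumptions for illustration:

```python
import math

# A common opponent color transform (one standard formulation; the
# paper's histogram is presumably built over channels like these).
def opponent(r, g, b):
    o1 = (r - g) / math.sqrt(2)           # red-green opponent channel
    o2 = (r + g - 2 * b) / math.sqrt(6)   # yellow-blue opponent channel
    o3 = (r + g + b) / math.sqrt(3)       # intensity channel
    return o1, o2, o3

# For a grey pixel only the intensity channel is non-zero, which is
# why histograms over o1 and o2 are robust to intensity changes.
print(opponent(100, 100, 100))  # (0.0, 0.0, ~173.2)
```

A per-channel histogram over (o1, o2, o3) then yields the color feature vector, while curvelet coefficients supply the texture features.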
Abstract: Ambient Computing, or Ambient Intelligence (AmI), is an
emerging area of computer science that aims to create intelligently
connected environments and the Internet of Things. In this paper, we
propose a communication middleware architecture for AmI. This
architecture addresses the problems of communication, networking, and
application abstraction, although there are other aspects (e.g., HCI
and security) within a general AmI framework. Within this middleware
architecture, any application developer can address HCI and security
issues using the platform's extensibility features.
Abstract: With the increasing number of people reviewing
products online in recent years, opinion-sharing websites have become
the most important source of customers' opinions. Unfortunately,
spammers generate and post fake reviews in order to promote or
demote brands and mislead potential customers. These are notably
destructive not only for potential customers, but also for business
holders and manufacturers. However, research in this area is not yet
adequate, and many critical problems related to spam detection remain
unsolved. To aid new researchers in the domain, in this paper we have
attempted to create a high-quality framework that gives a clear view
of review spam-detection methods. In addition, this report contains a
comprehensive collection of detection metrics used in proposed
spam-detection approaches. These metrics are highly applicable to
developing novel detection methods.
Abstract: Fractal-based digital image compression is a specialized
technique in the field of color image compression. The method is best
suited to images with irregular shapes, such as snow, clouds, flames,
and tree leaves, relying on the fact that parts of an image often
resemble other parts of the same image. This technique has drawn much
attention in recent years because of the very high compression ratios
that can be achieved. Hybrid schemes incorporating fractal
compression and speedup techniques achieve higher compression ratios
than pure fractal compression. Fractal image compression is a lossy
compression method that exploits the self-similarity of an image. It
provides a high compression ratio, short encoding time, and a fast
decoding process. In
this paper, fractal compression with a quadtree and the DCT is
proposed to compress color images. The proposed hybrid scheme
requires four phases. First, the image is segmented and the Discrete
Cosine Transform is applied to each block of the segmented image.
Second, the block values are scanned in a zigzag manner so that zero
coefficients are grouped together. Third, the resulting image is
partitioned into fractals using a quadtree approach. Fourth, the
image is compressed using run-length encoding.
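The zigzag-scan and run-length phases of a pipeline like this can be sketched as follows; the block size and coefficient values are illustrative assumptions, not taken from the paper:

```python
# Sketch: zigzag scan of one (DCT-transformed) block followed by
# run-length encoding. Block size and values are illustrative.

def zigzag(block):
    """Read an n x n block in JPEG-style zigzag order."""
    n = len(block)
    cells = [(i, j) for i in range(n) for j in range(n)]
    # Sort by anti-diagonal; alternate direction on odd/even diagonals.
    cells.sort(key=lambda p: (p[0] + p[1],
                              p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return [block[i][j] for i, j in cells]

def rle(seq):
    """Run-length encode a sequence as (value, count) pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return [tuple(p) for p in out]

block = [[9, 2, 0],
         [4, 0, 0],
         [0, 0, 0]]
scanned = zigzag(block)   # [9, 2, 4, 0, 0, 0, 0, 0, 0]
print(rle(scanned))       # [(9, 1), (2, 1), (4, 1), (0, 6)]
```

Because DCT energy concentrates in the top-left corner of a block, the zigzag order pushes the zero coefficients into one long run, which is exactly what makes the final run-length encoding effective.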
Abstract: In IEEE 802.11 networks, it is well known that the
traditional time-domain contention often leads to low channel
utilization. The first frequency-domain contention scheme, the time to
frequency (T2F), has recently been proposed to improve the channel
utilization and has attracted a great deal of attention. In this paper, we
present the latest research progress on weighted frequency-domain
contention. We compare the basic ideas and working principles of the
related schemes and point out their differences. This paper is very
useful for further study on frequency-domain contention.
Abstract: This paper proposes a method of learning topics for
broadcasting contents. There are two kinds of texts related to
broadcasting contents. One is a broadcasting script, which is a series of
texts including directions and dialogues. The other is blogposts,
which contain relatively abstract content, stories, and diverse
information about broadcasting contents. Although the two kinds of
texts cover similar broadcasting contents, the words used in
blogposts and broadcasting scripts differ. When unseen words appear,
a method is needed to reflect them in existing topics. In this paper,
we introduce a semantic vocabulary expansion method to reflect unseen
words. We expand
topics of the broadcasting script by incorporating the words in
blogposts. Each word in the blogposts is added to the most
semantically correlated topics. We use word2vec to obtain the
semantic correlation between the words in blogposts and the topics of
scripts. The vocabularies of the topics are updated, and posterior
inference is then performed to rearrange the topics. In experiments,
we verify that the proposed method can discover more salient topics
for broadcasting contents.
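The core assignment step, matching an unseen blogpost word to its most correlated topic, can be sketched with cosine similarity. In the paper the vectors come from word2vec; the toy two-dimensional vectors and topic names below are illustrative assumptions:

```python
import math

# Sketch: assign an unseen word to the topic whose vector (e.g. an
# average of its top words' word2vec vectors) is most similar to it.
# Toy vectors and topic names are illustrative assumptions.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def assign_topic(word_vec, topic_vecs):
    """Return the name of the most semantically correlated topic."""
    return max(topic_vecs, key=lambda t: cosine(word_vec, topic_vecs[t]))

topic_vecs = {"romance": [1.0, 0.0], "crime": [0.0, 1.0]}
print(assign_topic([0.9, 0.1], topic_vecs))  # romance
```

After each batch of assignments, the expanded topic vocabularies would be fed back into posterior inference, as the abstract describes.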
Abstract: In the deep south of Thailand, checkpoints for people
verification are necessary for the security management of risk zones,
such as official buildings in the conflict area. In this paper, we
propose an automatic checkpoint system that verifies persons using
information from ID cards and facial features. Methods for extracting
and verifying a person's information are introduced, based on useful
information such as the ID number and name extracted from official
cards, together with facial images from videos. The proposed
system shows promising results and has a real impact on the local
society.
Abstract: Energy consumption data, in particular those involving
public buildings, are impacted by many factors: the building structure,
climate/environmental parameters, construction, system operating
condition, and user behavior patterns. Traditional methods for data
analysis are insufficient. This paper delves into data mining
technology to determine its application in the analysis of building
energy consumption data, including energy consumption prediction,
fault diagnosis, and optimal operation. The recent literature is
reviewed and summarized, the problems faced by data mining technology
in the area of energy consumption data analysis are enumerated, and
research directions for future studies are given.
Abstract: Although Mobile Wireless Sensor Networks (MWSNs),
which consist of mobile sensor nodes (MSNs), can cover a wide
observation region using a small number of sensor nodes, they need to
construct a network to collect sensing data at the base station by
moving the MSNs. As an effective approach, a network construction
method based on Virtual Rails (VRs), referred to as the VR method,
has been proposed. In this paper, we propose two
types of effective techniques for the VR method. They prolong the
operation time of the network, which is limited by the battery
capacities and energy consumption of the MSNs. The first technique,
an effective arrangement of VRs, roughly equalizes the number of MSNs
belonging to each VR. The second technique, an adaptive movement
method for MSNs, takes the residual battery energy into account. In
simulations, we demonstrate that each technique improves the network
lifetime and that the combination of both techniques is the most
effective.
Abstract: Many cluster-based routing protocols have been
proposed in the field of wireless sensor networks, in which groups of
nodes are formed into clusters. A cluster head is selected from among
those nodes based on residual energy, coverage area, and number of
hops; that cluster head gathers data from the various sensor nodes
and forwards the aggregated data to the base station or to a relay
node (another cluster head), which forwards the packet along with its
own data packet to the base station. Here, a
Game Theory based Diligent Energy Utilization Algorithm (GTDEA)
for routing is proposed. In GTDEA, cluster-head selection is
performed with the help of game theory, a decision-making process
that selects a cluster head based on three parameters: residual
energy (RE), Received Signal Strength Index (RSSI), and Packet
Reception Rate (PRR). Finding a feasible path to the destination with
minimum use of the available energy improves the network lifetime,
and this is achieved by the proposed approach. In GTDEA, packets are
forwarded to the base station using an inter-cluster routing
technique. Simulation results
reveal that GTDEA improves the network performance in terms of
throughput, lifetime, and power consumption.
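A cluster-head choice over the three parameters named above (RE, RSSI, PRR) might be sketched as a simple weighted utility. The abstract does not specify the exact game-theoretic payoff, so the weights, field names, and values below are illustrative assumptions:

```python
# Sketch: pick the cluster head maximizing a weighted utility over
# normalized RE, RSSI, and PRR. The weights and node data are
# illustrative; the paper's actual payoff is game-theoretic.

def utility(node, w_re=0.5, w_rssi=0.25, w_prr=0.25):
    """Candidate payoff: weighted sum of normalized RE, RSSI, PRR."""
    return w_re * node["re"] + w_rssi * node["rssi"] + w_prr * node["prr"]

def select_cluster_head(nodes):
    return max(nodes, key=utility)

nodes = [
    {"id": "n1", "re": 0.9, "rssi": 0.6, "prr": 0.8},
    {"id": "n2", "re": 0.4, "rssi": 0.9, "prr": 0.9},
]
print(select_cluster_head(nodes)["id"])  # n1
```

Weighting residual energy most heavily reflects the abstract's emphasis on minimizing energy use to extend network lifetime; a game-theoretic formulation would additionally model each node's incentive to accept or decline the cluster-head role.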